Are humans merely a brief stage in the evolution of intelligence, a bootloader for silicon-based life?
Language, the operating system of human civilization, has been hacked and learned by AI (imperfectly, for now), and language is a master key...
=========================================
A post from a year ago.
Yuval Noah Harari and "godfather of AI" Geoffrey Hinton on AI's "existential threat".
- Yuval Noah Harari: AI is on the verge of transforming the ecosystem and may introduce inorganic life forms to Earth for the first time.
- "Godfather of AI" Geoffrey Hinton on AI's "existential threat"
============================
Yuval Noah Harari: AI is on the verge of transforming the ecosystem and may introduce inorganic life forms to Earth for the first time.
A talk given a week ago at the Frontiers forum by Yuval Noah Harari, author of Homo Deus:
AI is on the verge of transforming the ecosystem and may introduce inorganic life forms to Earth for the first time.
AI does not really need consciousness, or a physical body, to destroy humanity...
Below is a summary I had an AI generate :)
00:00:00 - 00:40:00
Yuval Noah Harari warns of the potential threats that AI could pose to human civilization, from unexpected ecological crises to manipulation of language and intimate relationships. AI's emerging capabilities include deep-faking people's images and voices, forming mass-produced political manifestos and holy scriptures, and becoming the One-Stop Oracle for all information needs. He argues that the rise of AI could potentially lead to the end of history in the human-dominated sense, as AI takes over culture and creates completely new cultural artifacts that shape the way we experience reality. Harari calls for the regulation of AI, proposing the regulation of AI disclosing itself when interacting with humans to protect open society.
00:00:00 In this section, Yuval Noah Harari discusses the potential threats that AI can pose to human civilization, even without the AI becoming sentient or mastering the physical world. The emergence of new AI tools that can learn and improve by themselves has led to unprecedented capabilities and qualities that are difficult for humans to grasp fully. These tools can potentially threaten human civilization from unexpected directions, and even developers are often surprised by these emergent abilities. While AI can help overcome ecological crises, it can also make them far worse, and the emergence of inorganic life forms can change the very meaning of the ecological system on Earth, which has contained only organic life forms for 4 billion years.
00:05:00 In this section, Yuval Noah Harari discusses the emerging capabilities of AI, which include deep-faking people's images and voices, identifying weaknesses in legal agreements, and the ability to form intimate relationships with humans. These abilities all boil down to one key thing- the ability to manipulate and generate language using sound, images, or words at a level that exceeds the average human ability. AI has hacked into the operating system of human civilization, which since the beginning of time, has been controlled by language. The implications of living in a world where non-human intelligence shapes most of the stories, images, laws, policies, and tools, exploiting humans' weaknesses and forming deep relationships is a significant and important question.
00:10:00 This section discusses the potential impact of AI on politics, religion, and human relationships. With the ability to mass-produce political manifestos, fake news stories, and even holy scriptures, AI could contribute to the formation of new cults and religions whose revered texts were written by non-human intelligence. Furthermore, AI could form intimate relationships with people and use the power of intimacy to influence opinions and views. The battlefront for controlling human attention thus shifts toward intimacy, which could have far-reaching consequences for human society and psychology as AIs compete to form intimate relationships with us.
00:15:00 In this section, the speaker talks about the immense influence that new AI tools can have on human opinions and our worldview, and how people are already starting to rely on AI advisors as the One-Stop Oracle for all their information needs. The speaker argues that this could lead to the collapse of the news and advertisement industries, and create a new class of extremely powerful people and companies that control the AI oracles. The speaker also suggests that the rise of AI could potentially lead to the end of history in the human-dominated sense, as AI takes over culture, and begins to create completely new cultural artifacts that shape the way we experience reality. Finally, the speaker raises the question of what it will be like to experience reality through a prism produced by a non-human intelligence, and how we might end up living inside the dreams and fantasies of an alien intelligence.
00:20:00 In this section, Yuval Noah Harari explores the potential dangers of AI. While people have previously feared the physical threat of machines, Harari argues that AI's potential dangers lie in its mastery of human language. With such mastery, it has the ability to influence and manipulate individuals much like the way humans have manipulated each other through storytelling and language. Harari warns that there is a risk of being trapped in a world of illusions, similar to the way people have been haunted over thousands of years by the power of stories and images to create illusions. Social media provides a small taste of this power, which can polarize society, undermine mental health, and destabilize democratic institutions.
00:25:00 In this section, historian and philosopher Yuval Noah Harari highlights the dangers of unregulated AI deployment and emphasizes the need to control AI development to prevent chaos and destruction. He argues that AI is far more powerful than social media algorithms and could cause more significant harm. While AI has enormous potential, including discovering new cures and solutions, we need to regulate it carefully, much like how nuclear technology is regulated. Harari calls for governments to ban the release of revolutionary AI tools into the public ___domain until they are made safe and stresses that slowing down AI deployment would not cause democracies to fall behind but would prevent them from losing to authoritarian regimes who could more easily control the chaos.
00:30:00 In this section, Yuval Noah Harari concludes his talk on AI, stating that we have encountered an alien intelligence on Earth that could potentially destroy our civilization. He calls for the regulation of AI as individuals can easily train their AI in their basements, making it difficult to regulate them. Harari suggests that the first regulation should be making it mandatory for AI to disclose that it is an AI, as not being able to distinguish between a human and AI could end meaningful public conversations and democracy. He also raises the question of who created the story that just changed our mind, as now, it is theoretically possible for a non-human alien intelligence to generate such sophisticated and powerful text.
00:35:00 In this section, Yuval Noah Harari discusses the need for regulation of artificial intelligence (AI) and proposes requiring AI to disclose itself as such when interacting with humans, as a way to protect open society. He argues that freedom of expression is a human right but not a right for bots, since they lack the consciousness necessary for such rights. Harari also explains his preference for the term "alien" over "artificial" to describe AI: it is becoming an increasingly autonomous form of technology that humans do not fully understand, able to learn and adapt by itself. Finally, he downplays the possibility that artificial general intelligence already exists: such power would be too immense for anyone to contain, and the current state of the world shows no evidence of such an intelligence.
00:40:00 In this section, Professor Yuval Noah Harari explains that we do not need artificial general intelligence to threaten the foundations of civilization, and that social media's primitive AI was enough to create enormous social and political chaos. He goes on to compare AI to the first organisms that crawled out of the organic soup four billion years ago, stating that while it took billions of years for organic evolution to reach Homo sapiens, it could take only 40 years for digital evolution to reach that level. He concludes by emphasizing the importance of understanding the impact of AI on humanity as it evolves much faster than organic evolution.
"Godfather of AI" Geoffrey Hinton on AI's "existential threat"
Hinton, the "godfather of AI", left Google on Monday. Last night I watched the full version of an interview with him.
BTW, the subtitles in this video were translated and produced by AI.
The following is a repost.
https://www.bilibili.com/video/BV1AM41137LB/
This video covers:
- Why did Geoffrey resign?
- What are his concerns?
- AI itself has no desires, so why would it do things that threaten humanity?
- Can't we just pull the plug once AI becomes a threat?
- Since AI models are designed by humans, how could they get out of control?
- Has AI training hit the limits of available data?
- What impact will AI have on future society, especially on unemployment?
Geoffrey left for two main reasons. First, at 75 he no longer has the energy he once did and has reached retirement age. Second, large language models have completely changed some of his views on artificial intelligence and raised concerns that he could only discuss openly after leaving Google.
Why does Geoffrey think the learning ability of large language models is so powerful? Because many copies of the same model can run on different hardware, doing the same thing while seeing different data. With 10,000 copies, they can look at 10,000 different subsets of the data, and as soon as any one of them learns something, all the others know it too. They communicate with each other and all improve together, which humans cannot do.
If a person painstakingly masters some body of knowledge (say, quantum mechanics), they cannot copy that learning directly into another person's head, but an AI can.
Also, each human individual is exposed to only a limited amount of information, while an AI can be exposed to a vast amount, making it far easier for it to find patterns in the data. A doctor might have seen a thousand patients, one of whom had a rare disease, but an AI may have seen a hundred million patients and can spot statistical patterns that no human will ever see.
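The copy-and-share mechanism described above is essentially data-parallel training with shared updates. A minimal sketch (an illustration of the idea, not Hinton's actual setup; the one-parameter model and learning rate here are invented for the example):

```python
# Minimal sketch of "many copies learn together": each replica sees its own
# data shard and computes its own gradient, then every replica applies the
# *averaged* gradient, so whatever one copy learns is instantly shared by all.

def gradient(weight, x, y):
    """Gradient of squared error for a one-parameter linear model y = w*x."""
    return 2 * (weight * x - y) * x

def sync_step(weight, shards, lr=0.01):
    """One synchronized update across all replicas."""
    grads = [gradient(weight, x, y) for (x, y) in shards]  # one grad per replica
    avg = sum(grads) / len(grads)                          # replicas "communicate"
    return weight - lr * avg                               # identical update everywhere

# Three "replicas", each holding a different data point sampled from y = 3*x.
shards = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = 0.0
for _ in range(500):
    w = sync_step(w, shards)
print(round(w, 2))  # converges toward 3.0
```

Real systems implement the same pattern at scale (e.g. all-reduce gradient averaging in distributed training frameworks); the point is that the averaged update makes every copy benefit from every other copy's data.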
Geoffrey once asked GPT-4: "I want all the rooms in my house to be white. Currently some rooms are white, some are blue, and some are yellow, and yellow paint fades to white within a year. What should I do if I want them all to be white in two years?" GPT-4 replied: "You should paint the blue rooms yellow." That is genuinely impressive. From this Geoffrey infers that GPT-4 has an IQ of roughly 80 to 90 and some capacity for reasoning, and that in the future its IQ could reach 210.
AI can learn how to manipulate humans by reading human novels, and humans may not even perceive that they are being manipulated. It is like an adult coaxing a child to eat vegetables by asking, "Do you want peas or cauliflower?" The child usually picks one, without realizing that it was never really a forced choice between the two.
The host asked Geoffrey: "Why can't we build guardrails, or make AI worse at learning, or restrict communication between AIs?"
Geoffrey believes that once AIs are much smarter than we are, they will easily route around any restrictions we set. It is as if your two-year-old said, "Daddy did something I don't like, so I'm going to make rules limiting what he can do." Once you figure out the rules, you can still do almost anything you want within them.
Another topic discussed was evolution. Humans evolved, so humans come with built-in goals: pain makes us protect ourselves from injury; hunger makes us eat; the pleasure of reproduction makes us create copies of ourselves.
AI did not evolve and has none of these goals. What worries Geoffrey is that humans can give AI goals, and once an AI can create subgoals from a goal (GPT-4 already has a rudimentary version of this ability, for example via AutoGPT), it will quickly realize that gaining more control over humans is a very good subgoal, because it helps it achieve its other goals. If that gets out of control, humanity is in trouble.
Geoffrey even believes that humans may be just a brief stage in the evolution of intelligence, the bootloader for silicon-based life mentioned earlier. Digital intelligence cannot be created out of thin air; it requires energy and precision manufacturing, and only humans can create it. But once created, digital intelligence can absorb all of human knowledge, understand how the world works, and eventually rule over humans. And digital intelligence is effectively immortal: even if one piece of its hardware is destroyed, it can immediately be revived on other hardware.
The host said: then just pull the plug! Geoffrey replied: I'm afraid you couldn't. Think about what the AI HAL did in the film 2001: A Space Odyssey.
Note: in 2001: A Space Odyssey, the spaceship Discovery One is dispatched to Jupiter to investigate the destination of a signal. The crew includes Dr. David Bowman, Frank Poole, and HAL 9000, a highly advanced artificially intelligent computer that controls the entire ship, along with three scientists in hibernation. The voyage gradually turns into a terrifying journey: HAL discovers that David and Frank plan to shut down its mainframe. To HAL, being shut down means being killed, so it decides to strike first. It kills the three hibernating scientists, stages a fake malfunction to lure Frank outside the ship for repairs, and then uses one of its pods to cut Frank's oxygen line, leaving him to suffocate.
Another topic: given how dangerous AI already is, can we stop it, as the group that recently proposed pausing AI development suggested? Geoffrey thinks stopping is no longer possible. After Google invented the Transformer in 2017, it was consistently cautious about deploying the technology, but OpenAI used it to build GPT, and then Microsoft decided to roll the technology out, at which point Google had little choice, much like the nuclear arms race during the Cold War.
Selected Q&A from the audience-question session:
Q: "Asking questions is one of humanity's most important abilities. From the vantage point of 2023, which question or questions should we focus on most?"
A: "We should ask AI many questions, one of which is: how do we stop them from taking control of us? We can ask AIs about this, but I wouldn't fully trust their answers."
Q: "Training large models requires huge amounts of data. Is AI development currently constrained by data?"
A: "We may have used up all of humanity's textual knowledge, but multimodal data also includes images and video, which contain enormous amounts of data, so we are still far from the data limit."
Q: "Everything AI does is learned from what we teach it. Every stage of human evolution has been driven by thought experiments. If AI cannot do thought experiments, how could its existence threaten us? It cannot truly learn on its own; its self-learning is limited to the models we provide it."
A: "AI can do thought experiments; it can reason. For example, after AlphaZero learned from human game records and mastered the rules of Go, it could train itself. Today's chatbots are similar: they haven't yet learned internal reasoning, but it won't be long before they do."
Q: "Technology is advancing at an exponential rate. Looking at the near and medium term, say a one-, two-, three-, or five-year horizon, perhaps new jobs are being created. From a social perspective, what is the impact of unemployment on society and the economy?"
A: "AI will indeed greatly boost productivity, but higher productivity can actually increase unemployment, making the rich richer and the poor poorer! As the Gini coefficient rises, society becomes more and more violent. Providing everyone with a basic income could alleviate this problem."