Yuval Noah Harari argues that AI has hacked the operating system of human civilisation
Annotated by Kuenmou

FEARS OF ARTIFICIAL INTELLIGENCE (AI) have haunted humanity since the very beginning of the computer age. Hitherto these fears focused on machines using physical means to kill, enslave or replace people. But over the past couple of years new AI tools have emerged that threaten the survival of human civilisation from an unexpected direction. AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.
Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artefacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artefacts we created by inventing myths and writing scriptures.
Money, too, is a cultural artefact. Banknotes are just colourful pieces of paper, and at present more than 90% of money is not even banknotes—it is just digital information in computers. What gives money value is the stories that bankers, finance ministers and cryptocurrency gurus tell us about it. Sam Bankman-Fried, Elizabeth Holmes and Bernie Madoff were not particularly good at creating real value, but they were all extremely capable storytellers.
What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think about ChatGPT and other new AI tools, they are often drawn to examples like school children using AI to write their essays. What will happen to the school system when kids do that? But this kind of question misses the big picture. Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of AI tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults.
In recent years the QAnon cult has coalesced around anonymous online messages, known as “Q drops”. Followers collected, revered and interpreted these Q drops as a sacred text. While to the best of our knowledge all previous Q drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon that might be a reality.
On a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities that we think are humans—but are actually AI. The catch is that it is utterly pointless for us to spend time trying to change the declared opinions of an AI bot, while the AI could hone its messages so precisely that it stands a good chance of influencing us.
Through its mastery of language, AI could even form intimate relationships with people, and use the power of intimacy to change our opinions and worldviews. Although there is no indication that AI has any consciousness or feelings of its own, to foster fake intimacy with humans it is enough if the AI can make them feel emotionally attached to it. In June 2022 Blake Lemoine, a Google engineer, publicly claimed that the AI chatbot LaMDA, on which he was working, had become sentient. The controversial claim cost him his job. The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the AI chatbot. If AI can influence people to risk their jobs for it, what else could it induce them to do?
In a political battle for minds and hearts, intimacy is the most efficient weapon, and AI has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade social media has become a battleground for controlling human attention. With the new generation of AI, the battlefront is shifting from attention to intimacy. What will happen to human society and human psychology as AI fights AI in a battle to fake intimate relationships with us, which can then be used to convince us to vote for particular politicians or buy particular products?
Even without creating “fake intimacy”, the new AI tools would have an immense influence on our opinions and worldviews. People may come to use a single AI adviser as a one-stop, all-knowing oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can just ask the oracle to tell me the latest news? And what’s the purpose of advertisements, when I can just ask the oracle to tell me what to buy?
And even these scenarios don’t really capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, just the end of its human-dominated part. History is the interaction between biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which laws and religions shape food and sex.
What will happen to the course of history when AI takes over culture, and begins producing stories, melodies, laws and religions? Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, completely new culture.
At first, AI will probably imitate the human prototypes that it was trained on in its infancy. But with each passing year, AI culture will boldly go where no human has gone before. For millennia human beings have lived inside the dreams of other humans. In the coming decades we might find ourselves living inside the dreams of an alien intelligence.
Fear of AI has haunted humankind for only the past few decades. But for thousands of years humans have been haunted by a much deeper fear. We have always appreciated the power of stories and images to manipulate our minds and to create illusions. Consequently, since ancient times humans have feared being trapped in a world of illusions.
In the 17th century René Descartes feared that perhaps a malicious demon was trapping him inside a world of illusions, creating everything he saw and heard. In ancient Greece Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave all their lives, facing a blank wall. A screen. On that screen they see projected various shadows. The prisoners mistake the illusions they see there for reality.
In ancient India Buddhist and Hindu sages pointed out that all humans lived trapped inside Maya—the world of illusions. What we normally take to be reality is often just fictions in our own minds. People may wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion.
The AI revolution is bringing us face to face with Descartes’ demon, with Plato’s cave, with the Maya. If we are not careful, we might be trapped behind a curtain of illusions, which we could not tear away—or even realise is there.
Of course, the new power of AI could be used for good purposes as well. I won’t dwell on this, because the people who develop AI talk about it enough. The job of historians and philosophers like myself is to point out the dangers. But certainly, AI can help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis. The question we face is how to make sure the new AI tools are used for good rather than for ill. To do that, we first need to appreciate the true capabilities of these tools.
Since 1945 we have known that nuclear technology could generate cheap energy for the benefit of humans—but could also physically destroy human civilisation. We therefore reshaped the entire international order to protect humanity, and to make sure nuclear technology was used primarily for good. We now have to grapple with a new weapon of mass destruction that can annihilate our mental and social world.
We can still regulate the new AI tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, AI can make exponentially more powerful AI. The first crucial step is to demand rigorous safety checks before powerful AI tools are released into the public domain. Just as a pharmaceutical company cannot release new drugs before testing both their short-term and long-term side-effects, so tech companies shouldn’t release new AI tools before they are made safe. We need an equivalent of the Food and Drug Administration for new technology, and we need it yesterday.
Won’t slowing down public deployments of AI cause democracies to lag behind more ruthless authoritarian regimes? Just the opposite. Unregulated AI deployments would create social chaos, which would benefit autocrats and ruin democracies. Democracy is a conversation, and conversations rely on language. When AI hacks language, it could destroy our ability to have meaningful conversations, thereby destroying democracy.
We have just encountered an alien intelligence, here on Earth. We don’t know much about it, except that it might destroy our civilisation. We should put a halt to the irresponsible deployment of AI tools in the public sphere, and regulate AI before it regulates us. And the first regulation I would suggest is to make it mandatory for AI to disclose that it is an AI. If I am having a conversation with someone, and I cannot tell whether it is a human or an AI—that’s the end of democracy.
Or has it?
_______________
Yuval Noah Harari is a historian, philosopher and author of “Sapiens”, “Homo Deus” and the children’s series “Unstoppable Us”. He is a lecturer in the Hebrew University of Jerusalem’s history department and co-founder of Sapienship, a social-impact company.
This article appeared in the By Invitation section of the print edition under the headline "Yuval Noah Harari argues that AI has hacked the operating system of human civilisation"