
AI Is an Existential Threat—Just Not the Way You Think

2023-07-14 07:41:04

The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

The rise of ChatGPT and similar artificial intelligence systems has been accompanied by a sharp increase in anxiety about AI. For the past few months, executives and AI safety researchers have been offering predictions, dubbed “P(doom),” about the probability that AI will bring about a large-scale catastrophe.

Worries peaked in May 2023 when the nonprofit research and advocacy organization Center for AI Safety released a one-sentence statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The statement was signed by many key players in the field, including the leaders of OpenAI, Google and Anthropic, as well as two of the so-called “godfathers” of AI: Geoffrey Hinton and Yoshua Bengio.

You might ask how such existential fears are supposed to play out. One famous scenario is the “paper clip maximizer” thought experiment articulated by Oxford philosopher Nick Bostrom. The idea is that an AI system tasked with producing as many paper clips as possible might go to extraordinary lengths to find raw materials, like destroying factories and causing car accidents.

In a less resource-intensive variation, an AI tasked with procuring a reservation at a popular restaurant shuts down cellular networks and traffic lights in order to prevent other patrons from getting a table.

Office supplies or dinner reservations, the basic idea is the same: AI is fast becoming an alien intelligence, good at accomplishing goals but dangerous because it won’t necessarily align with the moral values of its creators. And, in its most extreme version, this argument morphs into explicit anxieties about AIs enslaving or destroying the human race.

Actual harm

In the past few years, my colleagues and I at UMass Boston’s Applied Ethics Center have been studying the impact of engagement with AI on people’s understanding of themselves, and I believe these catastrophic anxieties are overblown and misdirected.

Yes, AI’s ability to create convincing deep-fake video and audio is frightening, and it can be abused by people with bad intent. In fact, that is already happening: Russian operatives likely attempted to embarrass Kremlin critic Bill Browder by ensnaring him in a conversation with an avatar for former Ukrainian President Petro Poroshenko. Cybercriminals have been using AI voice cloning for a variety of crimes – from high-tech heists to ordinary scams.

AI decision-making systems that offer loan approval and hiring recommendations carry the risk of algorithmic bias, since the training data and decision models they run on reflect long-standing social prejudices.

These are big problems, and they require the attention of policymakers. But they have been around for a while, and they are hardly cataclysmic.

Not in the same league

The statement from the Center for AI Safety lumped AI in with pandemics and nuclear weapons as a major risk to civilization. There are problems with that comparison. COVID-19 resulted in almost 7 million deaths worldwide, brought on a massive and continuing mental health crisis and created economic challenges, including chronic supply chain shortages and runaway inflation.

Nuclear weapons probably killed more than 200,000 people in Hiroshima and Nagasaki in 1945, claimed many more lives from cancer in the years that followed, generated decades of profound anxiety during the Cold War and brought the world to the brink of annihilation during the Cuban Missile Crisis in 1962. They have also changed the calculations of national leaders on how to respond to international aggression, as currently playing out with Russia’s invasion of Ukraine.

AI is simply nowhere near gaining the ability to do this kind of damage. The paper clip scenario and others like it are science fiction. Existing AI applications execute specific tasks rather than making broad judgments. The technology is far from being able to decide on and then plan out the goals and subordinate goals necessary for shutting down traffic in order to get you a seat in a restaurant, or blowing up a car factory in order to satisfy your itch for paper clips.

Not only does the technology lack the complicated capacity for multilayer judgment that’s involved in these scenarios, it also does not have autonomous access to sufficient parts of our critical infrastructure to start causing that kind of damage.

What it means to be human

Actually, there is an existential danger inherent in using AI, but that risk is existential in the philosophical rather than apocalyptic sense. AI in its current form can alter the way people view themselves. It can degrade abilities and experiences that people consider essential to being human.

For example, humans are judgment-making creatures. People rationally weigh particulars and make daily judgment calls at work and during leisure time about whom to hire, who should get a loan, what to watch and so on. But more and more of these judgments are being automated and farmed out to algorithms. As that happens, the world won’t end. But people will gradually lose the capacity to make these judgments themselves. The fewer of them people make, the worse they are likely to become at making them.

Or consider the role of chance in people’s lives. Humans value serendipitous encounters: coming across a place, person or activity by accident, being drawn into it and retrospectively appreciating the role accident played in these meaningful finds. But the role of algorithmic recommendation engines is to reduce that kind of serendipity and replace it with planning and prediction.

Finally, consider ChatGPT’s writing capabilities. The technology is in the process of eliminating the role of writing assignments in higher education. If it does, educators will lose a key tool for teaching students how to think critically.

Not dead but diminished

So, no, AI won’t blow up the world. But the increasingly uncritical embrace of it, in a variety of narrow contexts, means the gradual erosion of some of humans’ most important skills. Algorithms are already undermining people’s capacity to make judgments, enjoy serendipitous encounters and hone critical thinking.

The human species will survive such losses. But our way of existing will be impoverished in the process. The fantastic anxieties around the coming AI cataclysm, singularity, Skynet, or however you might think of it, obscure these more subtle costs. Recall T.S. Eliot’s famous closing lines of “The Hollow Men”: “This is the way the world ends,” he wrote, “not with a bang but a whimper.”

This article was originally published on The Conversation. Read the original article.
