
Oppenheimer Offers Us a Fresh Warning of AI’s Danger

2023-07-28 20:12:06

Eighty years ago, President Franklin D. Roosevelt tasked the young physicist J. Robert Oppenheimer with setting up a secret laboratory in Los Alamos, N.M. There, along with his colleagues, Oppenheimer was charged with developing the world’s first nuclear weapons under the code name the Manhattan Project. Less than three years later, they succeeded. In 1945 the U.S. dropped these weapons on the residents of the Japanese cities of Hiroshima and Nagasaki, killing hundreds of thousands of people.

Oppenheimer became known as “the father of the atomic bomb.” Despite his misplaced satisfaction with his wartime service and technological achievement, he became vocal about the need to contain this dangerous technology.

But the U.S. didn’t heed his warnings, and geopolitical fear instead won the day. The nation raced to deploy ever more powerful nuclear systems with scant recognition of the immense and disproportionate harm these weapons would cause. Officials also ignored Oppenheimer’s calls for greater international collaboration to regulate nuclear technology.

Oppenheimer’s example holds lessons for us today, too. We must not make the same mistake with artificial intelligence as we made with nuclear weapons.

We are still in the early stages of a promised artificial intelligence revolution. Tech companies are racing to build and deploy AI-powered large language models, such as ChatGPT. Regulators need to keep up.

Though AI promises immense benefits, it has already exposed its potential for harm and abuse. Earlier this year the U.S. surgeon general released a report on the youth mental health crisis. It found that one in three teenage girls considered suicide in 2021. The data are unequivocal: big tech is a big part of the problem, and AI will only amplify its manipulation and exploitation of young users. The performance of AI also rests on exploitative labor practices, both domestically and internationally. And massive, opaque AI models that are fed problematic data often exacerbate existing biases in society, affecting everything from criminal sentencing and policing to health care, lending, housing and hiring. In addition, the environmental impacts of running such energy-hungry AI models stress fragile ecosystems already reeling from climate change.

AI also promises to make potentially perilous technology more accessible to rogue actors. Last year researchers asked a generative AI model to design new chemical weapons. It designed 40,000 potential weapons in six hours. An earlier version of ChatGPT generated bomb-making instructions. And a class exercise at the Massachusetts Institute of Technology recently demonstrated how AI can help create synthetic pathogens, which could ignite the next pandemic. By spreading access to such dangerous information, AI threatens to become the computerized equivalent of an assault weapon or a high-capacity magazine: a vehicle for one rogue person to unleash devastating harm at a magnitude never seen before.

Yet companies and private actors are not the only ones racing to deploy untested AI. We should also be wary of governments pushing to militarize AI. We already have a nuclear launch system precariously perched on mutually assured destruction, one that gives world leaders just a few minutes to decide whether to launch nuclear weapons in the event of a perceived incoming attack. AI-powered automation of nuclear launch systems could soon remove the “human in the loop,” a necessary safeguard to ensure that faulty computerized intelligence doesn’t lead to nuclear war, which has come close to happening multiple times already. A military automation race, designed to give decision-makers greater ability to respond in an increasingly complex world, could lead to conflicts spiraling out of control. If countries rush to adopt militarized AI technology, we will all lose.

As in the 1940s, there is a critical window to shape the development of this emerging and potentially dangerous technology. Oppenheimer recognized that the U.S. should work with even its deepest antagonists to internationally control the dangerous side of nuclear technology while still pursuing its peaceful uses. Castigating the man and the idea alike, the U.S. instead kicked off a vast Cold War arms race by developing hydrogen bombs, along with related costly and occasionally bizarre delivery systems and atmospheric testing. The resultant nuclear-industrial complex disproportionately harmed the most vulnerable: uranium mining and atmospheric testing caused cancer among groups that included residents of New Mexico, Marshallese communities and members of the Navajo Nation. The wasteful spending, opportunity cost and impact on marginalized communities were incalculable, to say nothing of the numerous close calls and the proliferation of nuclear weapons that ensued. Today we need both international cooperation and domestic regulation to ensure that AI develops safely.

Congress must act now to regulate tech companies and ensure that they prioritize the collective public interest. Congress should start by passing my Children’s Online Privacy Protection Act, my Algorithmic Justice and Online Transparency Act and my bill prohibiting the launch of nuclear weapons by AI. But that’s just the beginning. Guided by the White House’s Blueprint for an AI Bill of Rights, Congress needs to pass broad regulations to stop this reckless race to build and deploy unsafe artificial intelligence. Decisions about how and where to use AI cannot be left to tech companies alone; they must center the communities most vulnerable to exploitation and harm from AI. And we must be open to working with allies and adversaries alike to avoid both military and civilian abuses of AI.

At the start of the nuclear age, rather than heed Oppenheimer’s warning on the dangers of an arms race, the U.S. fired the starting gun. Eight decades later, we have a moral responsibility and a clear interest in not repeating that mistake.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.
