AI Needs Rules, but Who Will Get to Make Them?

2023-11-07 19:01:38

About 150 government and industry leaders from around the world, including Vice President Kamala Harris and billionaire Elon Musk, descended on England this week for the U.K.’s AI Safety Summit. The meeting acted as the focal point for a global conversation about how to regulate artificial intelligence. But for some experts, it also highlighted the outsize role that AI companies are playing in that conversation—at the expense of many who stand to be affected but lack a financial stake in AI’s success.

On November 1 representatives from 28 countries and the European Union signed a pact called the Bletchley Declaration (named after the summit’s venue, Bletchley Park in Bletchley, England), in which they agreed to keep deliberating on how to safely deploy AI. But for one in 10 of the forum’s participants, many of whom represented civil society organizations, the conversation taking place in the U.K. hasn’t been good enough.

Following the Bletchley Declaration, 11 organizations in attendance released an open letter saying that the summit was doing a disservice to the world by focusing on potential future risks—such as terrorists or cybercriminals co-opting generative AI, or the more science-fictional idea that AI could become sentient, wriggle free of human control and enslave us all. The letter said the summit overlooked the already real and present risks of AI, including discrimination, economic displacement, exploitation and other kinds of bias.

“We worried that the summit’s narrow focus on long-term safety harms might distract from the urgent need for policymakers and companies to address ways that AI systems are already impacting people’s rights,” says Alexandra Reeve Givens, one of the statement’s signatories and CEO of the nonprofit Center for Democracy & Technology (CDT). With AI developing so quickly, she says, focusing on rules to avoid theoretical future risks takes up effort that many feel could be better spent writing legislation that addresses the dangers in the here and now.

Some of these harms arise because generative AI models are trained on data sourced from the Internet, and those data contain bias. As a result, such models produce results that favor certain groups and disadvantage others. If you ask an image-generating AI to produce depictions of CEOs or business leaders, for instance, it will show images of middle-aged white men. The CDT’s own research, meanwhile, highlights how non-English speakers are disadvantaged by the use of generative AI because the majority of models’ training data are in English.

More distant future-risk scenarios are clearly a priority, however, for some powerful AI companies, including OpenAI, which developed ChatGPT. And many who signed the open letter think the AI industry has an outsize influence in shaping major relevant events such as the Bletchley Park summit. For instance, the summit’s official schedule described the current raft of generative AI tools with the phrase “frontier AI,” which echoes the terminology used by the AI industry in naming its self-policing watchdog, the Frontier Model Forum.

By exerting influence on such events, powerful companies also play a disproportionate role in shaping official AI policy—a type of situation called “regulatory capture.” As a result, those policies tend to prioritize company interests. “In the interest of having a democratic process, this process should be independent and not an opportunity for capture by companies,” says Marietje Schaake, international policy director at Stanford University’s Cyber Policy Center.

For example, most private companies do not prioritize open-source AI (although there are exceptions, such as Meta’s LLaMA model). In the U.S., two days before the start of the U.K. summit, President Joe Biden issued an executive order that included provisions that some in academia saw as favoring private-sector players at the expense of open-source AI developers. “It could have huge repercussions for open-source [AI], open science and the democratization of AI,” says Mark Riedl, an associate professor of computing at the Georgia Institute of Technology. On October 31 the nonprofit Mozilla Foundation issued a separate open letter that emphasized the need for openness and safety in AI models. Its signatories included Yann LeCun, a professor of AI at New York University and Meta’s chief AI scientist.

Some experts are asking regulators merely to extend the conversation beyond AI companies’ primary worry—existential risk at the hands of some future artificial general intelligence (AGI)—to a broader catalog of potential harms. For others, even this broader scope isn’t good enough.

“While I completely appreciate the point about AGI risks being a distraction and the concern about corporate co-option, I’m starting to worry that even trying to focus on risks is overly helpful to corporations at the expense of people,” says Margaret Mitchell, chief ethics scientist at AI company Hugging Face. (The company was represented at the Bletchley Park summit, but Mitchell herself was in the U.S. at a concurrent forum held by Senator Chuck Schumer of New York State at the time.)

“AI regulation should focus on people, not technology,” Mitchell says. “And that means [having] less of a focus on ‘What might this technology do badly, and how do we categorize that?’ and more of a focus on ‘How should we protect people?’” Mitchell’s circumspection toward the risk-based approach arose in part because so many companies were so willing to sign up to that approach at the U.K. summit and other similar events this week. “It immediately set off red flags for me,” she says, adding that she made a similar point at Schumer’s forum.

Mitchell advocates for taking a rights-based approach to AI regulation rather than a risk-based one. So does Chinasa T. Okolo, a fellow at the Brookings Institution, who attended the U.K. event. “Primary conversations at the summit revolve around the risks that ‘frontier models’ pose to society,” she says, “but leave out the harms that AI causes to data labelers, the workers who are arguably the most essential to AI development.”

Focusing specifically on human rights situates the conversation in an area where politicians and regulators may feel more comfortable. Mitchell believes this will help lawmakers confidently craft legislation to protect more people who are at risk of harm from AI. It could also provide a compromise for the tech companies that are so keen to protect their incumbent positions—and their billions of dollars of investments. “By government focusing on rights and goals, you can mix top-down regulation, where government is most qualified,” she says, “with bottom-up regulation, where developers are most qualified.”
