
White House secures AI safety commitment

2023-07-23 10:31:48

The largest AI labs and tech giants have agreed to put their models through independent security and vulnerability testing before they go live. The voluntary agreement was part of a wider deal with the US government on improving AI safety and alignment. It has been signed by seven leading AI companies: Amazon, Anthropic, Inflection, Google, Meta, Microsoft and OpenAI.

President Joe Biden met with leaders from the big AI labs earlier this year to discuss safety and privacy issues (Photo by Kevin Dietsch/Getty Images)

The White House agreement holds the AI labs to a set of voluntary AI safety commitments, including watermarking generated content and testing output for misinformation risk. Alongside testing and security measures, the companies will have to share information on ways to reduce risk and invest in cybersecurity.

The new agreement is seen as an early move by the Biden administration to regulate the development of AI in the US. There are calls from other American lawmakers to introduce EU-style comprehensive legislation, but so far nothing concrete is on the table. The White House said in a statement: “The Biden-Harris Administration is currently developing an executive order and will pursue bipartisan legislation to help America lead the way in responsible innovation.”

Published as a Fact Sheet on the White House website, the agreement commits seven of the largest AI companies to internal and external security testing before release. They also agree to share information on managing AI risks across the industry and with government and civil society. The companies further commit to facilitating third-party discovery and reporting of vulnerabilities, which could take the form of bug bounty programmes such as those run by tech giants like Google.

Each of the seven organisations agreed to begin implementation immediately in work described by the administration as “fundamental to the future of AI”. The White House says the work underscores three important principles for its development – safety, security and trust. “To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety,” the Fact Sheet declares.

Aimed at future AI models

This new agreement on safety primarily applies to future and theoretically more powerful AI models, like GPT-5 from OpenAI and Google’s Gemini. As such, it does not currently apply to existing models GPT-4, Claude 2, PaLM 2 and Titan.

The agreement focuses heavily on the need to earn the public’s trust and includes four areas that signatories need to focus on. This includes the development of a watermarking system to ensure it is clear when content is AI-generated. They also need to report the capabilities, limitations and areas of appropriate and inappropriate use publicly and regularly. 

Each of the companies also agreed to prioritise research on the societal risks AI can pose, specifically around harmful bias, discrimination and privacy. “The track record of AI shows the insidiousness and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them,” says the White House fact sheet.


One of the commitments went beyond protecting against AI risks, covering the use of AI to address some of society's greatest challenges. The White House explained: “From cancer prevention to mitigating climate change to so much in between, AI – if properly managed – can contribute enormously to the prosperity, equality, and security of all.”


In less than a year general-purpose artificial intelligence has become one of the most important regulatory topics in technology. Countries around the world are investing and investigating how to best ensure AI is built, deployed and used safely. 

The UK government has already secured agreement from OpenAI, Anthropic and Google’s DeepMind to provide early access to models for AI safety researchers. This was announced alongside the new £100m Foundation Model Taskforce chaired by Ian Hogarth.

“Policymakers around the world are considering new laws for highly capable AI systems,” said Anna Makanju, OpenAI’s VP of Global Affairs. “Today’s commitments contribute specific and concrete practices to that ongoing discussion. This announcement is part of our ongoing collaboration with governments, civil society organisations and others around the world to advance AI governance.”

Read more: OpenAI adds persistent personality options to ChatGPT

Topics in this article: AI, White House

