
UK government urged to widen scope of AI safety summit beyond frontier models

2023-10-07 19:45:55

The UK government has been urged to expand the scope of its upcoming AI safety summit. The international event in November will bring together political and business leaders from around the world and is currently set to focus on next-generation, highly advanced frontier AI models, but AI research body the Ada Lovelace Institute has warned that other forms of artificial intelligence also pose risks and should be considered.

The Ada Lovelace Institute says there are models in use today that are capable of causing significant harm. (Photo by Blue Planet Studio/Shutterstock)

The Department for Science, Innovation and Technology (DSIT) says it will engage with civil society groups, academics and charities to examine different aspects of risk associated with AI, including through a series of fringe events and talks in the run-up to the Bletchley Summit. However, given the severity of the risks associated with frontier models, the department says these will remain the focus.

Frontier models are defined as AI models larger or more powerful than those currently available. These will likely include multimodal models such as the upcoming GPT-5 from Microsoft-backed OpenAI, Google's Gemini, and Claude 3 from Amazon-backed Anthropic.

DSIT says the summit is focusing on frontier models because of the significant risk of harm they pose and the rapid pace of their development. The summit will cover two key risk areas: misuse, particularly the ways criminals could use AI in biological or cyberattacks, and loss of control, which could occur if AI systems fail to align with human values.

Current AI systems can cause significant harms

Michael Birtwistle, associate director of law and policy at the Ada Lovelace Institute, said there is considerable evidence current AI systems are causing significant harm. This ranges from “deep fakes and disinformation to discrimination in recruitment and public services”. Birtwistle said that “tackling these challenges will require investment, leadership and collaboration”. 

He said: “We’ve welcomed the government’s commitment to international efforts on AI safety but are concerned that the remit of the summit has been narrowed to focus solely on the risks of so-called ‘frontier AI’.

“Pragmatic measures, such as pre-release testing, can help address hypothetical AI risks while also keeping people safe in the here-and-now.”

While international cooperation is an important part of the AI safety puzzle, Birtwistle said any action will need to be grounded in evidence and backed up by robust domestic legislation covering issues such as bias, misinformation and data privacy.


“Artificial intelligence will undoubtedly transform our lives for the better if we grip the risks,” Technology Secretary Michelle Donelan said last week when unveiling the introduction to the summit documentation. 


“We want organisations to consider how AI will shape their work in the future, and ensure that the UK is leading in the safe development of those tools.

“I am determined to keep the public informed and invested in shaping our direction, and these engagements will be an important part of that process.”

Read more: UK government plans charm offensive ahead of AI Safety Summit

