
Details of UK AI Safety Institute revealed at Bletchley Park summit

2023-11-06 22:16:34

Tech companies and governments from around the world have backed the UK’s plan for an AI Safety Institute after more details of the organisation were revealed at the AI Safety Summit at Bletchley Park.

Prime Minister Rishi Sunak speaks with US VP Kamala Harris at the end of the AI Summit at Bletchley Park. (Picture by Simon Dawson/No 10 Downing Street)

Prime Minister Rishi Sunak announced plans to create the safety institute, which will test new AI models to pinpoint potential safety issues, in a speech last week. Today it was revealed that the new body will build on the work of the UK’s Frontier AI Taskforce, and be chaired by Ian Hogarth, the tech investor who has been running the taskforce since it was created earlier this year.

Partners buy into Sunak’s UK AI Safety Institute

According to a brochure released today by the government, the institute will carefully test new types of frontier AI before and after they are released, examining the full range of risks these models pose, from social harms such as bias and misinformation to more extreme risks such as humanity losing control of AI systems.

Hogarth will chair the organisation, with the Frontier AI Taskforce’s advisory board, made up of leading industry figures, moving across to the institute too. A CEO will be recruited to run the new organisation, which will work closely with the Alan Turing Institute for data science.

At the Bletchley Park summit, which concludes today, the new AI Safety Institute was backed by governments including the US, Japan and Canada, tech heavyweights such as AWS and Microsoft, and AI labs including OpenAI and Anthropic.

Sunak said: “Our AI Safety Institute will act as a global hub on AI safety, leading on vital research into the capabilities and risks of this fast-moving technology.

“It is fantastic to see such support from global partners and the AI companies themselves to work together so we can ensure AI develops safely for the benefit of all our people. This is the right approach for the long-term interests of the UK.”

AI Safety Summit draws to a close

Whether the UK institute will become the global standard bearer for AI safety research is questionable given that the US government launched its own safety institute earlier this week. The UK says it has agreed a partnership with the US institute, as well as with the government of Singapore, to collaborate on AI safety testing.


The first task for the institute will be to put in place the processes and systems to test new AI models before they launch, including open-source models, the government said.


Governments and tech companies attending the summit agreed to work together on safety testing for AI models, while Yoshua Bengio, a computer scientist who played a key role in the development of deep neural networks, the technology that underpins many AI models, is to produce a report on the state of the science behind artificial intelligence. It is hoped this will help build a shared understanding of the capabilities and risks posed by frontier AI. 

Sam Altman, OpenAI CEO, said: “The UK AI Safety Institute is poised to make important contributions in progressing the science of the measurement and evaluation of frontier system risks. Such work is integral to our mission – ensuring that artificial general intelligence is safe and benefits all of humanity – and we look forward to working with the institute in this effort.”

The AI Safety Summit programme ended this afternoon, with Sunak holding a series of meetings with political leaders, including European Commission president Ursula von der Leyen. Later this evening he will take part in a question and answer session with Tesla CEO Elon Musk, who has not endorsed the new AI Safety Institute.

As reported by Tech Monitor, yesterday 28 countries, including the UK, US and China, signed the Bletchley Declaration, an agreement to work together on AI safety. The government also announced it is funding a £225m supercomputer, Isambard-AI, at the University of Bristol.

Read more: The UK is building a £225m AI supercomputer

