UK government sets out priorities for AI Safety Summit

2023-09-07 07:07:44
Global cooperation on regulations and research is at the centre of the UK government’s ambitions for the first AI Safety Summit in November. A set of five priorities has been produced for the event, with a focus on the security and alignment of next-generation foundation AI models such as OpenAI’s GPT-5 or Google’s Gemini. 

AI safety research is designed to ensure the safe and trustworthy use of artificial intelligence and ensure it aligns with human values. (Photo by Gorodenkoff/Shutterstock)

Prime Minister Rishi Sunak announced plans for a global summit on the risks posed by AI earlier this year. Officials have since been wrangling academics, AI labs, tech companies and other governments to come to the event in November. It is set to be held at the home of computer science, Bletchley Park near Milton Keynes.

Leading AI labs including OpenAI, Google DeepMind and Anthropic are expected to be at the event and have previously agreed to make future frontier AI models available for safety research. These are the next-generation large language, generative and foundation models that are expected to significantly outperform the current best-in-class.

With the summit taking place on 1–2 November, Secretary of State Michelle Donelan has kick-started the process of negotiations with companies and officials. Donelan also held a roundtable with a cross-section of civil society groups last week to try to assuage fears that the summit would focus on and support Big Tech at the expense of other groups.

The Department for Science, Innovation and Technology (DSIT) is organising the summit and says it will focus on the risks created or exacerbated by the most powerful AI systems, including those yet to be released. Delegates at the summit will also debate the implications, and mitigation methods, for the most dangerous capabilities of these new models including access to information that could undermine security.

The flip side is that it will also focus on the ways safe AI can be used for public good and to improve people’s lives. This includes medical technology, improved transport safety and workplace efficiencies. There are five objectives the government hopes will be addressed.

UK AI Safety Summit: the core objectives

The objectives include creating a shared understanding of the risks posed by frontier AI, as well as of the need for action on those risks. Another objective is a forward process for international collaboration on frontier AI safety, including how to support the creation of national and international frameworks to deliver that collaboration.

Outside of global cooperation efforts, the objectives include establishing measures that organisations should take to increase frontier AI safety, as well as identifying areas for collaboration on AI safety research, some of which are already under way. This will include ways to evaluate model capabilities and to develop global standards that support governance and regulation.

The DSIT select committee recently published a report urging the government to speed up the adoption of AI regulations or risk being left behind. In the report, the MPs reject the need for a pause on the development of next-generation foundation AI models but urge the government to speed up legislation. “Without a serious, rapid and effective effort to establish the right governance frameworks – and to ensure a leading role in international initiatives – other jurisdictions will steal a march and the frameworks that they lay down may become the default even if they are less effective than what the UK can offer.”

Many countries and international organisations such as the EU and UN, as well as the leading AI labs and civil society groups, are already working on AI safety research. The Trades Union Congress recently formed its own task force to address workers’ rights in relation to AI. The OECD, Global Partnership on AI (GPAI) and G7 are building standards, and the UK hopes to capitalise on this work and create global regulatory consensus through the summit.

Professor Gina Neff, executive director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, and co-chair of the new TUC AI task force, said responsible and trustworthy AI can bring huge benefits, but laws have to ensure it works for all. “AI safety isn’t just a challenge for the future and it isn’t just a technical problem,” Neff said. “These are issues that both employers and workers are facing now, and they need the help from researchers, policymakers and civil society to build the capacity to get this right for society.”

Read more: Urgent need for AI workers’ rights legislation, TUC warns

Topics in this article: AI
