
UK and allies launch cybersecurity guidelines for AI developers

2023-11-27 23:57:16

Cybersecurity guidelines for developers working on new AI systems have been unveiled by the UK and 17 of its allies. It is the latest move in the government's attempt to take a leading role in the debate around AI safety, following the international summit held at Bletchley Park earlier this month.

The NCSC has launched new cybersecurity guidelines for AI developers. (Photo by T. Schneider/Shutterstock)

The guidelines aim to raise the cybersecurity levels of artificial intelligence and help ensure that it is designed, developed, and deployed securely, the UK's National Cyber Security Centre (NCSC) said.

They will be officially launched this afternoon at an event hosted by the NCSC, attended by 100 partners from industry and the public sector.

New cybersecurity guidelines for AI development launched

The Guidelines for Secure AI System Development have been developed by the NCSC and the US’s Cybersecurity and Infrastructure Security Agency (CISA) in cooperation with industry experts and 21 other international agencies and ministries from across the world.

They will help developers of any systems that use AI to make informed cybersecurity decisions at every stage of the development process, the NCSC said. This includes systems that have been created from scratch as well as those built on top of tools and services provided by others.

It is hoped they will help ensure developers take a “secure by design” approach to building AI systems, with cybersecurity baked into new designs.

NCSC CEO Lindy Cameron said: “We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up.

“These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.”


The guidelines are broken down into four key areas – secure design, secure development, secure deployment, and secure operation and maintenance – each with suggested behaviours to help improve security.


CISA Director Jen Easterly said the guidelines are a “key milestone in our collective commitment – by governments across the world – to ensure the development and deployment of artificial intelligence capabilities that are secure by design.”

Easterly added: “The domestic and international unity in advancing secure by design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology evolution.

“This joint effort reaffirms our mission to protect critical infrastructure and reinforces the importance of cross-border collaboration in securing our digital future.”

UK attempts to lead conversation on AI safety

Alongside the UK and US, countries endorsing the guidelines include Germany, France and South Korea.

They build on the outcomes of the international AI safety summit, convened by the UK government at Bletchley Park and attended by government officials and the world’s leading technology vendors and AI labs.

The event saw the Bletchley Declaration agreed, with signatories pledging to work together closely on AI safety. Developers such as OpenAI and Anthropic also agreed to submit their next generation, or frontier, AI models for inspection by the UK’s recently announced AI safety institute. Prime Minister Rishi Sunak said the institute would be the first of its kind in the world, though the US government is also setting up a similar body.

Technology Secretary Michelle Donelan said: “I believe the UK is an international standard bearer on the safe use of AI. The NCSC’s publication of these new guidelines will put cyber security at the heart of AI development at every stage so protecting against risk is considered throughout.”

Read more: Rhysida claims it hacked the British Library

