‘Unintended harms’ of generative AI pose national security risk to UK, report warns

2023-12-19 01:33:28

Unintended consequences of generative AI use could cause significant harm to the UK’s national security, a new report has warned.

Generative AI could lead to increasingly sophisticated deepfake content being produced, a new report warns. (Photo by Tero Vesalainen/Shutterstock)

The paper from the Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute highlights key areas of concern that need to be addressed to protect the nation from threats posed by these powerful technologies.

The unintended security risks of generative AI

In the report, titled Generative AI and National Security: Risk Accelerator or Innovation Enabler?, the authors point out that conversations about threats have focused primarily on understanding the risks from groups or individuals who set out to inflict harm using generative AI, such as through cyberattacks or by generating child sexual abuse material. Generative AI is expected to amplify the speed and scale of these activities, and Tech Monitor reported this week that security professionals have highlighted the increased risk posed by AI-powered phishing attacks, which enable cybercriminals to generate more authentic-looking communications to lure in victims.

But the report also urges policymakers to plan for the unintentional risks posed by improper use and experimentation with generative AI tools, and excessive risk-taking as a result of over-trusting AI outputs. These risks could stem from the adoption of AI in critical national infrastructure or its supply chains, and the use of AI in public services.

Private sector experimentation with AI could also lead to problems, with the fear of missing out on AI advances potentially clouding judgments about higher-risk use cases, the authors argue.

Generative AI might offer opportunities for the national security community, says Ardi Janjeva, research associate at CETaS at The Alan Turing Institute. But he believes it is “currently too unreliable and susceptible to errors to be trusted in the highest stakes contexts”.

Janjeva said: “Policymakers must change the way they think and operate to make sure that they are prepared for the full range of unintended harms that could arise from improper use of generative AI, as well as malicious uses.”

The research team consulted with over 50 experts across government, academia, civil society and leading private sector companies, with most deeming that unintended harms are not receiving adequate attention compared with adversarial threats national security agencies are accustomed to facing.


The report analyses political disinformation and electoral interference and raises particular concerns about the cumulative effect of different types of generative AI technology working to spread misinformation at scale by creating realistic deepfake videos. Debunking a false AI-generated narrative in the hours or days preceding an election would be particularly challenging, the report warns.


It cites the example of an AI-generated video of a politician delivering a speech at a venue they never attended, which may be seen as more plausible if presented with an accompanying selection of audio and imagery, such as the politician taking questions from reporters and text-based journalistic articles covering the content of the supposed speech.

How to combat AI’s unintended consequences

The Alan Turing Institute says the CETaS report has been released to build on the momentum created by the UK’s AI Safety Summit, which saw tech and political leaders come together to discuss how artificial intelligence can be implemented without causing societal harm.

It makes policy recommendations for the new AI Safety Institute, announced prior to the summit, and other government departments and agencies which could help address both malicious and unintentional risks.

This includes guidance on evaluating AI systems, as well as on the appropriate use of generative AI for intelligence analysis. The report also highlights that autonomous AI agents, a popular early use case for the technology, could accelerate both opportunities and risks in the security environment, and offers recommendations to ensure their safe and responsible use.

Professor Mark Girolami, chief scientist at the Alan Turing Institute, said: “Generative AI is developing and improving rapidly and while we are excited about the many benefits associated with the technology, we must exercise sensible caution about the risks it could pose, particularly where national security is concerned.

“With elections in the US and the UK on the horizon, it is vital that every effort is made to ensure this technology is not misused, whether intentionally or not.”

Read more: The UK is building a £225m AI supercomputer
