
EU commissioner calls for AI code of conduct ‘within months’

2023-06-04 16:20:03
A new “AI code of conduct” should be introduced within months, the European competition commissioner has said. Margrethe Vestager, who has led many of the Commission’s investigations into the behaviour of Big Tech companies, wants both the EU and the US to push a voluntary code for the AI industry as an interim measure until new laws can be drawn up to regulate the powerful technology. Convincing the White House may be an uphill battle, as some US officials are not convinced by the EU approach.

The EU’s Margrethe Vestager says an AI code of conduct is required (Photo by Thierry Monasse/Getty Images)

The EU’s new AI Act is currently going through the legislative process and includes strict guidelines governing biometrics, imposes transparency requirements on AI and bans facial recognition in some public areas. It is the first comprehensive AI legislation outside of China.

Vestager told reporters during an EU-US trade council meeting in Sweden on Wednesday that “we need to act now”. Speaking of the AI Act, she said that “in the best of cases it will take effect in two and a half to three years’ time”. That is “obviously way too late,” she warned.

The surprising success of ChatGPT, following its launch in November last year, has sparked an AI revolution, with companies like Google, Microsoft and Salesforce changing business models and adding generative AI to products. This in turn prompted governments to consider the implication of foundation model AI on national security, jobs and intellectual property rights.

An agreement is needed on specifics and not just general statements about the risks, Vestager warned. She added that the US and EU should drive the process and not rely on companies alone. “I think we can push something that will make us all more comfortable with the fact that generative AI is now in the world and is developing at amazing speeds.”

Industry should be involved in drafting the code of conduct, Vestager said, and the process should move as quickly as possible. “This is the kind of speed you need,” she said, suggesting a timescale of “the coming weeks, a few months” rather than years, which she argued would give society faith in the technology.

Differences in approach between the EU and US

G7 leaders have been meeting to discuss the implications of AI, particularly when it comes to threats to national security through misinformation. They have called for the development of technical standards to keep it trustworthy. 

Companies are also working to improve the trustworthiness of AI. Google’s UK AI lab DeepMind published an “early warning” framework that can flag whether an AI model has the potential to pose serious risk to humanity and a group of industry leaders including OpenAI’s Sam Altman signed an open letter calling for urgent risk mitigation.


While the EU is pursuing a regulation-led approach to controlling AI, the Biden administration is split on the right way to tackle the problem. Some officials in the Commerce Department support legislation similar to the EU’s, but officials in national security roles and at the State Department believe it would put the country at a competitive disadvantage.


Initially the US had looked to be following the EU in regulating the use of AI, particularly in high-risk areas such as law and healthcare. This approach was part of an early framework for AI systems, but the EU has since moved to tighten the rules around foundation model AI.

Individual EU countries are already using existing legislation to tackle the rise of AI. Italy banned ChatGPT until OpenAI complied fully with GDPR, and Google has been slow to roll out new AI tools in the bloc due to issues complying with GDPR.

Speaking to Bloomberg, US National Security Council spokesman Adam Hodge said the administration is working to “advance a cohesive and comprehensive approach to AI-related risks and opportunities”.

Read more: AI safety: industry leaders warn of ‘extinction risk’ for humanity

Topics in this article: AI, EU

