The EU AI Act is improving – but still contains fundamental flaws

25 May 2023

As our legislators and societies navigate the rapidly evolving space of artificial intelligence (AI), they have to carefully balance the need to safeguard innovation in the space against any potential risks or ethical concerns that might arise. When it comes to generative AI specifically, any framework should therefore factor in the technology’s ability to create content, its capacity to learn and adapt, and its aptitude for user interaction. As such, new legislation should mandate respect for data protection and sufficient transparency in operations, and hold firms accountable for errors and harms, while keeping generative AI systems workable for companies large and small.

The EU has undoubtedly taken the lead in defining legislative guardrails for AI. In April 2021, the European Commission proposed the first-ever comprehensive legal framework for the technology: the Artificial Intelligence Act (AI Act). The proposal, which is underpinned by a ‘risk-based’ approach, classifies AI systems based on their potential to harm rights and safety. Consequently, high-risk AI systems, such as those operating in the medical or administrative sector, would be subjected to stringent regulations.

The EU Parliament building at dusk. Its recent deliberations over the EU AI Act have resulted in significant improvements, according to a trio of European AI researchers – but the legislation as written still threatens to accidentally suppress innovation in the space. (Photo by Thierry Monasse/Getty Images)

Foundation models in the EU AI Act

Last Thursday, the draft law was approved by the relevant committees within the European Parliament (EP). Not only did MEPs introduce the specific term ‘foundation model’ into the legislation – a term that has gained considerable traction across the computer science community – but they also supported three levels in the regulation of foundation models, including generative AI, as suggested in our recent paper on the subject, ‘Regulating ChatGPT’. These levels include, first, minimum standards for all foundation models; second, specific rules for using foundation models in high-risk scenarios; and, third, rules for collaboration and information exchange along the AI value chain.

In general, the rules for generative AI in the draft legislation, including transparency concerning use, training data, and copyright, as well as content moderation, are a step in the right direction. However, significant problems persist. For one thing, the definition of AI itself in the Act still seems, in our view, excessively broad, including any ‘machine-based system that is designed to operate with varying levels of autonomy’ capable of ‘generating outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.’ 

That potentially covers swathes of technology irrelevant to the Act, including smart meters, planning assistants, rule-based systems and almost any advanced software. The concept of autonomy in the legislation is also excessively wide, omitting any requirement that models have a certain ability to learn or adapt to new environments. Under this definition, an electric toothbrush mechanically shaking its brush over its user’s enamel could, conceivably, be categorised as ‘autonomous’.
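
To make the breadth of that definition concrete, consider the following minimal sketch in Python – a purely hypothetical illustration of ours, not anything drawn from the Act. A fixed if/else thermostat rule involves no learning or adaptation whatsoever, yet it is ‘machine-based’, operates without human intervention and generates ‘decisions that influence physical environments’, and so arguably satisfies the Act’s definition of an AI system:

```python
def thermostat_decision(room_temp_c: float, target_c: float = 21.0) -> str:
    """A fixed rule: no learning, no adaptation, no statistical model."""
    if room_temp_c < target_c - 0.5:
        return "HEAT_ON"   # a 'decision' that influences a physical environment
    if room_temp_c > target_c + 0.5:
        return "HEAT_OFF"
    return "HOLD"

print(thermostat_decision(18.2))  # -> HEAT_ON
```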

Most importantly, the demand for risk assessment, mitigation, and management for all foundation models will prove daunting for small and medium-sized enterprises (SMEs) developing these systems. With limited compliance resources, they will be unable to work through the sprawling catalogue of hypothetical risk scenarios and implement the associated risk management systems. Arguably, only big tech companies will muster the resources to meet these requirements, helping to solidify their dominance in the space while simultaneously driving the industry’s core activities beyond the EU’s borders.

The ‘ChatGPT Rule’, known to the initiated as Art. 28b(4) of the AI Act (EP version), is also flawed. While its transparency obligations go in the right direction, not least in requiring AI service providers to make clear to users that they are dealing with an AI system, the legislation should also impose at least some duties on those generating AI content online, not least to help combat the spread of fake news and other misinformation. The transparency obligations in the legislation should also extend to professional users and to social media contexts. Conversely, non-professional users outside social media contexts could be exempted, since addressees would have no legitimate interest in knowing about AI involvement in, for example, the writing of a birthday card.
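
In practice, such a duty could be as light-touch as attaching machine-readable provenance to generated content at the point of publication. The sketch below is our own illustration of the idea; the field names are hypothetical and no such schema is prescribed by the Act:

```python
import json

def label_ai_content(text: str, generator: str) -> str:
    """Wrap generated text in a simple, machine-readable provenance record."""
    return json.dumps({
        "content": text,
        "ai_generated": True,     # the disclosure itself
        "generator": generator,   # e.g. the model or service used
    })

print(label_ai_content("Draft press release ...", "some-foundation-model"))
```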

The EU AI Act is the first comprehensive attempt at legislative regulation of all things artificial intelligence. Recent advances in generative AI, however, have complicated internal deliberations over the incoming law. (Photo by Tada Images/Shutterstock)

Copyright woes

Compliance with EU law is mandatory – the AI Act (EP version) reaffirms this principle, while introducing cautious ex-ante compliance duties. But this provision could be more robust. In our view, the mechanisms of the Digital Services Act (DSA) should be incorporated to give a clear, actionable framework, such as mandatory notice-and-action mechanisms and trusted flaggers. These measures would decentralise control over AI output, solidify adherence to the law, and ensure a safer AI ecosystem.
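
As a rough illustration of what borrowing the DSA’s machinery might look like, consider the sketch below. The data model and the priority rule are hypothetical constructions of ours; neither the DSA nor the AI Act defines such a schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Notice:
    content_id: str                # the piece of AI output being flagged
    reporter_id: str
    reason: str                    # e.g. alleged illegality or policy breach
    trusted_flagger: bool = False  # DSA-style priority reporting channel
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def triage(queue: list[Notice]) -> list[Notice]:
    """Handle trusted-flagger notices first, then the rest in arrival order."""
    return sorted(queue, key=lambda n: (not n.trusted_flagger, n.received_at))
```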

Article 28b(4)(c) of the AI Act also deals with copyrighted material in training data, the existence of which must be disclosed. While a commendable idea, this provision is fraught with challenges. The question of what constitutes copyrightable material is often disputed among experts, while conducting due diligence along these lines will inevitably prove daunting for developers processing vast amounts of data. A potentially over-inclusive disclosure that also covers works of uncertain copyright status should suffice. This approach would prevent exorbitant due-diligence costs and place the onus of disputing copyright on individual authors – who may then decide if they believe their work is copyrightable and what course of action to take.
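
The over-inclusive approach we have in mind could be as simple as the following sketch, in which anything that is – or merely might be – copyrighted ends up in the disclosure. The record format is our own hypothetical illustration; the Act prescribes no particular schema:

```python
from dataclasses import dataclass

@dataclass
class TrainingDataRecord:
    source_url: str
    copyright_status: str  # "copyrighted", "public_domain" or "uncertain"

def disclosure_list(records: list[TrainingDataRecord]) -> list[str]:
    """Disclose every work that is, or might be, protected by copyright."""
    return [r.source_url for r in records
            if r.copyright_status in ("copyrighted", "uncertain")]

corpus = [
    TrainingDataRecord("https://example.com/novel", "copyrighted"),
    TrainingDataRecord("https://example.com/1890-essay", "public_domain"),
    TrainingDataRecord("https://example.com/forum-post", "uncertain"),
]
print(disclosure_list(corpus))  # discloses both the novel and the forum post
```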

Overall, we believe that the draft legislation is heading in the right direction – but these deficiencies still threaten to derail generative AI development in the EU and beyond. Ultimately, risk management must be clearly use-case-specific and application-oriented to prevent the Act from becoming an impediment to AI design and deployment in Europe. Our collective aim should be to strike the right balance between protecting individuals and society from potential harms, and allowing the AI industry to innovate and grow, within meaningful guardrails – in the EU and beyond.

Read more: This is how GPT-4 will be regulated

Topics in this article: EU AI Act
