
Will the EU AI Act be the death of open source?

2023-07-31 01:21:00

The EU needs to do more to support the development of open-source AI in the EU AI Act, a group of leading developers and open-source companies has warned. GitHub, Hugging Face and Creative Commons are among the signatories of an open letter to the European Parliament. However, one AI expert told Tech Monitor that open source is not compatible with safe AI deployment, as there is no easy way to take responsibility for misuse.

The EU AI Act includes a requirement for external auditing of AI models that may be incompatible with open-source projects (Photo by Rawpixel.com/Shutterstock)

The open letter was sent to policymakers in the EU and lists a series of suggestions for the European Parliament to consider when drafting the final version of the AI Act. These include clearer definitions of AI components, clarification that hobbyists and researchers working on open-source models do not commercially benefit from their code, and allowance for real-world testing of AI projects.

Governments around the world are wrestling with the best way to tackle AI safety and regulation. Companies like OpenAI and Google are forming alliances to drive safety research for future models, and the UK is pushing for a global approach. The EU has one of the most prescriptive approaches to AI regulation and is set to pass the first comprehensive AI law.

A research paper on the regulation of AI models has been published alongside the open letter. It argues that the EU AI Act will set a global precedent for AI regulation, and that supporting the open-source ecosystem within that regulation will further the goal of managing risk while encouraging innovation.

The act has been designed to be risk-based, taking a different approach to regulation for different types of AI. The companies argue that the rules around open source, and the details of exemptions for non-commercial projects, are not clear enough. They also suggest that the rules around impact assessments, which involve third-party auditors, are too burdensome for what are usually not-for-profit projects.

EU AI Act could be the ‘death of open source’

Ryan Carrier, CEO of AI certification body ForHumanity, told Tech Monitor that the EU AI Act is the “death of open source”, describing the approach to technology development as an “excuse to take no responsibility and no accountability”. Carrier says: “As organisations have a duty of compliance, they can no longer rely upon tools that cannot provide sufficient representations, warranties, and indemnifications on the upstream tool.”

He says this means that someone will have to govern, evaluate and stand behind any open-source product for it to be used in production and be compliant with the act. “Open source is useful as a creative process, but it will have to morph with governance, oversight and accountability to survive,” he says. Carrier believes that “if an open source community can sort out the collections of all compliance requirements and identify a process for absorbing accountability and upstream liability, then they could continue.”

In contrast, Adam Leon Smith, chair of the BCS F-TAG group, welcomed the calls for greater acceptance of open-source AI. “Much of what we need from AI technology providers to ensure AI is used safely is transparency, which is a concept built into open-source culture,” Leon Smith says. “Regulation should focus on the use of AI, not the creation of it. As such, regulators should be careful to minimise constraints on open source.”


Amanda Brock, CEO of UK open-source body OpenUK, told Tech Monitor that the top-down approach of the EU AI Act makes it very difficult for anyone but the biggest companies to understand and comply with the legislation. She agreed that codes of conduct and acceptable use policies are incompatible with open-source software licensing, which by default allows anyone to use the code for any purpose.


AI development ‘at a crossroads’

A solution could be a further shift towards the approach used by Meta in licensing its Llama 2 large language model. It isn’t available under a true open-source licence, as there are some restrictions on how it can be shared and who can use it, including paid licensing for very high-end use cases. This, says Brock, “means that we are in an evolving space and the Commission’s approach to legislation will not work.”

Brock also agreed there is a need to change how open-source projects are handled and classified in the EU AI Act before it gets final approval. “The default is that organisations will be treated as commercial enterprises which are required to meet certain compliance standards under the act with respect to open-source software,” she explained. 

Some organisations are exempt if they meet the criteria and comply with certain aspects of good practice. But, says Brock, there are concerns that the criteria for exemption are not wide enough to cover the full spectrum of open-source projects. This will leave those working on specific products caught under the act and unable to comply.

Victor Botev, CTO and co-founder of open research platform Iris.ai, says Europe has some of the best open-source credentials in the world and that EU regulators need to take steps to keep the sector operational. “With companies like Meta releasing commercial open source licenses for AI models like LLaMA 2, even US industry giants have pivoted to harness the power of the open-source community.”

“We are at a crossroads in AI development,” Botev says. “One way points towards closed models and black box development. Another road leads to an ascendant European tech scene, with the open source movement iterating and experimenting with AI far faster and more effectively than any one company – no matter how large – can do alone.”

Botev believes achieving the open model requires greater awareness among the wider population of the benefits of AI and of how these systems actually operate. This, he says, will spawn more fruitful discussions on how to regulate them without resorting to hyperbolic dialogue. “In this sense, the community itself can act as an ally to regulators, ensuring that we have the right regulations and that they’re actionable, sensible and fair to all,” he explains.

Read more: Frontier Model Forum: OpenAI and partners launch AI safety group

Topics in this article: AI, EU AI Act
