
UK AI regulation must be stronger to match global tech leadership ambitions

2023-07-20 07:09:09

The UK needs stronger artificial intelligence regulation if it wants to set the global agenda. That is the view of a new report from the Ada Lovelace Institute, the UK’s AI research organisation, which says the plans set out in the government’s AI white paper are lacking. The report calls for an AI ombudsman, stricter legislation to protect users from harm and mandatory reporting from foundation model developers.

The Ada Lovelace Institute, named after Ada Lovelace (pictured), one of the first computer programmers, has launched a new report on AI regulation. (Photo by Donaldson Collections/Getty Images)

Demand for AI services and technology has grown from a trickle to a flood in recent months, spurred by the success of OpenAI’s chatbot ChatGPT. While tech vendors have rushed to infuse their products with AI, the heightened interest in the technology has prompted a rush to regulation, with approaches varying from strict to almost non-existent.

The EU’s new AI Act places transparency and safety requirements on developers, as well as limits on high-risk use cases. In the UK, the approach was outlined in the AI white paper, published earlier this year, which sets out a “light-touch, pro-innovation” approach to regulation that leaves oversight to existing regulators.

This Ada Lovelace Institute paper, entitled Regulating AI in the UK, comes on the same day UK Foreign Secretary James Cleverly is set to call for a coordinated international response to AI. Chairing a session of the UN Security Council, he is expected to say that no country will be untouched by AI “so we must involve and engage the widest coalition of international actors from all sectors”.

The UK approach has been widely criticised, labelled by some experts as “no regulation at all”. The UK Labour Party has called for a national body to oversee AI regulation, as well as stricter reporting requirements, and the SNP in Scotland wants a national discussion. Rishi Sunak is pushing for global cooperation and standards, announcing a global AI safety summit in the UK to be held later this year.

The problem with this approach, warns the Ada Lovelace Institute report, is that without effective and robust national standards and regulations it will be impossible to get international agreement on how AI should be regulated. It says: “The UK must strengthen its AI regulation proposals to improve legal protections, empower regulators and address urgent risks of cutting-edge models.”

Coverage, capability and urgency

The report analyses different facets of the UK’s AI regulation plans, including the white paper, the Data Protection and Digital Information Bill, the summit proposal and the £100m Foundation Model Taskforce headed up by entrepreneur Ian Hogarth.

It outlines a trio of tests that can be used to monitor the UK’s approach to AI regulation and provides recommendations to ensure they can be met. These recommendations were drawn up after workshops with experts from industry, civil society and academia, and put through independent legal analysis before publication.


The first test is coverage: the extent to which legislation covers the development, use and risks of AI. The institute found that current regulation and legislation leave many areas without regulatory coverage, including recruitment, policing and government itself.


The government approach set out in the white paper is to rely on existing legislation, rather than create custom AI legislation, but legal analysis by data rights legal company AWO found that the protections from UK GDPR, the Equality Act and other laws often fail to protect people from harm or provide a viable route to redress.

The institute recommends the creation of a new AI ombudsman to directly support people and organisations affected by AI. The report’s authors also recommend reviewing existing protections, legislating to introduce stronger ones, and rethinking the data protection bill in light of its implications for AI regulation.

The second test concerns resourcing and capability. The institute expressed concern over whether existing regulators, and others involved in setting standards and guidelines, will have the resources needed to do the job they have been tasked with effectively. This includes ensuring regulators have the necessary powers to take action where needed.

AI ombudsman and statutory principles

The authors recommend creating a new statutory duty requiring regulators to consider the AI principles. This could include a common set of powers for all regulators and a dramatic increase in funding for AI safety research. The report also proposes funding to allow civil society, not just industry and government, to be involved in AI regulation.

Echoing comments from Labour on the need to speed up AI regulation, the institute says the current government timeline of a year to evaluate and iterate is too slow. There are significant harms associated with AI use today, and these are being felt disproportionately by the most marginalised in society. “The pace at which foundation models are being utilised risks scaling and exacerbating these harms,” the report warns.

The report argues for robust governance of foundation models, underpinned by legislation, and for a review of all existing legislation against those models. It also calls for mandatory reporting requirements for foundation model developers such as OpenAI, Anthropic and Google’s DeepMind, for pilot projects in government to build expertise and monitoring capacity, and for assurances that a diverse range of voices, not just industry and government, will be represented at the AI Safety Summit.

Michael Birtwistle, associate director at the Ada Lovelace Institute, said the government recognises that the UK has a unique opportunity to be a world leader in AI regulation, but that its credibility rests on delivering a world-leading regulatory regime at home before pushing for global agreement. “Efforts towards international coordination are very welcome, but they are not sufficient,” he said. “The government must strengthen its domestic proposals for regulation if it wants to be taken seriously on AI and achieve its global ambitions.”

Alex Lawrence-Archer, a solicitor at AWO, the legal company the institute used to review existing legislation, said that for ordinary people to be effectively protected “we need regulation, strong regulators, rights to redress and realistic avenues for those rights to be enforced”. He added: “Our legal analysis shows that there are significant gaps which mean that AI harms may not be sufficiently prevented or addressed, even as the technology that threatens to cause them becomes increasingly ubiquitous.”

Read more: UK AI taskforce gets £100m to take on ChatGPT

Topics in this article: AI, Regulation
