The UK wants more AI innovation like ChatGPT. Experts say experiments should stop

2023-03-31
Source: techmonitor

The UK is set to adopt a “pro-innovation”, light-touch approach to regulating artificial intelligence, according to a new government white paper released on Wednesday. But the white paper was launched on the same day that leading industry figures signed an open letter calling for the development of advanced AI systems to be paused while the ethical implications of the technology are considered, setting out a contrasting vision for the future of models like OpenAI’s GPT-4, the technology behind ChatGPT.

The new AI white paper aims to find a balance between consumer safety and the benefits to the economy. (Photo by LeoWolfert/Shutterstock)

The UK approach outlined in the white paper differs from the EU’s AI Act, where firm legislation is being used to regulate and control the use of the technology in the most “at risk” areas, such as healthcare and law. It also runs counter to the arguments put forward in the open letter, co-ordinated by the Future of Life Institute think tank and signed by the likes of Elon Musk and Apple co-founder Steve Wozniak.

The group is calling on “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4”, the recently released multi-modal foundation model from OpenAI that Microsoft Research says shows “the early spark of being AGI”, or artificial general intelligence, a representation of generalised human cognitive abilities in software.

“This pause should be public and verifiable, and include all key actors,” warns the group, whose signatories also include AI researchers at universities including Harvard and at companies like Google-owned AI lab DeepMind. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” they go on to declare.

How to regulate AI

So what is the best way to approach AI regulation? The problem, says Benedict Macon-Cooney, chief policy strategist at the Tony Blair Institute, is that the kind of pause proposed in the Future of Life Institute letter is unlikely to happen.

With this in mind, governments need to engage deeply with AI developers as their work increases in complexity, he says. “The importance of this technology means that government needs to engage deeply with those at the frontier of this development,” Macon-Cooney argues. “OpenAI’s Sam Altman, for example, has suggested that government sits in the firm’s office – something which a forward-thinking government should take up to aid understanding and shape thinking.”

Macon-Cooney believes governments need to build up expertise and technical capabilities around foundation and large language model AI, something the UK announced as part of the Spring Budget, including a new taskforce designed to enable and understand those technologies in the UK market. “We are at the beginning of a new era, which will have an impact on health, education and jobs,” he says. “This will result in displacement, but it will also shape new opportunities. Government needs to help guide this future.”

Sector-by-sector regulation of AI

As part of the AI regulation white paper, all of the UK’s existing regulators would be responsible for regulating the use of AI in their respective sectors. There would be “multi-regulator” tools, including a sandbox, guidelines and a framework, but from health to energy, the regulators would each be responsible for establishing standards and guidelines for operators in their area of the economy.


The white paper also regulates the “use not the development” of artificial intelligence tools, an approach suited to general-purpose AI, where the eventual use is unclear at the time of development. Adam Leon Smith, CTO of AI consultancy Dragonfly and part of the UK delegation to the EU on AI standards, welcomed the UK approach and said the government “should wait a few months before deciding what, if anything, it should do about generative AI” such as ChatGPT, as it is not yet clear which technologies will gain traction.


“The UK is already providing regulators guidance in the form of technical standards,” Leon Smith says. “Although this is also the intent in the EU, they are moving much more slowly.” This, he says, creates “regulatory uncertainty and stifles innovation.”

The lack of any significant mention of generative AI, foundation models and other forms of general purpose AI in the UK government white paper has been criticised by groups like the Ada Lovelace Institute.

The white paper does suggest individual regulators will be able to decide how to regulate LLMs, including issuing specific requirements for developers and deployers to “address risks and implement the cross-cutting principle” which could include transparency requirements on data used to train the model. “At this point, it would be premature to take specific regulatory action in response to foundation models including LLMs. To do so would risk stifling innovation, preventing AI adoption, and distorting the UK’s thriving AI ecosystem,” it says.
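To make the idea of a transparency requirement concrete, here is a minimal sketch of what a machine-readable training-data disclosure from a model developer to a regulator might look like. Everything in it – the field names, the TrainingDataDisclosure structure and the “Example AI Ltd” details – is a hypothetical illustration, not anything specified in the white paper.

# Hypothetical sketch of a training-data transparency disclosure of the
# kind a sector regulator could request from an LLM developer. All field
# names and values are illustrative assumptions, not from the white paper.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TrainingDataDisclosure:
    model_name: str
    developer: str
    data_sources: list[str]      # broad categories, not the raw data itself
    personal_data_used: bool     # relevant to GDPR-style obligations
    data_cutoff: str             # e.g. "2021-09"
    mitigations: list[str] = field(default_factory=list)

disclosure = TrainingDataDisclosure(
    model_name="example-llm-1",
    developer="Example AI Ltd",
    data_sources=["licensed text corpora", "public web crawl"],
    personal_data_used=True,
    data_cutoff="2021-09",
    mitigations=["PII filtering", "deduplication"],
)

# Serialise the disclosure as JSON, as it might be filed with a regulator.
print(json.dumps(asdict(disclosure), indent=2))

A per-sector regime of the kind the white paper describes would presumably vary the required fields by regulator; the point of the sketch is simply that such a disclosure can be standardised without exposing the training data itself.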

A need for ethical AI by design

Dr Andrew Rogoyski of the University of Surrey’s Institute for People-Centred AI told Tech Monitor the pro-innovation approach was laudable, but that the lack of an overarching regulator and of strong controls on the use of AI means the country is out of step with the US, Europe and China.

“We need a central regulator for AI technology, partially because the individual regulators don’t currently have the individual skills but mainly because AI regulation needs to be joined up across sectors, especially since many AI providers operate across different sectors and don’t want to find themselves operating the same technology under different regimes in different sectors,” Rogoyski says.

“The pace and scale of change in AI development is extraordinary, and everyone is struggling to keep up. I have real concerns that whatever is put forward will be made irrelevant within weeks or months,” he said.

Taking a different approach to those much larger markets could be costly for the AI sector, says Tom Hartwright, partner at law firm Travers Smith. “This flexible approach has seen real success previously, with a proportionate and innovative approach to the regulation of fintechs being hailed as one of the key reasons the UK is a market leader in the sector,” Hartwright says. “The UK’s approach to AI regulation will, however, have to consider the wider global context, where larger markets such as the US and EU have the power to set industry standards, as we have seen with privacy and the EU’s GDPR.”

Ryan Carrier, CEO of ethical AI campaign group ForHumanity, which has produced audit criteria for the use of AI, said caution was important. “It is time that corporations wake up and proactively embrace governance, oversight, and accountability for these tools,” he says. “Corporations exist at the choice of humans to benefit society, not to experiment on us, ignore the harms and respond with ‘thank you for participating in our test, we will make it better next time’.”

He cited the recent ChatGPT privacy breach as evidence of the need for better enforcement of existing regulations such as GDPR. “They are not enforcing the rules effectively enough,” said Carrier, adding that “ForHumanity insists on mandatory independent audits because they provide a proactive, independent review of compliance in advance of harms being committed, rather than relying upon reactive enforcement.”

Lack of enabling legislation for AI regulation

John Buyers, head of AI at law firm Osborne Clarke, said the new white paper doesn’t actually add much to what was already revealed last summer, when the government first set out its AI regulation plans. He says that, unlike under the EU AI Act, no specific categories of AI will be banned, and no new laws will be introduced to give the regulations force. “Instead, all the detail will be devolved to the existing body of UK regulators within their existing remits and using their existing powers,” Buyers says.

This, he says, suggests the government is still in the phase of defining the problem, and hints that the UK will lean towards becoming a “giant sandbox for AI”, seeing whether light-touch regulation is the right approach and making the UK a place to “foster the development of AI”.

Over time, says Buyers, the government will monitor what falls through the gaps in the regulatory regime and whether so many regulators being in play will lead to it becoming cumbersome and “start to damage innovation”.

But on the subject of general AI and whether there should be tighter oversight, or even a “pause” on development, it is probably already too late, argues Michael Queenan, CEO of UK technology company Nephos Technologies. “Unfortunately I think the horse has already bolted, and you can’t ask commercial businesses to stop,” he says. “It has never happened in human history, so why would it happen now?”

Comparing it to trying to stop Henry Ford from building cars or George Stephenson from developing steam trains, Queenan says the impact will be similar in scale to the Industrial Revolution. “People have been talking about the digital revolution for years now, but realistically not a lot actually changed,” he says. “The internet revolution fundamentally changed the way people interacted; the AI revolution will change how companies operate.”

Read more: UK AI regulation white paper dodges ChatGPT questions

