How do you regulate advanced AI chatbots like ChatGPT and Bard?

2023-02-20

“AI will fundamentally change every software category,” said Microsoft CEO Satya Nadella on Tuesday, as he announced that OpenAI’s generative AI technology was coming to the Bing search engine to offer users what Microsoft hopes will be a richer search experience.

The success of OpenAI’s ChatGPT and the upcoming release of Google’s Bard mean the debate over AI regulation has ramped up. (Photo Illustration by Jonathan Raa/NurPhoto via Getty Images)

But how to regulate tools such as OpenAI’s chatbot ChatGPT – which can generate almost any type of content from a few words and are trained on the world’s knowledge – is a question puzzling policymakers around the world. The solution will involve assessing risk, one expert told Tech Monitor, and certain types of content will need to be monitored more closely than others.

Within two months of launch, the AI chatbot ChatGPT became the fastest-growing consumer product in history, with more than 100 million monthly active users in January alone. It has prompted some of the world’s largest companies to pivot to or speed up AI rollout plans and has given a new lease of life to the conversational AI sector.

Microsoft is embedding conversational AI in its browser, search engine and broader product range, while Google is planning to do the same with the chatbot Bard and other integrations into Gmail and Google Cloud, several of which it showcased at an event in Paris today.

Other tech giants such as China’s Baidu are also getting in on the act with chatbots of their own, while start-ups and smaller companies including Jasper and Quora are bringing generative and conversational AI to the mainstream consumer and enterprise markets.

This comes with real risks, from widespread misinformation and harder-to-spot phishing emails through to misdiagnosis and malpractice if the tools are used for medical information. There is also a high risk of bias if the data used to train the model isn’t diverse. While Microsoft has a retrained model that is more accurate, and other providers such as AI21 are working on verifying generated content against live data, the risk of “real-looking but completely inaccurate” responses from generative AI is still high.

Last week, Thierry Breton, the EU commissioner for the internal market, said that the upcoming EU AI act would include provisions targeted at generative AI systems such as ChatGPT and Bard. “As showcased by ChatGPT, AI solutions can offer great opportunities for businesses and citizens, but can also pose risks,” Breton told Reuters. “This is why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data.”

Breton and his colleagues will have to act fast, as new AI rules drawn up in the EU and elsewhere may not be ready to cope with the challenges posed by these advanced chatbots.

AI regulation: developers will need to be ‘ethical by design’

Analytics software provider SAS outlined some of the risks posed by AI in a recent report, AI & Responsible Innovation. Author Dr Kirk Borne said: “AI has become so powerful, and so pervasive, that it’s increasingly difficult to tell what’s real or not, and what’s good or bad”, adding that this technology is being adopted faster than it can be regulated.

Dr Iain Brown, head of data science at SAS UK & Ireland, said governments and industry both have a role to play in ensuring AI is used for good, not harm. This includes the use of ethical frameworks to guide the development of AI models and strict governance to ensure fair, transparent and equitable decisions from those models. “We test our AI models against challenger models and optimise them as new data becomes available,” Brown explained.
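The “challenger model” testing Brown describes is commonly implemented as champion/challenger evaluation on held-out data. The sketch below is a minimal illustration of that idea, assuming a scikit-learn setup; the model types, metric and synthetic data are illustrative assumptions, not a description of SAS’s actual pipeline.

```python
# Minimal champion/challenger sketch (illustrative assumptions throughout).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Stand-in for "new data becoming available".
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

champion = LogisticRegression(max_iter=1000).fit(X_train, y_train)   # current production model
challenger = GradientBoostingClassifier().fit(X_train, y_train)      # candidate replacement

champ_auc = roc_auc_score(y_test, champion.predict_proba(X_test)[:, 1])
chall_auc = roc_auc_score(y_test, challenger.predict_proba(X_test)[:, 1])

# Promote the challenger only if it beats the champion on held-out data.
if chall_auc > champ_auc:
    print(f"Promote challenger (AUC {chall_auc:.3f} > {champ_auc:.3f})")
else:
    print(f"Keep champion (AUC {champ_auc:.3f} >= {chall_auc:.3f})")
```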

Other experts believe companies producing the software will be charged with mitigating the risk the software represents, with only the highest-risk activities facing tighter regulation.

Edward Machin, a data, privacy and cybersecurity associate at law firm Ropes & Gray, told Tech Monitor it is inevitable that technology like ChatGPT, which seemingly appeared overnight, will move faster than regulation, especially in an area like AI that is already difficult to regulate. “Although regulation of these models is going to happen, whether it is the right regulation, or at the right time, remains to be seen,” he says.

“Providers of AI systems will bear the brunt of the legislation, but importers and distributors – in the EU at least – will also be subject to potentially onerous obligations,” Machin adds. This could put some developers of open-source software in a difficult position. “There is also the thorny question of how liability will be handled for open-source developers and other downstream parties, which may have a chilling effect on willingness of those folks to innovate and conduct research,” Machin says.

AI, privacy and GDPR

Aside from the overall regulation of AI, there are also questions around the copyright of generated content and around privacy, Machin continues. “For example, it’s not clear whether developers can easily – if at all – address individuals’ deletion or rectification requests, nor how they get comfortable with scraping large volumes of data from third-party websites in a way that likely breaches those sites’ terms of service,” he says.

Lilian Edwards, Professor of Law, Innovation and Society at Newcastle University, who works on AI regulation and with the Alan Turing Institute, said some of these models will come under GDPR, and this could lead to orders being issued to delete training data or even the algorithms themselves. It may also spell the end of the widescale scraping of the internet that currently powers search engines like Google, if website owners lose out on traffic to AI searches.

The big problem, says Edwards, is the general purpose nature of these models. This makes them difficult to regulate under the EU AI Act, which has been drafted to work on the basis of risk, as it is difficult to judge what the end user is going to be doing with the technology due to the fact it is designed for multiple use cases. She said the European Commission is trying to add rules to govern this type of technology but is likely to do this after the act becomes law, which could happen this year.

Enforcing algorithmic transparency could be one solution. “Big Tech will start lobbying to say ‘you can’t put these obligations on us as we can’t imagine every future risk or use’,” says Dr Edwards. “There are ways of dealing with this that are less or more helpful to Big Tech, including making the underlying algorithms more transparent. We are in a head-in-the-sand moment. Incentives ought to be towards openness and transparency to better understand how AI makes decisions and generates content.”

“It is the same problem you get with much more boring technology: that tech is global, bad actors are global and enforcement is incredibly difficult,” she said. “General-purpose AI doesn’t match the structure of the AI Act, which is what the fight is over now.”

Adam Leon Smith, CTO of AI consultancy DragonFly, has worked in technical AI standardisation with UK and international standards development organisations and acted as the UK industry representative to the EU AI standards group. “Regulators globally are increasingly realising that it is very difficult to regulate technology without consideration of how it is actually being used,” he says.

He told Tech Monitor that accuracy and bias requirements can only be considered in the context of use, and that requirements around risks, rights and freedoms are also difficult to assess before the technology reaches widescale adoption. The problem, he says, is that large language models are general-purpose AI.

“Regulators can force transparency and logging requirements on the technology providers,” Leon Smith says. “However, only the user – the company that operates and deploys the LLM system for a particular purpose – can understand the risks and implement mitigations like humans in the loop or ongoing monitoring.”

AI regulatory debate looming

It is a large-scale debate that is looming over the European Commission and hasn’t even started in the UK, but one that regulators such as data watchdog the Information Commissioner’s Office and its counterpart for financial markets, the Financial Conduct Authority, will have to tackle. Eventually, Leon Smith believes, as regulators increase their focus on the issue, AI providers will start to list the purposes for which the technology “must not be used”, including issuing legal disclaimers before a user signs in to put them outside the scope of “risk-based regulatory action”.

Current best practices for managing AI systems “barely touch on LLMs, it is a nascent field that is moving extremely quickly,” Leon Smith says. “A lot of work is necessary in this space and the firms providing such technologies are not stepping up to help define them.”

OpenAI’s CTO Mira Murati this week said that generative AI tools will need to be regulated. “It is important for OpenAI and companies like ours to bring this into the public consciousness in a way that’s controlled and responsible,” she said in an interview with Time.

But beyond the AI vendors, she said, “a tonne more input into the system” is needed, including from regulators and governments. She added that it is important the issue is considered quickly. “It’s not too early,” Murati said. “It’s very important for everyone to start getting involved, given the impact these technologies are going to have.”

Read more: ChatGPT update will improve chatbot’s factual accuracy

Topics in this article: AI, Google, Microsoft
