
As the FTC probes OpenAI, why aren’t businesses being more cautious with artificial intelligence?

2023-07-16 00:18:29

OpenAI is being investigated by the US Federal Trade Commission (FTC) over claims the ChatGPT maker has broken consumer protection rules. The agency is concerned the AI lab has put personal reputations and data at risk through its business practices and technology. The investigation highlights the risks to businesses of allowing staff to use generative AI tools, which have the potential to expose valuable and confidential information.

The FTC has asked OpenAI to provide consumer protection research and analysis. (Photo by rafapress/Shutterstock)

The FTC sent OpenAI a 20-page demand for records on how it addresses risks related to its AI models, according to a report in the Washington Post. This includes detailed descriptions of all complaints it has received in relation to its products. The investigation is focused on instances of ChatGPT making false, misleading, disparaging, or harmful statements about real people.

This is just the latest regulatory headache for the AI lab, which was suddenly propelled into the spotlight following the surprise success of ChatGPT in November last year. Not long after it was launched, Italian regulators banned OpenAI from collecting data in the country until ChatGPT was in compliance with GDPR legislation. ChatGPT's success also sparked activity from the European Commission, prompting updates to the EU AI Act to reflect the risk of generative AI.

Company founder and CEO Sam Altman has been on a global charm offensive to try to convince regulators around the world to take OpenAI's perspective on AI regulation into account. This includes putting the spotlight on safety research and advanced AGI, or superintelligence, rather than a direct focus on today's AI models such as GPT-4. It has been reported that OpenAI heavily lobbied the EU to water down regulations in the bloc's AI Act.

Globally, governments are wrestling with how to regulate a technology as powerful as generative or foundation model AI. Some are calling for restrictions on the data used to train the models; others, such as the UK Labour Party, are calling for full licensing of anyone developing generative AI. Still others, including the UK government, are taking a more light-touch approach to regulation.

As well as the potential for harm and even libel from generated content, which is what the FTC is investigating, there are more immediate and fundamental risks to enterprises from employees putting company data onto an open platform like ChatGPT.

The FTC's filings also request any information on research the company has done into how well consumers understand the “accuracy or reliability of outputs” from its tools. The agency was likely spurred to start the investigation by a lawsuit in Georgia, where radio talk show host Mark Walters is suing OpenAI for defamation, alleging that ChatGPT fabricated legal claims against him, including false statements about where he worked and his finances.

The FTC wants extensive details on OpenAI's products, how it advertises them and what policies are in place before it releases new products to the market, as well as the times it has held back a new model due to safety risks identified in testing.


Employees still using ChatGPT despite privacy concerns

Despite myriad security concerns, businesses appear to be slow in recognising the potential risks posed by ChatGPT. A new study from cybersecurity company Kaspersky found that one in three employees has been given no guidance on the use of AI with company data. This presents a real risk because, unless it is used through an authorised, paid-for enterprise account or explicitly instructed not to, ChatGPT can reuse any data entered by the user in future training or improvements. This could lead to company data appearing in generated responses shown to users outside that company.
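One common stopgap for this kind of leakage is to strip obvious identifiers from text before it leaves the company at all. The sketch below is purely illustrative: the `redact` helper and its patterns are hypothetical examples of a naive pre-filter, not part of any real compliance product, and regex matching alone would not catch every kind of sensitive data.

```python
import re

# Illustrative only: mask obvious identifiers before text is pasted
# into an external chatbot or sent to an LLM API. The pattern set and
# the redact() helper are hypothetical, not a real compliance tool.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"(?<!\w)\+?\d[\d\s-]{7,}\d\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(text: str) -> str:
    """Replace each pattern match with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

notes = "Contact jane.doe@example.com or +44 7700 900123 re: invoice."
print(redact(notes))  # Contact [EMAIL] or [PHONE] re: invoice.
```

A filter like this only reduces accidental exposure; it does nothing for trade secrets or context that a regex cannot recognise, which is why the guidance and enterprise accounts discussed above still matter.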


In its survey of 1,000 full-time British workers, Kaspersky found that 58% were regularly using ChatGPT to save time on mundane tasks such as summarising long text or meeting notes. This may change in future as meeting software like Teams, Zoom and Slack now have secure instances of foundation models capable of taking and summarising those notes.

Many of those using ChatGPT for work also say they use it for generating fresh content, creating translations and improving texts, which suggests they are inputting sensitive company information. More than 40% also said they don’t verify the accuracy of the output before passing it off as their own work.

“Despite their obvious benefits, we must remember that language model tools such as ChatGPT are still imperfect as they are prone to generating unsubstantiated claims and fabricate information sources,” warned Kaspersky data science lead Vladislav Tushkanov. 

“Privacy is also a big concern, as many AI services can re-use user inputs to improve their systems, which can lead to data leaks,” he added. “This is also the case if hackers were to steal users’ credentials (or buy them on the dark web), as they could get access to the potentially sensitive information stored as chat history.” 

Sue Daley, director for tech and innovation at techUK, told Tech Monitor that AI can be seen as a supportive tool providing employees with the potential to maximise productivity and streamline mundane activities. She says it can also be used to free up time to focus on high-priority and higher-value tasks for the organisation, but it does come with risks.

“It is crucial that the modern workforce has the skills and knowledge to work and thrive in an AI-driven world of work,” she warns. “Businesses should endeavour to train their employees in the responsible use of AI, to empower them to use this innovative technology and help them to unleash its full potential.” 

Read more: UK government approach to AI disadvantages workers – Labour

Topics in this article: AI, FTC, OpenAI
