ChatGPT blocked in Italy over privacy concerns

2023-04-05
Source: Tech Monitor

OpenAI’s successful natural language AI platform ChatGPT has been blocked in Italy. The company has been ordered by Garante Privacy (GPDP), the Italian data protection authority, to cease collecting and processing Italian users’ data until it complies with personal data protection regulations such as GDPR.

OpenAI launched ChatGPT in November 2022 and it reached over 100 million active monthly users in January. (Photo by rarrarorro/Shutterstock)

The regulator argues that OpenAI provides a “lack of information to users and all interested parties” about what data is collected, and that it lacks a legal basis to justify the collection and storage of the personal data used to train the algorithm and models that power ChatGPT.

GPDP also raised concerns over the absence of any age-filtering technology to prevent the use of the tool by minors and to ensure they are not exposed to “absolutely unsuitable answers with respect to their degree of development and self-awareness”.

The investigation could be the first of many in the EU. Because OpenAI has no legal entity in Europe, any individual national regulator can investigate the impact of its data collection. The company has 20 days to respond to the order and could face fines of up to 4% of annual turnover.

OpenAI hasn’t disclosed what training data was used to build GPT-4, the latest iteration of its foundation model, but previous generations were built on data scraped from the internet, including Reddit and Wikipedia. The latest update also introduces web browsing, allowing ChatGPT to find information on the live internet for the first time.

GPDP also cited a recent data breach, in which conversation history titles were exposed to other users and, some claim, personal details and payment information were leaked, as cause for pausing further data collection.

Potential for significant fines

If OpenAI is found to have processed user data unlawfully, data protection authorities across Europe could order that data deleted, including the data used to train the underlying model. This could force OpenAI to retrain GPT-4 and make it unavailable both via its API and within ChatGPT itself. All of this comes before the EU AI Act enters into force, although that legislation makes little provision for the regulation of foundation and general-purpose AI like ChatGPT.

A translation of the original Italian note from GPDP states that the action is being taken because of “the lack of information to users and all interested parties whose data are collected by OpenAI, but above all the absence of a legal basis that justifies the collection and massive storage of personal data, in order to ‘train’ the algorithms underlying the operation of the platform”.

The Italian regulator has form, having blocked the Replika chatbot earlier this year over concerns it posed “too many risks to children and emotionally vulnerable individuals”. The virtual friend tool cannot process the personal data of Italian users until an investigation is complete.


Italy isn’t the only country critical of these types of tools. In the US, the Center for AI and Digital Policy (CAIDP) has lodged a complaint with the FTC over the way OpenAI uses data. It wants the US regulator to order OpenAI to freeze development of its GPT models, claiming GPT-4 fails to satisfy any of the standards set out by the commission, including the need to be transparent, explainable, fair and empirically sound.

“CAIDP urges the commission to initiate an investigation into OpenAI and find that the commercial release of GPT-4 violates Section 5 of the FTC Act, the FTC’s well-established guidance to businesses on the use and advertising of AI products, as well as the emerging norms for the governance of AI that the United States government has formally endorsed and the Universal Guidelines for AI that leading experts and scientific societies have recommended,” the organisation wrote in its FTC complaint.

“The FTC is already looking at LLMs and the impact of generative AI, and its Section 5 powers clearly apply if you make, sell, or use a tool that is effectively designed to deceive – even if that’s not its intended or primary purpose,” explained Ieuan Jolly, a New York-based Linklaters partner and chair of its US TMT & Data Solutions Practice. “Generative AI and synthetic media based on chatbots that simulate human activity fall squarely within the type of tools that have the capability to engage in deceptive practices, for example, software that creates voice clones or deepfake videos.

“We’ve seen how fraudsters can use these AI tools and chatbots to generate realistic but fake content quickly and cheaply, targeting specific groups or individuals through fake websites, posts, profiles and executing malware and ransomware attacks – and the FTC has previously taken action in similar cases. The challenge is how to regulate a product that merely has the capability for deceptive production, as all generative technology can have, while permitting technological progress.”

Could lead to other complaints

This follows calls from more than 1,000 tech leaders and commentators in the US, including Steve Wozniak and Elon Musk, for OpenAI to “pause” development of its next generation of large language models until ethical guardrails can be introduced.

Edward Machin, a senior lawyer in the Ropes & Gray data, privacy and cybersecurity practice, said it is sometimes easy to forget that ChatGPT has only been widely used for a matter of weeks, having gone live in November last year, meaning most users haven’t had time to stop and consider the privacy implications of their data being used to train the algorithm.

“Although they may be willing to accept that trade, the allegation here is that users aren’t being given the information to allow them to make an informed decision, and more problematically, that in any event there may not be a lawful basis to process their data,” he said. “The decision to stop a company processing personal data is one of the biggest weapons in the regulator’s armoury and can be more challenging for a company to deal with than a financial penalty. I suspect that regulators across Europe will be quietly thanking the Garante for being the first to take this step and it wouldn’t be surprising to see others now follow suit and issue similar processing bans.”

Ryan Carrier, executive director of ForHumanity, said there have been calls, including from OpenAI CEO Sam Altman, for independent audits of AI systems but, to date, nothing has happened. “ForHumanity has a GDPR certification scheme for AI, Algorithmic, Autonomous Systems that has been submitted to national data protection authorities in the UK and EU – much of this angst could be avoided by establishing compliance-by-design capacity at OpenAI.”

Read more: UK at odds with Elon Musk and other experts on AI regulation

Topics in this article: AI, OpenAI
