China’s new generative AI rules are ‘about state control’ not user safety

2023-04-19
Source: techmonitor

China has published a set of draft regulations to govern the use of generative artificial intelligence technologies like ChatGPT and image generators such as MidJourney. The rules place greater responsibility for accuracy on the developer of the AI model than similar rules proposed in the EU, US or the UK. One expert told Tech Monitor the rules are more about ensuring state control than protecting users.

China’s generative AI guidelines place a greater burden on the provider of the technology. (Photo by Koshiro K/Shutterstock)

Published by the Cyberspace Administration of China (CAC), the draft measures set out ground rules for the use of generative AI and the content such tools can and cannot generate. This includes ensuring any output aligns with the “core values of socialism” and does not subvert state power.

Chinese companies responded quickly to the surprise success of OpenAI’s large language model-based natural language tool ChatGPT, which was released in November 2022. Alibaba, Tencent, Baidu and others have all announced plans to open access to their own large language models and to incorporate chat technology into their apps.

The Chinese government has also shown an interest in generative AI, declaring a need for it to be at the heart of the country’s economy. Officials from the Ministry of Science and Technology said the ministry attaches “great importance” to the development of AI, a technology that “has wide application potential in many industries”.

Western-built AI tools like ChatGPT are banned in China, leading to a flurry of home-grown alternatives, but the new rules are designed to ensure that what comes out of those tools reflects the views and position of the Communist Party.

China’s algorithmic transparency rules

This isn’t the first time the CAC has published guidelines for the use of AI or algorithms. The regulator has previously required social media companies to publish details of their algorithms, including how they decide which videos to show or which products to recommend.

These new rules place the burden on the developer or provider of the AI model rather than the end user. That includes ensuring any data used to train the model does not discriminate on the basis of ethnicity, race or gender, and that the model does not produce false information.

Any new generative AI product will also need to go through a security assessment and publish the same algorithmic transparency information already required of social media services. There is also no difference in the level of safety and security requirements between direct-to-consumer and direct-to-enterprise tools.


Moderation rules in the guidelines place a requirement on providers to ensure content is consistent with “social order and societal morals”, doesn’t endanger national security, avoids discrimination, is accurate and “respects intellectual property”.


The assessment provisions cover internet services including public forums, streaming and search. Service providers have to self-assess or engage a third-party agency to verify the real identities of users, and to check how personal information is protected and how content is reviewed internally.

Data submitted by end users has to be protected, as do activity logs, and providers aren’t allowed to use that data for user profiling or to share it with third parties. The CAC says end users can report a provider to the regulator if generated content doesn’t comply with the draft measures.

Non-compliance opens a provider up to a series of potential penalties under the Personal Information Protection Law, Cybersecurity Law and Data Security Law, including fines, suspension of the service and criminal investigations against executives. If a tool does generate content that goes against the guidelines, the company is given three months to update the model, retrain it and ensure the problem doesn’t recur. Failing to do so could see the service removed and attract large, but unspecified, fines.

China AI rules: financial penalties can be incurred

Louisa Chambers, partner at law firm Travers Smith, told Tech Monitor the new regulations have some similar foundations to those elsewhere in the world, in that there are concerns around the increasing proliferation and sophistication of AI. “For example, we are all concerned that, if not used with safeguards and checks, AI can entrench and legitimise bias and discrimination – and all the draft legislation that we are starting to see published worldwide seeks to address this,” she says.

Chambers says the other similarity is the need for transparency, as all governments want some degree of openness from businesses over how they use AI and how it is being trained, but the approach in China is different to that of the UK and the EU.

“The EU draft AI Act and the UK’s recent white paper both show a desire to use AI to support innovation whilst at the same time protecting individuals from unfair or unduly invasive AI processes. By comparison, the focus set out in the recent draft measures in China is to ensure that generative AI does not create content which is inaccurate, or which is inconsistent with social order,” Chambers adds.

However, Lilian Edwards, professor of law, innovation and society at Newcastle University, believes the policy is about control. She says China is interested in reining in its private tech industry over “well-founded fears” it could outstrip the capacity of the state to control and monitor citizens.

“This legislation echoes previous laws such as the one on recommender algorithms in naming vague social goals that providers must comply with on pain of penalties,” Edwards says. “These goals are clearly not fully operationalisable; but there have already been enforcement actions under the recommender algorithm laws so they are not to be disregarded either.”

The West and China have different approaches: where China wants to shackle its tech industry, “the West is largely scared” of doing the same, Edwards argues. “At least the EU is protecting the fundamental rights of citizens,” she says. “In China, arguably neither of these motivations apply and the main aim is to protect state power.”

Read more: China’s generative AI revolution is only just beginning

Topics in this article: AI, China, Regulation
