
Key Changes to Generative AI Measures

Jointly issued by the Cyberspace Administration of China, the National Development and Reform Commission, the Ministry of Education, the Ministry of Science and Technology, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration, China’s new Interim Measures for the Management of Generative Artificial Intelligence Services will take effect on August 15, 2023.

Reduced Restrictions in the Name of Development and Innovation

In writing about the draft Measures that preceded China’s new Interim Measures for the Management of Generative AI Services, I emphasized that the rules addressed several global concerns:

[*Links are to an earlier article on this site]

These issues remain prominent in the Interim Measures, but many of the strictest controls now yield significantly to another factor: promoting development and innovation in the AI industry. A new Article 3 lays this balance bare, providing that development and innovation are to be emphasized equally with security and governance in AI.

Articles 5 and 6 introduce measures directly encouraging cooperation in research and development, calling for coordination on basic technologies such as chips and software platforms and for the development of shared data resources. Article 16 provides that all regulatory measures should be compatible with innovation.

Emphasis on Public-Facing AI

The most dramatic illustration of the resulting relaxation is a change to article 2, making clear that the rules apply only to public-facing generative AI services. The original draft had provided that they applied to all uses of generative AI, and even to the research and development of generative AI products, but that language has been removed. Not only have those additional areas been deleted, but a new clause expressly excludes them from the scope of the Measures.

The shift might be read as indicating that Beijing subscribes to the idea of an AI race in which it must remain competitive, as reflecting feedback from the tech community that the Draft’s rules were unworkable and would limit the availability of benefits from AI, or as the result of additional departments with differing priorities now jointly issuing the rules. Whatever the immediate cause, most of the concerns above, which the Measures seek to address, are most pronounced in publicly viewable content.

Provider Obligations

The obligations of those providing public-facing generative AI services have not changed dramatically, with a few important exceptions. Service providers remain liable for content made using their services and for the improper handling of personal information. Consider this list of obligations, updated from an earlier article:

| # | Provider Obligation | Draft art. | Current art. |
|---|---------------------|------------|--------------|
| A. | Security assessment to be conducted before making services publicly available, in accordance with the Provisions on the Security Assessment of Internet Information Services that have Public Opinion Properties or the Capacity for Social Mobilization | 6 | 17 |
| B. | Filing algorithms in accordance with the Provisions on the Management of Algorithmic Recommendations in Internet Information Services | 6 | 17 |
| C. | Ensuring the sources of training data are lawful | 7 | 7(1) |
| D. | Implementing detailed rules for manual data tagging | 8 | 8 |
| E. | Implementing a real-name verification system | 9 | |
| F. | Signing service agreements with users | | 9p2 |
| G. | Specifying intended users and uses | 10 | 10 |
| H. | Taking measures to prevent over-reliance and addiction [now limited to minors] and guiding proper usage | 10, 18 | 10 |
| I. | Protecting information entered by users [minimum necessity principle added for PI] | 11 | 11 |
| J. | Non-discriminatory output | 12 | |
| K. | Accepting user complaints and correcting information that infringes user rights | 13 | 11p2, 15 |
| L. | Providing safe and stable services | 14 | 13 |
| M. | Preventing illegal content through screening and retraining of the model | 15 | 14 |
| N. | Suspending services to stop user violations | 19 | 14.2 |
| O. | Labeling generated content | 16 | 12 |

Truth Be Told

In addressing both AI-generated output and training data, the Draft Measures had called for materials to be truthful and accurate. I previously discussed how the question of ‘fake’ or ‘true’ information wasn’t always a useful measure, and was at best a complicated issue that hinged on the purpose for which the content was presented. A piece of abstract visual art, for example, isn’t really “true” or “fake” regardless of whether it was generated by a machine or a person, though a copy of it might still be judged an accurate reproduction.

The unworkable “truth” requirements have largely been abandoned in favor of a call for greater transparency. Draft article 4(4)’s prohibition on ‘fake information’ has been replaced by a call for measures to increase service transparency and to improve the accuracy and reliability of generated content. The mandate that the “truth” of training data be ensured has been greatly watered down, now only calling on providers to work to increase the truthfulness of that data.

Reduced Scope of User Supervision

Rows M and N above refer to providers’ duties related to users’ improper content and activity. The Draft rules held that providers must address all content that didn’t comply with the Measures, but article 14 of the Interim rules clarifies that only “illegal content” needs to be addressed. Illegal content is a term of art in Chinese law, distinguishable from the broader category of “negative information”. The earlier rules on the Administration of Deep Synthesis Services, which will still also apply to most generative AI services, do address “negative information” and include a further requirement, now also found in the Interim Measures, that such content be reported to the authorities.

As for improper conduct, the Draft rules required service providers to suspend or stop services for users who used them to violate laws and regulations, violate commercial or social ethics, generate spam, maliciously post comments, write harmful software, carry out improper marketing, and so on. This list of bad behaviors has been reduced to include only ‘illegal’ activities, and the range of potential responses has been expanded to include milder measures, such as warnings or limiting functions, alongside suspension or termination of services.

Protecting Personal Information: Protections provided in the Draft, including providers’ liability as personal information handlers under the Personal Information Protection Law, have been carried over with only a few tweaks. One is the inclusion in article 11 of the “minimum necessity” standard familiar from app regulations, requiring that only personal information necessary to provide the services be collected. Another is the clarification of individuals’ rights regarding their personal information, which now include the rights to access, reproduce, modify, supplement, or delete their own personal information.

Tolerating Adult Addiction: A Draft provision on preventing addiction to, or overreliance on, AI tools has been limited so that it applies only to minors. Micro-management of children’s online activity to prevent internet addiction is a regular focus of Chinese law, but it rarely applies to adults. This change is a welcome concession that adults may waste as much of their time playing with AI tools as they please. Service providers, however, are still required to ‘guide’ all users to use those tools rationally.

Discrimination: The Draft originally required that service providers not allow the AI to generate content discriminating on the basis of users’ race, nationality, sex, or other factors. While this requirement seems to have been dropped, there is still plenty of language addressing the production of content that is itself discriminatory. Items (1) and (2) of article 4 both refer to preventing discriminatory content, even adding the new category of “health” discrimination, which would likely include physical and mental disabilities as well as having contracted an infectious disease such as HIV.

Real Name Verification: The explicit requirement that users have their identity information verified has been removed, but the underlying requirement remains. Draft article 9 had only restated the need for compliance with the Cybersecurity Law’s existing real identity requirements, and the Deep Synthesis Provisions, which overlap in scope with the generative AI Measures, still contain an explicit verification requirement in their article 9.

Graded Management by Category: A final interesting change is the introduction of 分类分级管理 (hierarchical and categorical management), which refers to regulating market actors on the basis of their past compliance (the grade or hierarchy element) and in light of the nature of the industry involved (the category element). This concept is key to regulation in China and is generally not controversial, but I point it out because it is at the heart of ‘credit regulation’ in China’s social credit system.
