Earlier this week, Douyin, TikTok’s sister app in China, proudly announced new rules for AI-generated content. Like TikTok, Douyin is an enormous platform for publishing short-video content, and it is eager to lead the way in setting standards for AI-generated content. In addition to 11 broad rules for its own platform, Douyin released more detailed technical specifications for a uniform, industry-wide labeling system for AI-generated content using watermarks and content metadata.
These Platform Norms can be viewed as a direct response to recent government efforts to regulate AI-generated content, including provisions on “Deep Synthesis” services that became effective in January of this year and draft rules on “generative AI” tools that were open for public comment until May 10th. Both documents address computer-generated content, but the earlier one focuses on extending China’s broader content regulation system into the AI-content context, while the latter emphasizes the development of the content-generation tools themselves. Douyin’s rules implement many of the core requirements from these documents, including creating easy channels for user feedback and complaints, labeling AI-generated content, and preventing the spread of prohibited or misleading information. The Deep Synthesis rules themselves call for industry organizations to work toward setting standards, and Douyin appears to be taking the lead.
Labeling Requirements (4, 9)
A two-tier labeling system for AI-generated content, combining technical labeling and visible labeling, is emerging as a core component of the government’s regulatory strategy.
Technical labeling requires an indication, embedded in the metadata or code of content, that it was created or edited using an AI-powered service. These labels should not impact users’ experience, but they are to be logged and traceable should problems with the content be discovered later.
Visible labeling is the conspicuous marking of machine-generated or manipulated content that is likely to mislead or confuse the public, a category seemingly broad enough to include all content that can be perceived by humans.
To comply with the labeling requirements, Douyin has standardized methods for labeling AI-generated content, both in file metadata (technical labeling) and using a uniform watermark (visible labeling).
Standardized methods for entering labels in metadata, if adopted by content-generation applications, would allow platforms like Douyin to easily recognize AI-generated content, track it, and apply any required visible labels.
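To make the concept concrete, here is a minimal sketch of how such a technical label might be written and checked. The field names (“AIGC”, “Label”, “ContentProducer”) are hypothetical stand-ins for illustration, not the actual fields defined in Douyin’s specification.

```python
import json

# Hypothetical technical label for AI-generated content. Field names are
# illustrative assumptions, not Douyin's actual metadata schema.
def make_aigc_label(producer: str, generated: bool) -> dict:
    return {
        "AIGC": {
            "Label": "1" if generated else "0",  # "1" marks AI-generated content
            "ContentProducer": producer,         # identifier of the generating tool
        }
    }

def is_ai_generated(metadata: dict) -> bool:
    # A platform could run a check like this on upload to recognize
    # AI-generated content and trigger tracking or visible labeling.
    return metadata.get("AIGC", {}).get("Label") == "1"

if __name__ == "__main__":
    label = make_aigc_label(producer="example-generator", generated=True)
    print(json.dumps(label, ensure_ascii=False, indent=2))
    print(is_ai_generated(label))  # True
```

The point of standardization is interoperability: if every generation tool writes the same fields in the same place, any downstream platform can detect and trace the content without bespoke integrations.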
A standardized visible watermark in a regular location would quickly become familiar and recognizable to users, avoiding potential confusion from diverse styles and placements of markers. As Douyin is a short-video platform, its visible marker seems aimed primarily at images and videos, rather than audio content (which is addressed in the metadata label standards) or text.
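For the visible layer, the idea is simply to stamp a recognizable mark in a consistent position. The sketch below uses the Pillow imaging library to place an illustrative label on a single image frame; the text, placement, and styling here are assumptions, not Douyin’s actual watermark.

```python
from PIL import Image, ImageDraw  # Pillow imaging library

# Minimal sketch of a visible watermark stamped in a fixed corner so users
# learn where to look. Text, position, and styling are assumptions.
def add_visible_label(path_in: str, path_out: str,
                      text: str = "AI-generated") -> None:
    img = Image.open(path_in).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    w, h = img.size
    # Semi-transparent white text in the lower-left corner.
    draw.text((int(w * 0.05), int(h * 0.90)), text, fill=(255, 255, 255, 200))
    Image.alpha_composite(img, overlay).convert("RGB").save(path_out)

add_visible_label("frame.png", "frame_labeled.png")
```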
Misinformation (3, 5, 8, 11)
Preventing the use of AI tools to generate false and misleading information is a global concern and another focus of recent government action. China’s strict content regulation and censorship system is well known, and it includes even criminal penalties for online ‘rumor spreading’ that causes a public disturbance. All general content restrictions apply regardless of how content is created, and while the recent rules on AI-generated content do not seek to add new restrictions, they aim to prevent the creation of prohibited content and to hold tool creators and providers responsible for identifying and stopping it.
Douyin seems to introduce a slightly different standard, prohibiting not only the spreading of rumors and deception but also content contrary to basic scientific knowledge. Singling out unscientific content is odd and vague, particularly as it does not appear even in the detailed content guidance specific to short-video platforms. The drafters may have been thinking of stopping ‘flat earth conspiracy’ content, but challenging the ‘known’ is as important a part of scientific progress as exploring the unknown. It is also easy to see how social media discussion of “real” sexual and gender identity could fall afoul of such an idea.
Virtual Persons (6, 10, and FAQ 5)
An interesting inclusion relates to the use of virtual persons. The term is sometimes translated as “avatars,” but here it refers to a simulated persona, not merely a user’s profile image in social media apps, which is also sometimes called an ‘avatar’. Virtual persons can be used in livestreaming, short videos, and other content.
The rules require that a system be instituted for registering such virtual persons, linking each to a user whose identity has been authenticated. Real-name registration systems are already required for online services in China, including for providers of AI content-generation tools and services. What is new here is the emphasis on registering the persona itself, with the promise of protecting operators’ rights in the virtual likeness.
In the frequently asked questions section included with the rules, an additional requirement is added: in real-time interactions (as opposed to recorded videos, etc.), a human must be driving the virtual person’s interactions, which must not be fully automated.
Rights Protection (1, 7, 11)
While the broad rules don’t say much, they emphasize the protection of intellectual property and likeness rights in AI-generated content. The CAC’s draft generative AI rules contain provisions concerning the protection of such rights in both training data and outputs. The full scope of the protections intended here is not yet clear.