
China to Enforce AI Content Labeling Starting in September


The Cyberspace Administration of China has stated that all artificial intelligence (AI) generated content must be flagged with watermarks or metadata identifying its artificial origin. Unflagged content can still be produced, but creators must disclose its origin, which the source app will log for easier tracking.


China Starts a Crusade to Flag AI Content

China has embarked on an effort to make machine-generated AI content easy to distinguish from human-created work. The Cyberspace Administration of China recently published a clarification of the “Measures for the Identification of Synthetic Content Generated by Artificial Intelligence,” a set of regulations aimed at fighting AI-aided disinformation.

The administration stated that, starting in September 2025, it will enforce labeling of all AI-generated content so that it can be distinguished from organic content. The requirement applies to all media types, including images, video, and music, and can be met with visible elements such as watermarks. File metadata must also carry a tamper-proof AI label, and the authorities prohibit any modification of this field.
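To make the metadata side of this concrete: such an "implicit" label is essentially a small structured field embedded in the file itself. The sketch below is an illustration only, not the format the Chinese regulation actually prescribes; the `Source` keyword and `AIGC=true` value are hypothetical. It inserts a standard PNG `tEXt` chunk carrying an AI-generation flag right after the image header:

```python
import struct
import zlib

def add_ai_label_chunk(png_bytes: bytes, label: str = "AIGC=true") -> bytes:
    """Insert a tEXt chunk carrying an AI-generation label after the IHDR chunk.

    The 'Source' keyword and 'AIGC=true' value are hypothetical placeholders;
    the regulation does not publicly define an exact on-disk format.
    """
    signature = png_bytes[:8]
    assert signature == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    # IHDR is always the first chunk:
    # 8-byte signature + 4 (length) + 4 (type) + 13 (data) + 4 (CRC) = offset 33
    ihdr_end = 8 + 4 + 4 + 13 + 4
    # tEXt payload is: keyword, NUL separator, Latin-1 text
    data = b"Source" + b"\x00" + label.encode("latin-1")
    chunk_type = b"tEXt"
    # PNG CRC covers the chunk type and data, not the length field
    crc = zlib.crc32(chunk_type + data) & 0xFFFFFFFF
    chunk = struct.pack(">I", len(data)) + chunk_type + data + struct.pack(">I", crc)
    return png_bytes[:ihdr_end] + chunk + png_bytes[ihdr_end:]
```

A real compliance scheme would also need the label to survive re-encoding and editing, which is why visible watermarks and provenance standards are usually discussed alongside plain metadata fields like this one.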

The measures also affect online app stores, which must now adapt to these rules. According to the administration, internet application distribution platforms will have to “require the Internet application service provider to explain whether it provides artificial intelligence generated synthetic services and verify the relevant materials for the identification of the synthetic content it generates.”

This changes the landscape for AI service providers in China, which must adapt their platforms to meet the requirements and limitations enacted by the national authorities. Nonetheless, the administration clarified that generating unlabeled artificial content remains possible, with responsibility falling on the user who generates it, who must disclose its nature.

AI platforms must keep logs of unflagged content to facilitate enforcement against its misuse. It remains to be seen whether foreign platforms will comply with these measures, which could effectively close the Chinese AI market to foreign tools.
