China Unveils Stricter Regulations for AI Training Data

China has recently introduced stringent security requirements for companies offering services powered by generative artificial intelligence (AI). This development, announced by the National Information Security Standardization Committee, is aimed at curbing the misuse of AI models and their potential harm.

Ethical Boundaries

Generative AI, which has gained popularity through notable models like OpenAI’s ChatGPT, possesses the capacity to learn and generate new content, whether it be text or images, by drawing from the training data it has been exposed to. China’s new regulations signal an effort to ensure that AI models operate within certain ethical boundaries.

One of the prominent features of these regulations is the establishment of a blacklist of sources that are prohibited for training AI models. This blacklist will be instrumental in preventing AI systems from being influenced by harmful or illegal content. Specifically, any source in which more than 5% of the content is deemed illegal or harmful will be categorically excluded.
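To illustrate how such a threshold rule could work in practice, here is a minimal sketch in Python. It is not an official implementation: the flagging predicate, the placeholder blacklist, and the function names are all hypothetical, and a real compliance pipeline would rely on regulator-approved classifiers rather than simple term matching.

```python
# Hypothetical sketch of the ">5% flagged content" source-exclusion rule.
# FLAGGED_TERMS is a placeholder; real systems would use approved classifiers.

FLAGGED_TERMS = {"example-banned-term"}


def flagged_ratio(documents):
    """Return the fraction of documents in a candidate source that are flagged."""
    if not documents:
        return 0.0
    flagged = sum(
        any(term in doc.lower() for term in FLAGGED_TERMS)
        for doc in documents
    )
    return flagged / len(documents)


def source_allowed(documents, threshold=0.05):
    """A source is excluded if more than 5% of its content is flagged."""
    return flagged_ratio(documents) <= threshold


clean_source = ["ordinary text"] * 97 + ["example-banned-term"] * 3   # 3% flagged
dirty_source = ["ordinary text"] * 90 + ["example-banned-term"] * 10  # 10% flagged
print(source_allowed(clean_source))  # True  (3% <= 5%)
print(source_allowed(dirty_source))  # False (10% > 5%)
```

The key design point of such a rule is that exclusion applies at the level of the whole source, not individual documents: a single flagged item does not disqualify a dataset unless the overall ratio crosses the threshold.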

The prohibited content includes elements such as “advocating terrorism” or violence, any attempts to “overthrow the socialist system,” activities that “damage the country’s image,” and those that “undermine national unity and social stability.” By doing so, China aims to ensure that AI systems avoid generating or endorsing content that may incite harm or discord.


Censored Data Off-limits As Well

Moreover, the guidelines also stipulate that data censored within the Chinese internet ecosystem should not serve as training material for AI models. This measure aligns with China’s broader commitment to maintain a controlled digital environment and curb the spread of undesirable content.

This development comes in the wake of a recent decision to allow several Chinese tech firms, including the tech giant Baidu, to launch AI-driven chatbots. However, this authorization is accompanied by the requirement for these companies to conduct thorough security assessments before making their generative AI services available to the public. The Cyberspace Administration of China has been actively pursuing this agenda since April.

Consent Required from Individuals

To ensure accountability, the draft security requirements also mandate that organizations seek the consent of individuals whose personal information, including biometric data, is used for training AI models. This aligns with the broader global trend of data privacy and consent.

China’s focus on AI is underscored by its ambition to compete with the United States in this technology sector. The country aspires to become a global leader in AI by 2030, and these newly unveiled regulations signify a crucial step towards that goal.

As the world grapples with setting boundaries and standards for AI technology, China’s proactive approach reflects its commitment to harness the potential of AI while safeguarding against its misuse. These regulations are not only expected to shape the AI landscape in China but also to contribute to the ongoing global discourse surrounding AI ethics and security.


Flavie Du

Flavie Du was a senior writer at BTW media focused on blockchain and fintech investment. She graduated from King’s College London.
