China’s proposed AI security rules
China’s proposed security rules for generative AI are making waves. They introduce a blacklist of sources that cannot be used for training AI models, with a focus on keeping training content clean. The move comes as China aims to lead the world in AI by 2030.
China has released draft security guidelines for services built on generative AI. A central element is a blacklist of data sources that may not be used to train AI models.
The National Information Security Standardization Committee, which includes officials from China’s regulatory authorities, recommends a security assessment for generative AI models intended for public use. Sources containing more than 5% illegal or harmful content would be blacklisted and barred from use in training.
Proscribed content includes material endorsing terrorism or violence, damaging the nation’s image, or undermining national unity. The guidelines follow China’s recent approval for tech companies such as Baidu to launch generative AI chatbots, and they form part of China’s broader strategy to rival the US in AI by 2030.
Why does this matter?
The proposed blacklist of sources underscores China’s intent to control the content its AI models generate, particularly to prevent the spread of illegal or harmful information. By setting clear rules early, China is positioning itself to compete with other countries, such as the United States, in the AI sector. The guidelines also require consent from individuals whose personal data is used in AI training, addressing growing privacy concerns in the era of AI.