Japan considers legally binding regulations for large-scale AI systems to tackle disinformation

Japan is intensifying its efforts to confront disinformation and other risks linked to large-scale AI systems. The government is currently considering legally binding regulations on AI developers, following the precedent set by the European Union.

The Japanese government is weighing legally binding regulations on developers of large-scale AI systems to tackle disinformation and other risks. It initially leaned towards voluntary measures but now recognises the need for regulations backed by penalties, in line with the European Union and other nations, to address concerns about AI misuse. The government plans to establish a council of AI experts to discuss the matter, and the new rules may be incorporated into its economic and fiscal management policy guidelines by June.

Japan is also planning to release guidelines that outline ten principles for safe and responsible AI use, with a focus on ‘human-centeredness.’ Under a draft proposal by the ruling Liberal Democratic Party, businesses developing advanced AI technologies, such as the generative AI chatbot ChatGPT, may be designated as ‘AI foundation model developers.’ Companies using AI in high-risk areas will be required to conduct internal or external safety verifications and share risk assessments with the government.

Government-designated developers will have to report their compliance status to the government or third-party agencies, and non-compliance may result in on-site inspections, fines, or other penalties.