Tokyo plans to expose makers of malicious AI systems
In a push to curb the misuse of AI, Japan is considering naming and shaming developers of harmful AI systems.
The Japanese government is considering publicly disclosing the names of developers behind malicious artificial intelligence systems as part of efforts to combat disinformation and cyberattacks. The move, aimed at ensuring accountability, follows a government panel’s recommendation that stricter legal frameworks are necessary to prevent AI misuse.
The proposed bill, expected to be submitted to parliament soon, would focus on gathering information about harmful AI activities and encouraging developers to cooperate with government investigations. However, it would stop short of imposing penalties on offenders, amid concerns that harsh measures might discourage AI innovation.
Japan’s government may also share its findings with the public if harmful AI systems cause significant damage, such as disrupting access to vital public services. While the bill aims to balance innovation with public safety, questions remain about how the government will determine what constitutes a “malicious” AI system and about the potential impact on freedom of expression.