Indian government urges social media to combat misinformation and deepfakes
It has called for due diligence, emphasising the removal of reported content within 36 hours and the potential loss of safe harbor protection for non-compliance.
With deepfakes, AI-manipulated media designed to deceive, becoming a growing concern in India, the Ministry of Electronics and Information Technology has issued a directive to social media intermediaries under the IT Rules 2021, calling on them to actively identify and combat misinformation and deepfake content.
The ministry has urged intermediaries to exercise due diligence and act promptly, removing or disabling access to reported content within 36 hours. Failure to comply could cost these platforms their safe harbor protection.
This directive comes in response to a fake video featuring Telugu actor Rashmika Mandanna, which raised concerns about the misuse of AI and its potential to contribute to online gender violence.
Why does it matter?
The use of AI tools to create and share manipulated content, such as face swaps and fake nude images, has sharply increased in recent years. According to the Washington Post, the volume of AI-generated pornographic material on leading websites hosting it has grown by over 290% since 2018, targeting both public figures and ordinary individuals, often with malicious intent. The Indian government's response has faced criticism, with some calling for more stringent penalties to deter perpetrators and protect victims of deepfake content, as the damage can be swift and long-lasting.