Millions use Telegram to create AI deepfake nudes as digital abuse escalates
Researchers warn that AI deepfake channels on Telegram are expanding rapidly, fuelling global harassment and placing women in severe danger.
A global wave of deepfake abuse is spreading across Telegram as millions of users generate and share sexualised images of women without consent.
Researchers have identified at least 150 active channels offering AI-generated nudes of celebrities, influencers and ordinary women, often for payment. The widespread availability of advanced AI tools has turned intimate digital abuse into an industrialised activity.
Telegram states that deepfake pornography is banned and says moderators removed nearly one million violating posts in 2025. Yet new channels appear almost immediately after old ones are shut down, and users openly exchange tips on how to bypass safety controls.
The rise of nudification apps on major app stores, downloaded more than 700 million times in total, adds further momentum to an expanding ecosystem that rewards harassment over accountability.
Experts argue that the celebration of such content reflects entrenched misogyny instead of simple technological misuse. Women targeted by deepfakes face isolation, blackmail, family rejection and lost employment opportunities.
Legal protections remain minimal in much of the world, with fewer than 40% of countries having laws that address cyber-harassment or stalking.
Campaigners warn that women in low-income regions face the most significant risks due to poor digital literacy, limited resources and inadequate regulatory frameworks.
The damage inflicted on victims is often permanent: deepfake images circulate indefinitely across platforms and are nearly impossible to remove entirely, undermining victims' safety, dignity and long-term prospects.
