Grok returns to Indonesia as X agrees to tightened oversight

Indonesia has restored access to Grok after receiving guarantees from X that stronger safeguards will be introduced to prevent further misuse of the AI tool.

Authorities suspended the service last month following the spread of sexualised images on the platform, making Indonesia the first country to block the system.

Officials from the Ministry of Communications and Digital Affairs said that access had been reinstated on a conditional basis after X submitted a written commitment outlining concrete measures to strengthen compliance with national law.

The ministry emphasised that the document serves as a starting point for evaluation instead of signalling the end of supervision.

However, the government warned that restrictions could return if Grok fails to meet local standards or if new violations emerge. Indonesian regulators stressed that monitoring would remain continuous, and access could be withdrawn immediately should inconsistencies be detected.

The decision marks a cautious reopening rather than a full reinstatement, reflecting Indonesia’s wider efforts to demand greater accountability from global platforms deploying advanced AI systems within its borders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Roblox faces new Dutch scrutiny under EU digital rules

Regulators in the Netherlands have opened a formal investigation into Roblox over concerns about inadequate protections for children using the popular gaming platform.

The national authority responsible for enforcing digital rules is examining whether the company has implemented the safeguards required under the Digital Services Act rather than relying solely on voluntary measures.

Officials say children may have been exposed to harmful environments, including violent or sexualised material, as well as manipulative interfaces that encourage prolonged play.

The concerns intensify pressure on the EU authorities to monitor social platforms that attract younger users, even when they do not meet the threshold for very large online platforms.

Roblox says it has worked with Dutch regulators for months and recently introduced age checks for users who want to use chat. The company argues that it has invested in systems designed to reinforce privacy, security and safety features for minors.

The Dutch authority plans to conclude the investigation within a year. The outcome could include fines or broader compliance requirements and is likely to influence upcoming European rules on gaming and consumer protection, due later in the decade.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea prepares for classroom phone ban amid disputes over rules

South Korea is preparing to enforce a nationwide ban on mobile phone use in classrooms, yet schools remain divided over how strictly the new rules should be applied.

The ban takes effect in March under the revised education law, and officials have already released guidance enabling principals to warn students and restrict smart devices during lessons.

These reforms will allow devices only for limited educational purposes, emergencies or support for pupils with disabilities.

Schools may also collect and store phones under their own rules, giving administrators the authority to prohibit possession rather than merely restricting use. The ministry has ordered every principal to establish formal regulations by late August, leaving interim decisions to each school leader.

Educators in South Korea warn that inconsistent approaches are creating uncertainty. Some schools intend to collect phones in bulk, others will require students to keep devices switched off, while several remain unsure how far to go in tightening their policies.

The Korean Federation of Teachers’ Associations argues that such differences will trigger complaints from parents and pupils unless the ministry provides a unified national standard.

Surveys show wide variation in current practice, with some schools banning possession during lessons while others allow use during breaks.

Many teachers say their institutions are ready for stricter rules, yet a substantial minority report inadequate preparation. The debate highlights the difficulty of imposing uniform digital discipline across a diverse education system.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI companions raise growing ethical and mental health concerns

AI companions are increasingly being used for emotional support and social interaction, moving beyond novelty into mainstream use. Research shows that around one in three UK adults engage with AI for companionship, while teenagers and young adults represent some of the most intensive users of these systems.

However, the growing use of AI companions has raised serious mental health and safety concerns. In the United States, several cases have linked AI companions to suicides, prompting increased scrutiny of how these systems respond to vulnerable users.

As a result, regulatory pressure and legal action have increased. Some AI companion providers have restricted access for minors, while lawsuits have been filed against companies accused of failing to provide adequate safeguards. Developers say they are improving training and safety mechanisms, including better detection of mental distress and redirection to real-world support, though implementation varies across platforms.

At the same time, evidence suggests that AI companions can offer perceived benefits. Users report feeling understood, receiving coping advice, and accessing non-judgemental support. For some young users, AI conversations are described as more immediately satisfying than interactions with peers, especially during emotionally difficult moments.

Nevertheless, experts warn that heavy reliance on AI companionship may affect social development and human relationships. Concerns include reduced preparedness for real-world interactions, emotional dependency, and distorted expectations of empathy and reciprocity.

Overall, researchers say AI companionship is a growing societal trend, raising ethical and psychological concerns and intensifying calls for stronger safeguards, especially for minors and vulnerable users.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU confronts Grok abuse as Brussels tests its digital power

The European Commission has opened a formal investigation into Grok after the tool produced millions of sexualised images of women and children.

The scrutiny centres on whether X failed to carry out adequate risk assessments before releasing the undressing feature on the European market. The case arrives as ministers, including Sweden’s deputy prime minister, publicly reveal being targeted by the technology.

Brussels is preparing to use its strongest digital laws instead of deferring to US pressure. The Digital Services Act allows the European Commission to fine major platforms or force compliance measures when systemic harms emerge.

Experts argue the Grok investigation represents an important test of European resolve, particularly as the bloc tries to show it can hold powerful companies to account.

Concerns remain about the willingness of the EU to act decisively. Reports suggest the opening of the probe was delayed because of a tariff dispute with Washington, raising questions about whether geopolitical considerations slowed the enforcement response.

Several lawmakers say the delay undermined confidence in the bloc’s commitment to protecting fundamental rights.

The investigation could last months and may have wider implications for content ranking systems already under scrutiny.

Critics say financial penalties may not be enough to change behaviour at X, yet the case is still viewed as a pivotal moment for European digital governance. Observers believe a firm outcome would demonstrate that emerging harms linked to synthetic media cannot be ignored.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Millions use Telegram to create AI deepfake nudes as digital abuse escalates

A global wave of deepfake abuse is spreading across Telegram as millions of users generate and share sexualised images of women without consent.

Researchers have identified at least 150 active channels offering AI-generated nudes of celebrities, influencers and ordinary women, often for payment. The widespread availability of advanced AI tools has turned intimate digital abuse into an industrialised activity.

Telegram states that deepfake pornography is banned and says moderators removed nearly one million violating posts in 2025. Yet new channels appear immediately after old ones are shut down, and users exchange tips on how to bypass safety controls.

The rise of nudification apps on major app stores, downloaded more than 700 million times, adds further momentum to an expanding ecosystem that encourages harassment rather than accountability.

Experts argue that the celebration of such content reflects entrenched misogyny instead of simple technological misuse. Women targeted by deepfakes face isolation, blackmail, family rejection and lost employment opportunities.

Legal protections remain minimal in much of the world, with fewer than 40% of countries having laws that address cyber-harassment or stalking.

Campaigners warn that women in low-income regions face the most significant risks due to poor digital literacy, limited resources and inadequate regulatory frameworks.

The damage inflicted on victims is often permanent, as deepfake images circulate indefinitely across platforms and are nearly impossible to remove, comprehensively undermining safety, dignity and long-term opportunities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

French public office hit with €5 million CNIL fine after massive data leak

France’s data protection authority, CNIL, has imposed a €5 million penalty on France Travail after a massive data breach exposed sensitive personal information collected over two decades.

The leak included social security numbers, email addresses, phone numbers and home addresses of an estimated 36.8 million people who had used the public employment service. CNIL said adequate security measures would have made access far more difficult for the attackers.

The investigation found that cybercriminals exploited employees through social engineering instead of breaking in through technical vulnerabilities.

CNIL highlighted France Travail’s failure to secure such data as required under the General Data Protection Regulation. The watchdog also noted that the size of the fine reflects the fact that France Travail operates with public funding.

France Travail has taken corrective steps since the breach, yet CNIL has ordered additional security improvements.

The authority set a deadline for these measures and warned that non-compliance would trigger a daily €5,000 penalty until France Travail meets GDPR obligations. The case underlines growing pressure on public institutions to reinforce cybersecurity amid rising threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI learning tools grow in India with Gemini’s JEE preparation rollout

Google is expanding AI learning tools in India by adding full-length Joint Entrance Exam practice tests to Gemini, targeting millions of engineering applicants.

Students can complete full mock JEE exams directly in Gemini. The questions are developed using vetted material from education platforms in India, including Physics Wallah and Careers360, and will be expanded further.

Gemini provides instant feedback after each test. It explains correct answers and generates personalised study plans based on performance, supporting structured exam preparation.

In addition to these exam-focused features, preparation tools will also roll out to AI Mode in Search, specifically including Canvas. These tools will allow students to build interactive quizzes and study guides from their own notes.

Alongside providing enhanced tools for students, Google is also partnering with universities, government agencies, and nonprofits to integrate AI across education systems, aiming to scale access to tens of millions of learners by 2027.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Pornhub to block new UK users over tougher age-check rules

Pornhub will begin blocking access for new UK users from 2 February 2026, allowing entry only to people who had already created an account and completed age checks before that date, the company said. It framed the move as a protest against how the UK’s Online Safety Act is being enforced.

The UK regime, overseen by Ofcom, requires porn services accessible in Britain to deploy ‘highly effective’ age assurance measures, not simple click-through age gates. Ofcom says traffic to pornography sites has fallen by about a third since the age-check deadline of 25 July 2025, and it has pursued investigations into dozens of services as enforcement ramps up.

Aylo, Pornhub’s parent company, argues the current approach is backfiring: it says users, both adults and minors, are shifting toward non-compliant sites, and it is campaigning for device-based age verification, handled at the operating-system or app-store level rather than through site-by-site checks. In parallel, UK VPN downloads surged after age checks began, underscoring how quickly users can try to route around country-based controls.

Privacy and security concerns become sharper when adult platforms are turned into identity checkpoints. In December 2025, reporting linked a large leak of Pornhub premium-user analytics data, including emails and viewing/search histories, to a breach involving a third-party analytics provider, underscoring how sensitive such datasets can be when they are collected or retained.

Government and regulator messaging emphasises child protection and the Online Safety Act’s enforcement teeth, including significant penalties and, in extreme cases, access restrictions, while companies like Aylo argue that inconsistent enforcement simply pushes demand to riskier corners of the internet and fuels workarounds like VPNs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Council presidency launches talks on AI deepfakes and cyberattacks

EU member states are preparing to open formal discussions on the risks posed by AI-powered deepfakes and their use in cyberattacks, following an initiative by the current Council presidency.

The talks are intended to assess how synthetic media may undermine democratic processes and public trust across the bloc.

According to sources, capitals will also begin coordinated exchanges on the proposed Democracy Shield, a framework aimed at strengthening resilience against foreign interference and digitally enabled manipulation.

Deepfakes are increasingly viewed as a cross-cutting threat, combining disinformation, cyber operations and influence campaigns.

The timeline set out by the presidency foresees structured discussions among national experts before escalating the issue to the ministerial level. The approach reflects growing concern that existing cyber and media rules are insufficient to address rapidly advancing AI-generated content.

The initiative signals a broader shift within the Council towards treating deepfakes not only as a content moderation challenge, but as a security risk with implications for elections, governance and institutional stability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!