Indonesia has restored access to Grok after receiving guarantees from X that stronger safeguards will be introduced to prevent further misuse of the AI tool.
Authorities suspended the service last month following the spread of sexualised images on the platform, making Indonesia the first country to block the system.
Officials from the Ministry of Communications and Digital Affairs said that access had been reinstated on a conditional basis after X submitted a written commitment outlining concrete measures to strengthen compliance with national law.
The ministry emphasised that the document serves as a starting point for evaluation rather than the end of supervision.
However, the government warned that restrictions could return if Grok fails to meet local standards or if new violations emerge. Indonesian regulators stressed that monitoring would remain continuous, and access could be withdrawn immediately should inconsistencies be detected.
The decision marks a cautious reopening rather than a full reinstatement, reflecting Indonesia’s wider efforts to demand greater accountability from global platforms deploying advanced AI systems within its borders.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Regulators in the Netherlands have opened a formal investigation into Roblox over concerns about inadequate protections for children using the popular gaming platform.
The national authority responsible for enforcing digital rules is examining whether the company has implemented the safeguards required under the Digital Services Act rather than relying solely on voluntary measures.
Officials say children may have been exposed to harmful environments, including violent or sexualised material, as well as manipulative interfaces designed to encourage prolonged play.
The concerns intensify pressure on EU authorities to monitor social platforms that attract younger users, even when they fall below the threshold for designation as very large online platforms.
Roblox says it has worked with Dutch regulators for months and recently introduced age checks for users who want to use chat. The company argues that it has invested in systems designed to reinforce privacy, security and safety features for minors.
The Dutch authority plans to conclude the investigation within a year. The outcome could include fines or broader compliance requirements and is likely to influence upcoming European rules on gaming and consumer protection, due later in the decade.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
France has blocked the planned divestment of Eutelsat’s ground-station infrastructure, arguing that control over satellite facilities remains essential for national sovereignty.
The sale to EQT Infrastructure VI had been announced as a significant transaction, but the company said the conditions required to complete it had not been met.
Officials in France say that the infrastructure forms part of a strategic system used for both civilian and military purposes.
The finance minister described Eutelsat as Europe’s only genuine competitor to Starlink, further strengthening the view that France must retain authority over ground-station operations rather than allow external ownership.
Eutelsat stressed that the proposed transfer concerned only passive facilities such as buildings and site management rather than active control systems. Even so, French authorities believe that end-to-end stewardship of satellite ground networks is essential to safeguard operational independence.
The company says the failed sale will not hinder its capital plans, including the deployment of hundreds of replacement satellites for the OneWeb constellation.
Investors had not commented by publication time, but the decision highlights France’s growing assertiveness in satellite governance and broader European debates over technological autonomy.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
South Korea is preparing to enforce a nationwide ban on mobile phone use in classrooms, yet schools remain divided over how strictly the new rules should be applied.
The ban takes effect in March under the revised education law, and officials have already released guidance enabling principals to warn students and restrict smart devices during lessons.
These reforms will allow devices only for limited educational purposes, emergencies or support for pupils with disabilities.
Schools may also collect and store phones under their own rules, giving administrators the authority to prohibit possession rather than merely restricting use. The ministry has ordered every principal to establish formal regulations by late August, leaving interim decisions to each school leader.
Educators in South Korea warn that inconsistent approaches are creating uncertainty. Some schools intend to collect phones in bulk, others will require students to keep devices switched off, while several remain unsure how far to go in tightening their policies.
The Korean Federation of Teachers’ Associations argues that such differences will trigger complaints from parents and pupils unless the ministry provides a unified national standard.
Surveys show wide variation in current practice, with some schools banning possession during lessons while others allow use during breaks.
Many teachers say their institutions are ready for stricter rules, yet a substantial minority report inadequate preparation. The debate highlights the difficulty of imposing uniform digital discipline across a diverse education system.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Anthropic engineers are increasingly relying on AI to write the code behind the company’s products, with senior staff now delegating nearly all programming tasks to AI systems.
Claude Code lead Boris Cherny said he has not written any software by hand for more than two months, with all recent updates generated by Anthropic’s own models. Similar practices are reportedly spreading across internal teams.
Company leadership has previously suggested AI could soon handle most software engineering work from start to finish, marking a shift in how digital products are built and maintained.
The adoption of AI coding tools has accelerated across the technology sector, with firms citing major productivity gains and faster development cycles as automation expands.
Industry observers note the transition may reshape hiring practices and entry-level engineering roles, as AI increasingly performs core implementation tasks previously handled by human developers.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
French streaming platform Deezer has opened access to its AI music detection tool for rival services, including Spotify. The move follows mounting concern in France and across the industry over the rapid rise of synthetic music uploads.
Deezer said around 60,000 AI-generated tracks are uploaded daily, with 13.4 million detected in 2025. In France, the company has already demonetised 85% of AI-generated streams to redirect royalties to human artists.
The tool automatically tags fully AI-generated tracks, removes them from recommendations and flags fraudulent streaming activity. Spotify, which also operates widely in France, has introduced its own measures but relies more heavily on creator disclosure.
Challenges remain for Deezer in France and beyond, as the system struggles to identify hybrid tracks mixing human and AI elements. Industry pressure continues to grow for shared standards that balance innovation, transparency and fair payment.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Millions of South Africans are set to gain access to AI and digital skills through a partnership between Microsoft South Africa and the national broadcaster SABC Plus. The initiative will deliver online courses, assessments, and recognised credentials directly to learners’ devices.
The programme builds on Microsoft Elevate and the AI Skills Initiative, which have trained 1.4 million people and credentialed nearly half a million citizens since 2025. SABC Plus, with over 1.9 million registered users, provides an ideal platform to reach diverse communities nationwide.
AI and data skills are increasingly critical for employability, with global demand for AI roles growing rapidly. Microsoft and SABC aim to equip citizens with practical, future-ready capabilities, ensuring learning opportunities are not limited by geography or background.
The collaboration also complements Microsoft’s broader initiatives in South Africa, including Ikamva Digital, ElevateHer, Civic AI, and youth certification programmes, all designed to foster inclusion and prepare the next generation for a digital economy.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
European technology leaders are increasingly questioning the long-held assumption that information technology operates outside politics, amid growing concerns about reliance on US cloud providers and digital infrastructure.
At HiPEAC 2026, Nextcloud chief executive Frank Karlitschek argued that software has become an instrument of power, warning that Europe’s dependence on American technology firms exposes organisations to legal uncertainty, rising costs, and geopolitical pressure.
He highlighted conflicts between EU privacy rules and US surveillance laws, predicting continued instability around cross-border data transfers and renewed risks of services becoming legally restricted.
Beyond regulation, Karlitschek pointed to monopoly power among major cloud providers, linking recent price increases to limited competition and warning that vendor lock-in strategies make switching increasingly difficult for European organisations.
He presented open-source and locally controlled cloud systems as a path toward digital sovereignty, urging stronger enforcement of EU competition rules alongside investment in decentralised, federated technology models.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The exposure of more than 50,000 children’s chat logs by AI toy company Bondu highlights serious gaps in child data protection. Sensitive personal information, including names, birth dates, and family details, was accessible through a poorly secured parental portal, raising immediate concerns about children’s privacy and safety.
The incident also underscores the absence of mandatory security-by-design standards for AI products aimed at children, with weak safeguards enabling unauthorised access and exposing vulnerable users to serious risks.
Beyond the specific flaw, the case raises wider concerns about AI toys used by children. Researchers warned that the exposed data could be misused, strengthening calls for stricter rules and closer oversight of AI systems designed for minors.
Concerns also extend to transparency around data handling and AI supply chains. Uncertainty over whether children’s data was shared with third-party AI model providers points to the need for clearer rules on data flows, accountability, and consent in AI ecosystems.
Finally, the incident has added momentum to policy discussions on restricting or pausing the sale of interactive AI toys. Lawmakers are increasingly considering precautionary measures while more robust child-focused AI safety frameworks are developed.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
European data protection authorities recorded a sharp rise in GDPR violation reports in 2025, according to a new study by law firm DLA Piper, signalling growing regulatory pressure across the European Union.
Average daily reports surpassed 400 for the first time since the regulation entered force in 2018, reaching 443 incidents per day, a 22% increase compared with the previous year. The firm noted that expanding digital systems, new breach reporting laws, and geopolitical cyber risks may be driving the surge.
Despite the higher number of cases in the EU, total fines remained broadly stable at around €1.2 billion for the year, pushing cumulative GDPR penalties since 2018 to €7.1 billion, underlining regulators’ continued willingness to impose major sanctions.
Ireland once again led enforcement figures, with fines imposed by its Data Protection Commission totalling €4.04 billion, reflecting the presence of major technology firms headquartered there, including Meta, Google, and Apple.
Recent headline penalties included a €1.2 billion fine against Meta and a €530 million sanction against TikTok over data transfers to China, while courts across Europe increasingly consider compensation claims linked to GDPR violations.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!