South Korea prepares for classroom phone ban amid disputes over rules

The East Asian country is preparing to enforce a nationwide ban on mobile phone use in classrooms, yet schools remain divided over how strictly the new rules should be applied.

The ban takes effect in March under the revised education law, and officials have already released guidance enabling principals to warn students and restrict smart devices during lessons.

These reforms will allow devices only for limited educational purposes, emergencies or support for pupils with disabilities.

Schools may also collect and store phones under their own rules, giving administrators the authority to prohibit possession rather than merely restricting use. The ministry has ordered every principal to establish formal regulations by late August, leaving interim decisions to each school leader.

Educators in South Korea warn that inconsistent approaches are creating uncertainty. Some schools intend to collect phones in bulk, others will require students to keep devices switched off, while several remain unsure how far to go in tightening their policies.

The Korean Federation of Teachers’ Associations argues that such differences will trigger complaints from parents and pupils unless the ministry provides a unified national standard.

Surveys show wide variation in current practice, with some schools banning possession during lessons while others allow use during breaks.

Many teachers say their institutions are ready for stricter rules, yet a substantial minority report inadequate preparation. The debate highlights the difficulty of imposing uniform digital discipline across a diverse education system.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic challenges Pentagon over military AI use

Pentagon officials are at odds with AI developer Anthropic over restrictions designed to prevent autonomous weapons targeting and domestic surveillance. The disagreement has stalled discussions under a $200 million contract.

Anthropic has expressed concern about its tools being used in ways that could harm civilians or breach privacy. The company emphasises that human oversight is essential for national security applications.

The dispute reflects broader tensions between Silicon Valley firms and government use of AI. Pentagon officials argue that commercial AI can be deployed as long as it follows US law, regardless of corporate guidelines.

Anthropic’s stance may affect its Pentagon contracts as the firm prepares for a public offering. The company continues to engage with officials while advocating for ethical AI deployment in defence operations.


US cloud dominance sparks debate about Europe’s digital sovereignty

European technology leaders are increasingly questioning the long-held assumption that information technology operates outside politics, amid growing concerns about reliance on US cloud providers and digital infrastructure.

At HiPEAC 2026, Nextcloud chief executive Frank Karlitschek argued that software has become an instrument of power, warning that Europe’s dependence on American technology firms exposes organisations to legal uncertainty, rising costs, and geopolitical pressure.

He highlighted conflicts between EU privacy rules and US surveillance laws, predicting continued instability around cross-border data transfers and renewed risks of services becoming legally restricted.

Beyond regulation, Karlitschek pointed to monopoly power among major cloud providers, linking recent price increases to limited competition and warning that vendor lock-in strategies make switching increasingly difficult for European organisations.

He presented open-source and locally controlled cloud systems as a path toward digital sovereignty, urging stronger enforcement of EU competition rules alongside investment in decentralised, federated technology models.


AI could harm the planet but also help save it

AI is often criticised for its growing electricity and water use, but experts argue it can also support sustainability. AI can reduce emissions, save energy, and optimise resource use across multiple sectors.

In agriculture, AI-powered irrigation helps farmers use water more efficiently. In Chile, precision systems reduced water consumption by up to 30%, while farmers earned extra income from verified savings.

Data centres and energy companies are deploying AI to improve efficiency, predict workloads, optimise cooling, monitor methane leaks, and schedule maintenance. These measures help reduce emissions and operational costs.

Buildings and aviation are also benefiting from AI. Innovative systems manage heating, cooling, and appliances more efficiently. AI also optimises flight routes, reducing fuel consumption and contrail formation, showing that wider adoption could help fight climate change.


Critical AI toy security failure exposes children’s data

The exposure of more than 50,000 children’s chat logs by AI toy company Bondu highlights serious gaps in child data protection. Sensitive personal information, including names, birth dates, and family details, was accessible through a poorly secured parental portal, raising immediate concerns about children’s privacy and safety.

The incident highlights the absence of mandatory security-by-design standards for AI products for children, with weak safeguards enabling unauthorised access and exposing vulnerable users to serious risks.

Beyond the specific flaw, the case raises wider concerns about AI toys used by children. Researchers warned that the exposed data could be misused, strengthening calls for stricter rules and closer oversight of AI systems designed for minors.

Concerns also extend to transparency around data handling and AI supply chains. Uncertainty over whether children’s data was shared with third-party AI model providers points to the need for clearer rules on data flows, accountability, and consent in AI ecosystems.

Finally, the incident has added momentum to policy discussions on restricting or pausing the sale of interactive AI toys. Lawmakers are increasingly considering precautionary measures while more robust child-focused AI safety frameworks are developed.


GDPR violation reports surge across Europe in 2025, study finds

European data protection authorities recorded a sharp rise in GDPR violation reports in 2025, according to a new study by law firm DLA Piper, signalling growing regulatory pressure across the European Union.

Average daily reports surpassed 400 for the first time since the regulation entered force in 2018, reaching 443 incidents per day, a 22% increase compared with the previous year. The firm noted that expanding digital systems, new breach reporting laws, and geopolitical cyber risks may be driving the surge.

Despite the higher number of cases in the EU, total fines remained broadly stable at around €1.2 billion for the year, pushing cumulative GDPR penalties since 2018 to €7.1 billion, underlining regulators’ continued willingness to impose major sanctions.

Ireland once again led enforcement figures, with fines imposed by its Data Protection Commission totalling €4.04 billion, reflecting the presence of major technology firms headquartered there, including Meta, Google, and Apple.

Recent headline penalties included a €1.2 billion fine against Meta and a €530 million sanction against TikTok over data transfers to China, while courts across Europe increasingly consider compensation claims linked to GDPR violations.


Europe opens new digital identity innovation hub in France

ID Campus has opened in the French city of Angers as a new European hub dedicated to identity technologies and trust-based digital services. Led by sovereign provider iDAKTO, the initiative aims to bring together public institutions, startups, and researchers to advance secure online systems.

The campus will support innovation, training, pilot projects, and cross-sector collaboration. A key focus is the rollout of the European Digital Identity Wallet. Deeptech firms, research labs, and international delegations are expected to use the site for testing and cooperation.

The project’s development involved partnerships with public bodies in France, including France Titres, La Mission French Tech, and Angers Loire Metropole, reflecting a wider push to strengthen national and European authentication infrastructure.

The official launch brought together leaders from government and industry to discuss the rise in digital adoption and tightening regulatory frameworks across Europe, as secure digital identity systems become central to public services and cross-border transactions.

European digital sovereignty remains a core driver of the initiative, with policymakers seeking interoperable trust frameworks that reduce reliance on non-European technologies.


China moves toward data centres in orbit

China is planning to develop large-scale space-based data centres over the next five years as part of a broader push to support AI development. The China Aerospace Science and Technology Corporation (CASC) has announced plans to build gigawatt-class digital infrastructure in orbit, according to Chinese state broadcaster CCTV.

Under CASC’s five-year development plan, the space data centres are expected to combine cloud, edge and terminal technologies, allowing computing power, data storage and communication capacity to operate as an integrated system. The aim is to create high-performance infrastructure capable of supporting advanced AI workloads beyond Earth.

The initiative follows a recent CASC policy proposal calling for solar-powered, gigawatt-scale space-based hubs to supply energy for AI processing. The proposal aligns with China’s upcoming 15th Five-Year Plan, which is set to place AI at the centre of national development priorities.

China has already taken early steps in this direction. In May 2025, Zhejiang Lab launched 12 low Earth orbit satellites to form the first phase of its ‘Three-Body Computing Constellation.’ The research institute plans to eventually deploy around 2,800 satellites, targeting a total computing power of 1,000 peta operations per second.

Interest in space-based data centres is growing globally. European aerospace firm Thales Alenia Space has been studying the feasibility of orbital data centres since 2023, while companies such as SpaceX, Blue Origin, and several startups in the US and the UAE are exploring similar concepts at varying stages of development and ambition.

Supporters argue that space data centres could reduce environmental impacts on Earth, benefit from constant solar energy and simplify cooling. However, experts warn that operating in space brings its own challenges, including exposure to radiation, solar flares and space debris, as well as higher costs and greater difficulty when repairs are needed.


EU confronts Grok abuse as Brussels tests its digital power

The European Commission has opened a formal investigation into Grok after the tool produced millions of sexualised images of women and children.

The scrutiny centres on whether X failed to carry out adequate risk assessments before releasing the undressing feature in the European market. The case arrives as ministers, including Sweden’s deputy prime minister, publicly reveal being targeted by the technology.

Brussels is preparing to use its strongest digital laws instead of deferring to US pressure. The Digital Services Act allows the European Commission to fine major platforms or force compliance measures when systemic harms emerge.

Experts argue the Grok investigation represents an important test of European resolve, particularly as the bloc tries to show it can hold powerful companies to account.

Concerns remain about the willingness of the EU to act decisively. Reports suggest the opening of the probe was delayed because of a tariff dispute with Washington, raising questions about whether geopolitical considerations slowed the enforcement response.

Several lawmakers say the delay undermined confidence in the bloc’s commitment to protecting fundamental rights.

The investigation could last months and may have wider implications for content ranking systems already under scrutiny.

Critics say financial penalties may not be enough to change behaviour at X, yet the case is still viewed as a pivotal moment for European digital governance. Observers believe a firm outcome would demonstrate that emerging harms linked to synthetic media cannot be ignored.


Millions use Telegram to create AI deepfake nudes as digital abuse escalates

A global wave of deepfake abuse is spreading across Telegram as millions of users generate and share sexualised images of women without consent.

Researchers have identified at least 150 active channels offering AI-generated nudes of celebrities, influencers and ordinary women, often for payment. The widespread availability of advanced AI tools has turned intimate digital abuse into an industrialised activity.

Telegram states that deepfake pornography is banned and says moderators removed nearly one million violating posts in 2025. Yet new channels appear immediately after old ones are shut, enabling users to exchange tips on how to bypass safety controls.

The rise of nudification apps on major app stores, downloaded more than 700 million times, adds further momentum to an expanding ecosystem that encourages harassment rather than accountability.

Experts argue that the celebration of such content reflects entrenched misogyny instead of simple technological misuse. Women targeted by deepfakes face isolation, blackmail, family rejection and lost employment opportunities.

Legal protections remain minimal in much of the world, with fewer than 40% of countries having laws that address cyber-harassment or stalking.

Campaigners warn that women in low-income regions face the most significant risks due to poor digital literacy, limited resources and inadequate regulatory frameworks.

The damage inflicted on victims is often permanent: deepfake images circulate indefinitely across platforms and are nearly impossible to remove, comprehensively undermining safety, dignity and long-term opportunities.
