South Korea confirms scale of Coupang data breach

The South Korean government has confirmed that 33.67 million user accounts were exposed in a major data breach at the e-commerce company Coupang. The findings were released by the Ministry of Science and ICT in Seoul.

Investigators said names and email addresses were leaked, while delivery lists containing addresses and phone numbers were accessed 148 million times. Officials warned that the impact could extend beyond the headline account figure.

Authorities identified a former employee as the attacker, alleging misuse of authentication signing keys. The probe concluded that weaknesses in Coupang’s internal controls enabled the breach.

The ministry criticised delayed reporting and plans to impose a fine on Coupang. The company disputed aspects of the findings but said 33.7 million accounts were involved.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Fukushima rebuilds as technology hub

Fukushima is repositioning itself as a technology and innovation hub, more than a decade after the 2011 earthquake, tsunami and nuclear disaster. The Fukushima Innovation Coast Framework aims to revitalise the coastal Hamadori region of the prefecture.

At the centre of the push is the Fukushima Institute for Research, Education and Innovation, which plans a major research complex in Namie. The site will focus on robotics, energy, agriculture and radiation science, drawing researchers from across Japan and overseas.

The prefecture already hosts the Fukushima Robot Test Field and the Fukushima Hydrogen Energy Research Field. Projects include hydrogen production from solar power and large-scale robotics and drone testing.

Officials say the strategy combines clean energy, sustainable materials and advanced research to create jobs and attract families back to Japan’s northeast. Fukushima is positioning itself as a global case study in post-disaster recovery through technology.

EU launches cyberbullying action plan to protect children online

The European Commission has launched an Action Plan Against Cyberbullying aimed at protecting the mental health and well-being of children and teenagers online across the EU. The initiative focuses on reporting access, national coordination, and prevention.

A central element is the development of an EU-wide reporting app that would allow victims to report cyberbullying, receive support, and safely store evidence. The Commission will provide a blueprint for Member States to adapt and link to national helplines.

To ensure consistent protection, Member States are encouraged to adopt a shared understanding of cyberbullying and develop national action plans. This would support comparable data collection and a more coordinated EU response.

The Action Plan builds on existing legislation, including the Digital Services Act, the Audiovisual Media Services Directive, and the AI Act. Updated guidelines will strengthen platform obligations and address AI-enabled forms of abuse.

Prevention and education are also prioritised through expanded resources for schools and families via Safer Internet Centres and the Better Internet for Kids platform. The Commission will implement the plan with Member States, industry, civil society, and children.

Microsoft explores superconductors for AI data centres

Microsoft is studying high-temperature superconductors to transmit electricity to its AI data centres in the US. The company says zero-resistance cables could eliminate resistive power losses, and the heat they generate, during transmission.

High-temperature superconductors can carry large currents through compact cables, potentially cutting space requirements for substations and overhead lines. Microsoft argues that denser infrastructure could support expanding AI workloads across the US.

The main obstacle is cooling, as superconducting materials must operate at extremely low temperatures using cryogenic systems. Even high-temperature variants require conditions near minus 200 degrees Celsius.

Rising electricity demand from AI systems has strained grids in the US, prompting political scrutiny and industry pledges to fund infrastructure upgrades. Microsoft says efficiency gains could ease pressure while it develops additional power solutions.

Custom AI bots support student negotiating skills

Instructors at MIT and the Harvard Negotiation Project in Cambridge, Massachusetts, are using AI negotiation bots to enhance classroom simulations. The tools are designed to prompt reflection rather than offer fixed answers.

Students taking part in a multiparty exercise called Harborco engage with preparation, back-table and debriefing bots. The system helps them analyse stakeholder interests and test strategies before and after live negotiations.

Back-table bots simulate unseen political or organisational actors who often influence real-world negotiations. Students can safely explore trade-offs and persuasion tactics in a protected digital setting.

According to reported course findings, most participants said the AI bots improved preparation and sharpened their understanding of opposing interests. Instructors stress that AI supports, rather than replaces, human teaching and peer learning.

EU reopens debate on social media age restrictions for children

The European Union is revisiting the idea of an EU-wide social media age restriction as several member states move ahead with national measures to protect children online. Spain, France, and Denmark are among the countries considering enforcing age limits for access to social platforms.

The issue was raised in the European Commission’s new action plan against cyberbullying, published on Tuesday. The plan confirms that a panel of child protection experts will advise the Commission by the summer on possible EU-wide age restrictions for social media use.

Commission President Ursula von der Leyen announced the creation of an expert panel last September, although its launch was delayed until early 2026. The panel will assess options for a coordinated European approach, including potential legislation and awareness-raising measures for parents.

The document notes that diverging national rules could lead to uneven protection for children across the bloc. A harmonised EU framework, the Commission argues, would help ensure consistent safeguards and reduce fragmentation in how platforms apply age restrictions.

So far, the Commission has relied on non-binding guidance under the Digital Services Act to encourage platforms such as TikTok, Instagram, and Snap to protect minors. Increasing pressure from member states pursuing national bans may now prompt a shift towards more formal EU-level regulation.

eSafety escalates scrutiny of Roblox safety measures

Australia’s online safety regulator has notified Roblox of plans to directly test how the platform has implemented a set of child safety commitments agreed last year, amid growing concerns over online grooming and sexual exploitation.

In September last year, Roblox made nine commitments following months of engagement with eSafety, aimed at supporting compliance with obligations under the Online Safety Act and strengthening protections for children in Australia.

Measures included making under-16s’ accounts private by default, restricting contact between adults and minors without parental consent, disabling chat features until age estimation is complete, and extending parental controls and voice chat restrictions for younger users.

Roblox told eSafety at the end of 2025 that it had delivered all agreed commitments, after which the regulator continued monitoring implementation. eSafety Commissioner Julie Inman Grant said serious concerns remain over reports of child exploitation and harmful material on the platform.

Direct testing will now examine how the measures work in practice, with support from the Australian Government. Enforcement action may follow, including penalties of up to $49.5 million, alongside checks against new age-restricted content rules from 9 March.

AI innovation drive accelerates in Singapore with Google support

Google has announced a major expansion of its AI investments in Singapore, strengthening research capabilities, workforce development, and enterprise innovation as part of a long-term regional strategy.

The initiatives were unveiled at the company’s Google for Singapore event, signalling deeper alignment with the nation’s ambition to lead the AI economy.

Research and development form a central pillar of the expansion. Building on the recent launch of a Google DeepMind research lab in Singapore, the company is scaling specialised teams across software engineering, research science, and user experience design.

A new Google Cloud Singapore Engineering Centre will also support enterprises in deploying advanced AI solutions across sectors, including robotics and clean energy.

Healthcare innovation features prominently in the investment roadmap. Partnerships with AI Singapore will support national health AI infrastructure, including access to the MedGemma model to accelerate diagnostics and treatment development.

Google is also launching a security-focused AI Center of Excellence and rolling out age assurance technologies to strengthen online protections for younger users.

AI drives robots from labs into industry

The International Federation of Robotics says AI is accelerating the move of robots from research labs into real-world use. A new position paper highlights rapid adoption across multiple industries as AI becomes a core enabler.

Logistics, manufacturing and services are leading AI-driven robotics deployment. Warehousing and supply chains benefit from controlled environments, while factories use AI to improve efficiency, quality and precision in sectors including automotive and electronics.

The IFR said service robots are expanding as labour shortages persist, with restaurants and hospitality testing AI-enabled machines. Hybrid models are emerging where robots handle repetitive work while humans focus on customer interaction.

Investment is rising globally, with major commitments in the US, Europe and China. The IFR expects AI to improve returns on robotics investment over the next decade through lower costs and higher productivity.

Enterprise AI security evolves as Cisco expands AI Defense capabilities

Cisco has announced a major update to its AI Defense platform as enterprise AI evolves from chat tools into autonomous agents. The company says AI security priorities are shifting from controlling outputs to protecting complex agent-driven systems.

The update strengthens end-to-end AI supply chain security by scanning third-party models, datasets, and tools used in development workflows. New inventory features help organisations track provenance and governance across AI resources.

Cisco has also expanded algorithmic red teaming through an upgraded AI Validation interface. The system enables adaptive multi-turn testing and aligns security assessments with NIST, MITRE, and OWASP frameworks.

Runtime protections now reflect the growing autonomy of AI agents. Cisco AI Defense inspects agent-to-tool interactions in real time, adding guardrails to prevent data leakage and malicious task execution.

Cisco says the update responds to the rapid operationalisation of AI across enterprises. The company argues that effective AI security now requires continuous visibility, automated testing, and real-time controls that scale with autonomy.
