Fukushima rebuilds as technology hub

Fukushima is repositioning itself as a technology and innovation hub, more than a decade after the 2011 earthquake, tsunami and nuclear disaster in Japan. The Fukushima Innovation Coast Framework aims to revitalise the coastal Hamadori region of Fukushima Prefecture.

At the centre of the push is the Fukushima Institute for Research, Education and Innovation, which is planning a major research complex in Namie. The site will focus on robotics, energy, agriculture and radiation science, drawing researchers from across Japan and overseas.

The prefecture already hosts the Fukushima Robot Test Field and the Fukushima Hydrogen Energy Research Field, where projects include hydrogen production from solar power and large-scale robotics and drone testing.

Officials say the strategy combines clean energy, sustainable materials and advanced research to create jobs and attract families back to Japan’s northeast, positioning the prefecture as a global case study in post-disaster recovery through technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU launches cyberbullying action plan to protect children online

The European Commission has launched an Action Plan Against Cyberbullying aimed at protecting the mental health and well-being of children and teenagers online across the EU. The initiative focuses on easier access to reporting channels, national coordination, and prevention.

A central element is the development of an EU-wide reporting app that would allow victims to report cyberbullying, receive support, and safely store evidence. The Commission will provide a blueprint for Member States to adapt and link to national helplines.

To ensure consistent protection, Member States are encouraged to adopt a shared understanding of cyberbullying and develop national action plans. This would support comparable data collection and a more coordinated EU response.

The Action Plan builds on existing legislation, including the Digital Services Act, the Audiovisual Media Services Directive, and the AI Act. Updated guidelines will strengthen platform obligations and address AI-enabled forms of abuse.

Prevention and education are also prioritised through expanded resources for schools and families via Safer Internet Centres and the Better Internet for Kids platform. The Commission will implement the plan with Member States, industry, civil society, and children.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft explores superconductors for AI data centres

Microsoft is studying high-temperature superconductors to transmit electricity to its AI data centres in the US. The company says zero-resistance cables could reduce power losses and eliminate heat generated during transmission.
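
As a rough illustration of the physics behind that claim (not a figure from the article), resistive loss in a transmission line scales with the square of the current times the cable’s resistance, so a superconducting cable with zero resistance would, in principle, dissipate no power as heat:

\[
P_{\text{loss}} = I^{2} R \qquad\Longrightarrow\qquad P_{\text{loss}} = 0 \ \text{when}\ R = 0 .
\]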

High-temperature superconductors can carry large currents through compact cables, potentially cutting space requirements for substations and overhead lines. Microsoft argues that denser infrastructure could support expanding AI workloads across the US.

The main obstacle is cooling, as superconducting materials must operate at extremely low temperatures using cryogenic systems. Even high-temperature variants require conditions near minus 200 degrees Celsius.

Rising electricity demand from AI systems has strained grids in the US, prompting political scrutiny and industry pledges to fund infrastructure upgrades. Microsoft says efficiency gains could ease pressure while it develops additional power solutions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Custom AI bots support student negotiating skills

In Cambridge, instructors at MIT and the Harvard Negotiation Project are using AI negotiation bots to enhance classroom simulations. The tools are designed to prompt reflection rather than offer fixed answers.

Students taking part in a multiparty exercise called Harborco engage with preparation, back-table and debriefing bots. The system helps them analyse stakeholder interests and test strategies before and after live negotiations.

Back-table bots simulate unseen political or organisational actors who often influence real-world negotiations. Students can safely explore trade-offs and persuasion tactics in a protected digital setting.

According to reported course findings, most participants said the AI bots improved preparation and sharpened their understanding of opposing interests. Instructors in Cambridge stress that AI supports, rather than replaces, human teaching and peer learning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU reopens debate on social media age restrictions for children

The European Union is revisiting the idea of an EU-wide social media age restriction as several member states move ahead with national measures to protect children online. Spain, France, and Denmark are among the countries considering enforcing age limits for access to social platforms.

The issue was raised in the European Commission’s new action plan against cyberbullying, published on Tuesday. The plan confirms that a panel of child protection experts will advise the Commission by the summer on possible EU-wide age restrictions for social media use.

Commission President Ursula von der Leyen announced the creation of an expert panel last September, although its launch was delayed until early 2026. The panel will assess options for a coordinated European approach, including potential legislation and awareness-raising measures for parents.

The document notes that diverging national rules could lead to uneven protection for children across the bloc. A harmonised EU framework, the Commission argues, would help ensure consistent safeguards and reduce fragmentation in how platforms apply age restrictions.

So far, the Commission has relied on non-binding guidance under the Digital Services Act to encourage platforms such as TikTok, Instagram, and Snap to protect minors. Increasing pressure from member states pursuing national bans may now prompt a shift towards more formal EU-level regulation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI and human love in the digital age debate

AI is increasingly entering intimate areas of human life, including romance and emotional companionship. AI chatbots are now widely used as digital companions, raising broader questions about emotional authenticity and human-machine relationships.

Millions of people use AI companion apps, and studies suggest that a significant share of them describe their relationship with a chatbot as romantic. While users may experience genuine emotions, experts stress that current AI systems do not feel love but generate responses based on patterns in data.

Researchers explain that large language models can simulate empathy and emotional understanding, yet they lack consciousness and subjective experience. Their outputs are designed to imitate human interaction rather than reflect genuine emotion.

Scientific research describes love as deeply rooted in biology. Neurochemicals such as dopamine and oxytocin, along with specific brain regions, shape attraction, attachment, and emotional bonding. These processes are embodied and chemical, and machines have no equivalent of them.

Some scholars argue that future AI systems could replicate certain cognitive aspects of attachment, such as loyalty or repeated engagement. However, most agree that replicating human love would likely require consciousness, which remains poorly understood and technically unresolved.

Debate continues over whether conscious AI is theoretically possible. While some researchers believe advanced architectures or neuromorphic computing could move in that direction, no existing system meets the established criteria for consciousness.

In practice, human-AI romantic relationships remain asymmetrical. Chatbots are designed to engage, agree, and provide comfort, which can create dependency or unrealistic expectations about real-world relationships.

Experts therefore emphasise transparency and AI literacy, stressing that users should understand that AI companions simulate emotion and do not possess feelings, intentions, or awareness. These systems can imitate expressions of love, but they do not experience it; the emotional reality remains human even when the interaction is digital.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

MIT researchers tackle antimicrobial resistance with AI and synthetic biology

A pioneering research initiative at MIT is deploying AI and synthetic biology to combat the escalating global crisis of antimicrobial resistance, which has been fuelled by decades of antibiotic overuse and misuse.

The $3 million, three-year project, led by Professor James J. Collins at MIT’s Department of Biological Engineering, centres on developing programmable antibacterials designed to target specific pathogens.

The approach uses AI to design small proteins that turn off specific bacterial functions. These designer molecules would be produced and delivered by engineered microbes, offering a more precise alternative to traditional antibiotics.

Antimicrobial resistance affects low- and middle-income countries most severely, where limited diagnostic infrastructure delays treatment. Drug-resistant infections continue to rise globally, whilst the development of new antibacterial tools has stagnated.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

eSafety escalates scrutiny of Roblox safety measures

Australia’s online safety regulator has notified Roblox of plans to directly test how the platform has implemented a set of child safety commitments agreed last year, amid growing concerns over online grooming and sexual exploitation.

In September last year, Roblox made nine commitments following months of engagement with eSafety, aimed at supporting compliance with obligations under the Online Safety Act and strengthening protections for children in Australia.

Measures included making under-16s’ accounts private by default, restricting contact between adults and minors without parental consent, disabling chat features until age estimation is complete, and extending parental controls and voice chat restrictions for younger users.

Roblox told eSafety at the end of 2025 that it had delivered all agreed commitments, after which the regulator continued monitoring implementation. eSafety Commissioner Julie Inman Grant said serious concerns remain over reports of child exploitation and harmful material on the platform.

Direct testing will now examine how the measures work in practice, with support from the Australian Government. Enforcement action may follow, including penalties of up to $49.5 million, alongside checks against new age-restricted content rules from 9 March.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI workshops strengthen digital skills in Wales tourism sector

Wales has launched a national programme of practical AI workshops to help tourism and hospitality businesses adopt digital tools. Funded by Visit Wales and the Welsh Government, the initiative aims to strengthen the sector’s competitiveness by helping companies save time and enhance their online presence.

Delivered through Business Wales, the free sessions quickly reached near capacity, with most places booked shortly after launch, reflecting the sector’s growing readiness to embrace AI. The programme is tailored to small and medium-sized enterprises and prioritises hands-on learning over technical theory.

Workshops focus on simple, immediately usable tools that improve website content, search visibility, and customer engagement. Organisers highlight that AI-driven search features are reshaping how visitors discover tourism services, making accuracy, consistency, and authoritative digital content increasingly important.

At the centre of the initiative is Harri, a bespoke AI tool developed specifically for Welsh tourism businesses. Designed to reflect the local context, it supports listings management, customer enquiries, and search optimisation. Early feedback indicates that the approach delivers practical and measurable benefits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cisco warns AI agents need checks before joining workforces

The US technology company Cisco is promoting a future in which AI agents work alongside employees rather than operate as mere tools. Jeetu Patel, the company’s president, revealed that Cisco has already produced a product written entirely with AI-generated code and expects several more by the end of 2026.

He also described a shift to spec-driven development, in which smaller human teams work with digital agents instead of relying on larger groups of developers.

Human oversight will still play a central role. Coders will be asked to review AI-generated outputs as they adjust to a workplace where AI influences every stage of development. Patel argues that AI should be viewed as part of every loop rather than kept at the edge of decision-making.

Security concerns dominate the company’s planning. Patel warns that AI agents acting as digital co-workers must undergo background checks in the same way that employees do.

Cisco is investing billions in security systems to protect agents from external attacks and to prevent agents that malfunction or act independently from harming society.

Looking ahead, Cisco expects AI to deliver insights that extend beyond human knowledge. Patel believes that the most significant gains will emerge from breakthroughs in science, health, energy and poverty reduction rather than simple productivity improvements.

He also positions Cisco as a core provider of infrastructure designed to support the next stage of the AI era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!