Japan develops system to measure and share physical and mental pain

Japanese mobile carrier NTT Docomo has developed a system that measures physical and mental pain and translates it into a format others can understand.

The technology utilises brainwave analysis to convert subjective sensations, such as injuries, stomachaches, spiciness, or emotional distress, into quantifiable levels.

The system, created in collaboration with startup Pamela Inc., allows recipients to understand what a specific pain score represents and even experience it through a device.

Docomo sees potential applications in medical diagnosis, rehabilitation, immersive gaming, and support for individuals who have been exposed to psychological or social harm.

Officials said the platform could be introduced for practical use alongside sixth-generation cellular networks, which are expected to be available in the 2030s.

The innovation aims to overcome the challenge of pain being experienced differently by each person, creating a shared understanding of physical and emotional discomfort.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

IMY investigates major ransomware attack on Swedish IT supplier

Sweden’s data protection authority, IMY, has opened an investigation into a massive ransomware-related data breach that exposed personal information belonging to 1.5 million people. The breach originated from a cyberattack on IT provider Miljödata in August, which affected roughly 200 municipalities.

Hackers reportedly stole highly sensitive data, including names, medical certificates, and rehabilitation records, much of which has since been leaked on the dark web. Swedish officials have condemned the incident, calling it one of the country’s most serious cyberattacks in recent years.

The IMY said the investigation will examine Miljödata’s data protection measures and the response of several affected public bodies, such as Gothenburg, Älmhult, and Västmanland. The regulator’s goal is to identify security shortcomings for future cyber threats.

Authorities have yet to confirm how the attackers gained access to Miljödata’s systems, and no completion date for the investigation has been announced. The breach has reignited calls for tighter cybersecurity standards across Sweden’s public sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Denmark’s new chat control plan raises fresh privacy concerns

Denmark has proposed an updated version of the EU’s controversial ‘chat control’ regulation, shifting from mandatory to voluntary scanning of private messages. Former MEP Patrick Breyer has warned, however, that the revision still threatens Europeans’ right to private communication.

Under the new plan, messaging providers could choose to scan chats for illegal material, but without a clear requirement for court orders. Breyer argued that this sidesteps the European Parliament’s position, which insists on judicial authorisation before any access to communications.

He also criticised the proposal for banning under-16s from using messaging apps like WhatsApp and Telegram, claiming such restrictions would prove ineffective and easily bypassed. In addition, the plan would effectively outlaw anonymous communication, requiring users to verify their identities through IDs.

Privacy advocates say the Danish proposal could set a dangerous precedent by eroding fundamental digital rights. Civil society groups have urged EU lawmakers to reject measures that compromise secure, anonymous communication essential for journalists and whistleblowers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Honor to launch world’s first AI robot phone in 2026

Honor has unveiled plans to release the world’s first AI-powered robot phone in 2026, marking a bold step in the evolution of smart devices.

The announcement was made by CEO Li Jian during the World Internet Conference Wuzhen Summit. It highlighted the company’s ambition to merge AI with advanced hardware design.

The upcoming device will combine AI capabilities, embodied intelligence, and high-definition imaging within a foldable, liftable mechanical structure. Its rear camera module will act as a rotating gimbal, offering 360° movement, auto-tracking, and 4K ultra-HD recording for professional-grade content creation.

Powered by an on-device large model, the upgraded YOYO assistant will feature emotional interaction, environmental awareness, and seamless coordination across devices.

The launch forms part of Honor’s $10 billion Alpha Strategy, announced earlier this year, which aims to establish a complete ecosystem for AI-driven technologies over the next five years.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Anthropic strengthens European growth through Paris and Munich offices

AI firm Anthropic is expanding its European presence by opening new offices in Paris and Munich, strengthening its footprint alongside existing hubs in London, Dublin, and Zurich.

An expansion that follows rapid growth across the EMEA region, where the company has tripled its workforce and seen a ninefold increase in annual run-rate revenue.

The move comes as European businesses increasingly rely on Claude for critical enterprise tasks. Companies such as L’Oréal, BMW, SAP, and Sanofi are using the AI model to enhance software, improve workflows, and ensure operational reliability.

Germany and France, both among the top 20 countries in Claude usage per capita, are now at the centre to Anthropic’s strategic expansion.

Anthropic is also strengthening its leadership team across Europe. Guillaume Princen will oversee startups and digital-native businesses, while Pip White and Thomas Remy will lead the northern and southern EMEA regions, respectively.

A new head will soon be announced for Central and Eastern Europe, reflecting the company’s growing regional reach.

Beyond commercial goals, Anthropic is partnering with European institutions to promote AI education and culture. It collaborates with the Light Art Space in Berlin, supports student hackathons through TUM.ai, and works with the French organisation Unaite to advance developer training.

These partnerships reinforce Anthropic’s long-term commitment to responsible AI growth across the continent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta invests $600 billion to expand AI data centres across the US

A $600 billion investment aimed at boosting innovation, job creation, and sustainability is being launched in the US by Meta to expand its AI infrastructure.

Instead of outsourcing development, the company is building its new generation of AI data centres domestically, reinforcing America’s leadership in technology and supporting local economies.

Since 2010, Meta’s data centre projects have supported more than 30,000 skilled trade jobs and 5,000 operational roles, generating $20 billion in business for US subcontractors. These facilities are designed to power Meta’s AI ambitions while driving regional economic growth.

The company emphasises responsible development by investing heavily in renewable energy and water efficiency. Its projects have added 15 gigawatts of new energy to US power grids, upgraded local infrastructure, and helped restore water systems in surrounding communities.

Meta aims to become fully water positive by 2030.

Beyond infrastructure, Meta has channelled $58 million into community grants for schools, nonprofits, and local initiatives, including STEM education and veteran training programmes.

As AI grows increasingly central to digital progress, Meta’s continued investment in sustainable, community-focused data centres underscores its vision for a connected, intelligent future built within the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Inside OpenAI’s battle to protect AI from prompt injection attacks

OpenAI has identified prompt injection as one of the most pressing new challenges in AI security. As AI systems gain the ability to browse the web, handle personal data and act on users’ behalf, they become targets for malicious instructions hidden within online content.

These attacks, known as prompt injections, can trick AI models into taking unintended actions or revealing sensitive information.

To counter the issue, OpenAI has adopted a multi-layered defence strategy that combines safety training, automated monitoring and system-level security protections. The company’s research into ‘Instruction Hierarchy’ aims to help models distinguish between trusted and untrusted commands.

Continuous red-teaming and automated detection systems further strengthen resilience against evolving threats.

OpenAI also provides users with greater control, featuring built-in safeguards such as approval prompts before sensitive actions, sandboxing for code execution, and ‘Watch Mode’ when operating on financial or confidential sites.

These measures ensure that users remain aware of what actions AI agents perform on their behalf.

While prompt injection remains a developing risk, OpenAI expects adversaries to devote significant resources to exploiting it. The company continues to invest in research and transparency, aiming to make AI systems as secure and trustworthy as a cautious, well-informed human colleague.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cars.com launches Carson AI to transform online car shopping

The US tech company, Cars.com, has unveiled Carson, a multilingual AI search engine designed to revolutionise the online car shopping experience.

Instead of relying on complex filters, Carson interprets natural language queries such as ‘a reliable car for a family of five’ or ‘a used truck under $30,000’, instantly producing targeted results tailored to each shopper’s needs.

A new AI feature that already powers around 15% of all web and mobile searches on Cars.com, with early data showing that users engaging with Carson return to the site twice as often and save three times more vehicles.

They also generate twice as many leads and convert 30% more frequently from search to vehicle detail pages.

Cars.com aims to simplify decision-making for its 25 million monthly shoppers, 70% of whom begin their search without knowing which brand or model to choose.

Carson helps these undecided users explore lifestyle, emotional and practical preferences while guiding them through Cars.com’s award-winning listings.

Further updates will introduce AI-generated summaries, personalised comparisons and search refinement suggestions.

Cars.com’s parent company, Cars Commerce, plans to expand its use of AI-driven tools to strengthen its role at the forefront of automotive retail innovation, offering a more efficient and intelligent marketplace for both consumers and dealerships.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How Google uses AI to support teachers and inspire students

Google is redefining education with AI designed to enhance learning, rather than replace teachers. The company has unveiled new tools grounded in learning science to support both educators and students, aiming to make learning more effective, efficient and engaging.

Through its Gemini platform, users can follow guided learning paths that encourage discovery rather than passive answers.

YouTube and Search now include conversational features that allow students to ask questions as they learn, while NotebookLM can transform personal materials into quizzes or immersive study aids.

Instructors can also utilise Google Classroom’s free AI tools for lesson planning and administrative support, thereby freeing up time for direct student engagement.

Google emphasises that its goal is to preserve the human essence of education while using AI to expand understanding. The company also addresses challenges linked to AI in learning, such as cheating, fairness, accuracy and critical thinking.

It is exploring assessment models that cannot be easily replicated by AI, including debates, projects, and oral examinations.

The firm pledges to develop its tools responsibly by collaborating with educators, parents and policymakers. By combining the art of teaching with the science of AI-driven learning, Google seeks to make education more personal, equitable and inspiring for all.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI outlines roadmap for AI safety, accountability and global cooperation

New recommendations have been published by OpenAI for managing rapid advances in AI, stressing the need for shared safety standards, public accountability, and resilience frameworks.

The company warned that while AI systems are increasingly capable of solving complex problems and accelerating discovery, they also pose significant risks that must be addressed collaboratively.

According to OpenAI, the next few years could bring systems capable of discoveries once thought centuries away.

The firm expects AI to transform health, materials science, drug development and education, while acknowledging that economic transitions may be disruptive and could require a rethinking of social contracts.

To ensure safe development, OpenAI proposed shared safety principles among frontier labs, new public oversight mechanisms proportional to AI capabilities, and the creation of a resilience ecosystem similar to cybersecurity.

It also called for regular reporting on AI’s societal impact to guide evidence-based policymaking.

OpenAI reiterated that the goal should be to empower individuals by making advanced AI broadly accessible, within limits defined by society, and to treat access to AI as a foundational public utility in the years ahead.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!