How neurotech is turning science fiction into lived reality

Some experts now say neurotechnology could be as revolutionary as AI, as devices advance rapidly from sci-fi tropes into practical reality. Researchers can already translate thoughts into words through brain implants, and spinal implants are helping people with paralysis regain movement.

King’s College London neuroscientist Anne Vanhoestenberghe told AFP, ‘People do not realise how much we’re already living in science fiction.’

Her lab works on implants for both the brain and the spinal cord, aiming not just to restore function but to reimagine communication.

At the same time, the technology carries profound ethical risks. There is growing unease about privacy, data ownership and the potential misuse of neural data.

Some even warn that our ‘innermost thoughts are under threat.’ Institutions like UNESCO are already moving to establish global neurotech governance frameworks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

ChatGPT launches group chats in Asia-Pacific pilot

OpenAI has introduced a new group chat feature in its ChatGPT app, currently piloted across Japan, New Zealand, South Korea and Taiwan. The rollout aims to test how users will interact in multi-participant conversations with the AI.

The pilot enables Free, Plus, and Team users on both mobile and web platforms to start or join group chats of up to 20 participants, where ChatGPT can participate as a member.

Human-to-human messages do not count against AI usage quotas; usage only applies when the AI replies. Group creators remain in charge of membership; invite links are used for access, and additional safeguards are applied when participants under the age of 18 are present.

This development marks a significant pivot from one-on-one AI assistants toward collaborative workflows, messaging and shared decision-making.

From a digital policy and governance perspective, this new feature raises questions around privacy, data handling in group settings, the role of AI in multi-user contexts and how usage quotas or model performance might differ across plans.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Most workers see AI risk but not for themselves

A new survey by YouGov and Udemy reveals that while workers across the US, UK, India and Brazil see AI as a significant economic force, many believe their own jobs are unlikely to be affected.

Over 4,500 adults were polled, highlighting a clear gap between concern for the broader economy and personal job security.

In the UK, 70% of respondents expressed concern about AI’s impact on the economy, but only 39% worried about its effects on their own occupation.

Similarly, in the US, 72% feared wider economic effects, while only 47% were concerned about losing their own jobs. Experts suggest this reflects a psychological blind spot similar to early reactions to the internet.

The survey also highlighted a perceived AI skills gap, particularly in the UK, where 55% of workers had received no AI training. Many employees acknowledged AI’s rise but lacked the motivation to develop skills immediately, a phenomenon researchers describe as an ‘awareness-action gap’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Salesforce unveils eVerse for dependable enterprise AI

The US cloud software company Salesforce and its AI Research division have unveiled eVerse, a new environment designed to train voice and text agents through synthetic data generation, stress testing and reinforcement learning.

The platform aims to resolve a growing reliability problem known as jagged intelligence, where systems excel at complex reasoning yet falter during simple interactions.

The company views eVerse as a key requirement for creating an Agentic Enterprise, where human staff and digital agents work together smoothly and dependably.

eVerse supports continuous improvement by generating large volumes of simulated interactions, measuring performance and adjusting behaviour over time, rather than waiting for real-world failures.
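Salesforce has not published implementation details, but the simulate-measure-adjust cycle described above can be illustrated with a minimal, self-contained sketch. The agent model, scenario names, thresholds and update rule below are assumptions for illustration only, not eVerse code.

```python
import random

# Purely illustrative sketch of a simulate-measure-adjust training loop.
# The agent is modelled as a per-scenario skill level; all numbers are made up.

SCENARIOS = ["clear call", "background noise", "strong accent", "dropped connection"]

def new_agent():
    # Start with uneven skill: good at some interactions, weak at others.
    return {s: random.uniform(0.3, 0.9) for s in SCENARIOS}

def simulate_interaction(agent, scenario):
    # One synthetic conversation: success probability equals current skill.
    return 1.0 if random.random() < agent[scenario] else 0.0

def stress_test(agent, rounds=1000):
    # Measure performance over many simulated interactions.
    scores = {s: [] for s in SCENARIOS}
    for _ in range(rounds):
        s = random.choice(SCENARIOS)
        scores[s].append(simulate_interaction(agent, s))
    return {s: sum(v) / len(v) for s, v in scores.items() if v}

def improve(agent, iterations=5, target=0.95):
    # Adjust behaviour over time: nudge skill up on the weakest scenarios
    # instead of waiting for real-world failures.
    for i in range(iterations):
        results = stress_test(agent)
        for scenario, rate in results.items():
            if rate < target:
                agent[scenario] = min(1.0, agent[scenario] + 0.1)
        print(f"round {i}: " + ", ".join(f"{s}={r:.2f}" for s, r in results.items()))

improve(new_agent())
```

The point of the loop is that weaknesses surface in simulation, where they are cheap to fix, rather than in production calls.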

The platform played a significant role in the development of Agentforce Voice, giving AI agents the capacity to cope with unpredictable calls involving noise, varied accents and weak connections.

Thousands of simulated conversations enabled teams to identify problems early and deliver stronger performance.

The technology is also being tested with UCSF Health, where clinical experts are working with Salesforce to refine agents that support billing services. Only a portion of healthcare queries can typically be handled automatically, as much of the knowledge remains undocumented.

eVerse enhances coverage by enabling agents to adapt to complex cases through reinforcement learning, thereby improving performance across both routine and sophisticated tasks.

Salesforce describes eVerse as a milestone in a broader effort to achieve Enterprise General Intelligence. The goal is a form of AI designed for dependable business use, instead of the more creative outputs that dominate consumer systems.

It also argues that trust and consistency will shape the next stage of enterprise adoption and that real-world complexity must be mirrored during development to guarantee reliable deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reveals hidden messages in gut microbes

Researchers at the University of Tokyo in Japan have used AI to investigate the intricate world of gut bacteria and their chemical signals.

Their system, VBayesMM, utilises a Bayesian neural network to identify genuine connections between bacteria and human health that traditional methods often overlook.

The human gut contains roughly 100 trillion bacterial cells, which interact with human metabolism, immunity and brain function through thousands of chemical compounds called metabolites.

Using AI, scientists can map which bacteria influence specific metabolites, offering hope for personalised treatment strategies for conditions such as obesity, sleep disorders and cancer.

VBayesMM stands out by recognising uncertainty in its predictions, offering more reliable insights than conventional models.
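VBayesMM’s internals are beyond the scope of a news item, but the general idea of uncertainty-aware association mapping can be sketched with a toy example. The synthetic data and bootstrapped linear fits below merely stand in for the actual Bayesian neural network; everything here is an assumption for illustration.

```python
import numpy as np

# Toy illustration of uncertainty-aware microbe-to-metabolite mapping.
# Bootstrapped linear fits approximate the spirit of reporting an effect
# together with its uncertainty; this is not the VBayesMM model.

rng = np.random.default_rng(0)
n_samples, n_bacteria = 200, 10

# Synthetic microbiome: only bacteria 0 and 3 truly influence the metabolite.
abundances = rng.lognormal(size=(n_samples, n_bacteria))
metabolite = 2.0 * abundances[:, 0] - 1.5 * abundances[:, 3] \
    + rng.normal(scale=2.0, size=n_samples)

def bootstrap_effects(x, y, n_boot=500):
    """Refit a linear model on resampled data to get a distribution of effects."""
    effects = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        coef, *_ = np.linalg.lstsq(x[idx], y[idx], rcond=None)
        effects.append(coef)
    return np.array(effects)

effects = bootstrap_effects(abundances, metabolite)
mean = effects.mean(axis=0)
low, high = np.percentile(effects, [2.5, 97.5], axis=0)

for b in range(n_bacteria):
    genuine = (low[b] > 0) or (high[b] < 0)  # interval excludes zero
    flag = " <- likely genuine link" if genuine else ""
    print(f"bacterium {b}: effect {mean[b]:+.2f} [{low[b]:+.2f}, {high[b]:+.2f}]{flag}")
```

Reporting an interval rather than a single number is what lets a model separate genuine bacteria–metabolite links from associations that could easily be noise.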

Researchers plan to expand the system to analyse larger and more diverse datasets, aiming to identify bacterial targets for therapies or dietary interventions that could improve patient outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Meta pushes deeper into robotics with key hardware move

Meta is expanding its robotics ambitions by appointing Li-Chen Miller, previously head of its smart glasses portfolio, as the first product manager for Reality Labs’ robotics division. Her transfer marks a significant shift in Meta’s hardware priorities following the launch of its latest augmented reality devices.

The company is reportedly developing a humanoid assistant known internally as Metabot within the same organisation that oversees its AR and VR platforms. Former Cruise executive Marc Whitten leads the robotics group, supported by veteran engineer Ning Li and renowned MIT roboticist Sangbae Kim.

Miller’s move emphasises Meta’s aim to merge its AI expertise with physical robotics. The new team collaborates with the firm’s Superintelligence Lab, which is building a ‘world model’ capable of powering dextrous motion and real-time reasoning.

Analysts see the strategy as Meta’s attempt to future-proof its ecosystem and diversify Reality Labs, which continues to post heavy losses. The company’s growing investment in humanoid design could bring home-use robots closer to reality, blending social AI with the firm’s long-term vision for the metaverse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Disney+ prepares AI tools for user creations

Disney+ is preparing to introduce tools that enable subscribers to create short, AI-generated videos inspired by its characters and franchises. Chief executive Bob Iger described the move as part of a sweeping platform upgrade that marks the service’s most significant technological expansion since its 2019 launch.

Alongside user-generated video features, Disney+ will gain interactive, game-like functions through its collaboration with Epic Games. The company plans to merge storytelling and interactivity, creating a new form of engagement where fans can build or remix short scenes within Disney’s creative universe.

Iger confirmed that Disney has held productive talks with several AI firms to develop responsible tools that safeguard intellectual property. The company aims to ensure that fans’ creations can exist within brand limits, avoiding misuse of iconic characters while opening the door to more creative participation.

Industry analysts suggest that the plan could reshape the streaming industry by blending audience creativity with studio production. Yet creators have expressed caution, urging transparency on rights and moderation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

China targets deepfake livestreams of public figures

Chinese cyberspace authorities announced a crackdown on AI deepfakes impersonating public figures in livestream shopping. Regulators said platforms have removed thousands of posts and sanctioned numerous accounts for misleading users.

Officials urged platforms to conduct cleanups and hold marketers accountable for deceptive promotions. Reported actions include removing over 8,700 items and dealing with more than 11,000 impersonation accounts.

The measures build on wider campaigns against AI misuse, including rules targeting deep synthesis and labelling obligations. Earlier efforts focused on curbing rumours, impersonation and harmful content across short videos and e-commerce.

Chinese authorities pledged a continued high-pressure stance to safeguard consumers and protect celebrity likenesses online. Platforms risk penalties if complaint handling and takedowns fail to deter repeat infringements in livestream commerce.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Firefox expands AI features with full user choice

Mozilla has outlined its vision for integrating AI into Firefox in a way that protects user choice instead of limiting it. The company argues that AI should be built like the open web, allowing people and developers to use tools on their own terms rather than being pushed into a single ecosystem.

Recent features such as the AI sidebar chatbot and Shake to Summarise on iOS reflect that approach.

The next step is an ‘AI Window’, a controlled space inside Firefox that lets users chat with an AI assistant while browsing. The feature is entirely optional, offers full control, and can be switched off at any time. Mozilla has opened a waitlist so users can test the feature early and help shape its development.

Mozilla believes browsers must adapt as AI becomes a more common interface to the web. The company argues that remaining independent allows it to prioritise transparency, accountability and user agency instead of the closed models promoted by competitors.

The goal is an assistant that enhances browsing and guides users outward to the wider internet rather than trapping them in isolated conversations.

Community involvement remains central to Mozilla’s work. The organisation is encouraging developers and users to contribute ideas and support open-source projects as it works to ensure Firefox stays fast, secure and private while embracing helpful forms of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI drives a new identity security crisis

New research from Rubrik Zero Labs warns that agentic AI is reshaping the identity landscape faster than organisations can secure it.

The study reveals a surge in non-human identities created through automation and API-driven workflows, with numbers now exceeding human users by a striking margin.

Most firms have already introduced AI agents into their identity systems or plan to do so, yet many struggle to govern the growing volume of machine credentials.

Experts argue that identity has become the primary attack surface as remote work, cloud adoption and AI expansion remove traditional boundaries. Threat actors increasingly rely on valid credentials instead of technical exploits, which makes weaknesses in identity governance far more damaging.

Rubrik’s researchers and external analysts agree that a single compromised key or forgotten agent account can provide broad access to sensitive environments.

Industry specialists highlight that agentic AI disrupts established IAM practices by blurring distinctions between human and machine activity.

Organisations often cannot determine whether a human or an automated agent performed a critical action, which undermines incident investigations and weakens zero-trust strategies. Poor logging, weak lifecycle controls and abandoned machine identities further expand the attack surface.
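The report does not prescribe tooling, but a minimal sketch of the kind of lifecycle check it implies might look like the following. The inventory format, field names and thresholds are hypothetical; real IAM platforms expose this data through their own APIs.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical lifecycle audit for machine identities: flag stale credentials,
# idle (possibly abandoned) agent accounts and identities with no human owner.

MAX_KEY_AGE = timedelta(days=90)   # rotate credentials at least quarterly
MAX_IDLE = timedelta(days=30)      # treat long-idle agents as abandoned

inventory = [
    {"name": "report-bot", "key_created": "2025-01-10", "last_used": "2025-11-20", "owner": "finance"},
    {"name": "legacy-sync", "key_created": "2024-03-02", "last_used": "2024-06-15", "owner": None},
]

def parse(date_str):
    return datetime.strptime(date_str, "%Y-%m-%d").replace(tzinfo=timezone.utc)

def audit(identities, now=None):
    now = now or datetime.now(timezone.utc)
    findings = []
    for ident in identities:
        if now - parse(ident["key_created"]) > MAX_KEY_AGE:
            findings.append((ident["name"], "credential overdue for rotation"))
        if now - parse(ident["last_used"]) > MAX_IDLE:
            findings.append((ident["name"], "possibly abandoned agent account"))
        if ident["owner"] is None:
            findings.append((ident["name"], "no accountable human owner"))
    return findings

for name, issue in audit(inventory):
    print(f"{name}: {issue}")
```

Even a crude audit like this makes the attack surface visible; the harder problem, as the researchers note, is keeping such inventories accurate once agents can create other agents.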

Rubrik argues that identity resilience is becoming essential, since IAM tools alone cannot restore trust after a breach. Many firms have already switched IAM providers, reflecting widespread dissatisfaction with current safeguards.

Analysts recommend tighter control of agent creation, stronger credential governance and a clearer understanding of how AI-driven identities reshape operational and security risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!