ChatGPT launches group chats in Asia-Pacific pilot

OpenAI has introduced a new group chat feature in its ChatGPT app, currently piloted across Japan, New Zealand, South Korea and Taiwan. The rollout aims to test how users will interact in multi-participant conversations with the AI.

The pilot enables Free, Plus, and Team users on both mobile and web platforms to start or join group chats of up to 20 participants, where ChatGPT can participate as a member.

Human-to-human messages do not count against AI usage quotas; usage only applies when the AI replies. Group creators remain in charge of membership; invite links are used for access, and additional safeguards are applied when participants under the age of 18 are present.

This development marks a significant pivot from one-on-one AI assistants toward collaborative workflows, messaging and shared decision-making.

From a digital policy and governance perspective, this new feature raises questions around privacy, data handling in group settings, the role of AI in multi-user contexts and how usage quotas or model performance might differ across plans.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Digital twin technology drives new era in predictive medicine

A new AI model capable of generating digital twins of patients is being hailed as a significant step forward for clinical research. Developed at the University of Melbourne, the system reviews health records to predict how a patient’s condition may change during treatment.

DT-GPT, the model in question, was trained on thousands of records covering Alzheimer’s disease, non-small cell lung cancer, and intensive care admissions. Researchers stated that the model accurately predicted shifts in key clinical indicators, utilising medical literature and patient histories.

Predictions were validated without giving DT-GPT access to actual outcomes, strengthening confidence in its performance.

Lead researcher Associate Professor Michael Menden said the tool not only replicated patient profiles but also outperformed fourteen advanced machine-learning systems.

The ability to simulate clinical trial outcomes could lower costs and accelerate drug development, while enabling clinicians to anticipate deterioration and tailor treatment plans more effectively.

Researchers also noted DT-GPT’s zero-shot ability to predict medical values it had never been trained on. The team has formed a company with the Royal Melbourne Women’s Hospital to apply the technology to patients with endometriosis, demonstrating wider potential in healthcare.

Most workers see AI risk but not for themselves

A new survey by YouGov and Udemy reveals that while workers across the US, UK, India and Brazil see AI as a significant economic force, many believe their own jobs are unlikely to be affected.

The poll of more than 4,500 adults highlighted a clear gap between concern for the broader economy and confidence in personal job security.

In the UK, 70% of respondents expressed concern about AI’s impact on the economy, but only 39% worried about its effects on their own occupation.

Similarly, in the US, 72% feared wider economic effects, while only 47% were concerned about personal job loss. Experts suggest this reflects a psychological blind spot similar to early reactions to the internet.

The survey also highlighted a perceived AI skills gap, particularly in the UK, where 55% of workers had received no AI training. Many employees acknowledged awareness of AI’s rise but lacked motivation to develop skills immediately, a phenomenon researchers describe as an ‘awareness-action gap’.

Salesforce unveils eVerse for dependable enterprise AI

Salesforce, the US cloud software company, and its AI research division have unveiled eVerse, a new environment designed to train voice and text agents through synthetic data generation, stress testing and reinforcement learning.

The platform aims to resolve a growing reliability problem known as jagged intelligence, where systems excel at complex reasoning yet falter during simple interactions.

The company views eVerse as a key requirement for creating an Agentic Enterprise, where human staff and digital agents work together smoothly and dependably.

eVerse supports continuous improvement by generating large volumes of simulated interactions, measuring performance and adjusting behaviour over time, rather than waiting for real-world failures.

The platform played a significant role in the development of Agentforce Voice, giving AI agents the capacity to cope with unpredictable calls involving noise, varied accents and weak connections.

Thousands of simulated conversations enabled teams to identify problems early and deliver stronger performance.

The technology is also being tested with UCSF Health, where clinical experts are working with Salesforce to refine agents that support billing services. Only a portion of healthcare queries can typically be handled automatically, as much of the knowledge remains undocumented.

eVerse enhances coverage by enabling agents to adapt to complex cases through reinforcement learning, thereby improving performance across both routine and sophisticated tasks.

Salesforce describes eVerse as a milestone in a broader effort to achieve Enterprise General Intelligence. The goal is a form of AI designed for dependable business use, instead of the more creative outputs that dominate consumer systems.

The company also argues that trust and consistency will shape the next stage of enterprise adoption, and that real-world complexity must be mirrored during development to guarantee reliable deployment.

AI reveals hidden messages in gut microbes

Researchers at the University of Tokyo in Japan have utilised AI to investigate the intricate world of gut bacteria and their chemical signals.

Their system, VBayesMM, uses a Bayesian neural network to identify genuine connections between bacteria and human health that traditional methods often overlook.

The human gut contains roughly 100 trillion bacterial cells, which interact with human metabolism, immunity and brain function through thousands of chemical compounds called metabolites.

Using AI, scientists can map which bacteria influence specific metabolites, offering hope for personalised treatment strategies for conditions such as obesity, sleep disorders and cancer.

VBayesMM stands out by recognising uncertainty in its predictions, offering more reliable insights than conventional models.

Researchers plan to expand the system to analyse larger and more diverse datasets, aiming to identify bacterial targets for therapies or dietary interventions that could improve patient outcomes.

Meta pushes deeper into robotics with key hardware move

Meta is expanding its robotics ambitions by appointing Li-Chen Miller, previously head of its smart glasses portfolio, as the first product manager for Reality Labs’ robotics division. Her transfer marks a significant shift in Meta’s hardware priorities following the launch of its latest augmented reality devices.

The company is reportedly developing a humanoid assistant known internally as Metabot within the same organisation that oversees its AR and VR platforms. Former Cruise executive Marc Whitten leads the robotics group, supported by veteran engineer Ning Li and renowned MIT roboticist Sangbae Kim.

Miller’s move emphasises Meta’s aim to merge its AI expertise with physical robotics. The new team collaborates with the firm’s Superintelligence Lab, which is building a ‘world model’ capable of powering dextrous motion and real-time reasoning.

Analysts see the strategy as Meta’s attempt to future-proof its ecosystem and diversify Reality Labs, which continues to post heavy losses. The company’s growing investment in humanoid design could bring home-use robots closer to reality, blending social AI with the firm’s long-term vision for the metaverse.

Disney+ prepares AI tools for user creations

Disney+ is preparing to introduce tools that enable subscribers to create short, AI-generated videos inspired by its characters and franchises. Chief executive Bob Iger described the move as part of a sweeping platform upgrade that marks the service’s most significant technological expansion since its 2019 launch.

Alongside user-generated video features, Disney+ will gain interactive, game-like functions through its collaboration with Epic Games. The company plans to merge storytelling and interactivity, creating a new form of engagement where fans can build or remix short scenes within Disney’s creative universe.

Iger confirmed that Disney has held productive talks with several AI firms to develop responsible tools that safeguard intellectual property. The company aims to ensure that fans’ creations can exist within brand limits, avoiding misuse of iconic characters while opening the door to more creative participation.

Industry analysts suggest that the plan could reshape the streaming industry by blending audience creativity with studio production. Yet creators have expressed caution, urging transparency on rights and moderation.

NotebookLM gains automated Deep Research tool and wider file support

Google is expanding NotebookLM with Deep Research, a tool designed to handle complex online inquiries and produce structured, source-grounded reports. The feature acts like a dedicated researcher, planning its own process and gathering material across the web.

Users can enter a question, choose a research style, and let Deep Research browse relevant sites before generating a detailed briefing. The tool runs in the background, allowing additional sources to be added without disrupting the workflow or leaving the notebook.

NotebookLM now supports more file types, including Google Sheets, Drive URLs, PDFs stored in Drive, and Microsoft Word documents. Google says this enables tasks such as summarising spreadsheets and quickly importing multiple Drive files for analysis.

The update continues the service’s gradual expansion since its late-2023 launch, which has brought features such as Video Overviews for turning dense materials into visual explainers. These follow earlier additions, such as Audio Overviews, which create podcast-style summaries of shared documents.

Google also released NotebookLM apps for Android and iOS earlier this year, extending access beyond desktop. The company says the latest enhancements should reach all users within a week.

China targets deepfake livestreams of public figures

Chinese cyberspace authorities announced a crackdown on AI deepfakes impersonating public figures in livestream shopping. Regulators said platforms have removed thousands of posts and sanctioned numerous accounts for misleading users.

Officials urged platforms to conduct cleanups and hold marketers accountable for deceptive promotions. Reported actions include removing over 8,700 items and dealing with more than 11,000 impersonation accounts.

Measures build on wider campaigns against AI misuse, including rules targeting deep synthesis and labelling obligations. Earlier efforts focused on curbing rumours, impersonation and harmful content across short videos and e-commerce.

Chinese authorities pledged a continued high-pressure stance to safeguard consumers and protect celebrity likenesses online. Platforms risk penalties if complaint handling and takedowns fail to deter repeat infringements in livestream commerce.

New guidelines by Apple curb how apps send user data to external AI systems

Apple has updated its App Review Guidelines to require developers to disclose and obtain permission before sharing personal data with third-party AI systems. The company says the change enhances user control as AI features become more prevalent across apps.

The revision arrives ahead of Apple’s planned 2026 release of an AI-enhanced Siri, expected to take actions across apps and rely partly on Google’s Gemini technology. Apple is also moving to ensure external developers do not pass personal data to AI providers without explicit consent.

Rule 5.1.2(i) already limited the sharing of personal information without permission. The update adds explicit language naming third-party AI as a category that requires disclosure, reflecting growing scrutiny of how apps use machine learning and generative models.

The shift could affect developers who use external AI systems for features such as personalisation or content generation. Enforcement details remain unclear, as the term ‘AI’ encompasses a broad range of technologies beyond large language models.

Apple released several other guideline updates alongside the AI change, including support for its new Mini Apps Programme and amendments involving creator tools, loan products, and regulated services such as crypto exchanges.
