Salesforce unveils eVerse for dependable enterprise AI

Salesforce, the US cloud-based software company, and its AI Research division have unveiled eVerse, a new environment designed to train voice and text agents through synthetic data generation, stress testing and reinforcement learning.

The platform aims to resolve a growing reliability problem known as jagged intelligence, where systems excel at complex reasoning yet falter during simple interactions.

The company views eVerse as a key requirement for creating an Agentic Enterprise, where human staff and digital agents work together smoothly and dependably.

eVerse supports continuous improvement by generating large volumes of simulated interactions, measuring performance and adjusting behaviour over time, rather than waiting for real-world failures.
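Salesforce has not published eVerse's internals, but the loop described above can be illustrated with a toy sketch: generate a batch of synthetic interactions, measure the success rate, and nudge the agent's behaviour accordingly, all before any real-world failure occurs. Every name and number below is hypothetical, not an eVerse API.

```python
import random

def simulate_interaction(noise_tolerance: float) -> bool:
    """One synthetic call: succeeds if the agent copes with the sampled noise."""
    noise = random.random()  # stand-in for accents, line quality, interruptions
    return noise <= noise_tolerance

def run_training(rounds: int = 20, batch: int = 500) -> float:
    tolerance = 0.5  # toy parameter standing in for agent robustness
    for _ in range(rounds):
        # Stress testing: a large volume of simulated interactions per round
        successes = sum(simulate_interaction(tolerance) for _ in range(batch))
        success_rate = successes / batch
        # Reinforcement-style adjustment toward a target success rate
        tolerance += 0.05 * (0.95 - success_rate)
        tolerance = min(max(tolerance, 0.0), 1.0)
    return tolerance

if __name__ == "__main__":
    print(f"final robustness parameter: {run_training():.2f}")
```

The point of the loop is that measurement and adjustment happen continuously against simulated traffic, rather than after failures in production.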

The platform played a significant role in the development of Agentforce Voice, giving AI agents the capacity to cope with unpredictable calls involving noise, varied accents and weak connections.

Thousands of simulated conversations enabled teams to identify problems early and deliver stronger performance.

The technology is also being tested with UCSF Health, where clinical experts are working with Salesforce to refine agents that support billing services. Only a portion of healthcare queries can typically be handled automatically, as much of the knowledge remains undocumented.

eVerse enhances coverage by enabling agents to adapt to complex cases through reinforcement learning, thereby improving performance across both routine and sophisticated tasks.

Salesforce describes eVerse as a milestone in a broader effort to achieve Enterprise General Intelligence. The goal is a form of AI designed for dependable business use, instead of the more creative outputs that dominate consumer systems.

It also argues that trust and consistency will shape the next stage of enterprise adoption and that real-world complexity must be mirrored during development to guarantee reliable deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reveals hidden messages in gut microbes

Researchers at the University of Tokyo in Japan have used AI to investigate the intricate world of gut bacteria and their chemical signals.

Their system, VBayesMM, utilises a Bayesian neural network to identify genuine connections between bacteria and human health that traditional methods often overlook.

The human gut contains roughly 100 trillion bacterial cells, which interact with human metabolism, immunity and brain function through thousands of chemical compounds called metabolites.

Using AI, scientists can map which bacteria influence specific metabolites, offering hope for personalised treatment strategies for conditions such as obesity, sleep disorders and cancer.

VBayesMM stands out by recognising uncertainty in its predictions, offering more reliable insights than conventional models.
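The VBayesMM code itself is not reproduced here. As a rough illustration of what uncertainty-aware prediction means, the sketch below fits a toy microbe-to-metabolite model and estimates uncertainty by randomly masking inputs at prediction time, a crude dropout-style stand-in for the variational Bayesian machinery the paper describes. All data and names are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: abundances of 5 bacterial taxa -> level of one metabolite
X = rng.random((200, 5))
true_w = np.array([1.5, 0.0, -2.0, 0.0, 0.5])  # only some taxa matter
y = X @ true_w + rng.normal(0, 0.1, 200)

W = np.linalg.lstsq(X, y, rcond=None)[0]  # point estimate of the links

def predict_with_uncertainty(x, n_samples=500, drop_p=0.3):
    """Randomly drop input taxa and collect the spread of predictions."""
    preds = []
    for _ in range(n_samples):
        mask = rng.random(5) > drop_p
        preds.append((x * mask) @ W / (1 - drop_p))  # rescale to keep the mean
    preds = np.array(preds)
    return preds.mean(), preds.std()  # mean prediction and its uncertainty

mean, std = predict_with_uncertainty(X[0])
print(f"predicted metabolite level: {mean:.2f} ± {std:.2f}")
```

A model that reports a spread alongside each prediction lets researchers discount links that look strong but are poorly supported, which is the advantage claimed over conventional point-estimate methods.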

Researchers plan to expand the system to analyse larger and more diverse datasets, aiming to identify bacterial targets for therapies or dietary interventions that could improve patient outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta pushes deeper into robotics with key hardware move

Meta is expanding its robotics ambitions by appointing Li-Chen Miller, previously head of its smart glasses portfolio, as the first product manager for Reality Labs’ robotics division. Her transfer marks a significant shift in Meta’s hardware priorities following the launch of its latest augmented reality devices.

The company is reportedly developing a humanoid assistant known internally as Metabot within the same organisation that oversees its AR and VR platforms. Former Cruise executive Marc Whitten leads the robotics group, supported by veteran engineer Ning Li and renowned MIT roboticist Sangbae Kim.

Miller’s move emphasises Meta’s aim to merge its AI expertise with physical robotics. The new team collaborates with the firm’s Superintelligence Lab, which is building a ‘world model’ capable of powering dextrous motion and real-time reasoning.

Analysts see the strategy as Meta’s attempt to future-proof its ecosystem and diversify Reality Labs, which continues to post heavy losses. The company’s growing investment in humanoid design could bring home-use robots closer to reality, blending social AI with the firm’s long-term vision for the metaverse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NotebookLM gains automated Deep Research tool and wider file support

Google is expanding NotebookLM with Deep Research, a tool designed to handle complex online inquiries and produce structured, source-grounded reports. The feature acts like a dedicated researcher, planning its own process and gathering material across the web.

Users can enter a question, choose a research style, and let Deep Research browse relevant sites before generating a detailed briefing. The tool runs in the background, allowing additional sources to be added without disrupting the workflow or leaving the notebook.

NotebookLM now supports more file types, including Google Sheets, Drive URLs, PDFs stored in Drive, and Microsoft Word documents. Google says this enables tasks such as summarising spreadsheets and quickly importing multiple Drive files for analysis.

The update continues the service’s gradual expansion since its late-2023 launch, which has brought features such as Video Overviews for turning dense materials into visual explainers. These follow earlier additions, such as Audio Overviews, which create podcast-style summaries of shared documents.

Google also released NotebookLM apps for Android and iOS earlier this year, extending access beyond desktop. The company says the latest enhancements should reach all users within a week.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New guidelines by Apple curb how apps send user data to external AI systems

Apple has updated its App Review Guidelines to require developers to disclose and obtain permission before sharing personal data with third-party AI systems. The company says the change enhances user control as AI features become more prevalent across apps.

The revision arrives ahead of Apple’s planned 2026 release of an AI-enhanced Siri, expected to take actions across apps and rely partly on Google’s Gemini technology. Apple is also moving to ensure external developers do not pass personal data to AI providers without explicit consent.

Rule 5.1.2(i) already limited the sharing of personal information without permission. The update adds explicit language naming third-party AI as a category that requires disclosure, reflecting growing scrutiny of how apps use machine learning and generative models.

The shift could affect developers who use external AI systems for features such as personalisation or content generation. Enforcement details remain unclear, as the term ‘AI’ encompasses a broad range of technologies beyond large language models.

Apple released several other guideline updates alongside the AI change, including support for its new Mini Apps Programme and amendments involving creator tools, loan products, and regulated services such as crypto exchanges.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ohanian predicts AI-driven jobs growth despite economic jitters

Reddit co-founder Alexis Ohanian says AI remains a durable long-term trend despite growing investor concern that the sector has inflated a market bubble. He argues the technology is now too deeply embedded in workflows to be dismissed as hype.

Tech stocks fell sharply on Thursday as uncertainty over US interest rate cuts prompted investors to seek safer assets. The Nasdaq Composite slid more than two percent, and the AI-driven Magnificent Seven posted broad losses, with Nvidia among the hardest-hit names.

Ohanian says valuations are not his focus but insists the underlying innovations are meaningful, pointing to faster software development as an example of measurable progress. He maintains confidence in technology trends even amid short-term market swings.

He also believes AI will create more roles than it eliminates, despite estimates that widespread adoption could disrupt up to seven percent of the US workforce. He argues that major technological shifts consistently open new career paths.

Ohanian notes that jobs once unimaginable, such as full-time online content creation, are now mainstream aspirations. He expects AI-led change to follow a similar pattern, delivering overall gains while acknowledging that the transition may be uneven.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Embodied AI steps forward with DeepMind’s SIMA 2 research preview

Google DeepMind has released a research preview of SIMA 2, an upgraded generalist agent that draws on Gemini’s language and reasoning strengths. The system moves beyond simple instruction following, aiming to understand user intent and interact more effectively with its environment.

SIMA 1 relied on game data to learn basic tasks across diverse 3D worlds but struggled with complex actions. DeepMind says SIMA 2 represents a step change, completing harder objectives in unfamiliar settings and adapting its behaviour through experience without heavy human supervision.

The agent is powered by the Gemini 2.5 Flash-Lite model and built around the idea of embodied intelligence, where an AI acts through a body and responds to its surroundings. Researchers say this approach supports a deeper understanding of context, goals, and the consequences of actions.

Demos show SIMA 2 describing landscapes, identifying objects, and choosing relevant tasks in titles such as No Man’s Sky. It also reveals its reasoning, interprets clues, uses emojis as instructions, and navigates photorealistic worlds generated by Genie, DeepMind’s own environment model.

Self-improvement comes from Gemini models that create new tasks and score attempts, enabling SIMA 2 to refine its abilities through trial and error. DeepMind sees these advances as groundwork for future general-purpose robots, though the team has not shared timelines for wider deployment.
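DeepMind has not released SIMA 2's training code; the sketch below is a hypothetical rendering of the loop described, in which one model proposes tasks, the agent attempts them, a scorer filters the attempts, and successes feed back into training. Every function name is illustrative, not a DeepMind API.

```python
import random

def propose_task() -> str:
    """Gemini-style task generation, reduced to a toy choice."""
    return random.choice(["collect wood", "find water", "build shelter"])

def attempt(task: str, skill: dict) -> float:
    """Return attempt quality in [0, 1]; practice raises the odds."""
    return min(1.0, skill.get(task, 0.1) + random.uniform(-0.05, 0.05))

def self_improve(rounds: int = 1000, threshold: float = 0.8) -> dict:
    skill, replay = {}, []
    for _ in range(rounds):
        task = propose_task()             # model-generated task
        score = attempt(task, skill)      # agent rollout in the environment
        if score >= threshold:            # model-scored filtering
            replay.append((task, score))  # keep successes as training data
        # Trial and error: each attempt slightly improves the relevant skill
        skill[task] = min(1.0, skill.get(task, 0.1) + 0.01)
    return skill

print(self_improve())
```

The key property is that both the curriculum and the reward signal come from models rather than human supervisors, which is what lets the agent keep improving in unfamiliar settings.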

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Firefox expands AI features with full user choice

Mozilla has outlined its vision for integrating AI into Firefox in a way that protects user choice instead of limiting it. The company argues that AI should be built like the open web, allowing people and developers to use tools on their own terms rather than being pushed into a single ecosystem.

Recent features such as the AI sidebar chatbot and Shake to Summarise on iOS reflect that approach.

The next step is an ‘AI Window’, a controlled space inside Firefox that lets users chat with an AI assistant while browsing. The feature is entirely optional, offers full control, and can be switched off at any time. Mozilla has opened a waitlist so users can test the feature early and help shape its development.

Mozilla believes browsers must adapt as AI becomes a more common interface to the web. The company argues that remaining independent allows it to prioritise transparency, accountability and user agency instead of the closed models promoted by competitors.

The goal is an assistant that enhances browsing and guides users outward to the wider internet rather than trapping them in isolated conversations.

Community involvement remains central to Mozilla’s work. The organisation is encouraging developers and users to contribute ideas and support open-source projects as it works to ensure Firefox stays fast, secure and private while embracing helpful forms of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Purdue and Google collaborate to advance AI research and education

Purdue University and Google are expanding their partnership to integrate AI into education and research, preparing the next generation of leaders while advancing technological innovation.

The collaboration was highlighted at the AI Frontiers summit in Indianapolis on 13 November. The event brought together university, industry, and government leaders to explore AI’s impact across sectors such as health care, manufacturing, agriculture, and national security.

Leaders from both organisations emphasised the importance of placing AI tools in the hands of students, faculty, and staff. Pending Board approval, Purdue plans an AI competency requirement for incoming students in fall 2026, ensuring all graduates gain practical experience with AI tools.

The partnership also builds on projects such as analysing data to improve road safety.

Purdue’s Institute for Physical Artificial Intelligence (IPAI), the nation’s first institute dedicated to AI in the physical world, plays a central role in the collaboration. The initiative focuses on physical AI, quantum science, semiconductors, and computing to equip students for AI-driven industries.

Google and Purdue emphasised responsible innovation and workforce development as critical goals of the partnership.

Industry figures from Waymo and Google Public Sector, joined by US Senator Todd Young, discussed how AI technologies such as autonomous drones and smart medical devices are transforming key sectors.

The partnership demonstrates the potential of public-private collaboration to accelerate AI research and prepare students for the future of work.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Stanford’s new AI model boosts liver transplant efficiency

Stanford Medicine researchers have developed a new machine learning model to make liver transplants more efficient by predicting whether a donor will die within the time frame necessary for organ viability.

Donation after circulatory death requires the donor to die within 30 to 45 minutes of life support removal; otherwise, surgeons often reject the liver due to increased risks for recipients. The model reduced futile procurements by 60%, outperforming surgeons' predictions.

The algorithm analyses a wide range of donor data, including vital signs, blood work, neurological reflexes, and ventilator settings. The model was trained on over 2,000 cases from six US transplant centres and can be customised for hospital procedures and surgeon preferences.
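Stanford's model is not public, but the prediction task it describes can be sketched with synthetic data: a classifier over donor features outputs a probability of death within the viability window, which each centre can threshold according to its own procedures. The features, labels, cohort size, and threshold below are all invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000  # roughly the reported cohort size, used here only for scale

# Stand-ins for vital signs, blood work, reflexes and ventilator settings
X = rng.normal(size=(n, 8))
# Toy label: a weighted mix of features decides "dies within the window"
y = (X @ rng.normal(size=8) + rng.normal(0, 0.5, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# A probability output lets each centre set its own decision threshold,
# echoing the customisation for hospital procedures and surgeon preferences
p = model.predict_proba(X_te)[:, 1]
threshold = 0.7  # a centre-specific choice, not a clinical recommendation
print(f"would proceed with procurement: {(p > threshold).mean():.0%} of cases")
```

Because the output is a probability rather than a yes/no call, the trade-off between futile procurements and missed opportunities stays in the hands of each transplant team.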

The model also features a natural language interface that extracts relevant medical record information, streamlining the transplant workflow.

Donation after circulatory death is becoming increasingly important as it helps narrow the gap between organ demand and availability. Normothermic machine perfusion devices preserve organs during transport, making such donations more feasible.

Researchers hope the model will also be adapted for heart and lung transplants, further expanding its potential to save lives.

Stanford researchers stress that better predictions could help more patients receive life-saving transplants. Ongoing refinements aim to decrease missed opportunities from just over 15% to around 10%, enhancing efficiency and patient outcomes in organ transplantation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!