AI and fusion combine to accelerate clean energy breakthroughs

A new research partnership between Google and Commonwealth Fusion Systems (CFS) aims to accelerate the development of clean, abundant fusion energy. Fusion, the process that powers the sun, promises virtually limitless clean energy, but achieving it on Earth requires stabilising plasma at over 100 million degrees Celsius.

The collaboration builds on prior AI research in controlling plasma with deep reinforcement learning. Google and CFS are applying AI to the SPARC tokamak, which uses high-temperature superconducting magnets and aims to demonstrate net energy gain from fusion.

AI tools such as TORAX, a fast and differentiable plasma simulator, allow millions of virtual experiments to optimise plasma behaviour before SPARC begins operations.
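The value of a fast, differentiable simulator is that operating settings can be tuned by gradient methods across many cheap virtual experiments. A minimal sketch of that idea, using a toy one-parameter "plasma model" with a made-up objective (this is illustrative maths only, not TORAX's physics or API):

```python
# Toy objective: performance of a single actuator setting p,
# peaking at p = 3.0. A differentiable simulator exposes gradients
# like d_performance, so settings can be tuned by gradient ascent.

def performance(p):
    return -(p - 3.0) ** 2 + 9.0

def d_performance(p):
    # Analytic derivative of the toy objective.
    return -2.0 * (p - 3.0)

p, lr = 0.0, 0.1
for _ in range(100):            # 100 cheap "virtual experiments"
    p += lr * d_performance(p)  # gradient ascent on the setting

print(round(p, 3))  # converges to the optimum at 3.0
```

Each loop iteration stands in for one simulated shot; the same pattern scales to many coupled settings when the simulator provides gradients automatically.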

AI is also being applied to find the most efficient operating paths for the tokamak, including optimising magnetic coils, fuel injection, and heat management.

Reinforcement learning agents can optimise energy output in real time while safeguarding the machine, potentially exceeding human-designed methods.
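The core pattern here is reward-driven tuning under a safety constraint. A deliberately simplified sketch using an epsilon-greedy bandit over discrete actuator settings, where settings flagged unsafe for the machine are never selected (real tokamak control uses deep RL policies; the settings, rewards and guard below are invented for illustration):

```python
import random

random.seed(0)

SETTINGS = [0, 1, 2, 3]
UNSAFE = {3}                    # guard: this setting risks the hardware
TRUE_REWARD = {0: 1.0, 1: 2.5, 2: 2.0, 3: 9.9}  # hidden from the agent

estimates = {s: 0.0 for s in SETTINGS}
counts = {s: 0 for s in SETTINGS}

for step in range(2000):
    safe = [s for s in SETTINGS if s not in UNSAFE]
    if random.random() < 0.1:                  # explore a safe setting
        s = random.choice(safe)
    else:                                      # exploit best safe estimate
        s = max(safe, key=lambda a: estimates[a])
    r = TRUE_REWARD[s] + random.gauss(0, 0.1)  # noisy observed reward
    counts[s] += 1
    estimates[s] += (r - estimates[s]) / counts[s]  # running mean update

# Setting 3 has the highest raw reward but is never tried: the safety
# filter is applied before action selection, not after.
print(max(estimates, key=estimates.get))
```

The design point is that the safety constraint sits inside the action-selection step, so the agent optimises output only over machine-safe operating choices.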

The partnership combines advanced AI with fusion hardware to develop intelligent, adaptive control systems for future clean and sustainable fusion power plants.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google and Salesforce deepen AI partnership across Agentforce 360 and Gemini Enterprise

Salesforce and Google have expanded their long-term partnership, introducing new integrations between Salesforce’s Agentforce 360 platform and Google’s Gemini Enterprise. The collaboration aims to enhance productivity and build a new foundation for intelligent, connected business operations.

Through the expansion, Gemini models now power Salesforce’s Atlas Reasoning Engine, combining multimodal intelligence with hybrid reasoning to improve how AI agents handle complex, multistep enterprise tasks.

These integrations also extend across Google Workspace, bringing Agentforce 360 capabilities directly into Gmail, Meet, Docs, Sheets and Drive for sales, service and IT teams.

Salesforce highlights that fine-tuned Gemini models outperform competing LLMs on key CRM benchmarks, enabling businesses to automate workflows more reliably and consistently.

The companies also reaffirm their commitment to open standards like Model Context Protocol and Agent2Agent, allowing multi-agent collaboration and interoperability across enterprise systems.

The partnership also integrates Gemini Enterprise with Slack’s real-time search API, enabling users to draw insights directly from organisational data within conversations.

Both companies stress that these advances mark a major step toward an ‘Agentic Enterprise’, where AI systems work alongside people to drive innovation, improve service quality and streamline decision-making.


Nurses gain AI support as Microsoft evolves Dragon Copilot in healthcare

Microsoft has announced major AI upgrades to Dragon Copilot, its healthcare assistant, extending ambient and generative AI capabilities to nursing workflows and third-party partner integrations.

The update is designed to improve patient journeys, reduce administrative workloads and enhance efficiency across healthcare systems.

The new features allow partners to integrate their own AI applications directly into Dragon Copilot, helping clinicians access trusted information, automate documentation and streamline financial management without leaving their workflow.

Partnerships with Elsevier, Wolters Kluwer, Atropos Health, Canary Speech and others will provide real-time decision support, clinical insights and revenue cycle automation.

Microsoft is also introducing the first commercial ambient AI solution built for nurses, designed to reduce burnout and enhance care quality.

The technology automatically records nurse-patient interactions and transforms them into editable documentation for electronic health records, saving time and supporting accuracy.

Nurses can also access medical content within the same interface and automate note-taking and summaries, allowing greater focus on patient care.

The company says these developments mark a new phase in its AI strategy for healthcare, strengthening its collaboration with providers and partners.

Microsoft aims to make clinical workflows more connected, reliable and human-centred, while supporting safe, evidence-based decision-making through its expanding ecosystem of AI tools.


AI predicts how cells respond to drugs and genes

KAIST researchers have developed AI that predicts cell responses to drugs and genes, with potential to transform drug discovery, cancer therapy, and regenerative medicine. The method models cell-drug interactions in a modular ‘Lego block’ approach, enabling analysis of previously untested combinations.

The AI separates representations of cell states and drug effects in a ‘latent space’ and recombines them to forecast reactions. The system can predict gene effects on cells, providing a quantitative view of drug and genetic impacts.
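The 'Lego block' idea is that cell states and drug effects live as separate vectors in a shared latent space, so an untested (cell, drug) pair can be scored by recombining the two. A toy sketch with invented two-dimensional vectors (not the KAIST model's actual representation or values):

```python
# Disentangled latent vectors: one set for cell states, one for the
# shift each drug applies in latent space. (All numbers are made up.)
cell_states = {
    "colorectal_tumour": [1.0, -0.5],
    "normal_colon":      [0.2,  0.4],
}
drug_effects = {
    "drug_A": [-0.8, 0.9],
}

def predict(cell, drug):
    # Recombine: add the drug's latent shift to the cell's latent state.
    c = cell_states[cell]
    d = drug_effects[drug]
    return [round(ci + di, 6) for ci, di in zip(c, d)]

# A previously untested combination is forecast without a new lab
# experiment; here the predicted state lands on 'normal_colon'.
print(predict("colorectal_tumour", "drug_A"))  # → [0.2, 0.4]
```

Because the blocks are separated, any stored cell vector can be combined with any stored drug vector, which is what enables analysis of combinations never measured directly.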

Validation using real experimental data demonstrated the AI’s ability to identify molecular targets that restored colorectal cancer cells to a normal-like state.

Beyond cancer treatment, the platform is versatile, capable of predicting diverse cell-state transitions and drug responses. The technology shows how drugs work inside cells, offering a powerful tool to design therapies that guide cells toward desired outcomes.

The study, led by Professor Kwang-Hyun Cho with his KAIST team, was published in Cell Systems and supported by the National Research Foundation of Korea. Researchers highlight the AI framework’s broad use, from restoring cells to developing new therapies.


Microsoft warns of a surge in ransomware and extortion incidents

Financially motivated cybercrime now accounts for the majority of global digital threats, according to Microsoft’s latest Digital Defense Report.

The company’s analysts found that over half of all cyber incidents with known motives in the past year were driven by extortion or ransomware, while espionage represented only a small fraction.

Microsoft warns that automation and accessible off-the-shelf tools have allowed criminals with limited technical skills to launch widespread attacks, making cybercrime a constant global threat.

The report reveals that attackers increasingly target critical services such as hospitals and local governments, where weak security and urgent operational demands make them easy targets.

Cyberattacks on these sectors have already led to real-world harm, from disrupted emergency care to halted transport systems. Microsoft highlights that collaboration between governments and private industry is essential to protect vulnerable sectors and maintain vital services.

While profit-seeking criminals dominate by volume, nation-state actors are also expanding their reach. State-sponsored operations are growing more sophisticated and unpredictable, with espionage often intertwined with financial motives.

Some state actors even exploit the same cybercriminal networks, complicating attribution and increasing risks for global organisations.

Microsoft notes that AI is being used by both attackers and defenders. Criminals are employing AI to refine phishing campaigns, generate synthetic media and develop adaptive malware, while defenders rely on AI to detect threats faster and close security gaps.

The report urges leaders to prioritise cybersecurity as a strategic responsibility, adopt phishing-resistant multifactor authentication, and build strong defences across industries.

Security, Microsoft concludes, must now be treated as a shared societal duty rather than an isolated technical task.


Music and AI unite to power new mental health innovations

MIT PhD student Kimaya Lecamwasam is blending neuroscience, artificial intelligence, and music to pioneer new approaches to mental health care. Her research explores how music impacts the brain and emotions to develop scalable non-drug therapies.

Lecamwasam combines her neuroscience background and love of music to study how performances, composition, and listening affect emotional and physical well-being. Her work validates music as a mental health tool and explores AI-generated music for therapy.

Lecamwasam studies AI- and human-composed music, exploring ethical ways to use AI in emotional health without compromising creativity. She collaborates with institutions like Carnegie Hall and Myndstream to test music-based applications in real-world settings.

Beyond research, Lecamwasam contributes to building supportive communities at MIT. Through mentoring and student initiatives, she promotes inclusion and collaboration among emerging scientists and artists who share her belief in music’s power to heal and connect.


Oracle launches embedded AI Agent Marketplace in Fusion Applications

Oracle has announced substantial enhancements to its AI Agent Studio for Fusion Applications, introducing a native AI Agent Marketplace, broader LLM support, and advanced agent tooling and governance features.

The AI Agent Marketplace is embedded within Fusion Applications, allowing customers to browse, test and deploy partner-built, Oracle-validated agents directly within their enterprise workflows. These agents can supplement or replace built-in agents to address industry-specific tasks.

Oracle is also expanding support for external large language models: customers and partners can now select from providers including OpenAI, Anthropic, Cohere, Google, Meta and xAI. This gives flexibility in choosing which LLM best fits a given use case.

New capabilities in Agent Studio include MCP support for integrating agents with third-party data systems, agent cards for cross-agent communication and collaboration, a credential store for secure access to external APIs, a monitoring dashboard, and agent tracing and performance metrics for observability.

Agent Studio will also offer prompt libraries and version control for managing agent prompts across their lifecycle, workflow chaining and deterministic execution for organising multi-step agent tasks, and human-in-the-loop support to combine automation with oversight.
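Workflow chaining with human-in-the-loop oversight is a generic pattern: agent steps run in a fixed, deterministic order, and a designated gate pauses the chain for human approval before it continues. A minimal sketch of that pattern (function names and the approval stand-in are invented; this is not Oracle Agent Studio's actual API):

```python
# Two agent steps plus a human approval gate, chained deterministically.

def draft_invoice(ctx):
    ctx["invoice"] = f"Invoice for {ctx['customer']}"
    return ctx

def send_invoice(ctx):
    ctx["sent"] = True
    return ctx

def human_approves(ctx):
    # Stand-in for a human reviewer; always approves in this demo.
    return True

CHAIN = [draft_invoice, ("approve", human_approves), send_invoice]

def run(ctx):
    for step in CHAIN:                  # deterministic execution order
        if isinstance(step, tuple):     # human-in-the-loop gate
            _, gate = step
            if not gate(ctx):
                ctx["halted"] = True    # stop the chain, keep partial work
                return ctx
        else:
            ctx = step(ctx)
    return ctx

print(run({"customer": "Acme"})["sent"])  # True when approved
```

The gate sits between steps rather than inside them, so automation and oversight compose without the individual agents needing to know about each other.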

Oracle also highlights its network of 32,000 certified experts trained in building AI agents via Agent Studio. These experts can help customers optimise use, extend the marketplace, and ensure agent quality and safety.

Overall, Oracle’s release positions its Fusion ecosystem as a more open, flexible, and enterprise-ready platform for AI agent deployment, balancing embedded automation with extensibility and governance.


New method helps AI models locate personalised objects in scenes

MIT and the MIT-IBM Watson AI Lab have developed a training approach that enables generative vision-language models to localise personalised objects (for example, a specific cat) across new scenes, a task at which they previously performed poorly.

While vision-language models (VLMs) are good at recognising generic object categories (dogs, chairs, etc.), they struggle when asked to point out your specific dog or chair under different conditions.

To remedy this, the researchers framed a fine-tuning regime using video-tracking datasets, where the same object appears in multiple frames.

Crucially, they used pseudo-names (e.g. ‘Charlie’) instead of real object names to prevent the model from relying on memorised label associations. This encourages it to reason about context, scene layout, appearance cues, and relative position, rather than shortcut to category matches.

AI models trained with the method showed a 12% average improvement in personalised localisation. In some settings, especially with pseudo-naming, gains reached 21%. Importantly, this enhanced ability did not degrade the model’s overall object recognition performance.

Potential applications include smart home cameras recognising your pet, assistive devices helping visually impaired users find items, robotics, surveillance, and ecological monitoring (e.g. tracking particular animals). The approach helps models better generalise from a few example images rather than needing full retraining for each new object.


Adaptive optics meets AI for cellular-scale eye care

AI is moving from lab demos to frontline eye care, with clinicians using algorithms alongside routine fundus photos to spot disease before symptoms appear. The aim is simple: catch diabetic retinopathy early enough to prevent avoidable vision loss and speed referrals for treatment.

New imaging workflows pair adaptive optics with machine learning to shrink scan times from hours to minutes while preserving single-cell detail. At the US National Eye Institute, models recover retinal pigment epithelium features and clean noisy OCT data to make standard scans more informative.

Duke University’s open-source DCAOSLO goes further by combining multiplexed light signals with AI to capture cellular-scale images quickly. The approach eases patient strain and raises the odds of getting diagnostic-quality data in busy clinics.

Clinic-ready diagnostics are already changing triage. LumineticsCore, the first FDA-cleared AI to detect more-than-mild diabetic retinopathy from primary-care images, flags who needs urgent referral in seconds, enabling earlier laser or pharmacologic therapy.

Researchers also see the retina as a window on wider health, linking vascular and choroidal biomarkers to diabetes, hypertension and cardiovascular risk. Standardised AI tools promise more reproducible reads, support for trials and, ultimately, home-based monitoring that extends specialist insight beyond the clinic.


AI system links hidden signals in patient records to improve diagnosis

Researchers at Mount Sinai and UC Irvine have developed a novel AI system, InfEHR, which creates a dynamic network of an individual’s medical events and relationships over time. The system detects disease patterns that traditional approaches often miss.

InfEHR transforms time-ordered data (visits, labs, medications and vital signs) into a graphical network for each patient. It then learns which combinations of clues across that network tend to correlate with hidden disease states.
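Turning a time-ordered record into a per-patient graph can be sketched very simply: each event becomes a node and consecutive events are linked, after which a model can reason over combinations of connected clues. The events and structure below are invented for illustration and are not InfEHR's actual representation:

```python
from collections import defaultdict

events = [  # one patient's time-ordered record (made-up data)
    ("visit", "2024-01-02"),
    ("lab:lactate_high", "2024-01-02"),
    ("med:antibiotic", "2024-01-03"),
    ("vital:fever", "2024-01-03"),
]

graph = defaultdict(set)
for (a, _), (b, _) in zip(events, events[1:]):
    graph[a].add(b)   # temporal edge between consecutive events
    graph[b].add(a)   # undirected, so signals can flow both ways

# A model can now weigh linked clues jointly, e.g. a high lactate
# adjacent to both a clinic visit and an antibiotic order.
print(sorted(graph["lab:lactate_high"]))
```

Because each patient gets their own graph, the downstream model adapts its reasoning to that patient's particular pattern of events rather than applying one fixed rule set.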

In testing, with only a few physician-annotated examples, the AI system identified neonatal sepsis without positive blood cultures at rates 12–16× higher than current methods, and post-operative kidney injury with 4–7× more sensitivity than baseline clinical rules.

As a safety feature, InfEHR can also respond ‘not sure’ when the record lacks enough signal, reducing the risk of overconfident errors.

Because it adapts its reasoning per patient rather than applying the same rules to all, InfEHR shows promise for personalised diagnostics across hospitals and populations, even with relatively small annotated datasets.
