How Microsoft is teaching AI to understand biological systems

Medicine still relies largely on population averages, even though genetic and cellular differences shape how diseases develop and respond to treatment.

Researchers at Microsoft argue that AI could transform healthcare by learning the language of biology and enabling truly personalised medicine instead of one-size-fits-all therapies.

Ava Amini, principal researcher at Microsoft Research, explains that AI can detect biological patterns at a scale impossible for human analysis.

Single cancer biopsies can generate tens of millions of data points, allowing AI models to identify meaningful signals and support precision treatment strategies tailored to individual patients.

Building on decades of biological coding systems, Microsoft has developed generative models such as EvoDiff and the Dayhoff Atlas to design new proteins using biological language.

Lab testing has shown a marked improvement in functional success, demonstrating that AI-driven protein design is moving beyond theory into real-world application.

Challenges remain in modelling entire human cells, where current AI systems still predict averages rather than biological diversity. Microsoft researchers continue to pursue integrated experimental and computational approaches, aiming to bring precision oncology closer to everyday clinical practice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global data centre investment hits record $61bn

Investment in data centres worldwide reached a record $61bn in 2025, according to a new report from S&P Global. The surge is being driven by growing demand for AI workloads, with construction and expansion showing little sign of slowing.

Analysts describe the market as a ‘global construction frenzy’ as companies race to meet rising hardware and energy requirements.

The report highlights that investors, unable to buy existing facilities, are increasingly turning to new builds. The sector, with 500 data centres in the UK and 4,000 in the US, is projected to expand faster over the next five years than the previous five.

The AI boom is pushing energy- and compute-intensive workloads to new extremes.

Concerns are emerging about potential overspending in the AI sector. Analysts note that companies like OpenAI, Oracle, and Nvidia are investing heavily despite uncertain returns.

OpenAI is expected to spend $143bn between 2024 and 2029, raising questions about profitability even as the spending holds potential for major innovation. The rapid expansion of data centres also carries significant energy implications.

The International Energy Agency forecasts data centre electricity demand could more than double by 2030, matching Japan’s current total consumption and underscoring the scale needed for AI growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ripple transforms cross-border payments with XRP

Cross-border payments have long struggled with delays and high costs, and blockchain-based systems could transform networks such as SWIFT. Ripple, launched by Ripple Labs in 2012, enables faster, more transparent, and cost-effective international transfers.

RippleNet, the company’s unified payment network, connects multiple banks via the Interledger standard, removing intermediaries and enabling near-instant settlement. XRP, Ripple’s digital token, acts as a bridge currency to provide liquidity, though transactions can occur without it.

XRP boasts low fees, high scalability, and settlement times of just a few seconds.
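The bridge-currency idea can be sketched in a few lines: the sender's currency is converted into XRP, the XRP moves across the ledger for a tiny network fee, and the recipient receives their local currency. The rates, fee value, and function below are illustrative assumptions for exposition, not Ripple's actual API or live market data.

```python
# Hypothetical sketch of bridge-currency settlement. The exchange rates and
# the bridge_transfer helper are invented for illustration; the XRPL fee is
# an approximate typical value, not a quote.

XRP_NETWORK_FEE = 0.00001  # approximate XRPL transaction cost, in XRP


def bridge_transfer(amount, src_rate_xrp, dst_rate_xrp):
    """Convert `amount` of a source currency into a destination currency
    using XRP as the intermediate (bridge) asset.

    src_rate_xrp: XRP received per unit of source currency
    dst_rate_xrp: XRP required per unit of destination currency
    """
    xrp = amount * src_rate_xrp   # source currency -> XRP
    xrp -= XRP_NETWORK_FEE        # the small XRPL transaction fee is burned
    return xrp / dst_rate_xrp     # XRP -> destination currency


# Example: send 1,000 USD to a EUR recipient at made-up rates.
received_eur = bridge_transfer(1000, src_rate_xrp=2.0, dst_rate_xrp=2.2)
```

Because the only on-ledger cost is the tiny burned fee, nearly all of the value survives the two conversions, which is why a bridge asset can undercut correspondent-banking chains where each intermediary takes a cut.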

Since its creation, Ripple has evolved from individual protocols into the unified RippleNet platform, supported by the XRPL Foundation. Unlike Bitcoin, XRP is premined and relies on a select group of validators, giving it a different, more centralised governance model.

The network also supports broader financial applications, including central bank digital currencies, DeFi, and NFTs.

Despite its potential, investing in Ripple carries risks typical of crypto assets, including volatility, lack of regulation, and complexity. Investors are advised to research thoroughly and limit high-risk exposure to ensure a diversified portfolio.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU moves to extend child abuse detection rules

The European Commission has proposed extending the Interim Regulation that allows online service providers to voluntarily detect and report child sexual abuse, avoiding a legal gap once the current rules expire.

These measures would preserve existing safeguards while negotiations on permanent legislation continue.

The Interim Regulation enables providers of certain communication services to identify and remove child sexual abuse material under a temporary exemption from e-Privacy rules.

Without an extension beyond April 2026, voluntary detection would have to stop, making it easier for offenders to share illegal material and groom children online.

According to the Commission, proactive reporting by platforms has played a critical role for more than fifteen years in identifying abuse and supporting criminal investigations. Extending the interim framework until April 2028 is intended to maintain these protections until long-term EU rules are agreed.

The proposal now moves to the European Parliament and the Council, with the Commission urging swift agreement to ensure continued protection for children across the Union.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mac users lose ChatGPT voice access in 2026

OpenAI has confirmed that Voice interactions will stop working in the ChatGPT macOS app as of 15 January 2026, affecting users who rely on spoken conversations instead of typing.

The company states that the change is part of a broader effort to streamline voice experiences across its platforms.

Currently, the Mac app allows hands-free, real-time conversations with ChatGPT. After the deadline, voice functionality will remain accessible through chatgpt.com, as well as on iOS, Android, and the Windows app. OpenAI stresses that no other macOS features will be removed.

According to OpenAI, recent updates have already brought Voice mode closer to standard chat interactions on mobile and the web, allowing users to review earlier messages and engage with visual content while speaking.

The company has suggested that the existing macOS Voice feature may not support its next-generation approach.

Mac users will be able to continue using Voice mode until mid-January 2026. After this date, voice-based interactions will require switching to other supported platforms until a potential macOS update is introduced.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TSA introduces a fee for travellers without ID

From 1 February, the US Transportation Security Administration will charge a $45 fee to travellers who arrive at airports without a valid form of identification, such as a REAL ID or passport.

The measure is linked to the rollout of a new alternative identity verification system designed to modernise security checks.

The fee applies to passengers using TSA Confirm.ID, a process that may involve biometric or biographic verification. Even after payment, access to the secure area is not guaranteed; the non-refundable charge remains valid for ten days.

According to the TSA, the policy ensures that travellers without sufficient identification, rather than taxpayers, bear the cost of verification. Officials have urged passengers to obtain a REAL ID or other approved documentation to avoid delays or missed flights.

The agency has indicated that travellers will be encouraged to pay the fee online before arrival. At the same time, further details are expected on how advance payment and verification will operate across different airports.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK plans ban on deepfake AI nudification apps

Britain plans to ban AI-nudification apps that digitally remove clothing from images. Creating or supplying these tools would become illegal under new proposals.

The offence would build on existing UK laws covering non-consensual sexual deepfakes and intimate image abuse. Technology Secretary Liz Kendall said developers and distributors would face harsh penalties.

Experts warn that nudification apps cause serious harm, particularly when used to create child sexual abuse material. Children’s Commissioner Dame Rachel de Souza has called for a total ban on the technology.

Child protection charities welcomed the move but want more decisive action from tech firms. The government said it would work with companies to stop children from creating or sharing nude images.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon considers $10 billion investment in OpenAI

Amazon is reportedly considering a $10 billion investment in OpenAI, highlighting its growing focus on the generative AI market. The investment follows OpenAI’s October restructuring, giving it more flexibility to raise funds and form new tech partnerships.

OpenAI has recently secured major infrastructure agreements, including a $38 billion cloud computing deal with Amazon Web Services (AWS). Deals with Nvidia, AMD, and Broadcom boost OpenAI’s access to computing power for its AI development.

Amazon has invested $8 billion in Anthropic and continues developing AI hardware through AWS’s Inferentia and Trainium chips. The move into OpenAI reflects Amazon’s strategy to expand its influence across the AI sector.

Microsoft’s exclusivity over OpenAI, tied to its prior $13 billion investment, has ended, enabling OpenAI to pursue new partnerships. The combination of fresh funding, cloud capacity, and hardware support positions OpenAI for continued growth in the AI industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Kimwolf Android botnet linked to record-breaking DDoS attacks

Cybersecurity researchers have uncovered a rapidly expanding Android botnet known as Kimwolf, which has already compromised approximately 1.8 million devices worldwide.

The malware primarily targets smart TVs, set-top boxes, and tablets connected to residential networks, with infections concentrated in countries including Brazil, India, the US, Argentina, South Africa, and the Philippines.

Analysis by QiAnXin XLab indicates that Kimwolf demonstrates a high degree of operational resilience.

Despite multiple disruptions to its command-and-control infrastructure, the botnet has repeatedly re-emerged with enhanced capabilities, including the adoption of Ethereum Name Service to harden its communications against takedown efforts.

Researchers also identified significant similarities between Kimwolf and AISURU, one of the most powerful botnets observed in recent years. Shared source code, infrastructure, and infection scripts suggest both botnets are operated by the same threat group and have coexisted on large numbers of infected devices.

AISURU has previously drawn attention for launching record-setting distributed denial-of-service attacks, including traffic peaks approaching 30 terabits per second.

The emergence of Kimwolf alongside such activity highlights the growing scale and sophistication of botnet-driven cyber threats targeting global internet infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Healthcare faces growing compliance pressure from AI adoption

AI is becoming a practical tool across healthcare as providers face rising patient demand, chronic disease and limited resources.

These AI systems increasingly support tasks such as clinical documentation, billing, diagnostics and personalised treatment instead of relying solely on manual processes, allowing clinicians to focus more directly on patient care.

At the same time, AI introduces significant compliance and safety risks. Algorithmic bias, opaque decision-making, and outdated training data can affect clinical outcomes, raising questions about accountability when errors occur.

Regulators are signalling that healthcare organisations cannot delegate responsibility to automated systems and must retain meaningful human oversight over AI-assisted decisions.

Regulatory exposure spans federal and state frameworks, including HIPAA privacy rules, FDA oversight of AI-enabled medical devices and enforcement under the False Claims Act.

Healthcare providers are expected to implement robust procurement checks, continuous monitoring, governance structures and patient consent practices as AI regulation evolves towards a more coordinated national approach.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!