IBM unveils Digital Asset Haven for secure institutional blockchain management

IBM has introduced Digital Asset Haven, a unified platform designed for banks, corporations, and governments to securely manage and scale their digital asset operations. The platform manages the full asset lifecycle from custody to settlement while maintaining compliance.

Built with Dfns, the platform combines IBM’s security framework with Dfns’ custody technology. The Dfns platform supports 15 million wallets for 250 clients, providing multi-party authorisation, policy governance, and access to over 40 blockchains.

IBM Digital Asset Haven includes tools for identity verification, crime prevention, yield generation, and developer-friendly APIs for extra services. Security features include Multi-Party Computation, HSM-based signing, and quantum-safe cryptography to ensure compliance and resilience.

According to IBM’s Tom McPherson, the platform gives clients ‘the opportunity to enter and expand into the digital asset space backed by IBM’s level of security and reliability.’ Dfns CEO Clarisse Hagège said the partnership builds infrastructure to scale digital assets from pilots to global use.

IBM plans to roll out Digital Asset Haven via SaaS and hybrid models in late 2025, with on-premises deployment expected in 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FDA and patent law create dual hurdles for AI-enabled medical technologies

AI reshapes healthcare by powering more precise and adaptive medical devices and diagnostic systems.

Yet, innovators face two significant challenges: navigating the US Food and Drug Administration’s evolving regulatory framework and overcoming legal uncertainty under US patent law.

These two systems, although interconnected, serve different goals. The FDA protects patients, while patent law rewards invention.

The FDA’s latest guidance seeks to adapt oversight for AI-enabled medical technologies that change over time. Its framework for predetermined change control plans allows developers to update AI models without resubmitting complete applications, provided updates stay within approved limits.

This approach promotes innovation while maintaining transparency, bias control, and post-market safety. By clarifying how adaptive AI devices can evolve safely, the FDA aims to balance accountability with progress.

Patent protection remains more complex. US courts continue to exclude non-human inventors, creating tension when AI contributes to discoveries.

Legal precedents such as Thaler v. Vidal and Alice Corp. v. CLS Bank limit patent eligibility for algorithms or diagnostic methods that resemble abstract ideas or natural laws. To secure patents, companies must show human-led innovation and technical improvement beyond routine computation.

Aligning regulatory and intellectual property strategies is now essential. Developers who engage regulators early, design flexible change control plans and coordinate patent claims with development timelines can reduce risk and accelerate market entry.

Integrating these processes helps ensure AI technologies in healthcare advance safely while preserving inventors’ rights and innovation incentives.

AMD powers US AI factory supercomputers for national research

The US Department of Energy and AMD are joining forces to expand America’s AI and scientific computing power through two new supercomputers at Oak Ridge National Laboratory.

Named Lux and Discovery, the systems will drive the country’s sovereign AI strategy, combining public and private investment worth around $1 billion to strengthen research, innovation, and security infrastructure.

Lux, arriving in 2026, will become the nation’s first dedicated AI factory for science.

Built with AMD’s EPYC CPUs and Instinct GPUs alongside Oracle and HPE technologies, Lux will accelerate research across materials, medicine, and advanced manufacturing, supporting the US AI Action Plan and boosting the Department of Energy’s AI capacity.

Discovery, set for deployment in 2028, will deepen collaboration between the DOE, AMD, and HPE. Powered by AMD’s next-generation ‘Venice’ CPUs and MI430X GPUs, Discovery will train and deploy AI models on secure US-built systems, protecting national data and competitiveness.

It aims to deliver faster energy, biology, and national security breakthroughs while maintaining high efficiency and open standards.

AMD’s CEO, Dr Lisa Su, said the collaboration represents the best of public-private partnership, advancing the nation’s foundation for science and innovation.

US Energy Secretary Chris Wright described the initiative as proof that America leads when government and industry work together toward shared AI and scientific goals.

Virginia’s data centre boom divides residents and industry

Loudoun County in Virginia, known as Data Center Alley, now hosts nearly 200 data centres powering much of the world’s internet and AI infrastructure. Their growth has brought vast economic benefits but stirred concerns about noise, pollution, and rising energy bills for nearby residents.

The facilities occupy about 3% of the county’s land yet generate 40% of its tax revenue. Locals say the constant humming and industrial sprawl have driven away wildlife and inflated electricity costs, which have surged by over 250% in five years.

Despite opposition, new US and global data centre projects continue to receive state support. The industry contributes $5.5 billion annually to Virginia’s economy and sustains around 74,000 jobs. Additionally, President Trump’s administration recently pledged to accelerate permits.

Residents like Emily Kasabian argue the expansion is eroding community life, replacing trees with concrete and machinery to fuel AI. Activists are now lobbying for construction pauses, warning that unchecked development threatens to transform affluent suburbs beyond recognition.

Qualcomm and HUMAIN power Saudi Arabia’s AI transformation

HUMAIN and Qualcomm Technologies have launched a collaboration to deploy advanced AI infrastructure in Saudi Arabia, aiming to position the Kingdom as a global hub for AI.

Announced ahead of the Future Investment Initiative conference, the project will deliver the world’s first fully optimised edge-to-cloud AI system, expanding Saudi Arabia’s capabilities in regional and global AI inference services.

In 2026, HUMAIN plans to deploy 200 megawatts of Qualcomm’s AI200 and AI250 rack solutions to power large-scale AI inference services.

The partnership combines HUMAIN’s regional infrastructure and full AI stack with Qualcomm’s semiconductor expertise, creating a model for nations seeking to develop sovereign AI ecosystems.

The initiative will also integrate HUMAIN’s Saudi-developed ALLaM models with Qualcomm’s AI platforms, offering enterprise and government customers tailor-made solutions for industry-specific needs.

The collaboration supports Saudi Arabia’s strategy to drive economic growth through AI and semiconductor innovation, reinforcing its ambition to lead the next wave of global intelligent computing.

Qualcomm’s CEO Cristiano Amon said the partnership would help the Kingdom build a technology ecosystem to accelerate its AI ambitions.

HUMAIN CEO Tareq Amin added that combining local insight with Qualcomm’s product leadership will establish Saudi Arabia as a key player in global AI and semiconductor development.

UN cybercrime treaty signed in Hanoi amid rights concerns

Around 73 countries signed a landmark UN cybercrime convention in Hanoi, seeking faster cooperation against online crime. Leaders cited trillions in annual losses from scams, ransomware, and trafficking. The pact enters into force after 40 ratifications.

UN supporters say the treaty will streamline evidence sharing, extradition requests, and joint investigations. Provisions target phishing, ransomware, online exploitation, and hate speech. Backers frame the deal as a boost to global security.

Critics warn the text’s breadth could criminalise security research and dissent. The Cybersecurity Tech Accord called it a surveillance treaty. Activists fear expansive data sharing with weak safeguards.

The UNODC argues the agreement includes rights protections and space for legitimate research. Officials say oversight and due process remain essential. Implementation choices will decide outcomes on the ground.

AI deepfake videos spark ethical and environmental concerns

Deepfake videos created by AI platforms like OpenAI’s Sora have gone viral, generating hyper-realistic clips of deceased celebrities and historical figures in often offensive scenarios.

Families of figures like Dr Martin Luther King Jr have publicly appealed to AI firms to prevent the use of their loved ones’ likenesses, highlighting ethical concerns around the technology.

Beyond the emotional impact, Dr Kevin Grecksch of Oxford University warns that producing deepfakes carries a significant environmental footprint. Instead of occurring on phones, video generation happens in data centres that consume vast amounts of electricity and water for cooling, often at industrial scales.

The surge in deepfake content has been rapid, with Sora downloaded over a million times in five days. Dr Grecksch urges users to consider the environmental cost, suggesting more integrated thinking about where data centres are built and how they are cooled to minimise their impact.

As governments promote AI growth zones such as South Oxfordshire, questions remain over sustainable infrastructure. Users are encouraged to balance technological enthusiasm with environmental mindfulness, recognising the hidden costs behind creating and sharing AI-generated media.

Google expands Earth AI for disaster response and environmental monitoring

US tech giant Google has expanded access to Earth AI, a platform built on decades of geospatial modelling combined with Gemini’s advanced reasoning.

Enterprises, cities, and nonprofits can now rapidly analyse environmental and disaster-related data, enabling faster, informed decisions to protect communities.

During the 2025 California wildfires, Google’s AI helped alert millions and guide them to safety, showing the potential of Earth AI in crisis response.

A key feature, Geospatial Reasoning, allows the AI to connect multiple models (such as satellite imagery, population maps, and weather forecasts) to assess which communities and infrastructure are most at risk.

Instead of manual data analysis, organisations can now identify vulnerable areas and prioritise relief efforts in minutes.

Earth AI now includes tools to detect patterns in satellite imagery, such as drying rivers, harmful algae blooms, or vegetation encroachment on infrastructure. These insights support environmental monitoring and early warnings, letting authorities respond before disasters escalate.

The models are available on Google Cloud to Trusted Testers, allowing integration with external datasets for tailored analysis.

Several organisations are already leveraging Earth AI for the public good. WHO AFRO uses it to monitor cholera risks in the Democratic Republic of Congo, while Planet and Airbus analyse satellite imagery for deforestation and power line safety.

Bellwether uses Earth AI for hurricane prediction, enabling faster insurance claim processing and recovery. Google aims to make these tools broadly accessible to support global crisis management, public health, and environmental protection.

NVIDIA boosts open-source robotics with new ROS 2 and Physical AI contributions

At the ROSCon conference in Singapore, NVIDIA unveiled significant open-source contributions to accelerate the future of robotics.

The company announced updates to the ROS 2 framework, new partnerships within the Open Source Robotics Alliance, and the latest release of NVIDIA Isaac ROS 4.0, all designed to strengthen collaboration in robotics development.

NVIDIA’s involvement in the new Physical AI Special Interest Group aims to enhance real-time robot control and AI processing efficiency.

Its integration of GPU-aware abstractions into ROS 2 allows the framework to handle both CPUs and GPUs seamlessly, ensuring faster and more consistent performance for robotic systems.

Additionally, the company open-sourced Greenwave Monitor, which helps developers quickly identify and fix performance bottlenecks. NVIDIA Isaac ROS 4.0, now available on the Jetson Thor platform, provides GPU-accelerated AI models and libraries to power robot mobility and manipulation.

Global robotics leaders, including AgileX, Canonical, Intrinsic, and Robotec.ai, are already deploying NVIDIA’s open-source tools to enhance simulation, digital twins, and real-world testing.

NVIDIA’s initiatives reinforce its role as a core contributor to the open-source robotics ecosystem and the development of physical AI.

AI market surge raises alarm over financial stability

AI has become one of the dominant forces in global markets, with AI-linked firms now making up around 44% of the S&P 500’s market capitalisation. Their soaring valuations have pushed US stock indices near levels last seen in the dot-com bubble.

While optimism remains high, the future is uncertain. AI’s infrastructure demands are immense, with estimates suggesting that trillions of dollars will be needed to build and power new data centres by 2030.

Much of this investment is expected to be financed through debt, increasing exposure to potential market shocks. Analysts warn that any slowdown in AI progress or monetisation could trigger sharp corrections in AI-related asset prices.

The Bank of England has noted that financial stability risks could rise if AI infrastructure expansion continues at its current pace. Banks and private credit funds may face growing exposure to highly leveraged sectors, while power and commodity markets could also come under strain from surging AI energy needs.

Although AI remains a powerful growth driver for the US economy, its rapid expansion is creating new systemic vulnerabilities. Policymakers and financial institutions are urged to monitor the sector closely as the next phase of AI-driven growth unfolds.
