NVIDIA and Nokia join forces to build the AI platform for 6G

Nokia and NVIDIA have announced a $1 billion partnership to develop an AI-powered platform that will drive the transition from 5G to 6G networks.

The collaboration will create next-generation AI-RAN systems, combining computing, sensing and connectivity to transform how US mobile networks process data and deliver services.

The partnership marks a strategic step in both companies’ ambition to regain global leadership in telecommunications.

By integrating NVIDIA’s new Aerial RAN Computer and Nokia’s AI-RAN software, operators can upgrade existing networks through software updates instead of complete infrastructure replacements.

T-Mobile US will begin field tests in 2026, supported by Dell’s PowerEdge servers.

NVIDIA’s investment and collaboration with Nokia aim to strengthen the foundation for AI-native networks that can handle the rising demand from agentic, generative and physical AI applications.

These networks are expected to support future 6G use cases, including drones, autonomous vehicles and advanced augmented reality systems.

Both companies see AI-RAN as the next evolution of wireless connectivity, uniting data processing and communication at the edge for greater performance, energy efficiency and innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rare but real, mental health risks at ChatGPT scale

OpenAI says a small share of ChatGPT users show possible signs of mental health emergencies each week, including mania, psychosis, or suicidal thoughts. The company estimates the figure at 0.07 percent of weekly users and says safety prompts are triggered in such conversations. Critics argue that even small percentages amount to large numbers of people at ChatGPT’s scale.

A further 0.15 percent of weekly users discuss explicit indicators of potential suicidal planning or intent. Updates aim to respond more safely and empathetically, and to flag indirect self-harm signals. Sensitive chats can be routed to safer models in a new window.

More than 170 clinicians across 60 countries advise OpenAI on risk cues and responses. Guidance focuses on encouraging users to seek real-world support. Researchers warn vulnerable people may struggle to act on on-screen warnings.

External specialists see both value and limits. AI may widen access when services are stretched, yet automated advice can mislead. Risks include reinforcing delusions and misplaced trust in authoritative-sounding output.

Legal and public scrutiny is rising after high-profile cases linked to chatbot interactions. Families and campaigners want more transparent accountability and stronger guardrails. Regulators continue to debate transparency, escalation pathways, and duty of care.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Europol urges coordinated EU action against caller ID spoofing

Europol calls for a Europe-wide response to caller ID spoofing, which criminals use to impersonate trusted numbers and commit fraud. The practice causes significant harm, with an estimated €850 million lost yearly.

Organised networks now run ‘spoofing as a service’, impersonating banks, authorities or family members, and even staging so-called swatting incidents by making false emergency calls from a victim’s address. Operating across borders, these groups exploit jurisdictional gaps to avoid detection and prosecution.

A Europol survey across 23 countries found major obstacles to implementing anti-spoofing measures, leaving around 400 million people vulnerable to these scams.

Law enforcement said weak cooperation with telecom operators, fragmented rules and limited technical tools to identify and block spoofed traffic hinder an adequate response.

Europol has put forward several priorities, including setting up EU-wide technical standards to verify caller IDs and trace fraudulent calls, stronger cross-border cooperation among authorities and industry, and regulatory convergence to enable lawful tracing.

The proposals, aligned with the ProtectEU strategy, aim to harden networks while anticipating scammers’ evolving tactics, such as SIM-based scams, anonymous prepaid services and smishing (fraud via fake text messages).

Brussels has begun a phishing awareness campaign alongside enforcement to help users spot and report scams.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IBM unveils Digital Asset Haven for secure institutional blockchain management

IBM has introduced Digital Asset Haven, a unified platform designed for banks, corporations, and governments to securely manage and scale their digital asset operations. The platform manages the full asset lifecycle from custody to settlement while maintaining compliance.

Built with Dfns, the platform combines IBM’s security framework with Dfns’ custody technology. The Dfns platform supports 15 million wallets for 250 clients, providing multi-party authorisation, policy governance, and access to over 40 blockchains.

IBM Digital Asset Haven includes tools for identity verification, crime prevention, yield generation, and developer-friendly APIs for extra services. Security features include Multi-Party Computation, HSM-based signing, and quantum-safe cryptography to ensure compliance and resilience.

According to IBM’s Tom McPherson, the platform gives clients ‘the opportunity to enter and expand into the digital asset space backed by IBM’s level of security and reliability.’ Dfns CEO Clarisse Hagège said the partnership builds infrastructure to scale digital assets from pilots to global use.

IBM plans to roll out Digital Asset Haven via SaaS and hybrid models in late 2025, with on-premises deployment expected in 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Poland indicts former deputy justice minister in Pegasus spyware case

Poland’s former deputy justice minister, Michał Woś, has been indicted for allegedly authorising the transfer of $6.9 million from a fund intended for crime victims to a government office that later used the money to purchase commercial spyware.

Prosecutors claim the transfer took place in 2017. If convicted, Woś could face up to 10 years in prison.

The indictment is part of a broader investigation into the use of Pegasus, spyware developed by Israel’s NSO Group, in Poland between 2017 and 2022. The software was reportedly deployed against opposition politicians during that period.

In April 2024, Prime Minister Donald Tusk announced that nearly 600 individuals in Poland had been targeted with Pegasus under the previous Law and Justice (PiS) government, of which Woś is a member.

Responding on social media, Woś defended the purchase, writing that Pegasus was used to fight crime, and “that Prime Minister Tusk and Justice Minister Waldemar Żurek oppose such equipment is not surprising—just as criminals dislike the police, those involved in wrongdoing dislike crime detection tools.”

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake videos raise environmental worries

Deepfake videos powered by AI are spreading across social media at an unprecedented pace, but their popularity carries a hidden environmental cost.

Creating realistic AI videos depends on vast data centres that consume enormous amounts of electricity and use fresh water to cool powerful servers. Each clip quietly produced adds to the rising energy demand and increasing pressure on local water supplies.

Apps such as Sora have made generating these videos almost effortless, resulting in millions of downloads and a constant stream of new content. Users are being urged to consider how frequently they produce and share such media, given the heavy energy and water footprint behind every video.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New ChatGPT model reduces unsafe replies by up to 80%

OpenAI has updated ChatGPT’s default model after working with more than 170 mental health clinicians to help the system better spot distress, de-escalate conversations and point users to real-world support.

The update routes sensitive exchanges to safer models, expands access to crisis hotlines and adds gentle prompts to take breaks, aiming to reduce harmful responses rather than simply offering more content.

Measured improvements are significant across three priority areas: severe mental health symptoms such as psychosis and mania, self-harm and suicide, and unhealthy emotional reliance on AI.

OpenAI reports that undesired responses fell between 65 and 80 percent in production traffic and that independent clinician reviews show significant gains compared with earlier models. At the same time, rare but high-risk scenarios remain a focus for further testing.

The company used a five-step process to shape the changes: define harms, measure them, validate approaches with experts, mitigate risks through post-training and product updates, and keep iterating.

Evaluations combine real-world traffic estimates with structured adversarial tests; stronger ChatGPT safeguards are in place now, and further refinements are planned as understanding and measurement methods evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italian political elite targeted in hacking scandal using stolen state data

Italian authorities have uncovered a vast hacking operation that built detailed dossiers on politicians and business leaders using data siphoned from state databases. Prosecutors say the group, operating under the name Equalize, tried to use the information to manipulate Italy’s political class.

The network, allegedly led by former police inspector Carmine Gallo, businessman Enrico Pazzali and cybersecurity expert Samuele Calamucci, created a system called Beyond to compile thousands of records from state systems, including confidential financial and criminal records.

Police wiretaps captured suspects boasting they could operate all over Italy. Targets included senior officials such as former Prime Minister Matteo Renzi and the president of the Senate, Ignazio La Russa.

Investigators say the gang presented itself as a corporate intelligence firm while illegally accessing phones, computers and government databases. The group allegedly sold reputational dossiers to clients, including major firms such as Eni, Barilla and Heineken, which have all denied wrongdoing or said they were unaware of any illegal activity.

The probe began when police monitoring a northern Italian gangster uncovered links to Gallo. Gallo, who helped solve cases including the 1995 murder of Maurizio Gucci, leveraged contacts in law enforcement and intelligence to arrange unlawful data searches for Equalize.

The operation collapsed in autumn 2024, with four arrests and dozens questioned. After months of questioning and plea bargaining, 15 defendants are due to enter pleas this month. Officials warn the case shows how hackers can weaponise state data, calling it ‘a real and actual attack on democracy’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FDA and patent law create dual hurdles for AI-enabled medical technologies

AI reshapes healthcare by powering more precise and adaptive medical devices and diagnostic systems.

Yet, innovators face two significant challenges: navigating the US Food and Drug Administration’s evolving regulatory framework and overcoming legal uncertainty under US patent law.

These two systems, although interconnected, serve different goals. The FDA protects patients, while patent law rewards invention.

The FDA’s latest guidance seeks to adapt oversight for AI-enabled medical technologies that change over time. Its framework for predetermined change control plans allows developers to update AI models without resubmitting complete applications, provided updates stay within approved limits.

The approach promotes innovation while maintaining transparency, bias control and post-market safety. By clarifying how adaptive AI devices can evolve safely, the FDA aims to balance accountability with progress.

Patent protection remains more complex. US courts continue to exclude non-human inventors, creating tension when AI contributes to discoveries.

Legal precedents such as Thaler v. Vidal and Alice Corp. v. CLS Bank limit patent eligibility for algorithms or diagnostic methods that resemble abstract ideas or natural laws. Companies must show human-led innovation and technical improvement beyond routine computation to secure patents.

Aligning regulatory and intellectual property strategies is now essential. Developers who engage regulators early, design flexible change control plans and coordinate patent claims with development timelines can reduce risk and accelerate market entry.

Integrating these processes helps ensure AI technologies in healthcare advance safely while preserving inventors’ rights and innovation incentives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AMD powers US AI factory supercomputers for national research

The US Department of Energy and AMD are joining forces to expand America’s AI and scientific computing power through two new supercomputers at Oak Ridge National Laboratory.

Named Lux and Discovery, the systems will drive the country’s sovereign AI strategy, combining public and private investment worth around $1 billion to strengthen research, innovation, and security infrastructure.

Lux, arriving in 2026, will become the nation’s first dedicated AI factory for science.

Built with AMD’s EPYC CPUs and Instinct GPUs alongside Oracle and HPE technologies, Lux will accelerate research across materials, medicine, and advanced manufacturing, supporting the US AI Action Plan and boosting the Department of Energy’s AI capacity.

Discovery, set for deployment in 2028, will deepen collaboration between the DOE, AMD, and HPE. Powered by AMD’s next-generation ‘Venice’ CPUs and MI430X GPUs, Discovery will train and deploy AI models on secure US-built systems, protecting national data and competitiveness.

It aims to deliver faster energy, biology, and national security breakthroughs while maintaining high efficiency and open standards.

AMD’s CEO, Dr Lisa Su, said the collaboration represents the best of public-private partnership, advancing the nation’s foundation for science and innovation.

US Energy Secretary Chris Wright described the initiative as proof that America leads when government and industry work together toward shared AI and scientific goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!