NVIDIA and Nokia join forces to build the AI platform for 6G

Nokia and NVIDIA have announced a $1 billion partnership to develop an AI-powered platform that will drive the transition from 5G to 6G networks.

The collaboration will create next-generation AI-RAN systems, combining computing, sensing and connectivity to transform how US mobile networks process data and deliver services.

The partnership also marks a strategic step in both companies’ ambition to regain global leadership in telecommunications.

By integrating NVIDIA’s new Aerial RAN Computer and Nokia’s AI-RAN software, operators can upgrade existing networks through software updates instead of complete infrastructure replacements.

T-Mobile US will begin field tests in 2026, supported by Dell’s PowerEdge servers.

NVIDIA’s investment and collaboration with Nokia aim to strengthen the foundation for AI-native networks that can handle the rising demand from agentic, generative and physical AI applications.

These networks are expected to support future 6G use cases, including drones, autonomous vehicles and advanced augmented reality systems.

Both companies see AI-RAN as the next evolution of wireless connectivity, uniting data processing and communication at the edge for greater performance, energy efficiency and innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Estimating biological age from routine records with LifeClock

LifeClock, reported in Nature Medicine, estimates biological age from routine health records. Trained on 24.6 million visits and 184 indicators, it offers a low-cost route to precision health beyond simple chronology.

Researchers found two distinct clocks: a paediatric development clock and an adult ageing clock. Specialised models improved accuracy, reflecting scripted growth versus decline. Biomarkers diverged between stages, aligning with growth or deterioration.

LifeClock stratified risk years ahead. In children, clusters flagged malnutrition, developmental disorders, and endocrine issues, including markedly higher odds of pituitary hyperfunction and obesity. Adult clusters signalled future diabetes, stroke, renal failure, and cardiovascular disease.

Performance was strong after fine-tuning: the area under the curve reached 0.98 for current diabetes and 0.91 for future diabetes. The underlying EHRFormer model outperformed RNN and gradient-boosting baselines across longitudinal records.

Authors propose LifeClock for accessible monitoring, personalised interventions, and prevention. Adding wearables and real-time biometrics could refine responsiveness, enabling earlier action on emerging risks and supporting equitable precision medicine at the population scale.

Rare but real: mental health risks at ChatGPT scale

OpenAI says a small share of ChatGPT users show possible signs of mental health emergencies each week, including mania, psychosis, or suicidal thoughts. The company puts the figure at 0.07 percent of weekly users and says safety prompts are triggered in such cases. Critics argue that even small percentages translate into large numbers at ChatGPT’s scale.

A further 0.15 percent of weekly users discuss explicit indicators of potential suicidal planning or intent. Updates aim to respond more safely and empathetically, and to flag indirect self-harm signals. Sensitive chats can be routed to safer models in a new window.
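To gauge why critics say small percentages matter at this scale, here is a back-of-envelope sketch in Python. The 800 million weekly-user base is an assumption drawn from figures OpenAI has cited publicly; it does not come from this article.

```python
# Back-of-envelope: converting small weekly percentages into absolute counts.
# The 800 million weekly-user base is an assumption (OpenAI has cited figures
# in that range publicly); it is not taken from this article.
weekly_users = 800_000_000

emergency_share = 0.07 / 100  # possible signs of a mental health emergency
suicidal_share = 0.15 / 100   # explicit indicators of suicidal planning/intent

print(round(weekly_users * emergency_share))  # roughly 560,000 people per week
print(round(weekly_users * suicidal_share))   # roughly 1,200,000 people per week
```

Even at a fraction of a percent, the assumed user base implies hundreds of thousands of affected conversations every week, which is the critics' core point.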

More than 170 clinicians across 60 countries advise OpenAI on risk cues and responses. Guidance focuses on encouraging users to seek real-world support. Researchers warn vulnerable people may struggle to act on on-screen warnings.

External specialists see both value and limits. AI may widen access when services are stretched, yet automated advice can mislead. Risks include reinforcing delusions and misplaced trust in authoritative-sounding output.

Legal and public scrutiny is rising after high-profile cases linked to chatbot interactions. Families and campaigners want more transparent accountability and stronger guardrails. Regulators continue to debate transparency, escalation pathways, and duty of care.

Adobe Firefly expands with new AI tools for audio and video creation

Adobe has unveiled major updates to its Firefly creative AI studio, introducing advanced audio, video, and imaging tools at the Adobe MAX 2025 conference.

These new features include Generate Soundtrack for licensed music creation, Generate Speech for lifelike multilingual voiceovers, and a timeline-based video editor that integrates seamlessly with Firefly’s existing creative tools.

The company also launched the Firefly Image Model 5, which can produce photorealistic 4MP images with prompt-based editing. Firefly now includes partner models from Google, OpenAI, ElevenLabs, Topaz Labs, and others, bringing the industry’s top AI capabilities into one unified workspace.

Adobe also announced Firefly Custom Models, allowing users to train AI models to match their personal creative style.

In a preview of future developments, Adobe showcased Project Moonlight, a conversational AI assistant that connects across creative apps and social channels to help creators move from concept to content in minutes.

The system can offer tailored suggestions and automate parts of the creative process while keeping creators in complete control.

Adobe emphasised that Firefly is designed to enhance human creativity rather than replace it, offering responsible AI tools that respect intellectual property rights.

With this release, the company continues integrating generative AI across its ecosystem to simplify production and empower creators at every stage of their workflow.

Yuan says AI ‘digital twins’ could trim meetings and the workweek

AI could shorten the workweek, says Zoom’s Eric Yuan. At TechCrunch Disrupt, he pitched AI ‘digital twins’ that attend meetings, negotiate drafts, and triage email, arguing assistants will shoulder routine tasks so humans focus on judgement.

Yuan has already used an AI avatar on an investor call to show how a stand-in can speak on your behalf. He said Zoom will keep investing heavily in assistants that understand context, prioritise messages, and draft responses.

Use cases extend beyond meetings. Yuan described counterparts sending their digital twins to hash out deal terms before principals join to resolve open issues, saving hours of live negotiation and accelerating consensus across teams and time zones.

Zoom plans to infuse AI across its suite, including whiteboards and collaborative docs, so work moves even when people are offline. Yuan said assistants will surface what matters, propose actions, and help execute routine workflows securely.

If adoption scales, Yuan sees schedules changing. He floated a five-year goal where many knowledge workers shift to three or four days a week, with AI increasing throughput, reducing meeting load, and improving focus time across organisations.

Poland indicts former deputy justice minister in Pegasus spyware case

Poland’s former deputy justice minister, Michał Woś, has been indicted for allegedly authorising the transfer of $6.9 million from a fund intended for crime victims to a government office that later used the money to purchase commercial spyware.

Prosecutors claim the transfer took place in 2017. If convicted, Woś could face up to 10 years in prison.

The indictment is part of a broader investigation into the use of Pegasus, spyware developed by Israel’s NSO Group, in Poland between 2017 and 2022. The software was reportedly deployed against opposition politicians during that period.

In April 2024, Prime Minister Donald Tusk announced that nearly 600 individuals in Poland had been targeted with Pegasus under the previous Law and Justice (PiS) government, of which Woś is a member.

Responding on social media, Woś defended the purchase, writing that Pegasus was used to fight crime, and “that Prime Minister Tusk and Justice Minister Waldemar Żurek oppose such equipment is not surprising—just as criminals dislike the police, those involved in wrongdoing dislike crime detection tools.”

New ChatGPT model reduces unsafe replies by up to 80%

OpenAI has updated ChatGPT’s default model after working with more than 170 mental health clinicians to help the system better spot distress, de-escalate conversations and point users to real-world support.

The update routes sensitive exchanges to safer models, expands access to crisis hotlines and adds gentle prompts to take breaks, aiming to reduce harmful responses rather than simply offering more content.

Measured improvements are significant across three priority areas: severe mental health symptoms such as psychosis and mania, self-harm and suicide, and unhealthy emotional reliance on AI.

OpenAI reports that undesired responses fell between 65 and 80 percent in production traffic and that independent clinician reviews show significant gains compared with earlier models. At the same time, rare but high-risk scenarios remain a focus for further testing.

The company used a five-step process to shape the changes: define harms, measure them, validate approaches with experts, mitigate risks through post-training and product updates, and keep iterating.

Evaluations combine real-world traffic estimates with structured adversarial tests, so better ChatGPT safeguards are in place now, and further refinements are planned as understanding and measurement methods evolve.

Italian political elite targeted in hacking scandal using stolen state data

Italian authorities have uncovered a vast hacking operation that built detailed dossiers on politicians and business leaders using data siphoned from state databases. Prosecutors say the group, operating under the name Equalize, tried to use the information to manipulate Italy’s political class.

The network, allegedly led by former police inspector Carmine Gallo, businessman Enrico Pazzali and cybersecurity expert Samuele Calamucci, created a system called Beyond to compile thousands of records from state systems, including confidential financial and criminal records.

Police wiretaps captured suspects boasting they could operate all over Italy. Targets included senior officials such as former Prime Minister Matteo Renzi and the president of the Senate, Ignazio La Russa.

Investigators say the gang presented itself as a corporate intelligence firm while illegally accessing phones, computers and government databases. The group allegedly sold reputational dossiers to clients, including major firms such as Eni, Barilla and Heineken, which have all denied wrongdoing or said they were unaware of any illegal activity.

The probe began when police monitoring a northern Italian gangster uncovered links to Gallo. Gallo, who helped solve cases including the 1995 murder of Maurizio Gucci, leveraged contacts in law enforcement and intelligence to arrange unlawful data searches for Equalize.

The operation collapsed in autumn 2024, with four arrests and dozens questioned. After months of questioning and plea bargaining, 15 defendants are due to enter pleas this month. Officials warn the case shows how hackers can weaponise state data, calling it ‘a real and actual attack on democracy’.

FDA and patent law create dual hurdles for AI-enabled medical technologies

AI is reshaping healthcare by powering more precise and adaptive medical devices and diagnostic systems.

Yet, innovators face two significant challenges: navigating the US Food and Drug Administration’s evolving regulatory framework and overcoming legal uncertainty under US patent law.

These two systems, although interconnected, serve different goals. The FDA protects patients, while patent law rewards invention.

The FDA’s latest guidance seeks to adapt oversight for AI-enabled medical technologies that change over time. Its framework for predetermined change control plans allows developers to update AI models without resubmitting complete applications, provided updates stay within approved limits.

This approach promotes innovation while maintaining transparency, bias control and post-market safety. By clarifying how adaptive AI devices can evolve safely, the FDA aims to balance accountability with progress.

Patent protection remains more complex. US courts continue to exclude non-human inventors, creating tension when AI contributes to discoveries.

Legal precedents such as Thaler v. Vidal and Alice Corp. v. CLS Bank limit patent eligibility for algorithms or diagnostic methods that resemble abstract ideas or natural laws. Companies must show human-led innovation and technical improvement beyond routine computation to secure patents.

Aligning regulatory and intellectual property strategies is now essential. Developers who engage regulators early, design flexible change control plans and coordinate patent claims with development timelines can reduce risk and accelerate market entry.

Integrating these processes helps ensure AI technologies in healthcare advance safely while preserving inventors’ rights and innovation incentives.

AMD powers US AI factory supercomputers for national research

The US Department of Energy and AMD are joining forces to expand America’s AI and scientific computing power through two new supercomputers at Oak Ridge National Laboratory.

Named Lux and Discovery, the systems will drive the country’s sovereign AI strategy, combining public and private investment worth around $1 billion to strengthen research, innovation, and security infrastructure.

Lux, arriving in 2026, will become the nation’s first dedicated AI factory for science.

Built with AMD’s EPYC CPUs and Instinct GPUs alongside Oracle and HPE technologies, Lux will accelerate research across materials, medicine, and advanced manufacturing, supporting the US AI Action Plan and boosting the Department of Energy’s AI capacity.

Discovery, set for deployment in 2028, will deepen collaboration between the DOE, AMD, and HPE. Powered by AMD’s next-generation ‘Venice’ CPUs and MI430X GPUs, Discovery will train and deploy AI models on secure US-built systems, protecting national data and competitiveness.

It aims to deliver faster energy, biology, and national security breakthroughs while maintaining high efficiency and open standards.

AMD’s CEO, Dr Lisa Su, said the collaboration represents the best of public-private partnership, advancing the nation’s foundation for science and innovation.

US Energy Secretary Chris Wright described the initiative as proof that America leads when government and industry work together toward shared AI and scientific goals.
