Poland indicts former deputy justice minister in Pegasus spyware case

Poland’s former deputy justice minister, Michał Woś, has been indicted for allegedly authorising the transfer of $6.9 million from a fund intended for crime victims to a government office that later used the money to purchase commercial spyware.

Prosecutors claim the transfer took place in 2017. If convicted, Woś could face up to 10 years in prison.

The indictment is part of a broader investigation into the use of Pegasus, spyware developed by Israel’s NSO Group, in Poland between 2017 and 2022. The software was reportedly deployed against opposition politicians during that period.

In April 2024, Prime Minister Donald Tusk announced that nearly 600 individuals in Poland had been targeted with Pegasus under the previous Law and Justice (PiS) government, the party to which Woś belongs.

Responding on social media, Woś defended the purchase, writing that Pegasus was used to fight crime, and “that Prime Minister Tusk and Justice Minister Waldemar Żurek oppose such equipment is not surprising—just as criminals dislike the police, those involved in wrongdoing dislike crime detection tools.”

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New ChatGPT model reduces unsafe replies by up to 80%

OpenAI has updated ChatGPT’s default model after working with more than 170 mental health clinicians to help the system better spot distress, de-escalate conversations and point users to real-world support.

The update routes sensitive exchanges to safer models, expands access to crisis hotlines and adds gentle prompts to take breaks, aiming to reduce harmful responses rather than simply offering more content.

Measured improvements are significant across three priority areas: severe mental health symptoms such as psychosis and mania, self-harm and suicide, and unhealthy emotional reliance on AI.

OpenAI reports that undesired responses fell between 65 and 80 percent in production traffic and that independent clinician reviews show significant gains compared with earlier models. At the same time, rare but high-risk scenarios remain a focus for further testing.

The company used a five-step process to shape the changes: define harms, measure them, validate approaches with experts, mitigate risks through post-training and product updates, and keep iterating.

Evaluations combine real-world traffic estimates with structured adversarial tests, so stronger safeguards are in place now, and further refinements are planned as understanding and measurement methods evolve.


Italian political elite targeted in hacking scandal using stolen state data

Italian authorities have uncovered a vast hacking operation that built detailed dossiers on politicians and business leaders using data siphoned from state databases. Prosecutors say the group, operating under the name Equalize, tried to use the information to manipulate Italy’s political class.

The network, allegedly led by former police inspector Carmine Gallo, businessman Enrico Pazzali and cybersecurity expert Samuele Calamucci, created a system called Beyond to compile thousands of records from state systems, including confidential financial and criminal records.

Police wiretaps captured suspects boasting that they could operate all over Italy. Targets included senior officials such as former Prime Minister Matteo Renzi and the president of the Senate, Ignazio La Russa.

Investigators say the gang presented itself as a corporate intelligence firm while illegally accessing phones, computers and government databases. The group allegedly sold reputational dossiers to clients, including major firms such as Eni, Barilla and Heineken, which have all denied wrongdoing or said they were unaware of any illegal activity.

The probe began when police monitoring a northern Italian gangster uncovered links to Gallo. Gallo, who helped solve cases including the 1995 murder of Maurizio Gucci, leveraged contacts in law enforcement and intelligence to arrange unlawful data searches for Equalize.

The operation collapsed in autumn 2024 with four arrests; dozens more were questioned. After months of interrogations and plea bargaining, 15 defendants are due to enter pleas this month. Officials warn the case shows how hackers can weaponise state data, calling it ‘a real and actual attack on democracy’.


FDA and patent law create dual hurdles for AI-enabled medical technologies

AI is reshaping healthcare by powering more precise and adaptive medical devices and diagnostic systems.

Yet, innovators face two significant challenges: navigating the US Food and Drug Administration’s evolving regulatory framework and overcoming legal uncertainty under US patent law.

These two systems, although interconnected, serve different goals. The FDA protects patients, while patent law rewards invention.

The FDA’s latest guidance seeks to adapt oversight for AI-enabled medical technologies that change over time. Its framework for predetermined change control plans allows developers to update AI models without resubmitting complete applications, provided updates stay within approved limits.

This approach promotes innovation while maintaining transparency, bias control and post-market safety. By clarifying how adaptive AI devices can evolve safely, the FDA aims to balance accountability with progress.

Patent protection remains more complex. US courts continue to exclude non-human inventors, creating tension when AI contributes to discoveries.

Legal precedents such as Thaler v. Vidal and Alice Corp. v. CLS Bank limit patent eligibility for algorithms or diagnostic methods that resemble abstract ideas or natural laws. Companies must show human-led innovation and technical improvement beyond routine computation to secure patents.

Aligning regulatory and intellectual property strategies is now essential. Developers who engage regulators early, design flexible change control plans and coordinate patent claims with development timelines can reduce risk and accelerate market entry.

Integrating these processes helps ensure AI technologies in healthcare advance safely while preserving inventors’ rights and innovation incentives.


AMD powers US AI factory supercomputers for national research

The US Department of Energy and AMD are joining forces to expand America’s AI and scientific computing power through two new supercomputers at Oak Ridge National Laboratory.

Named Lux and Discovery, the systems will drive the country’s sovereign AI strategy, combining public and private investment worth around $1 billion to strengthen research, innovation, and security infrastructure.

Lux, arriving in 2026, will become the nation’s first dedicated AI factory for science.

Built with AMD’s EPYC CPUs and Instinct GPUs alongside Oracle and HPE technologies, Lux will accelerate research across materials, medicine, and advanced manufacturing, supporting the US AI Action Plan and boosting the Department of Energy’s AI capacity.

Discovery, set for deployment in 2028, will deepen collaboration between the DOE, AMD, and HPE. Powered by AMD’s next-generation ‘Venice’ CPUs and MI430X GPUs, Discovery will train and deploy AI models on secure US-built systems, protecting national data and competitiveness.

It aims to deliver faster energy, biology, and national security breakthroughs while maintaining high efficiency and open standards.

AMD’s CEO, Dr Lisa Su, said the collaboration represents the best of public-private partnerships, advancing the nation’s foundation for science and innovation.

US Energy Secretary Chris Wright described the initiative as proof that America leads when government and industry work together toward shared AI and scientific goals.


Virginia’s data centre boom divides residents and industry

Loudoun County in Virginia, known as Data Center Alley, now hosts nearly 200 data centres powering much of the world’s internet and AI infrastructure. Their growth has brought vast economic benefits but stirred concerns about noise, pollution, and rising energy bills for nearby residents.

The facilities occupy about 3% of the county’s land yet generate 40% of its tax revenue. Locals say the constant humming and industrial sprawl have driven away wildlife and inflated electricity costs, which have surged by over 250% in five years.

Despite local opposition, new data centre projects continue to receive state support. The industry contributes $5.5 billion annually to Virginia’s economy and sustains around 74,000 jobs. Additionally, President Trump’s administration recently pledged to accelerate permits.

Residents like Emily Kasabian argue the expansion is eroding community life, replacing trees with concrete and machinery to fuel AI. Activists are now lobbying for construction pauses, warning that unchecked development threatens to transform affluent suburbs beyond recognition.


Celebrity estates push back on Sora as app surges to No.1

OpenAI’s short-video app Sora topped one million downloads in under a week, then ran headlong into a likeness-rights firestorm. Celebrity families and studios demanded stricter controls. Estates for figures like Martin Luther King Jr. sought blocks on unauthorised cameos.

Users showcased hyperreal mashups that blurred satire and deception, from cartoon crossovers to dead celebrities in improbable scenes. All clips are AI-made, yet reposting across platforms spread confusion. Viewers faced a constant real-or-fake dilemma.

Rights holders pressed for consent, compensation, and veto power over characters and personas. OpenAI shifted toward opt-in for copyrighted properties and enabled estate requests to restrict cameos. Policy language on who qualifies as a public figure remains fuzzy.

Agencies and unions amplified pressure, warning of exploitation and reputational risks. Detection firms reported a surge in takedown requests for unauthorised impersonations. Watermarks exist, but removal tools undercut provenance and complicate enforcement.

Researchers warned about a growing fog of doubt as realistic fakes multiply. Every day, people are placed in deceptive scenarios, while bad actors exploit deniability. OpenAI promised stronger guardrails as Sora scales within tighter rules.


Anthropic boosts cloud capacity with Google’s AI hardware

Anthropic has struck a multibillion-dollar deal with Google to expand its use of cloud computing and specialised AI chips. The agreement includes the purchase of up to one million Tensor Processing Units, Google’s custom hardware built to train and run large AI models.

The partnership will provide Anthropic with more than a gigawatt of additional computing power by late 2026. Executives said the move will support soaring demand for its Claude model family, which already serves over 300,000 business clients.

Anthropic, founded by former OpenAI employees, has quickly become a major player in generative AI. Backed by Amazon and valued at $183 billion, the company recently launched Claude Sonnet 4.5, praised for its coding and reasoning abilities.

Google continues to invest heavily in AI hardware to compete with Nvidia’s GPUs and rival US tech giants. Analysts said Anthropic’s expansion signals intensifying demand for computing power as companies race to lead the global AI revolution.


Two founders turn note-taking into an AI success

Two 20-year-old dropouts, Rudy Arora and Sarthak Dhawan, are behind Turbo AI, an AI-powered note-taker that has grown to around 5 million users and reached multi-million-dollar annual recurring revenue (ARR) in a short timeframe.

Their app addresses a clear pain point: meetings, lectures, and long videos produce information overload. Turbo AI uses generative AI to convert audio, typed notes or uploads into structured summaries, highlight key points and help users organise insights. The founders describe it as a ‘productivity assistant’ rather than a general-purpose chat agent.

The business model is lean: freemium user acquisition scales quickly, and power users then convert to paid subscriptions. The takeaway is that a well-targeted niche tool can win strong uptake even in a crowded productivity-AI market.

Arora and Dhawan say they kept the feature set focused and user experience simple, enabling rapid word-of-mouth growth.

The growth raises interesting implications for enterprise and consumer AI alike. While large language models dominate headlines, tools like Turbo AI show the value of vertical-specific applications addressing tangible workflows (e.g., note-taking, summarisation). It also underscores how younger founders are building AI tools outside the major tech hubs and scaling globally.

At this stage, challenges remain: user retention, differentiation in a field where major players (Microsoft, Google, OpenAI) are adding similar capabilities, and privacy and data governance, especially for audio and meeting content. However, the early results suggest that targeted AI productivity tools can achieve meaningful scale quickly.


Tech giants push AI agents into web browsing

Tech companies are intensifying competition to reshape how people search online through AI-powered browsers. OpenAI’s new Atlas browser, built around ChatGPT, can generate answers and complete web-based tasks such as making shopping lists or reservations.

Atlas joins rivals like Microsoft’s Copilot-enabled Edge, Perplexity’s Comet, and newer platforms Dia and Neon. Developers are moving beyond traditional assistants, creating ‘agentic’ AI capable of acting autonomously while keeping user experience familiar.

Google remains dominant, with Chrome holding over 70 percent of the browser market and integrating limited AI features. Analysts say OpenAI could challenge that control by combining ChatGPT insights with browser behaviour to personalise search and advertising.

Experts note the battle extends beyond browsers as wearables and voice interfaces evolve. Controlling how users interact with AI today, they argue, could determine which company shapes digital habits in the coming decade.
