Clearview AI faces criminal complaint in Austria over GDPR violations

On 28 October 2025, European privacy NGO noyb (None of Your Business) submitted a criminal complaint against Clearview AI and its management to Austrian prosecutors.

The complaint targets Clearview’s long-criticised practice of scraping billions of photos and videos from the public web to build a facial recognition database, including biometric data of EU residents, in ways noyb claims flagrantly violate the EU General Data Protection Regulation (GDPR).

Clearview markets its technology to law enforcement and governmental agencies, offering clients the ability to upload a face image and retrieve matches from its vast index, reportedly over 60 billion images.

Multiple European data protection authorities have already found Clearview in breach of GDPR rules, imposing fines and bans in France, Greece, Italy, the Netherlands, and the United Kingdom.

Despite those rulings, Clearview has largely ignored enforcement actions, refusing to comply or pay fines except in limited cases, citing its lack of a European base as a shield. Noyb argues that the company exploits this regulatory gap to skirt accountability.

Under Austrian law, certain GDPR violations are criminal offences (via § 63 of Austria’s data protection statute), allowing prosecutors to hold both corporations and their executives personally liable, including potential imprisonment. Noyb’s complaint thus seeks to escalate enforcement beyond administrative fines to criminal sanctions.

Max Schrems, noyb’s founder, condemned Clearview’s conduct as a systematic affront to European legal frameworks: ‘Clearview AI amassed a global database of photos and biometric data … Such power is extremely concerning and undermines the idea of a free society.’

The outcome could set a landmark precedent: if prosecutors accept and pursue the case, Clearview’s executives might face arrest if they travel to Europe, and EU-wide legal cooperation (e.g. extradition requests) could follow.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Labels press platforms to curb AI slop and protect artists

Luke Temple woke to messages about a new Here We Go Magic track he never made. An AI-generated song appeared on the band’s Spotify, Tidal, and YouTube pages, triggering fresh worries about impersonation as cheap tools flood platforms.

Platforms say defences are improving. Spotify confirmed the removal of the fake track and highlighted new safeguards against impersonation, plus a tool to flag mismatched releases pre-launch. Tidal said it removed the song and is upgrading AI detection. YouTube did not comment.

Industry teams describe a cat-and-mouse race. Bad actors exploit third-party distributors with light verification, slipping AI pastiches into official pages. Tools like Suno and Udio enable rapid cloning, encouraging volume spam that targets dormant and lesser-known acts.

Per-track revenue losses are tiny; reputational damage is not. Artists warn that identity theft and fan confusion erode trust, especially when fakes sit beside legitimate catalogues or mimic deceased performers. Labels caution that volume is outpacing takedowns across major services.

Proposed fixes include stricter distributor onboarding, verified artist controls, watermark detection, and clear AI labels for listeners. Rights holders want faster escalation and penalties for repeat offenders. Musicians monitor profiles and report issues, yet argue platforms must shoulder the heavier lift.

NVIDIA and Nokia join forces to build the AI platform for 6G

Nokia and NVIDIA have announced a $1 billion partnership to develop an AI-powered platform that will drive the transition from 5G to 6G networks.

The collaboration will create next-generation AI-RAN systems, combining computing, sensing and connectivity to transform how US mobile networks process data and deliver services.

The partnership marks a strategic step in both companies’ ambition to regain global leadership in telecommunications.

By integrating NVIDIA’s new Aerial RAN Computer and Nokia’s AI-RAN software, operators can upgrade existing networks through software updates instead of complete infrastructure replacements.

T-Mobile US will begin field tests in 2026, supported by Dell’s PowerEdge servers.

NVIDIA’s investment and collaboration with Nokia aim to strengthen the foundation for AI-native networks that can handle the rising demand from agentic, generative and physical AI applications.

These networks are expected to support future 6G use cases, including drones, autonomous vehicles and advanced augmented reality systems.

Both companies see AI-RAN as the next evolution of wireless connectivity, uniting data processing and communication at the edge for greater performance, energy efficiency and innovation.

Estimating biological age from routine records with LifeClock

LifeClock, reported in Nature Medicine, estimates biological age from routine health records. Trained on 24.6 million visits and 184 indicators, it offers a low-cost route to precision health beyond simple chronology.

Researchers found two distinct clocks: a paediatric development clock and an adult ageing clock. Specialised models improved accuracy, reflecting scripted growth versus decline. Biomarkers diverged between stages, aligning with growth or deterioration.

LifeClock stratified risk years ahead. In children, clusters flagged malnutrition, developmental disorders, and endocrine issues, including markedly higher odds of pituitary hyperfunction and obesity. Adult clusters signalled future diabetes, stroke, renal failure, and cardiovascular disease.

Performance was strong after fine-tuning: the area under the curve hit 0.98 for current diabetes and 0.91 for future diabetes. EHRFormer outperformed RNN and gradient-boosting baselines across longitudinal records.
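For readers unfamiliar with the metric, an area under the ROC curve of 0.98 means that in 98 percent of cases a randomly chosen patient with the condition is scored higher than a randomly chosen patient without it. A minimal pure-Python sketch of that interpretation, using made-up scores rather than any LifeClock data:

```python
def auc(scores_pos, scores_neg):
    """ROC AUC via its rank interpretation: the probability that a
    random positive case outranks a random negative one (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Illustrative risk scores only (not LifeClock outputs):
diabetic = [0.90, 0.80, 0.85, 0.95]
healthy = [0.10, 0.20, 0.30, 0.82]
print(auc(diabetic, healthy))  # 0.9375: one healthy case outranks one diabetic case
```

A perfect separator would score 1.0; a coin flip, 0.5.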

Authors propose LifeClock for accessible monitoring, personalised interventions, and prevention. Adding wearables and real-time biometrics could refine responsiveness, enabling earlier action on emerging risks and supporting equitable precision medicine at the population scale.

Rare but real: mental health risks at ChatGPT scale

OpenAI says a small share of ChatGPT users show possible signs of mental health emergencies each week, including mania, psychosis, or suicidal thoughts. The company puts the figure at 0.07 percent of weekly users and says safety prompts are triggered in those conversations. Critics argue that even small percentages scale at ChatGPT’s size.

A further 0.15 percent of weekly users discuss explicit indicators of potential suicidal planning or intent. Updates aim to respond more safely and empathetically, and to flag indirect self-harm signals. Sensitive chats can be routed to safer models in a new window.
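A rough back-of-envelope calculation shows why critics make the scale argument. The user-base figure below is an assumption (OpenAI has publicly cited numbers around 800 million weekly users); the article itself reports only the percentages:

```python
# Turning the article's percentages into absolute weekly counts.
# weekly_users is an assumption, not a figure from the article.
weekly_users = 800_000_000

emergency_share = 0.0007  # 0.07% showing possible signs of emergencies
planning_share = 0.0015   # 0.15% discussing explicit suicidal planning

print(round(weekly_users * emergency_share))  # -> 560000 people per week
print(round(weekly_users * planning_share))   # -> 1200000 people per week
```

Even at a fraction of a percent, the absolute numbers reach into the hundreds of thousands each week.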

More than 170 clinicians across 60 countries advise OpenAI on risk cues and responses. Guidance focuses on encouraging users to seek real-world support. Researchers warn vulnerable people may struggle to act on on-screen warnings.

External specialists see both value and limits. AI may widen access when services are stretched, yet automated advice can mislead. Risks include reinforcing delusions and misplaced trust in authoritative-sounding output.

Legal and public scrutiny is rising after high-profile cases linked to chatbot interactions. Families and campaigners want more transparent accountability and stronger guardrails. Regulators continue to debate transparency, escalation pathways, and duty of care.

Adobe Firefly expands with new AI tools for audio and video creation

Adobe has unveiled major updates to its Firefly creative AI studio, introducing advanced audio, video, and imaging tools at the Adobe MAX 2025 conference.

These new features include Generate Soundtrack for licensed music creation, Generate Speech for lifelike multilingual voiceovers, and a timeline-based video editor that integrates seamlessly with Firefly’s existing creative tools.

The company also launched the Firefly Image Model 5, which can produce photorealistic 4MP images with prompt-based editing. Firefly now includes partner models from Google, OpenAI, ElevenLabs, Topaz Labs, and others, bringing the industry’s top AI capabilities into one unified workspace.

Adobe also announced Firefly Custom Models, allowing users to train AI models to match their personal creative style.

In a preview of future developments, Adobe showcased Project Moonlight, a conversational AI assistant that connects across creative apps and social channels to help creators move from concept to content in minutes.

The system can offer tailored suggestions and automate parts of the creative process while keeping creators in complete control.

Adobe emphasised that Firefly is designed to enhance human creativity rather than replace it, offering responsible AI tools that respect intellectual property rights.

With this release, the company continues integrating generative AI across its ecosystem to simplify production and empower creators at every stage of their workflow.

Yuan says AI ‘digital twins’ could trim meetings and the workweek

AI could shorten the workweek, says Zoom’s Eric Yuan. At TechCrunch Disrupt, he pitched AI ‘digital twins’ that attend meetings, negotiate drafts, and triage email, arguing assistants will shoulder routine tasks so humans focus on judgement.

Yuan has already used an AI avatar on an investor call to show how a stand-in can speak on your behalf. He said Zoom will keep investing heavily in assistants that understand context, prioritise messages, and draft responses.

Use cases extend beyond meetings. Yuan described counterparts sending their digital twins to hash out deal terms before principals join to resolve open issues, saving hours of live negotiation and accelerating consensus across teams and time zones.

Zoom plans to infuse AI across its suite, including whiteboards and collaborative docs, so work moves even when people are offline. Yuan said assistants will surface what matters, propose actions, and help execute routine workflows securely.

If adoption scales, Yuan sees schedules changing. He floated a five-year goal where many knowledge workers shift to three or four days a week, with AI increasing throughput, reducing meeting load, and improving focus time across organisations.

Poland indicts former deputy justice minister in Pegasus spyware case

Poland’s former deputy justice minister, Michał Woś, has been indicted for allegedly authorising the transfer of $6.9 million from a fund intended for crime victims to a government office that later used the money to purchase commercial spyware.

Prosecutors claim the transfer took place in 2017. If convicted, Woś could face up to 10 years in prison.

The indictment is part of a broader investigation into the use of Pegasus, spyware developed by Israel’s NSO Group, in Poland between 2017 and 2022. The software was reportedly deployed against opposition politicians during that period.

In April 2024, Prime Minister Donald Tusk announced that nearly 600 individuals in Poland had been targeted with Pegasus under the previous Law and Justice (PiS) government, in which Woś served.

Responding on social media, Woś defended the purchase, writing that Pegasus was used to fight crime, and “that Prime Minister Tusk and Justice Minister Waldemar Żurek oppose such equipment is not surprising—just as criminals dislike the police, those involved in wrongdoing dislike crime detection tools.”

New ChatGPT model reduces unsafe replies by up to 80%

OpenAI has updated ChatGPT’s default model after working with more than 170 mental health clinicians to help the system better spot distress, de-escalate conversations and point users to real-world support.

The update routes sensitive exchanges to safer models, expands access to crisis hotlines and adds gentle prompts to take breaks, aiming to reduce harmful responses rather than simply offering more content.

Measured improvements are significant across three priority areas: severe mental health symptoms such as psychosis and mania, self-harm and suicide, and unhealthy emotional reliance on AI.

OpenAI reports that undesired responses fell between 65 and 80 percent in production traffic and that independent clinician reviews show significant gains compared with earlier models. At the same time, rare but high-risk scenarios remain a focus for further testing.

The company used a five-step process to shape the changes: define harms, measure them, validate approaches with experts, mitigate risks through post-training and product updates, and keep iterating.

Evaluations combine real-world traffic estimates with structured adversarial tests, so stronger safeguards are in place now, and further refinements are planned as understanding and measurement methods evolve.

Italian political elite targeted in hacking scandal using stolen state data

Italian authorities have uncovered a vast hacking operation that built detailed dossiers on politicians and business leaders using data siphoned from state databases. Prosecutors say the group, operating under the name Equalize, tried to use the information to manipulate Italy’s political class.

The network, allegedly led by former police inspector Carmine Gallo, businessman Enrico Pazzali and cybersecurity expert Samuele Calamucci, created a system called Beyond to compile thousands of records from state systems, including confidential financial and criminal records.

Police wiretaps captured suspects boasting they could operate all over Italy. Targets included senior officials such as former Prime Minister Matteo Renzi and the president of the Senate Ignazio La Russa.

Investigators say the gang presented itself as a corporate intelligence firm while illegally accessing phones, computers and government databases. The group allegedly sold reputational dossiers to clients, including major firms such as Eni, Barilla and Heineken, which have all denied wrongdoing or said they were unaware of any illegal activity.

The probe began when police monitoring a northern Italian gangster uncovered links to Gallo. Gallo, who helped solve cases including the 1995 murder of Maurizio Gucci, leveraged contacts in law enforcement and intelligence to arrange unlawful data searches for Equalize.

The operation collapsed in autumn 2024, with four arrests and dozens questioned. After months of investigation and plea bargaining, 15 defendants are due to enter pleas this month. Officials warn the case shows how hackers can weaponise state data, calling it ‘a real and actual attack on democracy’.
