Spot the red flags of AI-enabled scams, says California DFPI

The California Department of Financial Protection & Innovation (DFPI) has warned that criminals are weaponising AI to scam consumers. Deepfakes, cloned voices, and slick messages mimic trusted people and exploit urgency. Learning the new warning signs cuts risk quickly.

Imposter deepfakes and romance ruses often begin with perfect profiles or familiar voices pushing you to pay or invest. Grandparent scams use cloned audio in fake emergencies; agree a family passphrase and verify on a separate channel. Influencers may flaunt fabricated credentials and followers.

Automated attacks now use AI to sidestep basic defences and steal passwords or card details. Reduce exposure with two-factor authentication, regular updates, and a reputable password manager. Pause before clicking unexpected links or attachments, even from known names.

Investment frauds increasingly tout vague ‘AI-powered’ returns, simulating growth and testimonials before blocking withdrawals. Beware guarantees of no risk, artificial deadlines, unsolicited messages, and recruit-to-earn offers. Research independently and verify registrations before sending money.

DFPI advises careful verification before acting. Confirm identities through trusted channels, refuse to move money under pressure, and secure devices. Report suspicious activity promptly; smart habits remain the best defence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ontario updates deidentification guidelines for safer data use

Ontario’s privacy watchdog has released an expanded set of deidentification guidelines to help organisations protect personal data while enabling innovation. The 100-page document from the Office of the Information and Privacy Commissioner (IPC) offers step-by-step advice, checklists and examples.

The update modernises the 2016 version to reflect global regulatory changes and new data protection practices. The commissioner emphasised that the guidelines aim to help organisations of all sizes responsibly anonymise data while maintaining its usefulness for research, AI development and public benefit.

Developed through broad stakeholder consultation, the guidelines were refined with input from privacy experts and the Canadian Anonymization Network. The new version responds to industry requests for more detailed, operational guidance.

Although the guidelines are not legally binding, experts said following them can reduce liability risks and strengthen compliance with privacy laws. The IPC hopes they will serve as a practical reference for executives and data officers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Clearview AI faces criminal complaint in Austria over GDPR violations

On 28 October 2025, European privacy NGO noyb (None of Your Business) submitted a criminal complaint against Clearview AI and its management to Austrian prosecutors.

The complaint targets Clearview’s long-criticised practice of scraping billions of photos and videos from the public web to build a facial recognition database, including biometric data of EU residents, in ways noyb claims flagrantly violate the EU General Data Protection Regulation (GDPR).

Clearview markets its technology to law enforcement and governmental agencies, offering clients the ability to upload a face image and retrieve matches from its vast index, reportedly over 60 billion images.

Multiple European data protection authorities have already found Clearview in breach of GDPR rules and imposed fines and bans in countries such as France, Greece, Italy, the Netherlands, and the United Kingdom.

Despite those rulings, Clearview has largely ignored enforcement actions, refusing to comply or pay fines except in limited cases, citing its lack of a European base as a shield. Noyb argues that the company exploits this regulatory gap to skirt accountability.

Under Austrian law, certain GDPR violations are criminal offences (via § 63 of Austria’s data protection statute), allowing prosecutors to pursue both corporations and their executives, with individuals facing potential imprisonment. Noyb’s complaint thus seeks to escalate enforcement beyond administrative fines to criminal sanctions.

Max Schrems, noyb’s founder, condemned Clearview’s conduct as a systematic affront to European legal frameworks: ‘Clearview AI amassed a global database of photos and biometric data … Such power is extremely concerning and undermines the idea of a free society.’

The outcome could set a landmark precedent: if prosecutors accept and pursue the case, Clearview’s executives might face arrest if they travel to Europe, and EU-wide legal cooperation (e.g. extradition requests) could follow.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Labels press platforms to curb AI slop and protect artists

Luke Temple woke to messages about a new Here We Go Magic track he never made. An AI-generated song appeared on the band’s Spotify, Tidal, and YouTube pages, triggering fresh worries about impersonation as cheap tools flood platforms.

Platforms say defences are improving. Spotify confirmed the removal of the fake track and highlighted new safeguards against impersonation, plus a tool to flag mismatched releases pre-launch. Tidal said it removed the song and is upgrading AI detection. YouTube did not comment.

Industry teams describe a cat-and-mouse race. Bad actors exploit third-party distributors with light verification, slipping AI pastiches into official pages. Tools like Suno and Udio enable rapid cloning, encouraging volume spam that targets dormant and lesser-known acts.

Per-track revenue losses are tiny; reputational damage is not. Artists warn that identity theft and fan confusion erode trust, especially when fakes sit beside legitimate catalogues or mimic deceased performers. Labels caution that volume is outpacing takedowns across major services.

Proposed fixes include stricter distributor onboarding, verified artist controls, watermark detection, and clear AI labels for listeners. Rights holders want faster escalation and penalties for repeat offenders. Musicians monitor profiles and report issues, yet argue platforms must shoulder the heavier lift.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA and Nokia join forces to build the AI platform for 6G

Nokia and NVIDIA have announced a $1 billion partnership to develop an AI-powered platform that will drive the transition from 5G to 6G networks.

The collaboration will create next-generation AI-RAN systems, combining computing, sensing and connectivity to transform how US mobile networks process data and deliver services.

The partnership marks a strategic step in both companies’ ambitions to regain global leadership in telecommunications.

By integrating NVIDIA’s new Aerial RAN Computer and Nokia’s AI-RAN software, operators can upgrade existing networks through software updates instead of complete infrastructure replacements.

T-Mobile US will begin field tests in 2026, supported by Dell’s PowerEdge servers.

NVIDIA’s investment and collaboration with Nokia aim to strengthen the foundation for AI-native networks that can handle the rising demand from agentic, generative and physical AI applications.

These networks are expected to support future 6G use cases, including drones, autonomous vehicles and advanced augmented reality systems.

Both companies see AI-RAN as the next evolution of wireless connectivity, uniting data processing and communication at the edge for greater performance, energy efficiency and innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Estimating biological age from routine records with LifeClock

LifeClock, reported in Nature Medicine, estimates biological age from routine health records. Trained on 24.6 million visits and 184 indicators, it offers a low-cost route to precision health beyond simple chronology.

Researchers found two distinct clocks: a paediatric development clock and an adult ageing clock. Training specialised models for each stage improved accuracy, reflecting the contrast between programmed growth in childhood and gradual decline in adulthood. Biomarkers also diverged between the two stages, tracking growth in one and deterioration in the other.

LifeClock stratified risk years ahead. In children, clusters flagged malnutrition, developmental disorders, and endocrine issues, including markedly higher odds of pituitary hyperfunction and obesity. Adult clusters signalled future diabetes, stroke, renal failure, and cardiovascular disease.

Performance was strong after fine-tuning: the area under the curve reached 0.98 for current diabetes and 0.91 for future diabetes. EHRFormer, the model underlying LifeClock, outperformed RNN and gradient-boosting baselines across longitudinal records.

Authors propose LifeClock for accessible monitoring, personalised interventions, and prevention. Adding wearables and real-time biometrics could refine responsiveness, enabling earlier action on emerging risks and supporting equitable precision medicine at the population scale.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Rare but real, mental health risks at ChatGPT scale

OpenAI says a small share of ChatGPT users show possible signs of mental health emergencies each week, including mania, psychosis, or suicidal thoughts. The company puts the figure at 0.07 percent of weekly users and says such conversations trigger safety prompts. Critics note that even tiny percentages amount to large numbers at ChatGPT’s scale.

A further 0.15 percent of weekly users discuss explicit indicators of potential suicidal planning or intent. Updates aim to respond more safely and empathetically, and to flag indirect self-harm signals. Sensitive chats can be routed to safer models in a new window.

More than 170 clinicians across 60 countries advise OpenAI on risk cues and responses. Guidance focuses on encouraging users to seek real-world support. Researchers warn vulnerable people may struggle to act on on-screen warnings.

External specialists see both value and limits. AI may widen access when services are stretched, yet automated advice can mislead. Risks include reinforcing delusions and misplaced trust in authoritative-sounding output.

Legal and public scrutiny is rising after high-profile cases linked to chatbot interactions. Families and campaigners want more transparent accountability and stronger guardrails. Regulators continue to debate transparency, escalation pathways, and duty of care.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Adobe Firefly expands with new AI tools for audio and video creation

Adobe has unveiled major updates to its Firefly creative AI studio, introducing advanced audio, video, and imaging tools at the Adobe MAX 2025 conference.

These new features include Generate Soundtrack for licensed music creation, Generate Speech for lifelike multilingual voiceovers, and a timeline-based video editor that integrates seamlessly with Firefly’s existing creative tools.

The company also launched the Firefly Image Model 5, which can produce photorealistic 4MP images with prompt-based editing. Firefly now includes partner models from Google, OpenAI, ElevenLabs, Topaz Labs, and others, bringing the industry’s top AI capabilities into one unified workspace.

Adobe also announced Firefly Custom Models, allowing users to train AI models to match their personal creative style.

In a preview of future developments, Adobe showcased Project Moonlight, a conversational AI assistant that connects across creative apps and social channels to help creators move from concept to content in minutes.

The system offers tailored suggestions and automates parts of the creative process while keeping creators in complete control.

Adobe emphasised that Firefly is designed to enhance human creativity rather than replace it, offering responsible AI tools that respect intellectual property rights.

With this release, the company continues to integrate generative AI across its ecosystem, simplifying production and empowering creators at every stage of their workflow.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Yuan says AI ‘digital twins’ could trim meetings and the workweek

AI could shorten the workweek, says Zoom’s Eric Yuan. At TechCrunch Disrupt, he pitched AI ‘digital twins’ that attend meetings, negotiate drafts, and triage email, arguing assistants will shoulder routine tasks so humans focus on judgement.

Yuan has already used an AI avatar on an investor call to show how a stand-in can speak on your behalf. He said Zoom will keep investing heavily in assistants that understand context, prioritise messages, and draft responses.

Use cases extend beyond meetings. Yuan described counterparts sending their digital twins to hash out deal terms before principals join to resolve open issues, saving hours of live negotiation and accelerating consensus across teams and time zones.

Zoom plans to infuse AI across its suite, including whiteboards and collaborative docs, so work moves even when people are offline. Yuan said assistants will surface what matters, propose actions, and help execute routine workflows securely.

If adoption scales, Yuan sees schedules changing. He floated a five-year goal where many knowledge workers shift to three or four days a week, with AI increasing throughput, reducing meeting load, and improving focus time across organisations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Poland indicts former deputy justice minister in Pegasus spyware case

Poland’s former deputy justice minister, Michał Woś, has been indicted for allegedly authorising the transfer of $6.9 million from a fund intended for crime victims to a government office that later used the money to purchase commercial spyware.

Prosecutors claim the transfer took place in 2017. If convicted, Woś could face up to 10 years in prison.

The indictment is part of a broader investigation into the use of Pegasus, spyware developed by Israel’s NSO Group, in Poland between 2017 and 2022. The software was reportedly deployed against opposition politicians during that period.

In April 2024, Prime Minister Donald Tusk announced that nearly 600 individuals in Poland had been targeted with Pegasus under the previous Law and Justice (PiS) government, in which Woś served.

Responding on social media, Woś defended the purchase, writing that Pegasus was used to fight crime, and ‘that Prime Minister Tusk and Justice Minister Waldemar Żurek oppose such equipment is not surprising; just as criminals dislike the police, those involved in wrongdoing dislike crime detection tools.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!