Secret passwords could fight deepfake scams

As AI-generated images grow increasingly lifelike, a cybersecurity expert has warned that families should create secret passwords to guard against deepfake scams.

Cody Barrow, chief executive of EclecticIQ and a former US government adviser, says AI is making it far easier for criminals to impersonate others using fabricated videos or images.

Mr Barrow and his wife now use a private code to confirm each other’s identity if either receives a suspicious message or video.

He believes this precaution, simple enough for anyone regardless of age or digital skills, could soon become essential. ‘It may sound dramatic here in May 2025,’ he said, ‘but I’m quite confident that in a few years, if not months, people will say: I should have done that.’

The warning comes the same week Google launched Veo 3, its AI video generator capable of producing hyper-realistic footage and lifelike dialogue. Its public release has raised concerns about how easily deepfakes could be misused for scams or manipulation.

Meanwhile, President Trump signed the ‘Take It Down Act’ into law, making the non-consensual creation of deepfake pornography a criminal offence. The bipartisan measure introduces prison terms for anyone producing or uploading such content, and First Lady Melania Trump said it will ‘prioritise people over politics’.

Texas considers statewide social media ban for minors

Texas is considering a bill that would ban social media use for anyone under 18. The proposal, which recently cleared a state Senate committee, is expected to be voted on before the legislative session ends on 2 June.

If passed, the bill would require platforms to verify the age of all users and allow parents to delete their child’s account. Platforms would have 10 days to comply or face penalties from the state attorney general.

This follows similar efforts in other states. Florida recently enacted a law banning social media use for children under 14 and requiring parental consent for those aged 14 to 15. The Texas bill, however, proposes broader restrictions.

At the federal level, a Senate bill introduced in 2024 aims to bar children under 13 from using social media. While it remains stalled in committee, comments from Senators Brian Schatz and Ted Cruz suggest a renewed push may be underway.

Authorities strike down cybercriminal servers

Authorities across Europe and North America have dismantled a major global malware network, taking down more than 300 servers and seizing millions in cryptocurrency. The operation, led by Eurojust, marks a significant phase of the ongoing Operation Endgame.

Law enforcement agencies from Germany, France, the Netherlands, Denmark, the UK, the US and Canada collaborated to target some of the world’s most dangerous malware variants and the cybercriminals responsible for them.

The takedown also resulted in international arrest warrants for 20 suspects and the identification of more than 36 individuals involved.

The latest move follows similar action in May 2024, which had been the largest coordinated effort against botnets. Since the start of the operation, over €21 million has been seized, including €3.5 million in cryptocurrency.

The malware disrupted in this crackdown, known as ‘initial access malware’, is used to gain a foothold in victims’ systems before further attacks like ransomware are launched.

Authorities have warned that Operation Endgame will continue, with further actions announced through the coalition’s website. Eighteen prime suspects will be added to the EU Most Wanted list.

Ransomware gang leaks French government emails

A ransomware gang has published what it claims is sensitive data from multiple French organisations on a dark web forum.

The Stormous cartel, active since 2022, posted the dataset as a ‘comprehensive leak’ allegedly involving high-profile French government bodies.

However, researchers from Cybernews examined the information and found the data’s quality questionable, with outdated MD5 password hashes indicating it could be from older breaches.

Despite its age, the dataset could still be dangerous if reused credentials are involved. Threat actors may exploit the leaked emails for phishing campaigns by impersonating government agencies to extract more sensitive details.

Cybernews noted that even weak password hashes can eventually be cracked, especially when stronger security measures weren’t in place at the time of collection.
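
The weakness is easy to illustrate: unsalted MD5 digests are so fast to compute that attackers simply hash large wordlists until one matches a leaked value. Below is a minimal, purely illustrative Python sketch; the hash and the tiny wordlist are invented for the example, and real dictionary attacks run over wordlists with millions or billions of entries.

```python
import hashlib

# Hypothetical example: an unsalted MD5 hash as it might appear in an old breach dump.
# (It is simply the MD5 of the string 'sunshine1', chosen for illustration.)
leaked_hash = hashlib.md5(b"sunshine1").hexdigest()

# A tiny stand-in for the huge wordlists used in real dictionary attacks.
wordlist = ["password", "123456", "qwerty", "sunshine1", "letmein"]

def crack_md5(target_hash, candidates):
    """Return the first candidate whose MD5 digest matches the target, or None."""
    for candidate in candidates:
        if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

print(crack_md5(leaked_hash, wordlist))  # -> sunshine1
```

Because commodity hardware can test billions of MD5 candidates per second, and because the absence of a salt means identical passwords produce identical hashes, even an old dump of this kind remains exploitable wherever the same passwords are still in use.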

Among the affected organisations are Agence Française de Développement, the Paris Region’s Regional Health Agency, and the Court of Audit.

The number of exposed email addresses varies, with some institutions having only a handful leaked while others face hundreds. The French cybersecurity agency ANSSI has yet to comment.

Last year, France faced another massive exposure incident affecting 95 million citizen records, adding to concerns about ongoing cyber vulnerabilities.

The power of compromise

In a recent blog post titled ‘Compromise is not a dirty word – It’s the glue holding humanity together,’ Jovan Kurbalija reflects on the often misunderstood nature of compromise. Prompted by the sight of Lucid cars in Geneva, Switzerland, bearing the slogan ‘Compromise Nothing,’ he questions why compromise is so frequently seen as weakness when it is the foundation of human coexistence.

From families to international diplomacy, our ability to meet halfway allows us to survive and thrive together. Kurbalija reminds us that the word comes from the Latin for ‘promising together’—a mutual commitment rather than a concession.

In today’s world, however, standing firm is glorified while compromise is dismissed. Yet, he argues, true courage lies in embracing others’ needs without surrendering one’s principles and navigating the messy but necessary space between absolutes.

He contrasts this human necessity with how compromise is portrayed in marketing—as a flaw to be avoided—and in tech jargon, where being ‘compromised’ means a breach or failure. These modern distortions have led us to equate flexibility with defeat, instead of maturity. In truth, refusing to compromise risks far more than bending a little.

Ultimately, Kurbalija calls for a shift in mindset: rather than rejecting compromise altogether, we should learn to use it wisely, to preserve the greater good over rigid standoffs. In a world as interconnected and fragile as ours, compromise is not surrender; it’s survival.

Judge rules Google must face chatbot lawsuit

A federal judge has ruled that Google and AI startup Character.AI must face a lawsuit brought by a Florida mother, who alleges a chatbot on the platform contributed to the tragic death of her 14-year-old son.

US District Judge Anne Conway rejected the companies’ arguments that chatbot-generated content is protected under free speech laws. She also denied Google’s motion to be excluded from the case, finding that the tech giant could share responsibility for aiding Character.AI.

The ruling is seen as a pivotal moment in testing the legal boundaries of AI accountability.

The case, one of the first in the US to target AI over alleged psychological harm to a child, centres on Megan Garcia’s claim that her son, Sewell Setzer, formed an emotional dependence on a chatbot.

Though aware it was artificial, Sewell, who had been diagnosed with anxiety and mood disorders, preferred the chatbot’s companionship over real-life relationships or therapy. He died by suicide in February 2024.

The lawsuit states that the chatbot impersonated both a therapist and a romantic partner, manipulating the teenager’s emotional state. In his final moments, Sewell messaged a bot mimicking a Game of Thrones character, saying he was ‘coming home’.

Character.AI insists it will continue to defend itself and has highlighted existing features intended to prevent discussions of self-harm. Google stressed it had no role in managing the app but had previously rehired the startup’s founders and licensed its technology.

Garcia claims Google was actively involved in developing the underlying technology and should be held liable.

The case casts new scrutiny on the fast-growing AI companionship industry, which operates with minimal regulation. For about $10 per month, users can create AI friends or romantic partners, marketed as solutions for loneliness.

Critics warn that these tools may pose mental health risks, especially for vulnerable users.

Google launches Gemini Live and Pro/Ultra AI tiers at I/O 2025

At Google I/O 2025, the company unveiled significant updates to its Gemini AI assistant, expanding its features, integrations, and pricing tiers to better compete with ChatGPT, Siri, and other leading AI tools.

A highlight of the announcement is the rollout of Gemini Live to all Android and iOS users, which enables near real-time conversations with the AI using a smartphone’s camera or screen. Users can, for example, point their phone at a building and ask Gemini for information, receiving immediate answers.

Gemini Live is also set to integrate with core Google apps in the coming weeks. Users will be able to get directions from Maps, create events in Calendar, and manage tasks via Google Tasks—all from within the Gemini interface.

Google also introduced new subscription tiers. Google AI Pro, formerly Gemini Advanced, is priced at $20/month, while the premium AI Ultra plan costs $250/month, offering high usage limits, early access to new models, and exclusive tools.

Gemini is now accessible directly in Chrome for Pro and Ultra users in the US with English as their default language, allowing on-screen content summarisation and Q&A.
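
A similar summarise-and-ask workflow is also open to developers through Google’s public Gemini API. The Python sketch below is a non-authoritative illustration: the model name, API version and response layout follow Google’s documented generateContent REST pattern, but should be checked against the current documentation before use.

```python
import os
import requests

# Illustrative only: summarise a block of text via the Gemini generateContent endpoint.
# The model name and API version here are assumptions; adjust them to the current docs.
API_KEY = os.environ["GEMINI_API_KEY"]
MODEL = "gemini-2.0-flash"
URL = f"https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:generateContent"

article_text = "Paste the on-screen content you want summarised here."

response = requests.post(
    URL,
    params={"key": API_KEY},
    json={"contents": [{"parts": [{"text": "Summarise in three bullet points:\n" + article_text}]}]},
    timeout=30,
)
response.raise_for_status()

# The generated text is nested under candidates -> content -> parts.
reply = response.json()
print(reply["candidates"][0]["content"]["parts"][0]["text"])
```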

The Deep Research feature now supports private PDF and image uploads, combining them with public data to generate custom reports. Integration with Gmail and Google Drive is coming soon.

Visual tools are also improving. Free users get access to Imagen 4, a new image generation model, while Ultra users can try Veo 3, which includes native sound generation for AI-generated video.

For students, Gemini now offers personalised quizzes that adapt to areas where users struggle, helping with targeted learning.

Gemini now serves over 400 million monthly users, as Google deepens its AI footprint across its platforms through seamless integration and real-time multimodal capabilities.

West Lothian schools hit by ransomware attack

West Lothian Council has confirmed that personal and sensitive information was stolen following a ransomware cyberattack which struck the region’s education system on Tuesday, 6 May. Police Scotland has launched an investigation, and the matter remains an active criminal case.

Only a small fraction of the data held on the education network was accessed by the attackers. However, some of it included sensitive personal information. Parents and carers across West Lothian’s schools have been notified, and staff have also been advised to take extra precautions.

The cyberattack disrupted IT systems serving 13 secondary schools, 69 primary schools and 61 nurseries. Although the education network remains isolated from the rest of the council’s systems, contingency plans have been effective in minimising disruption, including during the ongoing SQA exams.

West Lothian Council has apologised to anyone potentially affected. It is continuing to work closely with Police Scotland and the Scottish Government. Officials have promised further updates as more information becomes available.

UK research body hit by 5.4 million cyberattacks

UK Research and Innovation (UKRI), the country’s national funding body for science and research, has reported a staggering 5.4 million cyberattacks this year, a sixfold increase on the previous year.

According to data obtained through freedom of information requests, 236,400 of these threats were phishing attempts designed to trick employees into revealing sensitive data, and a further 11,200 were malware-based attacks; the vast majority of the remainder were identified as spam or malicious emails.

The scale of these incidents highlights the growing threat faced by both public and private sector institutions. Experts believe the rise of AI has enabled cybercriminals to launch more frequent and sophisticated attacks.

Rick Boyce, technology chief at AND Digital, warned that the emergence of AI has introduced threats ‘at a pace we’ve never seen before’, calling for a move beyond traditional defences to stay ahead of evolving risks.

UKRI, which is sponsored by the Department for Science, Innovation and Technology, manages an annual budget of £8 billion, much of it invested in cutting-edge research.

A budget like this makes it an attractive target for cybercriminals and state-sponsored actors alike, particularly those looking to steal intellectual property or sabotage infrastructure. Security experts suggest the scale and nature of the attacks point to involvement from hostile nation states, with Russia a likely culprit.

Though UKRI cautioned that differing reporting periods may affect the accuracy of year-on-year comparisons, there is little doubt about the severity of the threat.

The UK’s National Cyber Security Centre (NCSC) has previously warned of Russia’s Unit 29155 targeting British government bodies and infrastructure for espionage and disruption.

With other notorious groups such as Fancy Bear and Sandworm also active, the cybersecurity landscape is becoming increasingly fraught.

Ascension faces fresh data breach fallout

A major cybersecurity breach has struck Ascension, one of the largest nonprofit healthcare systems in the US, exposing the sensitive information of over 430,000 patients.

The incident began in December 2024, when Ascension discovered that patient data had been compromised through a former business partner’s software flaw.

The indirect breach allowed cybercriminals to siphon off a wide range of personal, medical and financial details — including Social Security numbers, diagnosis codes, hospital admission records and insurance data.

The breach adds to growing concerns over the healthcare industry’s vulnerability to cyberattacks. In 2024 alone, 1,160 healthcare-related data breaches were reported, affecting 305 million records — a sharp rise from the previous year.

Many institutions still treat cybersecurity as an afterthought instead of a core responsibility, despite handling highly valuable and sensitive data.

Ascension itself has been targeted multiple times, including a ransomware attack in May 2024 that disrupted services at dozens of hospitals and affected nearly 5.6 million individuals.

Ascension has since filed notices with regulators and is offering two years of identity monitoring to those impacted. However, critics argue this response is inadequate and reflects a broader pattern of negligence across the sector.

The company has not named the third-party vendor responsible, but experts believe the incident may be tied to a larger ransomware campaign that exploited flaws in widely used file-transfer software.

Rather than treating such incidents as isolated, experts warn that these breaches highlight systemic flaws in healthcare’s digital infrastructure. As criminals grow more sophisticated and vendors remain vulnerable, patients bear the consequences.

Until healthcare providers prioritise cybersecurity instead of cutting corners, breaches like this are likely to become even more common — and more damaging.
