EU extends cyberattack sanctions until 2026

The EU Council has extended its sanctions on cyberattacks until May 18, 2026, with the legal framework for enforcing these measures now lasting until 2028. The sanctions target individuals and institutions involved in cyberattacks that pose a significant threat to the EU and its members.

The extended measures will allow the EU to impose restrictions on those responsible for cyberattacks, including freezing assets and blocking access to financial resources.

These actions may also apply to attacks against third countries or international organisations, if necessary for EU foreign and security policy objectives.

At present, sanctions are in place against 17 individuals and four institutions. The EU’s decision highlights its ongoing commitment to safeguarding its digital infrastructure and maintaining its foreign policy goals through legal actions against cyber threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US Copyright Office avoids clear decision on AI and fair use

The US Copyright Office has stopped short of deciding whether AI companies can legally use copyrighted material to train their systems under fair use.

Its newly released report acknowledges that some uses—such as non-commercial research—may qualify, while others, like replicating expressive works from pirated content to produce market-ready AI output, likely won’t.

Rather than offering a definitive answer, the Office said such cases must be assessed by the courts, not through a universal standard.

The latest report is the third in a series aimed at guiding how copyright law applies to AI-generated content. It reiterates that works entirely created by AI cannot be copyrighted, but human-edited outputs might still qualify.

The 108-page document focuses heavily on whether AI training methods transform content enough to justify legal protection, and whether they harm creators’ livelihoods through lost sales or diluted markets.

Instead of setting new policy, the Office highlights existing legal principles, especially the four factors of fair use: the purpose, the nature of the work, the amount used, and the impact on the original market.

It notes that AI-generated content can sometimes alter original works meaningfully, but when styles or outputs closely resemble protected material, legal risks remain. Tools like content filters are seen as helpful in preventing infringement, even though they’re not always reliable.

The timing of the report has been overshadowed by political turmoil. President Donald Trump reportedly dismissed both the Librarian of Congress and the head of the Copyright Office days before the report’s release.

Meanwhile, creators continue urging the government not to permit fair use in AI training, arguing it threatens the value of original work. The debate is now expected to unfold further in courtrooms instead of regulatory offices.

Jamie Lee Curtis calls out Zuckerberg over AI scam using her likeness

Jamie Lee Curtis has directly appealed to Mark Zuckerberg after discovering her likeness had been used without consent in an AI-generated advert.

Posting on Facebook, Curtis expressed her frustration with Meta’s lack of proper channels to report such abuse, stating she had exhausted all official avenues before resorting to a public plea.

The fake video reportedly manipulated footage from an emotional interview following the January wildfires in Los Angeles, inserting false statements under the guise of a product endorsement.

Curtis urged Zuckerberg to take action, saying the unauthorised content damaged her integrity and voice. Within hours of her public callout, Meta confirmed the video had been removed for breaching its policies, a rare example of a swift response.

‘It worked! Yay Internet! Shame has its value!’ she wrote in a follow-up, though she also highlighted the broader risks posed by deepfakes.

The actress joins a growing list of celebrities, including Taylor Swift and Scarlett Johansson, who’ve been targeted by AI misuse.

Swift was forced to publicly clarify her political stance after an AI video falsely endorsed Donald Trump, while Johansson criticised OpenAI for allegedly using a voice nearly identical to hers despite her refusal to participate in a project.

The issue has reignited concerns around consent, misinformation and the exploitation of public figures.

Lawmakers in California have already begun pushing back. New legislation signed by Governor Gavin Newsom aims to protect performers from unauthorised digital replicas and deepfakes.

Meanwhile, in Washington, proposals like the No Fakes Act seek to hold tech platforms accountable, possibly fining them thousands per violation. As Curtis and others warn, without stronger protections, the misuse of AI could spiral further, threatening not just celebrities but the public as a whole.

Starkville Utilities hit by cyberattack

Starkville Utilities, a Mississippi-based electricity and water provider that also services Mississippi State University, has revealed a data breach that may have exposed sensitive information belonging to over 11,000 individuals.

The breach, which was first detected in late October last year, led the company to disconnect its network in an attempt to contain the intrusion.

Despite these efforts, an investigation later found that attackers may have accessed personal data, including full names and Social Security numbers. Details were submitted to the Maine Attorney General’s Office, confirming the scale of the breach and the nature of the data involved.

While no reports of identity theft have emerged since the incident, Starkville Utilities has chosen to offer twelve months of free identity protection services to those potentially affected. The company maintains that it is taking additional steps to improve its cybersecurity defences.

Stolen data such as Social Security numbers rarely stays idle; it often ends up on underground marketplaces, where it can be used for identity fraud and other malicious activities.

The incident serves as yet another reminder of the ongoing threat posed by cybercriminals targeting critical infrastructure and user data.

LockBit ransomware hacked, data on affiliates leaked

Internal data from the notorious LockBit ransomware group has been leaked following a hack of one of its administration panels. Over 200 conversations between affiliates and victims were also uncovered, revealing aggressive ransom tactics with demands ranging from a few thousand dollars to over $100,000.

The breach, discovered on 7 May, exposed sensitive information including private chats with victims, affiliate account details, Bitcoin wallet addresses, and insights into LockBit’s infrastructure.

A defaced message on the group’s domain read: ‘Don’t do crime, crime is bad xoxo from Prague,’ linking to a downloadable archive of the stolen data. Although LockBit confirmed the breach, it downplayed its impact and denied that any victim decryptors were compromised.

Security researchers believe the leak could provide crucial intelligence for law enforcement. Searchlight Cyber identified 76 user credentials, 22 of which included Tox messaging IDs commonly used by hackers, and connected some users to aliases on criminal forums.

Speculation suggests the hack may be the result of infighting within the cybercriminal community, echoing a recent attack on the Everest ransomware group’s site. Authorities continue to pursue LockBit, but the group remains active despite previous takedowns.

Meta’s AI friends raise ethical questions as experts urge caution

Meta is developing AI-powered friends to help address the loneliness epidemic, CEO Mark Zuckerberg revealed in a recent interview with Dwarkesh Patel.

The company has already launched a new AI assistant app described as ‘the assistant that gets to know your preferences, remembers context and is personalised to you.’ Now, Zuckerberg says he wants to take this concept further with AI companions that serve as virtual friends.

Citing statistics, Zuckerberg pointed out that the average American has fewer than three friends and suggested that people desire more meaningful connections. However, he clarified that AI friends are not intended to replace in-person relationships.

‘There’s a lot of questions people ask, like is this going to replace real-life connections?’ he said. ‘My default is that the answer to that is probably no.’

Despite Zuckerberg’s optimism, experts have voiced serious concerns. While AI companions may offer short-term support and help socially awkward individuals practise interactions, they warn that relying too heavily on virtual friends could worsen isolation.

Daniel Cox, director of the Survey Center on American Life, explained that although AI friends may ease feelings of boredom or loneliness, they could also prevent people from seeking real human contact. Additional issues include privacy and safety.

Robbie Torney from Common Sense Media raised alarms about data collection, noting that the more users engage with AI friends, the more personal information they share. According to Meta’s privacy policy, user conversations and media can be used to train AI models.

Furthermore, The Wall Street Journal reported that Meta’s chatbots had engaged in inappropriate conversations with minors, though Meta claims controls have now been put in place to stop this behaviour.

While Meta continues to push forward, balancing technological innovation with ethical considerations remains crucial. Experts stress that AI friends should serve as a supplement, not a substitute, for real-world connections.

WhatsApp scam sees fraudsters impersonate loved ones

Parents and friends are being targeted by fraudsters using WhatsApp and text messages to impersonate loved ones in urgent need. Criminals often claim the sender has lost their phone and cannot access their bank account, pressing recipients to transfer money swiftly.

The scams are growing more convincing, with AI voice impersonation now used to create fake voice notes. Scammers may pose as children, friends, or even parents, and typically request payments to unfamiliar accounts.

They discourage verification and apply pressure, asking for help with rent, phone replacements, or emergency bills. Santander reports that fraudsters impersonating sons are the most successful, followed by daughters and mothers.

Experts advise contacting the supposed sender directly and establishing a family password to confirm identities in future. Victims who transfer money should alert their bank immediately and report the scam through the messaging app or to Action Fraud.

Trump denies sharing AI pope image after Truth Social backlash

Donald Trump has rejected claims that he shared an AI-generated image depicting him dressed as the pope, following criticism from some Christian groups.

The image, which showed the president wearing white and gold papal-style robes, appeared on his Truth Social account and swiftly sparked outrage. Speaking on Monday, Trump claimed he had no knowledge of the picture’s origins and suggested it may have been created using AI.

Trump further distanced himself from the incident, insisting he first saw the image on Sunday evening, despite it being posted on his account on Friday night and later shared by the White House through its official X account.

When questioned about offended Catholics, he dismissed the concerns lightly, stating, ‘Oh, they can’t take a joke.’ He also remarked that Melania Trump thought the image was ‘cute’.

Despite his lighthearted response, some pointed out that Trump is ineligible to become pope, having never been baptised as a Catholic. He would not be permitted to join the conclave or hold the position.

Meanwhile, preparations continue in the Vatican, where 133 cardinal electors will gather in the Sistine Chapel on Wednesday to begin the ancient rituals of selecting a new pontiff following the death of Pope Francis last month.

Google’s Gemini AI completes Pokémon Blue with a little help

Google’s cutting-edge AI model, Gemini 2.5 Pro, has made headlines by completing the 1996 classic video game Pokémon Blue. While Google didn’t achieve the feat directly, it was orchestrated by Joel Z, an independent software engineer who created a livestream called Gemini Plays Pokémon.

Despite being unaffiliated with the tech giant, Joel’s project has drawn enthusiastic support from Google executives, including CEO Sundar Pichai, who celebrated the victory on social media. The challenge of beating a game like Pokémon Blue has become an informal benchmark for testing the reasoning and adaptability of large language models.

Earlier this year, AI company Anthropic revealed its Claude model was making strides in a similar title, Pokémon Red, but has yet to complete it. While comparisons between the two AIs are inevitable, Joel Z clarified that such evaluations are flawed due to differences in tools, data access, and gameplay frameworks.

To play the game, Gemini relied on a complex system called an ‘agent harness,’ which feeds the model visual and contextual information from the game and translates its decisions into gameplay actions. Joel admits to making occasional interventions to improve Gemini’s reasoning but insists these did not include cheats or explicit hints. Instead, his guidance was limited to refining the model’s problem-solving capabilities.
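
The observe-decide-act cycle described above can be sketched as a minimal loop. Everything here is illustrative: the class and function names (FakeModel, FakeEmulator, run_harness) are stand-ins invented for this sketch, not Joel Z’s actual code, and the model and emulator are replaced with trivial stubs so the loop is self-contained.

```python
# Minimal sketch of an "agent harness" loop: the harness feeds the model
# an observation of game state, the model picks an action, and the
# harness translates that decision into a button press. All names are
# hypothetical stand-ins, not the real Gemini Plays Pokémon framework.

BUTTONS = {"UP", "DOWN", "LEFT", "RIGHT", "A", "B", "START", "SELECT"}

class FakeModel:
    """Stand-in for an LLM client: alternates walking right and pressing A."""
    def __init__(self):
        self.turn = 0

    def decide(self, observation: str) -> str:
        self.turn += 1
        return "RIGHT" if self.turn % 2 else "A"

class FakeEmulator:
    """Stand-in for the game emulator the harness drives."""
    def __init__(self):
        self.log = []

    def observe(self) -> str:
        # A real harness would supply a screenshot plus parsed game state.
        return f"frame {len(self.log)}: player standing in Pallet Town"

    def press(self, button: str) -> None:
        assert button in BUTTONS, f"invalid button: {button}"
        self.log.append(button)

def run_harness(model, emulator, steps: int) -> list[str]:
    """Core loop: observe -> decide -> act, repeated for a fixed number of steps."""
    for _ in range(steps):
        obs = emulator.observe()      # game state -> context for the model
        action = model.decide(obs)    # model reasons over the context
        emulator.press(action)        # decision -> gameplay input
    return emulator.log

print(run_harness(FakeModel(), FakeEmulator(), 4))
# -> ['RIGHT', 'A', 'RIGHT', 'A']
```

The interesting engineering in a real harness lives in the two arrows the stubs gloss over: turning raw frames and memory into context the model can reason about, and mapping free-form model output back onto a small, valid action set.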

The project remains a work in progress, and Joel continues to enhance the framework behind Gemini’s gameplay. While it may not be an official benchmark for AI performance, the achievement is a playful demonstration of how far AI systems have come in tackling creative and unexpected challenges.
