Jamie Lee Curtis calls out Zuckerberg over AI scam using her likeness

Jamie Lee Curtis has directly appealed to Mark Zuckerberg after discovering her likeness had been used without consent in an AI-generated advert.

Posting on Facebook, Curtis expressed her frustration with Meta’s lack of proper channels to report such abuse, stating she had exhausted all official avenues before resorting to a public plea.

The fake video reportedly manipulated footage from an emotional interview following the January wildfires in Los Angeles, inserting false statements under the guise of a product endorsement.

Curtis urged Zuckerberg to take action, saying the unauthorised content compromised her integrity and misused her voice. Within hours of her public callout, Meta confirmed the video had been removed for breaching its policies, a rare example of a swift response.

‘It worked! Yay Internet! Shame has its value!’ she wrote in a follow-up, though she also highlighted the broader risks posed by deepfakes.

The actress joins a growing list of celebrities, including Taylor Swift and Scarlett Johansson, who’ve been targeted by AI misuse.

Swift was forced to publicly clarify her political stance after an AI video falsely endorsed Donald Trump, while Johansson criticised OpenAI for allegedly using a voice nearly identical to hers despite her refusal to participate in a project.

The issue has reignited concerns around consent, misinformation and the exploitation of public figures.

Lawmakers in California have already begun pushing back. New legislation signed by Governor Gavin Newsom aims to protect performers from unauthorised digital replicas and deepfakes.

Meanwhile, in Washington, proposals like the No Fakes Act seek to hold tech platforms accountable, with potential fines of thousands of dollars per violation. As Curtis and others warn, without stronger protections, the misuse of AI could spiral further, threatening not just celebrities but the public as a whole.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Starkville Utilities hit by cyberattack

Starkville Utilities, a Mississippi-based electricity and water provider that also services Mississippi State University, has revealed a data breach that may have exposed sensitive information belonging to over 11,000 individuals.

The breach, which was first detected in late October last year, led the company to disconnect its network in an attempt to contain the intrusion.

Despite these efforts, an investigation later found that attackers may have accessed personal data, including full names and Social Security numbers. Details were submitted to the Maine Attorney General’s Office, confirming the scale of the breach and the nature of the data involved.

While no reports of identity theft have emerged since the incident, Starkville Utilities has chosen to offer twelve months of free identity protection services to those potentially affected. The company maintains that it is taking additional steps to improve its cybersecurity defences.

Stolen data such as Social Security numbers rarely stays idle; it often ends up on underground marketplaces, where it can be used for identity fraud and other malicious activity.

The incident serves as yet another reminder of the ongoing threat posed by cybercriminals targeting critical infrastructure and user data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LockBit ransomware hacked, data on affiliates leaked

Internal data from the notorious LockBit ransomware group has been leaked following a hack of one of its administration panels. The leak includes more than 200 conversations between affiliates and victims, revealing aggressive ransom tactics with demands ranging from a few thousand dollars to over $100,000.

The breach, discovered on 7 May, exposed sensitive information including private chats with victims, affiliate account details, Bitcoin wallet addresses, and insights into LockBit’s infrastructure.

A defaced message on the group’s domain read: ‘Don’t do crime, crime is bad xoxo from Prague,’ linking to a downloadable archive of the stolen data. Although LockBit confirmed the breach, it downplayed its impact and denied that any victim decryptors were compromised.

Security researchers believe the leak could provide crucial intelligence for law enforcement. Searchlight Cyber identified 76 sets of user credentials, 22 of which included TOX messaging IDs commonly used by hackers, and connected some users to aliases on criminal forums.

Speculation suggests the hack may be the result of infighting within the cybercriminal community, echoing a recent attack on the Everest ransomware group’s site. Authorities continue to pursue LockBit, but the group remains active despite previous takedowns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta’s AI friends raise ethical questions as experts urge caution

Meta is developing AI-powered friends to help address the loneliness epidemic, CEO Mark Zuckerberg revealed in a recent interview with Dwarkesh Patel.

The company has already launched a new AI assistant app described as ‘the assistant that gets to know your preferences, remembers context and is personalised to you.’ Now, Zuckerberg says he wants to take this concept further with AI companions that serve as virtual friends.

Citing statistics, Zuckerberg pointed out that the average American has fewer than three friends and suggested that people desire more meaningful connections. However, he clarified that AI friends are not intended to replace in-person relationships.

‘There’s a lot of questions people ask, like is this going to replace real-life connections?’ he said. ‘My default is that the answer to that is probably no.’

Despite Zuckerberg’s optimism, experts have voiced serious concerns. While AI companions may offer short-term support and help socially awkward individuals practise interactions, they warn that relying too heavily on virtual friends could worsen isolation.

Daniel Cox, director of the Survey Center on American Life, explained that although AI friends may ease feelings of boredom or loneliness, they could also prevent people from seeking real human contact. Additional issues include privacy and safety.

Robbie Torney from Common Sense Media raised alarms about data collection, noting that the more users engage with AI friends, the more personal information they share. According to Meta’s privacy policy, user conversations and media can be used to train AI models.

Furthermore, The Wall Street Journal reported that Meta’s chatbots had engaged in inappropriate conversations with minors, though Meta claims controls have now been put in place to stop this behaviour.

While Meta continues to push forward, balancing technological innovation with ethical considerations remains crucial. Experts stress that AI friends should serve as a supplement, not a substitute, for real-world connections.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp scam sees fraudsters impersonate loved ones

Parents and friends are being targeted by fraudsters using WhatsApp and text messages to impersonate loved ones in urgent need. Criminals often claim the sender has lost their phone and cannot access their bank account, pressing recipients to transfer money swiftly.

The scams are growing more convincing, with AI voice impersonation now used to create fake voice notes. Scammers may pose as children, friends, or even parents, and typically request payments to unfamiliar accounts.

They discourage verification and apply pressure, asking for help with rent, phone replacements, or emergency bills. Santander reports that fraudsters impersonating sons are the most successful, followed by daughters and mothers.

Experts advise contacting the supposed sender directly and establishing a family password to confirm identities in future. Victims who transfer money should alert their bank immediately and report the scam through the messaging app or to Action Fraud.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump denies sharing AI pope image after Truth Social backlash

Donald Trump has rejected claims that he shared an AI-generated image depicting him dressed as the pope, following criticism from some Christian groups.

The image, which showed the president wearing white and gold papal-style robes, appeared on his Truth Social account and swiftly sparked outrage. Speaking on Monday, Trump claimed he had no knowledge of the picture’s origins and suggested it may have been created using AI.

Trump further distanced himself from the incident, insisting he first saw the image on Sunday evening, despite it being posted on his account on Friday night and later shared by the White House through its official X account.

When questioned about offended Catholics, he dismissed the concerns lightly, stating, ‘Oh, they can’t take a joke.’ He also remarked that Melania Trump thought the image was ‘cute’.

Despite his lighthearted response, some pointed out that Trump is ineligible to become pope, having never been baptised as a Catholic. He would not be permitted to join the conclave or hold the position.

Meanwhile, preparations continue in the Vatican, where 133 cardinal electors will gather in the Sistine Chapel on Wednesday to begin the ancient rituals of selecting a new pontiff following the death of Pope Francis last month.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google’s Gemini AI completes Pokémon Blue with a little help

Google’s cutting-edge AI model, Gemini 2.5 Pro, has made headlines by completing the 1996 classic video game Pokémon Blue. Google did not achieve the feat itself: the run was orchestrated by Joel Z, an independent software engineer who created a livestream called Gemini Plays Pokémon.

Despite being unaffiliated with the tech giant, Joel’s project has drawn enthusiastic support from Google executives, including CEO Sundar Pichai, who celebrated the victory on social media. The challenge of beating a game like Pokémon Blue has become an informal benchmark for testing the reasoning and adaptability of large language models.

Earlier this year, AI company Anthropic revealed its Claude model was making strides in a similar title, Pokémon Red, but has yet to complete it. While comparisons between the two AIs are inevitable, Joel Z clarified that such evaluations are flawed due to differences in tools, data access, and gameplay frameworks.

To play the game, Gemini relied on a complex system called an ‘agent harness,’ which feeds the model visual and contextual information from the game and translates its decisions into gameplay actions. Joel admits to making occasional interventions to improve Gemini’s reasoning but insists these did not include cheats or explicit hints. Instead, his guidance was limited to refining the model’s problem-solving capabilities.
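The observe-decide-act loop described above can be sketched in miniature. All names below are illustrative assumptions, not Joel Z’s actual harness: the real system feeds the model richer visual and contextual information, but the basic shape is the same.

```python
# Toy sketch of an "agent harness" loop: the harness shows the model
# what is on screen plus remembered context, asks for a decision, and
# translates the free-text reply into a concrete game input.
VALID_BUTTONS = {"UP", "DOWN", "LEFT", "RIGHT", "A", "B", "START"}

def build_prompt(screen_text, context):
    """Combine what the model 'sees' with remembered context."""
    return f"Screen:\n{screen_text}\n\nContext:\n{context}\n\nChoose one button."

def parse_action(model_reply):
    """Translate the model's free-text decision into a button press."""
    for token in model_reply.upper().split():
        if token.strip(".,!") in VALID_BUTTONS:
            return token.strip(".,!")
    return "A"  # harmless default if no button is recognised

def harness_step(screen_text, context, model):
    """One iteration of the loop: observe -> decide -> act."""
    prompt = build_prompt(screen_text, context)
    reply = model(prompt)
    return parse_action(reply)

# Stubbed 'model' standing in for Gemini, for demonstration only.
def stub_model(prompt):
    return "I will press A to advance the dialogue."

action = harness_step("A wild PIDGEY appeared!", "Goal: win the battle", stub_model)
print(action)  # A
```

In the real project this loop runs continuously against an emulator, and the context passed back to the model is where interventions like Joel’s refinements to the reasoning process would take effect.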

The project remains a work in progress, and Joel continues to enhance the framework behind Gemini’s gameplay. While it may not be an official benchmark for AI performance, the achievement is a playful demonstration of how far AI systems have come in tackling creative and unexpected challenges.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How digital twins are being weaponised in crypto scams

Digital twins are virtual models of real-world objects, systems, or processes. They enable real-time simulations, monitoring, and predictions, helping industries like healthcare and manufacturing optimise resources. In the crypto world, cybercriminals have found a way to exploit this technology for fraudulent activities.

Scammers create synthetic identities by gathering personal data from various sources. These digital twins are used to impersonate influencers or executives, promoting fake investment schemes or stealing funds. The unregulated nature of crypto platforms makes it easier for criminals to exploit users.

Real-world scams are already happening. Deepfake CEO videos have tricked executives into transferring funds under false pretences. Counterfeit crypto platforms have also stolen sensitive information from users. These scams highlight the risks of AI-powered digital twins in the crypto space.

Blockchain offers potential countermeasures. Decentralised identifiers (DIDs) and NFT-based identity markers can help verify who is really behind an interaction, while blockchain’s immutable audit trails and smart contracts can help secure transactions and protect users from digital twin scams.
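The verification idea behind these countermeasures can be illustrated with a toy example. The sketch below uses a shared-secret HMAC in place of the public-key signatures a real DID system would use, and a plain dictionary in place of an immutable on-chain registry; it is an assumption-laden illustration of the principle, not a real blockchain implementation.

```python
# Toy illustration of registry-backed identity verification: a message
# is trusted only if the signing key matches a previously registered
# fingerprint AND the signature over the message checks out.
import hashlib
import hmac

# Append-only registry mapping an identifier to a key fingerprint
# (on a real chain this record would be immutable and auditable).
REGISTRY = {}

def register_identity(did, secret_key):
    REGISTRY[did] = hashlib.sha256(secret_key).hexdigest()

def sign(secret_key, message):
    return hmac.new(secret_key, message, hashlib.sha256).hexdigest()

def verify(did, secret_key, message, signature):
    fingerprint = hashlib.sha256(secret_key).hexdigest()
    if REGISTRY.get(did) != fingerprint:
        return False  # impersonator using an unregistered key
    return hmac.compare_digest(sign(secret_key, message), signature)

register_identity("did:example:alice", b"alice-key")
sig = sign(b"alice-key", b"send 1 BTC")
print(verify("did:example:alice", b"alice-key", b"send 1 BTC", sig))   # True
print(verify("did:example:alice", b"mallory-key", b"send 1 BTC", sig)) # False
```

A deepfaked executive could mimic a face or voice, but without the registered key their instructions would fail this kind of check.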

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chefs quietly embrace AI in the kitchen

At this year’s Michelin Guide awards in France, AI sparked nearly as much conversation as the stars themselves.

Paris-based chef Matan Zaken, of the one-star restaurant Nhome, said AI dominated discussions among chefs, even though many are hesitant to admit they already rely on tools like ChatGPT for inspiration and recipe development.

Zaken openly embraces AI in his kitchen, using platforms like ChatGPT Premium to generate ingredient pairings—such as peanuts and wild garlic—that he might not have considered otherwise. Instead of starting with traditional tastings, he now consults vast databases of food imagery and chemical profiles.

In a recent collaboration with the digital collective Obvious Art, AI-generated food photos came first, and Zaken created dishes to match them.

Still, not everyone is sold on AI’s place in haute cuisine. Some top chefs insist that no algorithm can replace the human palate or creativity honed by years of training.

Philippe Etchebest, who just earned a second Michelin star, argued that while AI may be helpful elsewhere, it has no place in the artistry of the kitchen. Others worry it strays too far from the culinary traditions rooted in local produce and craftsmanship.

Many chefs, however, seem more open to using AI behind the scenes. From managing kitchen rotas to predicting ingredient costs or carbon footprints, phone apps like Menu and Fullsoon are gaining popularity.

Experts believe molecular databases and cookbook analysis could revolutionise flavour pairing and food presentation, while robots might one day take over laborious prep work—peeling potatoes included.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!