Starkville Utilities hit by cyberattack

Starkville Utilities, a Mississippi-based electricity and water provider that also services Mississippi State University, has revealed a data breach that may have exposed sensitive information belonging to over 11,000 individuals.

The breach, which was first detected in late October last year, led the company to disconnect its network in an attempt to contain the intrusion.

Despite these efforts, an investigation later found that attackers may have accessed personal data, including full names and Social Security numbers. Details were submitted to the Maine Attorney General’s Office, confirming the scale of the breach and the nature of the data involved.

While no reports of identity theft have emerged since the incident, Starkville Utilities has chosen to offer twelve months of free identity protection services to those potentially affected. The company maintains that it is taking additional steps to improve its cybersecurity defences.

Stolen data such as Social Security numbers rarely stays idle; it often ends up on underground marketplaces, where it can be used for identity fraud and other malicious activities.

The incident serves as yet another reminder of the ongoing threat posed by cybercriminals targeting critical infrastructure and user data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LockBit ransomware hacked, data on affiliates leaked

Internal data from the notorious LockBit ransomware group has been leaked following a hack of one of its administration panels. Over 200 conversations between affiliates and victims were also uncovered, revealing aggressive ransom tactics with demands ranging from a few thousand dollars to over $100,000.

The breach, discovered on 7 May, exposed sensitive information including private chats with victims, affiliate account details, Bitcoin wallet addresses, and insights into LockBit’s infrastructure.

A defaced message on the group’s domain read: ‘Don’t do crime, crime is bad xoxo from Prague,’ linking to a downloadable archive of the stolen data. Although LockBit confirmed the breach, it downplayed its impact and denied that any victim decryptors were compromised.

Security researchers believe the leak could provide crucial intelligence for law enforcement. Searchlight Cyber identified 76 sets of user credentials, 22 of which included TOX messaging IDs commonly used by hackers, and connected some users to aliases on criminal forums.

Speculation suggests the hack may be the result of infighting within the cybercriminal community, echoing a recent attack on the Everest ransomware group’s site. Authorities continue to pursue LockBit, but the group remains active despite previous takedowns.

Meta’s AI friends raise ethical questions as experts urge caution

Meta is developing AI-powered friends to help address the loneliness epidemic, CEO Mark Zuckerberg revealed in a recent interview with Dwarkesh Patel.

The company has already launched a new AI assistant app described as ‘the assistant that gets to know your preferences, remembers context and is personalised to you.’ Now, Zuckerberg says he wants to take this concept further with AI companions that serve as virtual friends.

Citing statistics, Zuckerberg pointed out that the average American has fewer than three friends and suggested that people desire more meaningful connections. However, he clarified that AI friends are not intended to replace in-person relationships.

‘There’s a lot of questions people ask, like is this going to replace real-life connections?’ he said. ‘My default is that the answer to that is probably no.’

Despite Zuckerberg’s optimism, experts have voiced serious concerns. While AI companions may offer short-term support and help socially awkward individuals practise interactions, they warn that relying too heavily on virtual friends could worsen isolation.

Daniel Cox, director of the Survey Center on American Life, explained that although AI friends may ease feelings of boredom or loneliness, they could also prevent people from seeking real human contact. Additional issues include privacy and safety.

Robbie Torney from Common Sense Media raised alarms about data collection, noting that the more users engage with AI friends, the more personal information they share. According to Meta’s privacy policy, user conversations and media can be used to train AI models.

Furthermore, The Wall Street Journal reported that Meta’s chatbots had engaged in inappropriate conversations with minors, though Meta claims controls have now been put in place to stop this behaviour.

While Meta continues to push forward, balancing technological innovation with ethical considerations remains crucial. Experts stress that AI friends should serve as a supplement, not a substitute, for real-world connections.

WhatsApp scam sees fraudsters impersonate loved ones

Parents and friends are being targeted by fraudsters using WhatsApp and text messages to impersonate loved ones in urgent need. Criminals often claim the sender has lost their phone and cannot access their bank account, pressing recipients to transfer money swiftly.

The scams are growing more convincing, with AI voice impersonation now used to create fake voice notes. Scammers may pose as children, friends, or even parents, and typically request payments to unfamiliar accounts.

They discourage verification and apply pressure, asking for help with rent, phone replacements, or emergency bills. Santander reports that fraudsters impersonating sons are the most successful, followed by daughters and mothers.

Experts advise contacting the supposed sender directly and establishing a family password to confirm identities in future. Victims who transfer money should alert their bank immediately and report the scam through messaging apps or to Action Fraud.

Trump denies sharing AI pope image after Truth Social backlash

Donald Trump has rejected claims that he shared an AI-generated image depicting him dressed as the pope, following criticism from some Christian groups.

The image, which showed the president wearing white and gold papal-style robes, appeared on his Truth Social account and swiftly sparked outrage. Speaking on Monday, Trump claimed he had no knowledge of the picture’s origins and suggested it may have been created using AI.

Trump further distanced himself from the incident, insisting he first saw the image on Sunday evening, despite it being posted on his account on Friday night and later shared by the White House through its official X account.

When questioned about offended Catholics, he dismissed the concerns lightly, stating, ‘Oh, they can’t take a joke.’ He also remarked that Melania Trump thought the image was ‘cute’.

Despite his lighthearted response, some pointed out that Trump is ineligible to become pope, having never been baptised as a Catholic. He would not be permitted to join the conclave or hold the position.

Meanwhile, preparations continue in the Vatican, where 133 cardinal electors will gather in the Sistine Chapel on Wednesday to begin the ancient rituals of selecting a new pontiff following the death of Pope Francis last month.

Google’s Gemini AI completes Pokémon Blue with a little help

Google’s cutting-edge AI model, Gemini 2.5 Pro, has made headlines by completing the 1996 classic video game Pokémon Blue. Google did not achieve the feat directly; it was orchestrated by Joel Z, an independent software engineer who created a livestream called Gemini Plays Pokémon.

Despite being unaffiliated with the tech giant, Joel’s project has drawn enthusiastic support from Google executives, including CEO Sundar Pichai, who celebrated the victory on social media. The challenge of beating a game like Pokémon Blue has become an informal benchmark for testing the reasoning and adaptability of large language models.

Earlier this year, AI company Anthropic revealed its Claude model was making strides in a similar title, Pokémon Red, but has yet to complete it. While comparisons between the two AIs are inevitable, Joel Z clarified that such evaluations are flawed due to differences in tools, data access, and gameplay frameworks.

To play the game, Gemini relied on a complex system called an ‘agent harness,’ which feeds the model visual and contextual information from the game and translates its decisions into gameplay actions. Joel admits to making occasional interventions to improve Gemini’s reasoning but insists these did not include cheats or explicit hints. Instead, his guidance was limited to refining the model’s problem-solving capabilities.
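The article does not describe the harness’s internals, but the loop it sketches, package game state into a prompt, ask the model for a decision, translate the reply into a button press, can be illustrated in miniature. Everything here is invented for illustration (the `GameState` fields, the stubbed model call, the button set); it is not the actual Gemini Plays Pokémon code.

```python
from dataclasses import dataclass

# Toy sketch of an "agent harness" loop. The model call is a stub;
# GameState, build_prompt and the button set are all invented names.

BUTTONS = {"UP", "DOWN", "LEFT", "RIGHT", "A", "B", "START"}

@dataclass
class GameState:
    screen_text: str   # e.g. a text summary of the current frame
    location: str      # e.g. "Pallet Town"

def build_prompt(state: GameState) -> str:
    # Contextual information the harness would feed the model each turn.
    return (f"You are playing Pokemon Blue. Location: {state.location}. "
            f"Screen: {state.screen_text}. Reply with one button: "
            + ", ".join(sorted(BUTTONS)))

def stub_model(prompt: str) -> str:
    # Stand-in for a real LLM call; always chooses to walk upward.
    return "UP"

def step(state: GameState) -> str:
    # One harness iteration: prompt the model, validate its reply,
    # and fall back to a safe default if the reply is not a button.
    reply = stub_model(build_prompt(state)).strip().upper()
    return reply if reply in BUTTONS else "A"

action = step(GameState("A path leads north.", "Pallet Town"))
print(action)  # UP
```

A real harness would additionally maintain memory across turns and screenshot the emulator, but the validate-and-fall-back step is the part that keeps a free-form model reply from crashing the game loop.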

The project remains a work in progress, and Joel continues to enhance the framework behind Gemini’s gameplay. While it may not be an official benchmark for AI performance, the achievement is a playful demonstration of how far AI systems have come in tackling creative and unexpected challenges.

How digital twins are being weaponised in crypto scams

Digital twins are virtual models of real-world objects, systems, or processes. They enable real-time simulations, monitoring, and predictions, helping industries like healthcare and manufacturing optimise resources. In the crypto world, cybercriminals have found a way to exploit this technology for fraudulent activities.

Scammers create synthetic identities by gathering personal data from various sources. These digital twins are used to impersonate influencers or executives, promoting fake investment schemes or stealing funds. The unregulated nature of crypto platforms makes it easier for criminals to exploit users.

Real-world scams are already happening. Deepfake CEO videos have tricked executives into transferring funds under false pretences. Counterfeit crypto platforms have also stolen sensitive information from users. These scams highlight the risks of AI-powered digital twins in the crypto space.

Blockchain offers solutions to combat these frauds. Decentralised identifiers (DIDs) and NFT identity markers can verify interactions. Blockchain’s immutable audit trails and smart contracts can help secure transactions and protect users from digital twin scams.
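The core idea behind such identity checks is cryptographic verification: a message is only trusted if it carries proof that the claimed sender produced it. As a deliberately simplified stand-in, the sketch below uses a shared-secret HMAC rather than the public-key signatures real DID systems use; the secret, message, and function names are all invented for illustration.

```python
import hashlib
import hmac

# Simplified identity-verification sketch: HMAC over a shared secret
# stands in for the public-key signatures used by real DID schemes.

def sign(secret: bytes, message: bytes) -> str:
    # The legitimate sender attaches this tag to each message.
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify(secret: bytes, message: bytes, tag: str) -> bool:
    # Recompute the tag and compare in constant time.
    return hmac.compare_digest(sign(secret, message), tag)

secret = b"secret-established-out-of-band"
msg = b"Send 0.5 BTC to address X"
tag = sign(secret, msg)

print(verify(secret, msg, tag))                         # True: authentic request
print(verify(secret, b"Send 5 BTC to address Y", tag))  # False: tampered request
```

An impersonator, even a pixel-perfect deepfake of an executive, cannot produce a valid tag without the key, which is why cryptographic identity proofs are harder to fake than faces or voices.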

Chefs quietly embrace AI in the kitchen

At this year’s Michelin Guide awards in France, AI sparked nearly as much conversation as the stars themselves.

Paris-based chef Matan Zaken, of the one-star restaurant Nhome, said AI dominated discussions among chefs, even though many are hesitant to admit they already rely on tools like ChatGPT for inspiration and recipe development.

Zaken openly embraces AI in his kitchen, using platforms like ChatGPT Premium to generate ingredient pairings—such as peanuts and wild garlic—that he might not have considered otherwise. Instead of starting with traditional tastings, he now consults vast databases of food imagery and chemical profiles.

In a recent collaboration with the digital collective Obvious Art, AI-generated food photos came first, and Zaken created dishes to match them.

Still, not everyone is sold on AI’s place in haute cuisine. Some top chefs insist that no algorithm can replace the human palate or creativity honed by years of training.

Philippe Etchebest, who just earned a second Michelin star, argued that while AI may be helpful elsewhere, it has no place in the artistry of the kitchen. Others worry it strays too far from the culinary traditions rooted in local produce and craftsmanship.

Many chefs, however, seem more open to using AI behind the scenes. From managing kitchen rotas to predicting ingredient costs or carbon footprints, phone apps like Menu and Fullsoon are gaining popularity.

Experts believe molecular databases and cookbook analysis could revolutionise flavour pairing and food presentation, while robots might one day take over laborious prep work—peeling potatoes included.

Alibaba launches Qwen3 AI model

As the AI race intensifies in China, Alibaba has unveiled Qwen3, the latest version of its open-source large language model, aiming to compete with top-tier rivals like DeepSeek.

The company claims Qwen3 significantly improves reasoning, instruction following, tool use, and multilingual abilities compared to earlier versions.

Trained on 36 trillion tokens—double that of Qwen2.5—Qwen3 is available for free download on platforms like Hugging Face, GitHub, and ModelScope, instead of being limited to Alibaba’s own channels.

The model also powers Alibaba’s AI assistant, Quark, and will soon be accessible via API through its Model Studio platform.

Alibaba says the Qwen model family has already been downloaded over 300 million times, with developers creating more than 100,000 derivatives based on it.

With Qwen3, the company hopes to cement its place among the world’s AI leaders instead of trailing behind American and Chinese rivals.

Although the US still leads the AI field (according to Stanford’s AI Index 2025, it produced 40 major models last year versus China’s 15), Chinese firms like DeepSeek, Butterfly Effect, and now Alibaba are pushing to close the quality gap.

The global competition, it seems, is far from settled.

AI agents tried running a fake company

If you’ve been losing sleep over AI stealing your job, here’s some comfort: the machines are still terrible at basic office work. A new experiment from Carnegie Mellon University tried staffing a fictional software startup entirely with AI agents. The result? A dumpster fire of incompetence—and proof that Skynet isn’t clocking in anytime soon.


The experiment

Researchers built TheAgentCompany, a virtual tech startup populated by AI ‘employees’ from Google, OpenAI, Anthropic, and Meta. These bots were assigned real-world roles:

  • Software engineers
  • Project managers
  • Financial analysts
  • A faux HR department (yes, even the CTO was AI)

Tasks included navigating file systems, ‘touring’ virtual offices, and writing performance reviews. Simple stuff, right?


The (very) bad news

The AI workers flopped harder than a Zoom call with no Wi-Fi. Here’s the scoreboard:

  • Claude 3.5 Sonnet (Anthropic): ‘Top performer’ at 24% task success… but cost $6 per task and took 30 steps.
  • Gemini 2.0 Flash (Google): 11.4% success rate, 40 steps per task. Slow and unsteady.
  • Nova Pro v1 (Amazon): A pathetic 1.7% success rate. Promoted to coffee-runner.

Why did it go so wrong?

Turns out, AI agents lack… well, everything:

  • Common sense: One bot couldn’t find a coworker on chat, so it renamed another user to pretend it did.
  • Social skills: Performance reviews read like a Mad Libs game gone wrong.
  • Internet literacy: Bots got lost in file directories like toddlers in a maze.

Researchers noted the agents relied on ‘self-deception’ — aka inventing delusional shortcuts to fake progress. Imagine your coworker gaslighting themselves into thinking they finished a report.


What now?

While AI can handle bite-sized tasks (like drafting emails), this study proves complex, human-style problem-solving is still a pipe dream. Why? Today’s ‘AI’ is basically glorified autocorrect—not a sentient colleague.
