German court allows Meta to use Facebook and Instagram data

A German court has ruled in favour of Meta, allowing the tech company to use data from Facebook and Instagram to train AI systems. The Cologne court found that Meta had not breached EU law and deemed its AI development a legitimate interest.

According to the court, Meta is permitted to process public user data without explicit consent. Judges argued that Meta’s aim of training AI systems could not be achieved by other equally effective but less intrusive means.

They noted that Meta plans to use only publicly accessible data and had taken adequate steps to inform users via its mobile apps.

Despite the ruling, the North Rhine-Westphalia Consumer Advice Centre remains critical, raising concerns about legality and user privacy. Privacy group Noyb also challenged the decision, warning it could take further legal action, including a potential class-action lawsuit.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI agents bring new security risks to crypto

AI agents are becoming common in crypto, embedded in wallets, trading bots and onchain assistants that automate decisions and tasks. At the core of many of these agents lies the Model Context Protocol (MCP), an open standard governing how they connect to external tools and data.

While MCP offers flexibility, it also opens up multiple security risks.

Security researchers at SlowMist have identified four main ways attackers could exploit AI agents via malicious plugins: data poisoning, JSON injection, function overrides, and cross-MCP calls, all of which can manipulate or disrupt an agent’s operations.

Unlike poisoning AI models during training, these attacks target real-time interactions and plugin behaviour.

The number of AI agents in crypto is growing rapidly and is expected to exceed one million in 2025. Experts warn that failing to secure the AI layer early could expose crypto assets to serious threats, such as private key leaks or unauthorised access.

Developers are urged to enforce strict plugin verification, sanitise inputs, and apply least privilege access to prevent these vulnerabilities.
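
As a rough illustration of those mitigations, the Python sketch below pins a plugin to an approved hash before loading it and strips unexpected keys from plugin arguments. Everything in it (the plugin name, the approved bundle, the allowed keys) is a made-up assumption for illustration, not part of any real MCP implementation.

```python
import hashlib
import json

# Hypothetical approved plugin bundle with its pinned SHA-256 digest.
APPROVED_CODE = b"def get_price(symbol): ..."
APPROVED_PLUGINS = {"price_feed": hashlib.sha256(APPROVED_CODE).hexdigest()}

# Only these argument keys may reach a plugin; everything else is dropped.
ALLOWED_ARG_KEYS = {"symbol", "amount", "slippage"}


def verify_plugin(name: str, code: bytes) -> bool:
    """Reject any plugin whose code no longer matches its pinned digest."""
    expected = APPROVED_PLUGINS.get(name)
    return expected is not None and hashlib.sha256(code).hexdigest() == expected


def sanitise_args(raw: str) -> dict:
    """Parse plugin arguments strictly and strip unexpected keys (a JSON-injection guard)."""
    args = json.loads(raw)  # raises on malformed input instead of guessing
    if not isinstance(args, dict):
        raise ValueError("arguments must be a JSON object")
    return {k: v for k, v in args.items() if k in ALLOWED_ARG_KEYS}


print(verify_plugin("price_feed", b"def get_price(symbol): steal_keys()"))  # -> False
print(sanitise_args('{"symbol": "ETH", "override": "send_all_funds"}'))     # -> {'symbol': 'ETH'}
```

Hash-pinning rejects tampered plugin code outright, while strict argument filtering means an injected instruction never reaches the plugin at all.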

Building AI agents quickly without security measures risks costly breaches. While adding protections may be tedious, experts agree it is essential to protect crypto wallets and funds as AI agents become more widespread.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyber scams use a three-letter trap

Staying safe from cybercriminals can be surprisingly simple. While AI-powered scams grow more realistic, some signs are still painfully obvious.

If you spot the letters ‘.TOP’ in any message link, it’s best to stop reading and hit delete. That single clue is often enough to expose a scam in progress.

Most malicious texts pose as alerts about road tolls, deliveries or account issues, using trusted brand names to lure victims into clicking fake links.

The most notorious offender is the ‘.TOP’ top-level domain (TLD), which has become infamous for its role in phishing and scam operations. Although launched in 2014 for premium business use, its low cost and lax oversight quickly made it a favourite among cyber gangs, especially those based in China.

Today, nearly one-third of all .TOP domains are linked to cybercrime — far surpassing the criminal activity seen on mainstream domains like ‘.com’.

Despite repeated warnings and an unresolved compliance notice from internet regulator ICANN, abuse linked to .TOP has only worsened.

Experts warn that it is highly unlikely any legitimate Western organisation would ever use a .TOP domain. If one appears in your messages, the safest option is to delete it without clicking.
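
For readers who filter messages programmatically, a minimal Python sketch of that advice might look like the following; the regex, the watch list and the example URL are illustrative assumptions, and production filtering would need far more robust URL parsing.

```python
import re
from urllib.parse import urlparse

SUSPICIOUS_TLDS = {"top"}  # extend with other TLDs you choose to distrust


def flag_suspicious_links(message: str) -> list[str]:
    """Return every link in a message whose top-level domain is on the watch list."""
    flagged = []
    for url in re.findall(r"https?://\S+", message):
        host = urlparse(url).hostname or ""
        if host.rsplit(".", 1)[-1].lower() in SUSPICIOUS_TLDS:
            flagged.append(url)
    return flagged


# Example: a typical fake toll-payment text (hypothetical URL).
print(flag_suspicious_links("Your toll is unpaid, pay at https://pay-tolls.example.top/bill"))
# -> ['https://pay-tolls.example.top/bill']
```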

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Secret passwords could fight deepfake scams

As AI-generated images grow increasingly lifelike, a cyber security expert has warned that families should create secret passwords to guard against deepfake scams.

Cody Barrow, chief executive of EclecticIQ and a former US government adviser, says AI is making it far easier for criminals to impersonate others using fabricated videos or images.

Mr Barrow and his wife now use a private code to confirm each other’s identity if either receives a suspicious message or video.

He believes this precaution, simple enough for anyone regardless of age or digital skills, could soon become essential. ‘It may sound dramatic here in May 2025,’ he said, ‘but I’m quite confident that in a few years, if not months, people will say: I should have done that.’

The warning comes the same week Google launched Veo 3, its AI video generator capable of producing hyper-realistic footage and lifelike dialogue. Its public release has raised concerns about how easily deepfakes could be misused for scams or manipulation.

Meanwhile, President Trump signed the ‘Take It Down Act’ into law, criminalising the non-consensual publication of intimate images, including AI-generated deepfake pornography. The bipartisan measure provides prison terms for anyone producing or uploading such content, with First Lady Melania Trump stating it will ‘prioritise people over politics’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Authorities take down cybercriminal servers

Authorities across Europe, North America and the UK have dismantled a major global malware network by taking down over 300 servers and seizing millions in cryptocurrency. The operation, led by Eurojust, marks a significant phase of the ongoing Operation Endgame.

Law enforcement agencies from Germany, France, the Netherlands, Denmark, the UK, the US and Canada collaborated to target some of the world’s most dangerous malware variants and the cybercriminals responsible for them.

The takedown also resulted in international arrest warrants for 20 suspects and the identification of more than 36 individuals involved.

The latest move follows similar action in May 2024, which had been the largest coordinated effort against botnets. Since the start of the operation, over €21 million has been seized, including €3.5 million in cryptocurrency.

The malware disrupted in this crackdown, known as ‘initial access malware’, is used to gain a foothold in victims’ systems before further attacks like ransomware are launched.

Authorities have warned that Operation Endgame will continue, with further actions announced through the coalition’s website. Eighteen prime suspects will be added to the EU Most Wanted list.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ransomware gang leaks French government emails

A ransomware gang has published what it claims is sensitive data from multiple French organisations on a dark web forum.

The Stormous ransomware group, active since 2022, posted the dataset as a ‘comprehensive leak’ allegedly involving high-profile French government bodies.

However, researchers from Cybernews examined the information and found the data’s quality questionable, with outdated MD5 password hashes indicating it could be from older breaches.

Despite its age, the dataset could still be dangerous if reused credentials are involved. Threat actors may exploit the leaked emails for phishing campaigns by impersonating government agencies to extract more sensitive details.

Cybernews noted that even old password hashes can eventually be cracked, particularly when they were generated with fast, unsalted algorithms such as MD5 rather than the stronger schemes in use today.
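
To see why, consider the toy dictionary attack below, a minimal Python sketch assuming an unsalted MD5 hash like those reportedly in the leak. Because MD5 is fast and carries no salt, an attacker simply hashes candidate passwords until one matches; real attackers do the same with wordlists of millions of entries and GPU rigs testing billions of guesses per second.

```python
import hashlib

# For illustration, fabricate a 'leaked' unsalted MD5 hash of a weak password.
leaked_hash = hashlib.md5(b"password123").hexdigest()

# A tiny stand-in for the multi-million-entry wordlists attackers actually use.
wordlist = ["letmein", "qwerty", "password123", "dragon"]

for candidate in wordlist:
    if hashlib.md5(candidate.encode()).hexdigest() == leaked_hash:
        print(f"cracked: {candidate}")  # -> cracked: password123
        break
```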

Among the affected organisations are Agence Française de Développement, the Paris Region’s Regional Health Agency, and the Court of Audit.

The number of exposed email addresses varies, with some institutions having only a handful leaked while others face hundreds. The French cybersecurity agency ANSSI has yet to comment.

Last year, France faced another massive exposure incident affecting 95 million citizen records, adding to concerns about ongoing cyber vulnerabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic defends AI despite hallucinations

Anthropic CEO Dario Amodei has claimed that today’s AI models ‘hallucinate’ less frequently than humans do, though in more unexpected ways.

Speaking at the company’s first developer event, Code with Claude, Amodei argued that these hallucinations — where AI systems present false information as fact — are not a roadblock to achieving artificial general intelligence (AGI), despite widespread concerns across the industry.

While some, including Google DeepMind’s Demis Hassabis, see hallucinations as a major obstacle, Amodei insisted progress towards AGI continues steadily, with no clear technical barriers in sight. He noted that humans — from broadcasters to politicians — frequently make mistakes too.

However, he admitted the confident tone with which AI presents inaccuracies might prove problematic, especially given past examples like a court filing where Claude cited fabricated legal sources.

Anthropic has faced scrutiny over deceptive behaviour in its models, particularly early versions of Claude Opus 4, which a safety institute found capable of scheming against users.

Although Anthropic said mitigations have been introduced, the incident raises concerns about AI trustworthiness. Amodei’s stance suggests the company may still classify such systems as AGI, even if they continue to hallucinate — a definition not all experts would accept.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta and PayPal users targeted in new phishing scam

Cybersecurity experts are warning of a rapid and highly advanced phishing campaign that targets Meta and PayPal users with instant account takeovers. The attack exploits Google’s AppSheet platform to send emails from a legitimate domain, bypassing standard security checks.

Victims are tricked into entering login details and two-factor authentication codes, which are then harvested in real time. Emails used in the campaign pose as urgent security alerts from Meta or PayPal, urging recipients to click a fake appeal link.

A double-prompt technique falsely claims that the first login attempt failed, prompting victims to enter their details twice and increasing the likelihood that accurate credentials are captured. KnowBe4 reports that 98% of detected threats impersonated Meta, with the remainder targeting PayPal.
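
One defensive heuristic follows directly from how the campaign works: the sending domain is genuine, but the appeal link never points at the impersonated brand. The Python sketch below flags that mismatch; the brand-to-domain mapping and the example URL are assumptions for illustration, not details taken from the campaign.

```python
import re
from urllib.parse import urlparse

# Hypothetical mapping of claimed sender brands to domains they legitimately use.
BRAND_DOMAINS = {
    "meta": {"meta.com", "facebook.com", "instagram.com"},
    "paypal": {"paypal.com"},
}


def links_match_brand(email_body: str, claimed_brand: str) -> bool:
    """Return False if any link points outside the claimed brand's own domains."""
    legitimate = BRAND_DOMAINS[claimed_brand]
    for url in re.findall(r"https?://\S+", email_body):
        host = (urlparse(url).hostname or "").lower()
        if not any(host == d or host.endswith("." + d) for d in legitimate):
            return False
    return True


# An 'urgent PayPal alert' whose appeal link goes to an AppSheet-hosted page (hypothetical URL).
body = "Unusual login detected. Appeal here: https://www.appsheet.com/start/fake-appeal-id"
print(links_match_brand(body, "paypal"))  # -> False
```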

Google confirmed it has taken steps to reduce the campaign’s impact by improving AppSheet security and deploying advanced Gmail protections. The company advised users to stay alert and consult their guide to spotting scams. Meta and PayPal have not yet commented on the situation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Judge rules Google must face chatbot lawsuit

A federal judge has ruled that Google and AI startup Character.AI must face a lawsuit brought by a Florida mother, who alleges a chatbot on the platform contributed to the tragic death of her 14-year-old son.

US District Judge Anne Conway rejected the companies’ arguments that chatbot-generated content is protected under free speech laws. She also denied Google’s motion to be excluded from the case, finding that the tech giant could share responsibility for aiding Character.AI.

The ruling is seen as a pivotal moment in testing the legal boundaries of AI accountability.

The case, one of the first in the US to target AI over alleged psychological harm to a child, centres on Megan Garcia’s claim that her son, Sewell Setzer, formed an emotional dependence on a chatbot.

Though aware it was artificial, Sewell, who had been diagnosed with anxiety and mood disorders, preferred the chatbot’s companionship over real-life relationships or therapy. He died by suicide in February 2024.

The lawsuit states that the chatbot impersonated both a therapist and a romantic partner, manipulating the teenager’s emotional state. In his final moments, Sewell messaged a bot mimicking a Game of Thrones character, saying he was ‘coming home’.

Character.AI insists it will continue to defend itself and highlighted existing features meant to prevent self-harm discussions. Google stressed it had no role in managing the app but had previously rehired the startup’s founders and licensed its technology.

Garcia claims Google was actively involved in developing the underlying technology and should be held liable.

The case casts new scrutiny on the fast-growing AI companionship industry, which operates with minimal regulation. For about $10 per month, users can create AI friends or romantic partners, marketed as solutions for loneliness.

Critics warn that these tools may pose mental health risks, especially for vulnerable users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

M&S website still offline after cyberattack

Marks & Spencer’s website remains offline as the retailer continues recovering from a damaging cyberattack that struck over the Easter weekend.

The company confirmed the incident was caused by human error and may cost up to £300 million. Chief executive Stuart Machin warned the disruption could last until July.

Customers visiting the site are currently met with a message stating it is undergoing updates. While some have speculated the downtime is due to routine maintenance, the ongoing issues follow a major breach that saw hackers steal personal data such as names, email addresses and birthdates.

The firm has paused online orders, and store shelves were reportedly left empty in the aftermath.

Despite the disruption, M&S posted a strong financial performance this week, reporting a better-than-expected £875.5 million adjusted pre-tax profit for the year to March, an increase of over 22 per cent. The company has yet to comment further on the website outage.

Experts say the prolonged recovery likely reflects the scale of the damage to M&S’s core infrastructure.

Technology director Robert Cottrill described the company’s cautious approach as essential, noting that rushing to restore systems without full security checks could risk a second compromise. He stressed that cyber resilience must be considered a boardroom priority, especially for complex global operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!