European crypto crime ring dismantled

European authorities have broken up a crypto laundering ring that moved over €21 million for criminal groups tied to China and the Middle East. Dubbed the ‘mafia crypto bank,’ the group used the hawala method and cryptocurrency to obscure illicit fund transfers.

Seventeen suspects were arrested in a Spanish-led operation, with additional arrests in Austria and Belgium. Most of those detained were of Chinese and Syrian origin, allegedly serving clients involved in drug trafficking and migrant smuggling.

Police seized €4.5 million in assets, including digital currencies, cash, vehicles, shotguns, and luxury goods.

The group posed as a remittance business and advertised its services on social media. The crackdown highlights growing concern over crypto’s role in organised crime, with illicit transactions reaching $51.3 billion in 2024.

Crypto crime continues to surge in 2025, with $1.74 billion in losses already reported, exceeding the total for all of 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ethereum launches new security initiative

The Ethereum Foundation has launched the Trillion Dollar Security Initiative to boost security across its network. The project aims to improve user experience, wallet protection, smart contract safety, and infrastructure resilience.

It is led by Fredrik Svantes and Josh Stark, with support from ecosystem experts samczsun, Mehdi Zerouali, and Zach Obront.

Ethereum remains the leading platform for decentralized finance (DeFi), holding 50-60% of total value locked across blockchains, with nearly $80 billion as of mid-May. The Foundation emphasises that billions of users collectively secure trillions of dollars on the Ethereum network.

Ethereum’s recent Pectra upgrade, the most significant since The Merge, has introduced key enhancements including smart contract functionality for externally owned accounts, higher validator staking limits, and an increased number of data blobs per block.
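A rough sense of what the blob increase means in practice: the figures below come from EIP-4844 and EIP-7691 as commonly cited (128 KiB per blob, a per-block blob target raised from 3 to 6), not from the article itself, so treat them as assumptions.

```python
# Back-of-the-envelope blob throughput before and after Pectra.
# Assumed figures (EIP-4844 / EIP-7691): each blob carries
# 4096 field elements of 32 bytes (128 KiB), and Pectra raises
# the per-block blob target from 3 to 6.

BLOB_BYTES = 4096 * 32    # 131072 bytes per blob
SLOT_SECONDS = 12         # Ethereum slot time

def blob_throughput(blobs_per_block: int) -> float:
    """Average blob data rate in bytes per second."""
    return blobs_per_block * BLOB_BYTES / SLOT_SECONDS

before = blob_throughput(3)  # pre-Pectra target
after = blob_throughput(6)   # post-Pectra target
print(f"target throughput: {before:.0f} -> {after:.0f} bytes/s")
```

Doubling the blob target doubles the average data-availability bandwidth, which is the headline benefit for rollups posting data to Ethereum.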

Since the upgrade, Ethereum’s native token ETH has surged over 43%, signalling renewed market confidence.

Android adds new scam protection for phone calls

Google is introducing new protections on Android devices to combat phone call scams, particularly those involving screen-sharing and app installations. Users will see warning messages if they attempt to change settings during a call, and Android will also block the deactivation of Play Protect features.

The system will now block users from sideloading apps or granting accessibility permissions while on a call with unknown contacts.

The new tools are available on devices running Android 16, and select protections are also rolling out to older versions, starting with Android 11.

A separate pilot in the UK will alert users trying to open banking apps during a screen-sharing call, prompting them to end the call or wait before proceeding.

These features expand Android’s broader efforts to prevent fraud, which already include AI-based scam detection for phone calls and messages.

Kick faces investigation after ignoring Ofcom risk assessment request

Ofcom has launched two investigations into Kick Online Entertainment, the provider of a pornography website, over potential breaches of the Online Safety Act.

The regulator said the company failed to respond to a statutory request for a risk assessment related to illegal content appearing on the platform.

As a result, Ofcom is investigating whether Kick has failed to meet its legal obligations to complete and retain a record of such a risk assessment, as well as whether it failed to respond to the regulator’s information request.

Ofcom confirmed it had received complaints about potentially illegal material on the site, including child sexual abuse content and extreme pornography.

It is also considering a third investigation into whether the platform has implemented adequate safety measures to protect users from such material—another requirement under the Act.

Under the Online Safety Act, firms found in breach can face fines of up to £18 million or 10% of their global revenue, whichever is higher. In the most severe cases, Ofcom can pursue court orders to block UK access to the website or compel payment providers and advertisers to cut ties with the platform.

Meta targets critics as FTC case unfolds

Long-standing friction between Big Tech and the media resurfaced during Meta’s antitrust trial with the Federal Trade Commission this week. In a heated courtroom exchange, Meta’s legal team used critical commentary from prominent tech journalists to cast doubt on the FTC’s case.

Meta’s lead attorney, Mark Hansen, questioned the credibility of FTC expert Scott Hemphill by referencing a 2019 antitrust pitch Hemphill co-authored with Facebook co-founder Chris Hughes and former White House advisor Tim Wu.

The presentation cited public statements from reporters Kara Swisher and Om Malik as evidence of Meta’s dominance and aggressive acquisitions.

Hansen dismissed Malik as a ‘failed blogger’ with personal bias and accused Swisher of similar hostility, projecting a headline where she described Mark Zuckerberg as a ‘small little creature with a shriveled soul.’

He also attempted to discredit a cited New York Post article by invoking the tabloid’s notorious ‘Headless Body in Topless Bar’ cover.

These moments highlight Meta’s growing resentment toward the press, which has intensified alongside rising criticism of its business practices. Once seen as scrappy disruptors, Facebook and other tech giants now face regular scrutiny—and appear eager to push back.

Swisher and Malik have both openly criticized Meta in the past. Swisher famously challenged Zuckerberg over content moderation and political speech, while Malik has questioned the company’s global expansion strategies.

Their inclusion in a legal document presented in court underscores how media commentary is influencing regulatory narratives. Meta has previously blamed critical press for damaging user sentiment in the wake of scandals like Cambridge Analytica.

The FTC argues that consistent engagement levels despite bad press prove Meta’s monopoly power—users feel they have no real alternatives to Facebook and Instagram. As the trial continues, so too does Meta’s public battle—not just with regulators, but with the journalists documenting its rise and reckoning.

Cheshire’s new AI tool flags stalking before it escalates

Cheshire Police has become the first UK force to use AI in stalking investigations, aiming to identify harmful behaviours earlier. The AI will analyse reports in real time, even as victims speak with call handlers.

The system, trained using data from the force and the Suzy Lamplugh Trust, is designed to detect stalking patterns—even if the term isn’t used directly. Currently, officers in the Harm Reduction Unit manually review 10 cases a day.
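Cheshire Police has not published how its model works, so the following is purely an illustrative sketch of the idea: surfacing likely stalking cases from report text even when the word "stalking" never appears. The indicator phrases and threshold are invented for illustration.

```python
# Illustrative sketch only; not the force's actual system.
# A minimal rule-based flagger for behaviour patterns associated
# with stalking, triggered without the word "stalking" appearing.

STALKING_INDICATORS = {
    "follows me", "waits outside", "turned up at my work",
    "constant messages", "tracking my phone", "watches my house",
}

def flag_report(text: str, threshold: int = 2) -> bool:
    """Flag a report containing at least `threshold` indicator phrases."""
    t = text.lower()
    hits = sum(phrase in t for phrase in STALKING_INDICATORS)
    return hits >= threshold

report = ("He waits outside my flat most nights and sends "
          "constant messages when I ignore him.")
print(flag_report(report))  # True: two indicators, 'stalking' never mentioned
```

A production system would use a trained classifier rather than a phrase list, but the goal is the same: recognising a pattern of behaviour rather than a keyword.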

Det Ch Insp Danielle Knox said AI will enhance, not replace, police work, and ethical safeguards are in place. Police and Crime Commissioner Dan Price secured £300,000 to fund the initiative, saying it could be ’25 times more effective’ than manual investigation.

Survivor ‘Amy’ said earlier intervention might have prevented her violent assault. Three-quarters of the unit’s cases already lead to charges, but police hope AI will improve that success rate and offer victims faster protection.

Instagram calls for EU-wide teen protection rules

Instagram is calling on the European Union to introduce new regulations requiring app stores to implement age verification and parental approval systems.

The platform argues that such protections, applied consistently across all apps, are essential to safeguarding teenagers from harmful content online.

‘The EU needs consistent standards for all apps, to help keep teens safe, empower parents and preserve privacy,’ Instagram said in a blog post.

The company believes the most effective way to achieve this is by introducing protections at the source—before teenagers download apps from the Apple App Store or Google Play Store.

Instagram is proposing that app stores verify users’ ages and require parental approval for teen app downloads. The social media platform cites new research from Morning Consult showing that three in four parents support such legislation.

Most parents also view app stores, rather than individual apps, as the safer and more manageable point for controlling what their teens can access.
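The flow Instagram is proposing can be sketched as a gate at the store level: the app store, not the individual app, checks age and parental approval before a download proceeds. All names and the age threshold below are illustrative assumptions, not anything Instagram has specified.

```python
# Hypothetical sketch of store-level age gating; all names illustrative.
from dataclasses import dataclass

@dataclass
class StoreAccount:
    age: int                        # age verified by the app store
    parent_approved_apps: set[str]  # apps a parent has approved

def may_download(account: StoreAccount, app_id: str,
                 adult_age: int = 18) -> bool:
    """Adults download freely; verified teens need parental approval."""
    if account.age >= adult_age:
        return True
    return app_id in account.parent_approved_apps

teen = StoreAccount(age=15, parent_approved_apps={"study-planner"})
print(may_download(teen, "study-planner"))  # True
print(may_download(teen, "social-app"))     # False until a parent approves
```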

To reinforce its position, Instagram points to its own safety efforts, such as the introduction of Teen Accounts. These private-by-default profiles limit teen exposure to messages and content from unknown users, and apply stricter filters to reduce exposure to sensitive material.

Instagram says it is working with civil society groups, industry partners, and European policymakers to push for rules that protect young users across platforms. With teen safety a growing concern, the company insists that industry-wide, enforceable solutions are urgently needed.

M&S breach linked to DragonForce hacking group

Marks & Spencer has confirmed that personal customer data was stolen in a recent cyberattack, including names, contact details, dates of birth, household information, and order histories. The company stressed that no useable payment details or account passwords were compromised.

The breach, which began over the Easter weekend, has disrupted online orders since April 25 and is reportedly costing M&S £43 million per week in lost sales.

Customers are being prompted to reset their passwords, and the retailer has warned users to be cautious of phishing emails or messages pretending to be from M&S.

The attack is linked to the DragonForce cybercrime group, known for double-extortion tactics—stealing and encrypting data while demanding ransom.

While no leaked M&S data has appeared online, experts say the risk of identity fraud remains high.

M&S has contacted website users, reported the breach to authorities, and is working with cybersecurity experts. The company has not disclosed how many of its 9.4 million online customers were affected.

Chief executive Stuart Machin said M&S is working ‘around the clock’ to restore services. Shares in the retailer have dropped 12% over the past month.

Amazon to invest in Saudi AI Zone

Amazon has announced a new partnership with Humain, an AI company launched by Saudi Arabia’s Crown Prince Mohammed bin Salman, to invest over $5 billion in creating an ‘AI Zone’ in the kingdom.

The project will feature Amazon Web Services (AWS) infrastructure, including servers, networks, and training programmes, while Humain will develop AI tools using AWS and support Saudi startups with access to resources.

A move like this adds Amazon to a growing list of tech firms—such as Nvidia and AMD—that are working with Humain, which is backed by Saudi Arabia’s Public Investment Fund. American companies like Google and Salesforce have also recently turned to the PIF for funding and AI collaborations.

Under a new initiative supported by US President Donald Trump, US tech firms can now pursue deals with Saudi-based partners more freely.

Saudi Arabia requires AI providers to store data locally rather than in foreign data centres, prompting companies like Google, Oracle, and now Amazon to expand operations within the region.

Amazon has already committed $5.3 billion to build an AWS region in Saudi Arabia by 2026, and says the AI Zone partnership is a separate, additional investment.

Google tests AI tool to automate software development

Google is internally testing an advanced AI tool designed to support software engineers through the entire development cycle, according to The Information. The firm is also expected to demonstrate integration between its Gemini chatbot in voice mode and Android-powered XR headsets.

The agentic AI assistant is said to handle tasks such as code generation and documentation, and has already been previewed to staff and developers ahead of Google’s I/O conference on 20 May. The move reflects a wider trend among tech giants racing to automate programming.

Amazon is developing its own coding assistant, Kiro, which can process both text and visual inputs, detect bugs, and auto-document code. While AWS initially targeted a June launch, the current release date remains uncertain.

Microsoft and Google have claimed that around 30% of their code is now AI-generated. OpenAI is also eyeing expansion, reportedly in talks to acquire AI coding start-up Windsurf for $3 billion.
