Masked cybercrime groups rise as attacks escalate worldwide

Cybercrime is thriving like never before, with hackers launching attacks that range from absurd ransomware demands, reportedly as high as $1 trillion, to large-scale theft of personal data. Despite efforts from Microsoft, Google and even the FBI, these threat actors continue to outpace defences.

A new report by Group-IB has analysed over 1,500 cybercrime investigations to uncover the most active and dangerous hacker groups operating today.

Rather than fading away after arrests or infighting, many cybercriminal gangs are re-emerging stronger than before.

Group-IB’s May 2025 report highlights a troubling increase in key attack types across 2024 — phishing rose by 22%, ransomware leak sites by 10%, and APT (advanced persistent threat) attacks by 58%. The United States was the most affected country by ransomware activity.

At the top of the cybercriminal hierarchy now sits RansomHub, a ransomware-as-a-service group that emerged after the collapse of the ALPHV (BlackCat) group and has already overtaken long-established players in attack numbers.

Behind it is GoldFactory, which developed the first banking trojan targeting iOS and exploited stolen facial recognition data. Lazarus, a well-known North Korean state-linked group, also remains highly active under multiple aliases.

Meanwhile, politically driven hacktivist group NoName057(16) has been targeting European institutions using denial-of-service attacks.

With jurisdictional gaps allowing cybercriminals to flourish, these masked hackers remain a growing concern for global cybersecurity, especially as new threat actors emerge from the shadows instead of disappearing for good.

German watchdog demands Meta stop AI training with EU user data

The Verbraucherzentrale North Rhine-Westphalia (NRW), a German consumer protection organisation, has issued a formal warning to Meta, urging the tech giant to stop training its AI models on data from European users.

The regulator argues that Meta’s current approach violates EU privacy laws and may lead to legal action if not halted. Meta recently announced that it would use content from Facebook, Instagram, WhatsApp, and Messenger—including posts, comments, and public interactions—to train its AI systems in Europe.

The company claims this will improve the performance of Meta AI by helping it better understand European languages, culture, and history.

However, data protection authorities from several EU countries, including Belgium, France, and the Netherlands, have expressed concern and encouraged users to act before Meta’s new privacy policy takes effect on 27 May.

Verbraucherzentrale NRW took the additional step of sending Meta a cease-and-desist letter on 30 April. Should Meta ignore the request, legal action could follow.

Christine Steffen, a data protection expert at Verbraucherzentrale NRW, said that once personal data is used to train AI, the process becomes nearly impossible to reverse. She criticised Meta’s opt-out model and insisted that meaningful user consent is legally required.

Austrian privacy advocate Max Schrems, head of the NGO Noyb, also condemned Meta’s strategy, accusing the company of ignoring EU privacy law in favour of commercial gain.

‘Meta should simply ask the affected people for their consent,’ he said, warning that failure to do so could have consequences across the EU.

EU prolongs sanctions against cyberattackers until 2026

The EU Council has extended its sanctions against cyberattacks until 18 May 2026, with the legal framework for enforcing these measures now running until 2028. The sanctions target individuals and entities involved in cyberattacks that pose a significant threat to the EU and its members.

The extended measures will allow the EU to impose restrictions on those responsible for cyberattacks, including freezing assets and blocking access to financial resources.

These actions may also apply to attacks against third countries or international organisations, if necessary for EU foreign and security policy objectives.

At present, sanctions are in place against 17 individuals and four entities. The EU’s decision highlights its ongoing commitment to safeguarding its digital infrastructure and pursuing its foreign policy goals through legal action against cyber threats.

US Copyright Office avoids clear decision on AI and fair use

The US Copyright Office has stopped short of deciding whether AI companies can legally use copyrighted material to train their systems under fair use.

Its newly released report acknowledges that some uses—such as non-commercial research—may qualify, while others, like replicating expressive works from pirated content to produce market-ready AI output, likely won’t.

Rather than offering a definitive answer, the Office said such cases must be assessed by the courts, not through a universal standard.

The latest report is the third in a series aimed at guiding how copyright law applies to AI-generated content. It reiterates that works entirely created by AI cannot be copyrighted, but human-edited outputs might still qualify.

The 108-page document focuses heavily on whether AI training transforms content enough to qualify as fair use, and whether it harms creators’ livelihoods through lost sales or diluted markets.

Instead of setting new policy, the Office highlights existing legal principles, especially the four factors of fair use: the purpose, the nature of the work, the amount used, and the impact on the original market.

It notes that AI-generated content can sometimes alter original works meaningfully, but when styles or outputs closely resemble protected material, legal risks remain. Tools like content filters are seen as helpful in preventing infringement, even though they’re not always reliable.

The timing of the report has been overshadowed by political turmoil. President Donald Trump reportedly dismissed both the Librarian of Congress and the head of the Copyright Office around the time of the report’s release.

Meanwhile, creators continue urging the government not to treat AI training as fair use, arguing it threatens the value of original work. The debate is now expected to unfold in courtrooms rather than regulatory offices.

Jamie Lee Curtis calls out Zuckerberg over AI scam using her likeness

Jamie Lee Curtis has directly appealed to Mark Zuckerberg after discovering her likeness had been used without consent in an AI-generated advert.

Posting on Facebook, Curtis expressed her frustration with Meta’s lack of proper channels to report such abuse, stating she had exhausted all official avenues before resorting to a public plea.

The fake video reportedly manipulated footage from an emotional interview following the January wildfires in Los Angeles, inserting false statements under the guise of a product endorsement.

Curtis urged Zuckerberg to take action, saying the unauthorised content compromised her integrity and misused her voice. Within hours of her public callout, Meta confirmed the video had been removed for breaching its policies, a rare example of a swift response.

‘It worked! Yay Internet! Shame has its value!’ she wrote in a follow-up, though she also highlighted the broader risks posed by deepfakes.

The actress joins a growing list of celebrities, including Taylor Swift and Scarlett Johansson, who’ve been targeted by AI misuse.

Swift was forced to publicly clarify her political stance after AI-generated images falsely depicted her endorsing Donald Trump, while Johansson criticised OpenAI for allegedly using a voice nearly identical to hers despite her refusal to participate in the project.

The issue has reignited concerns around consent, misinformation and the exploitation of public figures.

Lawmakers in California have already begun pushing back. New legislation signed by Governor Gavin Newsom aims to protect performers from unauthorised digital replicas and deepfakes.

Meanwhile, in Washington, proposals such as the No Fakes Act seek to hold tech platforms accountable, with fines of thousands of dollars per violation. As Curtis and others warn, without stronger protections, the misuse of AI could spiral further, threatening not just celebrities but the public as a whole.

Google pays around $1.4 billion over privacy case

Google has agreed to pay $1.375 billion to settle a lawsuit brought by the state of Texas over allegations that it violated users’ privacy through features such as Incognito mode, Location History, and biometric data collection.

Despite the sizeable sum, Google denies any wrongdoing, stating that the claims concerned old practices that have since been changed.

Texas Attorney General Ken Paxton announced the settlement, emphasising that large tech firms are not above the law.

He accused Google of covertly tracking individuals’ locations and personal searches, while also collecting biometric data such as voiceprints and facial geometry — all without users’ consent. Paxton claimed the state’s legal challenge had forced Google to answer for its actions.

Although the settlement resolves two lawsuits filed in 2022, the specific terms and how the funds will be used remain undisclosed. A Google spokesperson maintained that the resolution brings closure to claims about past practices and requires no changes to its current products.

The case comes after a similar $1.4 billion agreement involving Meta, which faced accusations of unlawfully gathering facial recognition data. The repeated scrutiny from Texas authorities signals a broader pushback against the data practices of major tech companies.

FTC says Amazon misused legal privilege to dodge scrutiny

Federal regulators have accused Amazon of deliberately concealing incriminating evidence in an ongoing antitrust case by abusing privilege claims. The Federal Trade Commission (FTC) said Amazon wrongly withheld nearly 70,000 documents, withdrawing 92% of its claims after a judge forced a re-review.

The FTC claims Amazon marked non-legal documents as privileged to keep them from scrutiny. Internal emails suggest staff were told to mislabel communications by including legal teams unnecessarily.

One email reportedly called former CEO Jeff Bezos the ‘chief dark arts officer,’ referring to questionable Prime subscription tactics.

The documents revealed issues such as widespread involuntary Prime sign-ups and efforts to manipulate search results in favour of Amazon’s products. Regulators said these practices show Amazon intended to hide evidence rather than make honest errors.

The FTC is now seeking a 90-day extension for discovery and wants Amazon to cover the additional legal costs. It argues that the delay and concealment gave Amazon an unfair strategic advantage.

OpenAI launches data residency in India for ChatGPT Enterprise

OpenAI has announced that enterprise and educational customers in India using ChatGPT can now store their data locally instead of relying on servers abroad.

The move, aimed at complying with India’s upcoming data localisation rules under the Digital Personal Data Protection Act, allows conversations, uploads, and prompts to remain within the country. Similar options are now available in Japan, Singapore, and South Korea.

Data stored under this new residency option will be encrypted and kept secure, according to the company. OpenAI clarified it will not use this data for training its models unless customers choose to share it.

The change may also influence a copyright infringement case against OpenAI in India, where the jurisdiction was previously questioned due to foreign server locations.

Alongside this update, OpenAI has unveiled a broader international initiative, called OpenAI for Countries, as part of the US-led $500 billion Stargate project.

The plan involves building AI infrastructure in partner countries instead of centralising development, allowing nations to create localised versions of ChatGPT tailored to their languages and services.

OpenAI says the goal is to help democracies develop AI on their own terms instead of adopting centralised, authoritarian systems.

The company and the US government will co-invest in local data centres and AI models to strengthen economic growth and digital sovereignty across the globe.

Meta wins $168 million verdict against NSO Group in landmark spyware case

Meta has secured a major legal victory against Israeli surveillance company NSO Group, with a California jury awarding $168 million in damages.

The ruling concludes a six-year legal battle over the unlawful deployment of NSO’s Pegasus spyware, which targeted journalists, human rights activists, and other individuals through a vulnerability in WhatsApp.

The verdict includes $444,719 in compensatory damages and $167.3 million in punitive damages.

Meta hailed the decision as a milestone for privacy, calling it ‘the first victory against the development and use of illegal spyware that threatens the safety and privacy of everyone’. NSO, meanwhile, said it would review the outcome and consider further legal steps, including an appeal.

The case, launched by WhatsApp in 2019, exposed the far-reaching use of Pegasus. Between 2018 and 2020, NSO generated $61.7 million in revenue from a single exploited vulnerability, with profits potentially reaching $40 million.

Court documents revealed that Pegasus was deployed against 1,223 individuals across 51 countries, with the highest numbers of victims in Mexico, India, Bahrain, Morocco, and Pakistan. Spain, where officials were targeted in 2022, ranked highest among the Western democracies on the list.

While NSO has long maintained that its spyware is sold exclusively to governments for counterterrorism purposes, the data highlighted its extensive use in authoritarian and semi-authoritarian regimes.

A former NSO employee testified that the company attempted to sell Pegasus to United States police forces, though those efforts were unsuccessful.

Beyond the financial penalty, the ruling exposed NSO’s internal operations. The company runs a 140-person research team with a $50 million budget dedicated to discovering smartphone vulnerabilities. Clients have included Saudi Arabia, Mexico, and Uzbekistan.

However, the firm’s conduct drew harsh criticism from Judge Phyllis Hamilton, who accused NSO of withholding evidence and ignoring court orders. Israeli officials reportedly intervened last year to prevent sensitive documents from reaching the US courts.

Privacy advocates welcomed the decision. Natalia Krapiva, a senior lawyer at Access Now, said it sends a strong message to the spyware industry. ‘This will hopefully show spyware companies that there will be consequences if you are careless, if you are brazen, and if you act as NSO did in these cases,’ she said.

Google faces DOJ’s request to sell key ad platforms

The US Department of Justice (DOJ) has moved to break up Google’s advertising technology business after a federal judge ruled that the company holds illegal monopolies across two markets.

The DOJ is seeking the sale of Google’s AdX digital advertising exchange and its DFP (DoubleClick for Publishers) platform, which helps publishers manage their ad inventory.

The move follows an April ruling by US District Judge Leonie Brinkema, who found that Google’s dominance in the advertising technology market violated antitrust laws.

AdX and DFP were key acquisitions for Google, particularly the purchase of DoubleClick in 2008 for $3.1 billion. The DOJ argues that Google used monopolistic tactics, such as acquisitions and customer lock-ins, to control the ad tech market and stifle competition.

In response, Google has disputed the DOJ’s move, claiming the proposed sale of its advertising tools exceeds the court’s findings and could harm publishers and advertisers.

The DOJ’s latest filing also comes amid a separate legal action over Google’s Chrome browser, and the company is facing additional scrutiny in the UK for its dominance in the online search market.

The UK’s Competition and Markets Authority (CMA) has found that Google engaged in anti-competitive practices in open-display advertising technology.
