Masked cybercrime groups rise as attacks escalate worldwide

Cybercrime is thriving like never before, with hackers launching attacks that range from absurd $1 trillion ransom demands to large-scale theft of personal data. Despite efforts from Microsoft, Google and even the FBI, these threat actors continue to outpace defences.

A new report by Group-IB has analysed over 1,500 cybercrime investigations to uncover the most active and dangerous hacker groups operating today.

Rather than fading away after arrests or infighting, many cybercriminal gangs are re-emerging stronger than before.

Group-IB’s May 2025 report highlights a troubling increase in key attack types across 2024: phishing rose by 22%, ransomware data leak sites grew by 10%, and APT (advanced persistent threat) attacks jumped by 58%. The United States was the country most affected by ransomware activity.

At the top of the cybercriminal hierarchy now sits RansomHub, a ransomware-as-a-service group that emerged from the collapsed ALPHV group and has already overtaken long-established players in attack numbers.

Behind it is GoldFactory, which developed the first iOS banking trojan and exploited facial recognition data. Lazarus, a well-known North Korean state-linked group, also remains highly active under multiple aliases.

Meanwhile, politically driven hacktivist group NoName057(16) has been targeting European institutions using denial-of-service attacks.

With jurisdictional gaps allowing cybercriminals to flourish, these masked hackers remain a growing concern for global cybersecurity, especially as new threat actors emerge from the shadows instead of disappearing for good.

German watchdog demands Meta stop AI training with EU user data

The Verbraucherzentrale North Rhine-Westphalia (NRW), a German consumer protection organisation, has issued a formal warning to Meta, urging the tech giant to stop training its AI models on data from European users.

The regulator argues that Meta’s current approach violates EU privacy laws and may lead to legal action if not halted. Meta recently announced that it would use content from Facebook, Instagram, WhatsApp, and Messenger—including posts, comments, and public interactions—to train its AI systems in Europe.

The company claims this will improve the performance of Meta AI by helping it better understand European languages, culture, and history.

However, data protection authorities from several EU countries, including Belgium, France, and the Netherlands, have expressed concern and encouraged users to act before Meta’s new privacy policy takes effect on 27 May.

The consumer protection organisation took the additional step of sending Meta a cease-and-desist letter on 30 April. Should Meta ignore the request, legal action could follow.

Christine Steffen, a data protection expert at Verbraucherzentrale NRW, said that once personal data has been used to train an AI model, it is nearly impossible to remove. She criticised Meta’s opt-out model and insisted that meaningful user consent is legally required.

Austrian privacy advocate Max Schrems, head of the NGO Noyb, also condemned Meta’s strategy, accusing the company of ignoring EU privacy law in favour of commercial gain.

‘Meta should simply ask the affected people for their consent,’ he said, warning that failure to do so could have consequences across the EU.

EU prolongs sanctions on cyberattackers until 2026

The EU Council has extended its sanctions on cyberattacks until 18 May 2026, with the legal framework for enforcing these measures now running until 2028. The sanctions target individuals and entities involved in cyberattacks that pose a significant threat to the EU and its member states.

The extended measures will allow the EU to impose restrictions on those responsible for cyberattacks, including freezing assets and blocking access to financial resources.

These actions may also apply to attacks against third countries or international organisations, if necessary for EU foreign and security policy objectives.

At present, sanctions are in place against 17 individuals and four entities. The EU’s decision highlights its ongoing commitment to safeguarding its digital infrastructure and pursuing its foreign policy goals through legal action against cyber threats.

US Copyright Office avoids clear decision on AI and fair use

The US Copyright Office has stopped short of deciding whether AI companies can legally use copyrighted material to train their systems under fair use.

Its newly released report acknowledges that some uses—such as non-commercial research—may qualify, while others, like replicating expressive works from pirated content to produce market-ready AI output, likely won’t.

Rather than offering a definitive answer, the Office said such cases must be assessed by the courts, not through a universal standard.

The latest report is the third in a series aimed at guiding how copyright law applies to AI-generated content. It reiterates that works entirely created by AI cannot be copyrighted, but human-edited outputs might still qualify.

The 108-page document focuses heavily on whether AI training methods transform content enough to justify legal protection, and whether they harm creators’ livelihoods through lost sales or diluted markets.

Instead of setting new policy, the Office highlights existing legal principles, especially the four factors of fair use: the purpose and character of the use, the nature of the copyrighted work, the amount used, and the effect on the market for the original.

It notes that AI-generated content can sometimes alter original works meaningfully, but when styles or outputs closely resemble protected material, legal risks remain. Tools like content filters are seen as helpful in preventing infringement, even though they’re not always reliable.

The timing of the report has been overshadowed by political turmoil. President Donald Trump reportedly dismissed both the Librarian of Congress and the head of the Copyright Office within days of the report’s release.

Meanwhile, creators continue urging the government not to treat AI training as fair use, arguing that it threatens the value of original work. The debate is now expected to unfold in courtrooms rather than in regulatory offices.

Jamie Lee Curtis calls out Zuckerberg over AI scam using her likeness

Jamie Lee Curtis has directly appealed to Mark Zuckerberg after discovering her likeness had been used without consent in an AI-generated advert.

Posting on Facebook, Curtis expressed her frustration with Meta’s lack of proper channels to report such abuse, stating she had exhausted all official avenues before resorting to a public plea.

The fake video reportedly manipulated footage from an emotional interview following the January wildfires in Los Angeles, inserting false statements under the guise of a product endorsement.

Instead of remaining silent, Curtis urged Zuckerberg to take action, saying the unauthorised content damaged her integrity and voice. Within hours of her public callout, Meta confirmed the video had been removed for breaching its policies, a rare example of a swift response.

‘It worked! Yay Internet! Shame has its value!’ she wrote in a follow-up, though she also highlighted the broader risks posed by deepfakes.

The actress joins a growing list of celebrities, including Taylor Swift and Scarlett Johansson, who’ve been targeted by AI misuse.

Swift was forced to publicly clarify her political stance after an AI video falsely endorsed Donald Trump, while Johansson criticised OpenAI for allegedly using a voice nearly identical to hers despite her refusal to participate in a project.

The issue has reignited concerns around consent, misinformation and the exploitation of public figures.

Instead of waiting for further harm, lawmakers in California have already begun pushing back. New legislation signed by Governor Gavin Newsom aims to protect performers from unauthorised digital replicas and deepfakes.

Meanwhile, in Washington, proposals like the No Fakes Act seek to hold tech platforms accountable, possibly fining them thousands per violation. As Curtis and others warn, without stronger protections, the misuse of AI could spiral further, threatening not just celebrities but the public as a whole.

Cyber attack disrupts Edinburgh school networks

Thousands of Edinburgh pupils were forced to attend school on Saturday after a phishing attack disrupted access to vital online learning resources.

The cyber incident, discovered on Friday, prompted officials to lock users out of the system as a precaution, just days before exams.

Approximately 2,500 students visited secondary schools to reset passwords and restore their access. Although the revision period was interrupted, the council confirmed that no personal data had been compromised.

City of Edinburgh Council staff acted swiftly to contain the threat, supported by national cybersecurity teams. Ongoing monitoring is in place, with authorities confident that exam schedules will continue unaffected.

Punycode scams steal crypto through lookalike URLs

Crypto holders are facing a growing threat from a sophisticated form of phishing that swaps letters in website addresses for nearly identical lookalikes, tricking users into handing over their digital assets.

Known as Punycode phishing, the tactic has led to significant losses—even for vigilant users—by mimicking legitimate cryptocurrency exchange sites with deceptive domain names.

Cybercriminals exploit the similarity between characters from different alphabets, such as replacing Latin letters with visually identical Cyrillic ones.
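
To see why such domains are so hard to spot, here is a minimal Python sketch (the spoofed address below is hypothetical, not one from the reported attacks) showing how a lookalike built with a Cyrillic character differs from the genuine domain once converted to its Punycode form:

```python
# A hypothetical lookalike: on screen both strings can render identically,
# but they differ at the code-point level.
legit = "example.com"
spoof = "\u0435xample.com"  # first letter is Cyrillic 'е' (U+0435), not Latin 'e'

print(legit == spoof)        # False: the underlying characters differ
print(spoof.encode("idna"))  # the raw 'xn--...' Punycode form a browser actually resolves

# A crude but useful check: well-known domains are normally pure ASCII.
if not spoof.isascii():
    print("warning: address contains non-Latin characters")
```

Browsers decide heuristically whether to display the readable Unicode form or the raw xn-- form; Firefox, for instance, offers a network.IDN_show_punycode preference that forces the raw form to be shown, making such spoofs visible.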

These fake websites are almost indistinguishable from real ones, making it extremely difficult to spot the fraud. Recent reports reveal that even browser recommendation systems, such as Google Chrome’s, have directed users to these deceptive domains.

In one widely cited case, a user was guided to a fraudulent site impersonating the crypto exchange ChangeNOW and subsequently lost over $20,000. The incident has raised questions about browser accountability and the urgency of protective measures against increasingly advanced phishing strategies.

US regulators, including the Federal Trade Commission (FTC), the North American Securities Administrators Association (NASAA), and California’s Department of Financial Protection and Innovation (DFPI), have issued ongoing warnings about crypto scams.

While none have specifically addressed Punycode-based attacks, their advice (careful URL scrutiny, scepticism of unsolicited links, and immediate fraud reporting) remains critical.

As phishing methods evolve, users are urged to double-check domain names, avoid clicking unverified links, and consult tools like the DFPI Crypto Scam Tracker. Until browsers and platforms address the threat directly, user awareness remains the most effective defence.

US senator calls for AI chip tracking to protect national security

A new bill introduced by Republican Senator Tom Cotton aims to bolster national security by requiring location verification features on American-made AI chips.

The Chip Security Act, announced on 9 May, would ensure such technology does not end up in the hands of foreign adversaries, particularly China.

Cotton urged the US Departments of Commerce and Defense to assess how tracking mechanisms could help detect and prevent illegal chip exports.

He also called for stricter obligations for companies exporting AI chips, including notifying authorities if devices are tampered with or redirected from their original destinations.

The proposed legislation follows a policy shift announced on 7 May by the Trump administration to ease restrictions on AI chip exports previously imposed under President Biden.

Cotton argued that better security practices could allow US firms to expand globally without undermining the country’s technological edge.

Cybercriminals trick users with fake AI apps

Cybercriminals are tricking users into downloading dangerous new malware called Noodlophile by disguising it as AI software. Rather than using typical phishing tactics, attackers create convincing fake platforms that appear to offer AI-powered tools for editing videos or images.

These are promoted through realistic-looking Facebook groups and viral social media posts, some of which have received over 62,000 views.

Users are lured with promises of AI-generated content and are directed to bogus sites, one of which pretends to be CapCut AI, offering video editing features. Once users upload prompts and attempt to download the content, they unknowingly receive a malicious ZIP file.

Inside is a disguised program that kicks off a chain of infections, eventually installing the Noodlophile malware. Once installed, the software can steal browser credentials, crypto wallet details, and other sensitive data.
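
As an illustrative precaution (the double-extension disguise shown here, such as a ‘video’ named clip.mp4.exe, is a common technique, though the report does not specify the exact filenames used), a downloaded archive can be screened for executable payloads before anything inside it is opened:

```python
import zipfile

# Hypothetical screening helper: list archive members that end in an
# executable extension, catching double-extension disguises such as
# 'clip.mp4.exe'. Extensions and filenames here are illustrative.
EXECUTABLE_EXTS = (".exe", ".scr", ".bat", ".cmd", ".js", ".vbs")

def suspicious_members(archive_path: str) -> list[str]:
    with zipfile.ZipFile(archive_path) as zf:
        return [name for name in zf.namelist()
                if name.lower().endswith(EXECUTABLE_EXTS)]

# Usage: a non-empty result means the promised 'AI-generated video'
# actually contains a program.
# print(suspicious_members("ai_video_result.zip"))
```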

The malware is linked to a Vietnamese developer who identifies themselves as a ‘passionate Malware Developer’ on GitHub. Vietnam has a known history of cybercrime activity targeting social media platforms like Facebook.

In some cases, the Noodlophile Stealer has been bundled with remote access tools like XWorm, which allow attackers to maintain long-term control over victims’ systems.

This isn’t the first time attackers have used public interest in AI for malicious purposes. In 2023, Meta removed over 1,000 malicious links that exploited ChatGPT’s popularity to spread malware.

Meanwhile, cybersecurity experts at CYFIRMA have reported another threat: a new, simple yet effective malware called PupkinStealer, which secretly sends stolen information to hackers using Telegram bots.

Google pays around $1.4 billion over privacy case

Google has agreed to pay $1.375 billion to settle a lawsuit brought by the state of Texas over allegations that it violated users’ privacy through features such as Incognito mode, Location History, and biometric data collection.

Despite the sizeable sum, Google denies any wrongdoing, stating that the claims concerned past practices that have since been changed.

Texas Attorney General Ken Paxton announced the settlement, emphasising that large tech firms are not above the law.

He accused Google of covertly tracking individuals’ locations and personal searches, while also collecting biometric data such as voiceprints and facial geometry — all without users’ consent. Paxton claimed the state’s legal challenge had forced Google to answer for its actions.

Although the settlement resolves two lawsuits filed in 2022, the specific terms and how the funds will be used remain undisclosed. A Google spokesperson maintained that the resolution brings closure to claims about past practices and does not require any changes to its current products.

The case comes after a similar $1.4 billion agreement involving Meta, which faced accusations of unlawfully gathering facial recognition data. The repeated scrutiny from Texas authorities signals a broader pushback against the data practices of major tech companies.
