Coinbase hit by breach and SEC probe ahead of S&P 500 entry

Cryptocurrency exchange Coinbase has disclosed a potential financial impact of $180 million to $400 million following a cyberattack that compromised customer data, according to a regulatory filing on Thursday.

The company said it received an email on Sunday from an unidentified threat actor claiming to possess internal documents and account data for a limited number of customers.

Although hackers gained access to personal information such as names, addresses, and email addresses, Coinbase confirmed that no login credentials or passwords were compromised.

Coinbase stated it would reimburse users who were deceived into transferring funds to the attackers. It also revealed that multiple contractors and support staff outside the US had provided information to the hackers. Those involved have been terminated, the company said.

In parallel, the US Securities and Exchange Commission (SEC) is reportedly investigating whether Coinbase previously misrepresented its verified user figures.

Two sources familiar with the matter told Reuters that the SEC inquiry is ongoing, though it does not focus on know-your-customer (KYC) compliance or Bank Secrecy Act obligations. Coinbase has denied any such investigation into its compliance practices.

The SEC declined to comment. Coinbase’s chief legal officer, Paul Grewal, characterised the probe as a continuation of a past investigation into a user metric the company stopped reporting over two years ago. He said Coinbase is cooperating with the SEC but believes the inquiry should be closed.

The news comes just ahead of Coinbase’s addition to the S&P 500 index, potentially overshadowing what had been viewed as a major milestone for the industry. Shares fell 7.2% following the disclosure.

Coinbase has rejected a $20 million ransom demand from the attackers and is cooperating with law enforcement. It has also offered a $20 million reward for information leading to the identification of the hackers.

The firm is opening a new US-based support hub and taking further measures to strengthen its cybersecurity framework.

The cyberattack adds to broader concerns about the vulnerability of digital asset platforms. Hacks resulted in over $2.2 billion in stolen funds in 2024, according to Chainalysis, and Bybit reported a $1.5 billion theft in February 2025, the largest on record.

Coinbase is also facing a lawsuit filed in the Southern District of New York, alleging the company failed to protect personal data belonging to millions of current and former customers.

Japan approves preemptive cyberdefence law

Japan’s parliament has passed a new law enabling active cyberdefence measures, allowing authorities to legally monitor communications data during peacetime and neutralise foreign servers if cyberattacks occur.

Instead of reacting only after incidents, this law lets the government take preventive steps to counter threats before they escalate.

Operators of vital infrastructure, such as electricity and railway companies, must now report cyber breaches directly to the government. The shift follows recent cyber incidents targeting banks and an airline, prompting Japan to put a full framework in place by 2027.

Although the law permits monitoring of IP addresses in communications crossing Japanese borders, it explicitly bans surveillance of domestic messages and their contents.

A new independent panel will authorise all monitoring and response actions beforehand, instead of leaving decisions solely to security agencies.

Police will handle initial countermeasures, while the Self-Defense Forces will act only when attacks are highly complex or planned. The law, revised to address opposition concerns, includes safeguards to ensure personal rights are protected and that government surveillance remains accountable.

AI hallucination at centre of Anthropic copyright lawsuit

Anthropic, the AI company behind the Claude chatbot, has been ordered by a federal judge to respond to allegations that it submitted fabricated material—possibly generated by AI—as part of its defence in an ongoing copyright lawsuit.

The lawsuit, filed in October 2023 by music publishers Universal Music Group, Concord, and ABKCO, accuses Anthropic of unlawfully using lyrics from over 500 songs to train its chatbot. The publishers argue that Claude can produce copyrighted material when prompted, such as lyrics from Don McLean’s American Pie.

During a court hearing on Tuesday in California, the publishers’ attorney claimed that an Anthropic data scientist cited a nonexistent academic article from The American Statistician journal to support the argument that Claude rarely outputs copyrighted lyrics.

One of the article’s alleged authors later confirmed the paper was a ‘complete fabrication.’ The judge is now requiring Anthropic to formally address the incident in court.

The company, founded in 2021, is backed by major investors including Amazon, Google, and Sam Bankman-Fried, the disgraced crypto executive convicted of fraud in 2023.

The case marks a significant test of how AI companies handle copyrighted content, and how courts respond when AI-generated material is used in legal proceedings.

Kick faces investigation after ignoring Ofcom risk assessment request

Ofcom has launched two investigations into Kick Online Entertainment, the provider of a pornography website, over potential breaches of the Online Safety Act.

The regulator said the company failed to respond to a statutory request for a risk assessment related to illegal content appearing on the platform.

As a result, Ofcom is investigating whether Kick failed to meet its legal duty to complete and retain a record of such a risk assessment, and whether it failed to respond to the regulator’s information request.

Ofcom confirmed it had received complaints about potentially illegal material on the site, including child sexual abuse content and extreme pornography.

It is also considering a third investigation into whether the platform has implemented adequate safety measures to protect users from such material—another requirement under the Act.

Under the Online Safety Act, firms found in breach can face fines of up to £18 million or 10% of their global revenue, whichever is higher. In the most severe cases, Ofcom can pursue court orders to block UK access to the website or compel payment providers and advertisers to cut ties with the platform.
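
For a rough sense of how that penalty cap scales, here is a minimal Python sketch; the helper name is ours and the revenue figure is hypothetical, not drawn from the Kick case:

    def max_osa_fine(global_revenue_gbp: float) -> float:
        # Online Safety Act ceiling: the greater of GBP 18 million
        # or 10% of the firm's global revenue.
        return max(18_000_000, 0.10 * global_revenue_gbp)

    # Hypothetical platform with GBP 500 million in global revenue:
    print(f"£{max_osa_fine(500_000_000):,.0f}")  # £50,000,000

So the flat £18 million figure only binds for firms with global revenue below £180 million; above that, the 10% term dominates.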

Meta targets critics as FTC case unfolds

Long-standing friction between Big Tech and the media resurfaced during Meta’s antitrust trial with the Federal Trade Commission this week. In a heated courtroom exchange, Meta’s legal team used critical commentary from prominent tech journalists to cast doubt on the FTC’s case.

Meta’s lead attorney, Mark Hansen, questioned the credibility of FTC expert Scott Hemphill by referencing a 2019 antitrust pitch Hemphill co-authored with Facebook co-founder Chris Hughes and former White House advisor Tim Wu.

The presentation cited public statements from reporters Kara Swisher and Om Malik as evidence of Meta’s dominance and aggressive acquisitions.

Hansen dismissed Malik as a ‘failed blogger’ with personal bias and accused Swisher of similar hostility, projecting a headline in which she described Mark Zuckerberg as a ‘small little creature with a shriveled soul.’

He also attempted to discredit a cited New York Post article by invoking the tabloid’s notorious ‘Headless Body in Topless Bar’ cover.

These moments highlight Meta’s growing resentment toward the press, which has intensified alongside rising criticism of its business practices. Once seen as scrappy disruptors, Facebook and other tech giants now face regular scrutiny—and appear eager to push back.

Swisher and Malik have both openly criticised Meta in the past. Swisher famously challenged Zuckerberg over content moderation and political speech, while Malik has questioned the company’s global expansion strategies.

Their inclusion in a legal document presented in court underscores how media commentary is influencing regulatory narratives. Meta has previously blamed critical press for damaging user sentiment in the wake of scandals like Cambridge Analytica.

The FTC argues that consistent engagement levels despite bad press prove Meta’s monopoly power—users feel they have no real alternatives to Facebook and Instagram. As the trial continues, so too does Meta’s public battle—not just with regulators, but with the journalists documenting its rise and reckoning.

Masked cybercrime groups rise as attacks escalate worldwide

Cybercrime is thriving like never before, with hackers launching attacks ranging from absurd ransomware demands of $1 trillion to large-scale theft of personal data. Despite efforts from Microsoft, Google and even the FBI, these threat actors continue to outpace defences.

A new report by Group-IB has analysed over 1,500 cybercrime investigations to uncover the most active and dangerous hacker groups operating today.

Rather than fading away after arrests or infighting, many cybercriminal gangs are re-emerging stronger than before.

Group-IB’s May 2025 report highlights a troubling increase in key attack types across 2024 — phishing rose by 22%, ransomware leak sites by 10%, and APT (advanced persistent threat) attacks by 58%. The United States was the most affected country by ransomware activity.

At the top of the cybercriminal hierarchy now sits RansomHub, a ransomware-as-a-service operation that emerged in the wake of the ALPHV group’s collapse and has already overtaken long-established players in attack numbers.

Behind it is GoldFactory, which developed the first iOS banking trojan and exploited facial recognition data. Lazarus, a well-known North Korean state-linked group, also remains highly active under multiple aliases.

Meanwhile, politically driven hacktivist group NoName057(16) has been targeting European institutions using denial-of-service attacks.

With jurisdictional gaps allowing cybercriminals to flourish, these masked hackers remain a growing concern for global cybersecurity, especially as new threat actors emerge from the shadows instead of disappearing for good.

German watchdog demands Meta stop AI training with EU user data

The Verbraucherzentrale North Rhine-Westphalia (NRW), a consumer protection organisation in Germany, has issued a formal warning to Meta, urging the tech giant to stop training its AI models on data from European users.

The regulator argues that Meta’s current approach violates EU privacy laws and may lead to legal action if not halted. Meta recently announced that it would use content from Facebook, Instagram, WhatsApp, and Messenger—including posts, comments, and public interactions—to train its AI systems in Europe.

The company claims this will improve the performance of Meta AI by helping it better understand European languages, culture, and history.

However, data protection authorities from several EU countries, including Belgium, France, and the Netherlands, have expressed concern and encouraged users to act before Meta’s new privacy policy takes effect on 27 May.

Verbraucherzentrale NRW took the additional step of sending Meta a cease-and-desist letter on 30 April. Should Meta ignore the request, legal action could follow.

Christine Steffen, a data protection expert at Verbraucherzentrale NRW, said that once personal data has been used to train AI, the process is nearly impossible to reverse. She criticised Meta’s opt-out model and insisted that meaningful user consent is legally required.

Austrian privacy advocate Max Schrems, head of the NGO Noyb, also condemned Meta’s strategy, accusing the company of ignoring EU privacy law in favour of commercial gain.

‘Meta should simply ask the affected people for their consent,’ he said, warning that failure to do so could have consequences across the EU.

EU prolongs sanctions against cyberattackers until 2026

The EU Council has extended its sanctions on cyberattacks until 18 May 2026, with the legal framework for enforcing these measures now lasting until 2028. The sanctions target individuals and institutions involved in cyberattacks that pose a significant threat to the EU and its members.

The extended measures will allow the EU to impose restrictions on those responsible for cyberattacks, including freezing assets and blocking access to financial resources.

These actions may also apply to attacks against third countries or international organisations, if necessary for EU foreign and security policy objectives.

At present, sanctions are in place against 17 individuals and four institutions. The EU’s decision highlights its ongoing commitment to safeguarding its digital infrastructure and maintaining its foreign policy goals through legal actions against cyber threats.

US Copyright Office avoids clear decision on AI and fair use

The US Copyright Office has stopped short of deciding whether AI companies can legally use copyrighted material to train their systems under fair use.

Its newly released report acknowledges that some uses—such as non-commercial research—may qualify, while others, like replicating expressive works from pirated content to produce market-ready AI output, likely won’t.

Rather than offering a definitive answer, the Office said such cases must be assessed by the courts, not through a universal standard.

The latest report is the third in a series aimed at guiding how copyright law applies to AI-generated content. It reiterates that works entirely created by AI cannot be copyrighted, but human-edited outputs might still qualify.

The 108-page document focuses heavily on whether AI training transforms content enough to qualify as fair use, and whether it harms creators’ livelihoods through lost sales or diluted markets.

Instead of setting new policy, the Office highlights existing legal principles, especially the four factors of fair use: the purpose, the nature of the work, the amount used, and the impact on the original market.

It notes that AI-generated content can sometimes alter original works meaningfully, but when styles or outputs closely resemble protected material, legal risks remain. Tools like content filters are seen as helpful in preventing infringement, even though they’re not always reliable.

The timing of the report has been overshadowed by political turmoil. President Donald Trump reportedly dismissed both the Librarian of Congress and the head of the Copyright Office around the time of the report’s release.

Meanwhile, creators continue urging the government not to treat AI training as fair use, arguing that it threatens the value of original work. The debate is now expected to unfold in courtrooms rather than regulatory offices.

Jamie Lee Curtis calls out Zuckerberg over AI scam using her likeness

Jamie Lee Curtis has directly appealed to Mark Zuckerberg after discovering her likeness had been used without consent in an AI-generated advert.

Posting on Facebook, Curtis expressed her frustration with Meta’s lack of proper channels to report such abuse, stating she had exhausted all official avenues before resorting to a public plea.

The fake video reportedly manipulated footage from an emotional interview following the January wildfires in Los Angeles, inserting false statements under the guise of a product endorsement.

Instead of remaining silent, Curtis urged Zuckerberg to take action, saying the unauthorised content damaged her integrity and voice. Within hours of her public callout, Meta confirmed the video had been removed for breaching its policies, a rare example of a swift response.

‘It worked! Yay Internet! Shame has its value!’ she wrote in a follow-up, though she also highlighted the broader risks posed by deepfakes.

The actress joins a growing list of celebrities, including Taylor Swift and Scarlett Johansson, who’ve been targeted by AI misuse.

Swift was forced to publicly clarify her political stance after an AI video falsely endorsed Donald Trump, while Johansson criticised OpenAI for allegedly using a voice nearly identical to hers despite her refusal to participate in a project.

The issue has reignited concerns around consent, misinformation and the exploitation of public figures.

Instead of waiting for further harm, lawmakers in California have already begun pushing back. New legislation signed by Governor Gavin Newsom aims to protect performers from unauthorised digital replicas and deepfakes.

Meanwhile, in Washington, proposals like the No Fakes Act seek to hold tech platforms accountable, possibly fining them thousands per violation. As Curtis and others warn, without stronger protections, the misuse of AI could spiral further, threatening not just celebrities but the public as a whole.
