US scraps Biden AI chip export rule

The US Department of Commerce has scrapped the Biden administration’s Artificial Intelligence Diffusion Rule just days before it was due to come into force.

Introduced in January, the rule would have, for the first time, restricted exports of US-made AI chips to a broad range of countries, while reinforcing existing controls.

Rather than enforcing broad restrictions, the Department now intends to pursue direct negotiations with individual countries.

The original rule divided the world into three tiers, with countries like Japan and South Korea spared restrictions, middle-tier countries such as Mexico and Portugal facing new limits, and nations like China and Russia subject to tighter controls.

According to Bloomberg, a replacement rule is expected at a later date.

Instead of issuing immediate new regulations, officials released industry guidance warning companies against using Huawei’s Ascend AI chips and highlighting the risks of letting US chips be used to train AI models in China.

Under Secretary of Commerce for Industry and Security Jeffrey Kessler criticised the Biden-era policy, promising a ‘bold, inclusive’ AI strategy that works with allies while limiting access for adversaries.

EU prolongs sanctions on cyberattackers until 2026

The Council of the EU has extended its sanctions regime against cyberattacks until 18 May 2026, with the legal framework for enforcing these measures now running until 2028. The sanctions target individuals and institutions involved in cyberattacks that pose a significant threat to the EU and its members.

The extended measures will allow the EU to impose restrictions on those responsible for cyberattacks, including freezing assets and blocking access to financial resources.

These actions may also apply to attacks against third countries or international organisations, if necessary for EU foreign and security policy objectives.

At present, sanctions are in place against 17 individuals and four institutions. The EU’s decision highlights its ongoing commitment to safeguarding its digital infrastructure and maintaining its foreign policy goals through legal actions against cyber threats.

US Copyright Office avoids clear decision on AI and fair use

The US Copyright Office has stopped short of deciding whether AI companies can legally use copyrighted material to train their systems under fair use.

Its newly released report acknowledges that some uses—such as non-commercial research—may qualify, while others, like replicating expressive works from pirated content to produce market-ready AI output, likely won’t.

Rather than offering a definitive answer, the Office said such cases must be assessed by the courts, not through a universal standard.

The latest report is the third in a series aimed at guiding how copyright law applies to AI-generated content. It reiterates that works entirely created by AI cannot be copyrighted, but human-edited outputs might still qualify.

The 108-page document focuses heavily on whether AI training methods transform the underlying works enough to qualify as fair use, and whether they harm creators’ livelihoods through lost sales or diluted markets.

Instead of setting new policy, the Office highlights existing legal principles, especially the four fair use factors: the purpose and character of the use, the nature of the copyrighted work, the amount used, and the effect on the market for the original.

It notes that AI-generated content can sometimes alter original works meaningfully, but when styles or outputs closely resemble protected material, legal risks remain. Tools like content filters are seen as helpful in preventing infringement, even though they’re not always reliable.

The timing of the report has been overshadowed by political turmoil. President Donald Trump reportedly dismissed both the Librarian of Congress and the head of the Copyright Office days before the report’s release.

Meanwhile, creators continue to urge the government not to treat AI training as fair use, arguing that it threatens the value of original work. The debate is now expected to unfold in courtrooms rather than regulatory offices.

Jamie Lee Curtis calls out Zuckerberg over AI scam using her likeness

Jamie Lee Curtis has directly appealed to Mark Zuckerberg after discovering her likeness had been used without consent in an AI-generated advert.

Posting on Facebook, Curtis expressed her frustration with Meta’s lack of proper channels to report such abuse, stating she had exhausted all official avenues before resorting to a public plea.

The fake video reportedly manipulated footage from an emotional interview following the January wildfires in Los Angeles, inserting false statements under the guise of a product endorsement.

Instead of remaining silent, Curtis urged Zuckerberg to take action, saying the unauthorised content damaged her integrity and voice. Within hours of her public callout, Meta confirmed the video had been removed for breaching its policies, a rare example of a swift response.

‘It worked! Yay Internet! Shame has its value!’ she wrote in a follow-up, though she also highlighted the broader risks posed by deepfakes.

The actress joins a growing list of celebrities, including Taylor Swift and Scarlett Johansson, who’ve been targeted by AI misuse.

Swift was forced to publicly clarify her political stance after AI-generated images falsely depicted her endorsing Donald Trump, while Johansson criticised OpenAI for allegedly using a voice nearly identical to hers after she had declined to take part in the project.

The issue has reignited concerns around consent, misinformation and the exploitation of public figures.

Instead of waiting for further harm, lawmakers in California have already begun pushing back. New legislation signed by Governor Gavin Newsom aims to protect performers from unauthorised digital replicas and deepfakes.

Meanwhile, in Washington, proposals like the No Fakes Act seek to hold tech platforms accountable, with potential fines of thousands of dollars per violation. As Curtis and others warn, without stronger protections, the misuse of AI could spiral further, threatening not just celebrities but the public as a whole.

Cybercriminals trick users with fake AI apps

Cybercriminals are tricking users into downloading a dangerous new malware called Noodlophile by disguising it as AI software. Rather than using typical phishing tactics, attackers create convincing fake platforms that appear to offer AI-powered tools for editing videos or images.

These are promoted through realistic-looking Facebook groups and viral social media posts, some of which have received over 62,000 views.

Users are lured with promises of AI-generated content and are directed to bogus sites, one of which pretends to be CapCut AI, offering video editing features. Once users upload prompts and attempt to download the content, they unknowingly receive a malicious ZIP file.

Inside is a disguised program that kicks off a chain of infections, eventually installing the Noodlophile malware. Once installed, the software can steal browser credentials, crypto wallet details, and other sensitive data.

The malware is linked to a Vietnamese developer who identifies themselves as a ‘passionate Malware Developer’ on GitHub. Vietnam has a known history of cybercrime activity targeting social media platforms like Facebook.

In some cases, the Noodlophile Stealer has been bundled with remote access tools like XWorm, which allow attackers to maintain long-term control over victims’ systems.

This isn’t the first time attackers have used public interest in AI for malicious purposes. Meta removed over 1,000 dangerous links in 2023 that exploited ChatGPT’s popularity to spread malware.

Meanwhile, cybersecurity experts at CYFIRMA have reported another threat: a new, simple yet effective malware called PupkinStealer, which secretly sends stolen information to hackers using Telegram bots.

Scale AI expands into Saudi Arabia and UAE

Scale AI, a San Francisco-based startup backed by Amazon, plans to open a new office in Riyadh by the end of the year as part of its broader Middle East expansion.

The company also intends to establish a presence in the United Arab Emirates, although it has yet to confirm the timeline for that move.

Trevor Thompson, the company’s global managing director, said the Gulf is among the fastest-growing regions for AI adoption outside of the US and China.

Gulf states like Saudi Arabia have been investing heavily in tech startups, data centres and computing infrastructure, urging companies to set up local operations and create regional jobs. Salesforce, for instance, has already begun hiring for a $500 million investment in the kingdom.

Founded in 2016, Scale AI provides data-labelling services essential for training AI products, relying on a vast network of contract workers. Its clients include OpenAI and Microsoft.

The company hit a $13.8 billion valuation last year after a $1 billion funding round backed by Amazon, Meta and others.

In 2024, it generated about $870 million in revenue and is reportedly in talks for a deal that could nearly double its value.

Scale AI is also strengthening its regional ties. In February, it signed a five-year agreement with Qatar to enhance public services, followed by a partnership with Abu Dhabi-based Inception in March.

The news coincides with President Donald Trump’s upcoming visit to Saudi Arabia, where his team is considering lifting export controls on advanced AI chips, potentially boosting the Gulf’s access to cutting-edge technology.

Notably, Scale AI’s former managing director, Michael Kratsios, now advises Trump on tech matters.

Microsoft and OpenAI rework billion-dollar deal

OpenAI and Microsoft are renegotiating the terms of their multibillion-dollar partnership in a move designed to allow the ChatGPT maker to pursue a future public listing, while ensuring Microsoft retains access to its most advanced AI technology.

According to the Financial Times, the talks are centred around adjusting Microsoft’s equity stake in OpenAI’s for-profit arm.

The software giant has invested over US$13 billion in OpenAI and is reportedly prepared to reduce its stake in exchange for extended access to AI developments beyond the current 2030 agreement.

The revisions also include changes to a broader agreement first established in 2019 when Microsoft committed US$1 billion to the partnership.

The restructuring reflects OpenAI’s shift in strategy as it prepares for potential independence from its largest investor. Recent reports suggest the company plans to share a smaller portion of its future revenue with Microsoft, instead of maintaining current terms.

Microsoft has declined to comment on the ongoing negotiations, and OpenAI has yet to respond.

The talks follow OpenAI’s separate US$500 billion Stargate joint venture with Oracle and SoftBank to build AI data centres in the US, further signalling the strategic value of securing long-term access to cutting-edge models.

Microsoft expands cloud push across Europe

Microsoft has unveiled a new set of commitments aimed at strengthening its digital presence across Europe, pledging to expand cloud and AI infrastructure while supporting the region’s economic competitiveness.

Announced by Microsoft President Brad Smith in Brussels, the ‘European Digital Commitments’ include a promise to increase European data centre capacity by 40% within two years, bringing the total to more than 200 data centres across 16 countries.

Smith explained that Microsoft’s goal is to provide technology that helps individuals and organisations succeed, rather than simply expanding its reach. He highlighted AI as essential to modern economies, describing it as a driving force behind what he called the ‘AI economy.’

Alongside job creation, Microsoft hopes its presence will spark wider economic benefits for customers and partners throughout the continent.

To ease concerns around data security, particularly in light of US-EU geopolitical tensions, Microsoft has added clauses to agreements with European institutions allowing it to legally resist any external order to halt operations in Europe.

Should such efforts fail, Microsoft has arranged for European partners to access its code, stored securely in Switzerland, so that vital digital services are not disrupted.

Although Microsoft’s investments stand to benefit Europe, they also underscore the company’s deep dependence on the region, with over a quarter of its business based there.

Smith insisted that Microsoft’s global success would not have been possible without its European footprint, and called for continued cooperation across the Atlantic—even in the face of potential tariff disputes or political strains.

Google pays around $1.4 billion over privacy case

Google has agreed to pay $1.375 billion to settle a lawsuit brought by the state of Texas over allegations that it violated users’ privacy through features such as Incognito mode, Location History, and biometric data collection.

Despite the sizeable sum, Google denies any wrongdoing, saying the claims concerned old practices that have since been changed.

Texas Attorney General Ken Paxton announced the settlement, emphasising that large tech firms are not above the law.

He accused Google of covertly tracking individuals’ locations and personal searches, while also collecting biometric data such as voiceprints and facial geometry — all without users’ consent. Paxton claimed the state’s legal challenge had forced Google to answer for its actions.

Although the settlement resolves two lawsuits filed in 2022, the specific terms and how the funds will be used remain undisclosed. A Google spokesperson maintained that the resolution brings closure to claims about past practices, instead of requiring any changes to its current products.

The case comes after a similar $1.4 billion agreement involving Meta, which faced accusations of unlawfully gathering facial recognition data. The repeated scrutiny from Texas authorities signals a broader pushback against the data practices of major tech companies.
