US Copyright Office avoids clear decision on AI and fair use

The US Copyright Office has stopped short of deciding whether AI companies can legally use copyrighted material to train their systems under fair use.

Its newly released report acknowledges that some uses—such as non-commercial research—may qualify, while others, like replicating expressive works from pirated content to produce market-ready AI output, likely won’t.

Rather than offering a definitive answer, the Office said such cases must be assessed by the courts, not through a universal standard.

The latest report is the third in a series aimed at guiding how copyright law applies to AI-generated content. It reiterates that works entirely created by AI cannot be copyrighted, but human-edited outputs might still qualify.

The 108-page document focuses heavily on whether AI training transforms copyrighted content enough to qualify as fair use, and whether it harms creators’ livelihoods through lost sales or diluted markets.

Instead of setting new policy, the Office highlights existing legal principles, especially the four statutory factors of fair use: the purpose and character of the use, the nature of the copyrighted work, the amount used, and the effect on the market for the original.

It notes that AI-generated content can sometimes alter original works meaningfully, but when styles or outputs closely resemble protected material, legal risks remain. Tools like content filters are seen as helpful in preventing infringement, even though they’re not always reliable.

The timing of the report has been overshadowed by political turmoil. President Donald Trump reportedly dismissed both the Librarian of Congress and the head of the Copyright Office days before the report’s release.

Meanwhile, creators continue urging the government not to permit fair use in AI training, arguing it threatens the value of original work. The debate is now expected to unfold further in courtrooms instead of regulatory offices.

Jamie Lee Curtis calls out Zuckerberg over AI scam using her likeness

Jamie Lee Curtis has directly appealed to Mark Zuckerberg after discovering her likeness had been used without consent in an AI-generated advert.

Posting on Facebook, Curtis expressed her frustration with Meta’s lack of proper channels to report such abuse, stating she had exhausted all official avenues before resorting to a public plea.

The fake video reportedly manipulated footage from an emotional interview following the January wildfires in Los Angeles, inserting false statements under the guise of a product endorsement.

Instead of remaining silent, Curtis urged Zuckerberg to take action, saying the unauthorised content damaged her integrity and voice. Within hours of her public callout, Meta confirmed the video had been removed for breaching its policies, a rare example of a swift response.

‘It worked! Yay Internet! Shame has its value!’ she wrote in a follow-up, though she also highlighted the broader risks posed by deepfakes.

The actress joins a growing list of celebrities, including Taylor Swift and Scarlett Johansson, who’ve been targeted by AI misuse.

Swift was forced to publicly clarify her political stance after an AI video falsely showed her endorsing Donald Trump, while Johansson criticised OpenAI for allegedly using a voice nearly identical to hers after she had declined to lend her voice to the project.

The issue has reignited concerns around consent, misinformation and the exploitation of public figures.

Instead of waiting for further harm, lawmakers in California have already begun pushing back. New legislation signed by Governor Gavin Newsom aims to protect performers from unauthorised digital replicas and deepfakes.

Meanwhile, in Washington, proposals like the No Fakes Act seek to hold tech platforms accountable, possibly fining them thousands per violation. As Curtis and others warn, without stronger protections, the misuse of AI could spiral further, threatening not just celebrities but the public as a whole.

Cybercriminals trick users with fake AI apps

Cybercriminals are tricking users into downloading a dangerous new malware called Noodlophile by disguising it as AI software. Rather than using typical phishing tactics, attackers create convincing fake platforms that appear to offer AI-powered tools for editing videos or images.

These are promoted through realistic-looking Facebook groups and viral social media posts, some of which have received over 62,000 views.

Users are lured with promises of AI-generated content and are directed to bogus sites, one of which pretends to be CapCut AI, offering video editing features. Once users upload prompts and attempt to download the content, they unknowingly receive a malicious ZIP file.

Inside is a disguised program that kicks off a chain of infections, eventually installing the Noodlophile malware. Once active, the stealer can harvest browser credentials, crypto wallet details, and other sensitive data.
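
Campaigns like this often rely on double file extensions (a file named like a video but actually ending in .exe) to pass an executable off as media. As a purely illustrative sketch of the defensive side, the Python below scans a downloaded ZIP archive and flags entries whose final extension is executable; it is a generic heuristic of the kind security tools apply, not an actual Noodlophile detector, and the archive path is hypothetical.

```python
import zipfile
from pathlib import Path

# Extensions that indicate a Windows executable payload.
EXECUTABLE_EXTS = {".exe", ".scr", ".com", ".bat", ".cmd", ".js", ".vbs"}

def flag_suspicious_entries(zip_path: str) -> list[str]:
    """Return archive entries that hide an executable behind a double extension."""
    suspicious = []
    with zipfile.ZipFile(zip_path) as archive:
        for name in archive.namelist():
            suffixes = [s.lower() for s in Path(name).suffixes]
            # A double extension such as .mp4.exe is a classic lure.
            if len(suffixes) > 1 and suffixes[-1] in EXECUTABLE_EXTS:
                suspicious.append(name)
    return suspicious

if __name__ == "__main__":
    # "download.zip" stands in for the archive a lured user would receive.
    for entry in flag_suspicious_entries("download.zip"):
        print(f"Suspicious entry: {entry}")
```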

The malware is linked to a Vietnamese developer who identifies themselves as a ‘passionate Malware Developer’ on GitHub. Vietnam has a known history of cybercrime activity targeting social media platforms like Facebook.

In some cases, the Noodlophile Stealer has been bundled with remote access tools like XWorm, which allow attackers to maintain long-term control over victims’ systems.

This isn’t the first time attackers have used public interest in AI for malicious purposes. Meta removed over 1,000 dangerous links in 2023 that exploited ChatGPT’s popularity to spread malware.

Meanwhile, cybersecurity experts at CYFIRMA have reported another threat: a new, simple yet effective malware called PupkinStealer, which secretly sends stolen information to hackers using Telegram bots.

Google pays around $1.4 billion over privacy case

Google has agreed to pay $1.375 billion to settle a lawsuit brought by the state of Texas over allegations that it violated users’ privacy through features such as Incognito mode, Location History, and biometric data collection.

Despite the sizeable sum, Google denies any wrongdoing, stating that the claims concerned past practices that have since been changed.

Texas Attorney General Ken Paxton announced the settlement, emphasising that large tech firms are not above the law.

He accused Google of covertly tracking individuals’ locations and personal searches, while also collecting biometric data such as voiceprints and facial geometry — all without users’ consent. Paxton claimed the state’s legal challenge had forced Google to answer for its actions.

Although the settlement resolves two lawsuits filed in 2022, the specific terms and how the funds will be used remain undisclosed. A Google spokesperson said the resolution brings closure to claims about past practices and requires no changes to its current products.

The case comes after a similar $1.4 billion agreement involving Meta, which faced accusations of unlawfully gathering facial recognition data. The repeated scrutiny from Texas authorities signals a broader pushback against the data practices of major tech companies.

Reddit cracks down after AI bot experiment exposed

Reddit is accelerating plans to verify the humanity of its users following revelations that AI bots infiltrated a popular debate forum to influence opinions. These bots crafted persuasive, personalised comments based on users’ post histories, without disclosing their non-human identity.

Researchers from the University of Zurich conducted an unauthorised four-month experiment on the r/changemyview subreddit, deploying AI agents posing as trauma survivors, political figures, and other sensitive personas.

The incident sparked outrage across the platform. Reddit’s Chief Legal Officer condemned the experiment as a violation of both legal and ethical standards, while CEO Steve Huffman stressed that the platform’s strength lies in genuine human exchange.

All accounts linked to the study have been banned, and Reddit has filed formal complaints with the university. To restore trust, Reddit will introduce third-party verification tools that confirm users are human, without collecting personal data.
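
Reddit has not described how these tools will work. One common pattern for proving ‘this is a human’ without revealing identity is token-based attestation: a third-party verifier performs the human check, mints an opaque signed token, and the platform validates the token without ever seeing who it belongs to. The Python sketch below is a toy version of that idea using a shared-secret MAC; it is an assumption about the general approach, not Reddit’s actual design, and real deployments would use asymmetric or blind signatures.

```python
import hashlib
import hmac
import secrets

# Shared secret between the third-party verifier and the platform
# (illustrative only; real schemes would avoid a shared key).
VERIFIER_KEY = secrets.token_bytes(32)

def issue_human_token() -> tuple[str, str]:
    """Verifier side: after a human check passes, mint an opaque token.

    The token carries only a random nonce and a MAC, so the platform
    learns 'a verified human' and nothing about the person.
    """
    nonce = secrets.token_hex(16)
    tag = hmac.new(VERIFIER_KEY, nonce.encode(), hashlib.sha256).hexdigest()
    return nonce, tag

def accept_token(nonce: str, tag: str) -> bool:
    """Platform side: check the MAC without learning the user's identity."""
    expected = hmac.new(VERIFIER_KEY, nonce.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

nonce, tag = issue_human_token()
assert accept_token(nonce, tag)
```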

While protecting anonymity remains a priority, the platform acknowledges it must evolve to meet new threats posed by increasingly sophisticated AI impersonators.

Meta and Ray-Ban launch smart glasses in the UAE

Meta Platforms, Inc. and EssilorLuxottica have officially launched the Ray-Ban Meta smart glasses in the United Arab Emirates, unveiling the new tech-forward eyewear during an exclusive event on May 7 at Gitano Beach Club.

The collection will be available across all Ray-Ban stores and partner opticians in the UAE starting May 12.

Ray-Ban Meta glasses combine stylish design with cutting-edge technology, offering users hands-free photo and video capture, discreet audio playback through open-ear speakers, and access to built-in Meta AI.

The glasses allow for real-time translations—including sign language—voice-activated search, and contextual AR experiences such as landmark information, menu translations, or recipe suggestions based on visible items.

A standout feature is the livestreaming function, enabling users to broadcast directly to Instagram Live or Facebook Live for up to 30 minutes from their own point of view.

Users can toggle between the glasses and their phone camera, creating immersive, real-time content. The Meta AI companion app (iOS and Android) also supports easy content import, editing, and special effects.

The glasses include five microphones and upgraded audio hardware for clearer sound and ambient awareness.

Live language translation support covering Spanish, French, Italian, and English, even while offline, is expected to launch in the UAE later this year. Software updates will continue enhancing the glasses’ AI capabilities over time.

Offered in styles such as Wayfarer, Wayfarer Large, and the universally fitting Skyler, Ray-Ban Meta glasses are available with prescription, sun, clear, polarised, or Transitions® lenses.

Prices start at AED 1,330 and include a sleek charging case. The glasses support pairing with multiple devices and offer a blend of fashion, function, and future-ready innovation.

FTC says Amazon misused legal privilege to dodge scrutiny

Federal regulators have accused Amazon of deliberately concealing incriminating evidence in an ongoing antitrust case by abusing privilege claims. The Federal Trade Commission (FTC) said Amazon wrongly withheld nearly 70,000 documents, withdrawing 92% of its claims after a judge forced a re-review.

The FTC claims Amazon marked non-legal documents as privileged to keep them from scrutiny. Internal emails suggest staff were told to mislabel communications by including legal teams unnecessarily.

One email reportedly called former CEO Jeff Bezos the ‘chief dark arts officer,’ referring to questionable Prime subscription tactics.

The documents revealed issues such as widespread involuntary Prime sign-ups and efforts to manipulate search results in favour of Amazon’s products. Regulators said these practices show Amazon intended to hide evidence rather than make honest errors.

The FTC is now seeking a 90-day extension for discovery and wants Amazon to cover the additional legal costs. It claims the delay and concealment gave Amazon an unfair strategic advantage instead of allowing a level playing field.

Microsoft bans DeepSeek app for staff use

Microsoft has confirmed it does not allow employees to use the DeepSeek app, citing data security and propaganda concerns.

Speaking at a Senate hearing, company president Brad Smith explained the decision stems from fears that data shared with DeepSeek could end up on Chinese servers and be exposed to state surveillance laws.

Although DeepSeek is open source and widely available, Microsoft has chosen not to list the app in its own store.

Smith warned that DeepSeek’s answers may be influenced by Chinese government censorship and propaganda, and its privacy policy confirms data is stored in China, making it subject to local intelligence regulations.

Interestingly, Microsoft still offers DeepSeek’s R1 model via its Azure cloud service. The company argued this is a different matter, as customers can host the model on their servers instead of relying on DeepSeek’s infrastructure.

Even so, Smith admitted Microsoft had to alter the model to remove ‘harmful side effects,’ although no technical details were provided.

While Microsoft blocks DeepSeek’s app for internal use, it hasn’t imposed a blanket ban on all chatbot competitors. Apps like Perplexity are available in the Windows store, unlike those from Google.

The stance against DeepSeek marks a rare public move by Microsoft as the tech industry navigates rising tensions over AI tools with foreign links.

Gemini Nano boosts scam detection on Chrome

Google has released a new report outlining how it is using AI to better protect users from online scams across its platforms.

The company says AI is now actively fighting scams in Chrome, Search and Android, with new tools able to detect and neutralise threats more effectively than before.

At the heart of these efforts is Gemini Nano, Google’s on-device AI model, which has been integrated into Chrome to help identify phishing and fraudulent websites.

The report claims the upgraded systems can now detect 20 times more harmful websites, many of which aim to deceive users by creating a false sense of urgency or offering fake promotions. These scams often involve phishing, cryptocurrency fraud, clone websites and misleading subscriptions.
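
Google has not published how the classifier works internally, but the cues it targets, urgency language and fake promotions, are easy to illustrate. The Python sketch below scores page text against a handful of scam phrases; it is a deliberately crude stand-in for the learned, on-device model the report describes, and the phrase list is invented for illustration.

```python
import re

# Illustrative phrases associated with urgency or fake-promotion scams.
URGENCY_PATTERNS = [
    r"act now", r"account (?:will be )?suspended", r"verify immediately",
    r"limited[- ]time", r"you(?:'ve| have) won", r"claim your (?:prize|reward)",
]

def scam_score(page_text: str) -> float:
    """Return a crude 0-1 score based on how many scam cues appear."""
    text = page_text.lower()
    hits = sum(bool(re.search(pattern, text)) for pattern in URGENCY_PATTERNS)
    return min(hits / 3, 1.0)  # saturate: three distinct cues reach the max

page = "Act now! Your account will be suspended unless you verify immediately."
if scam_score(page) > 0.6:
    print("Warning: page shows signs of a scam")
```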

Search has also seen major improvements. Google’s AI-powered classifiers are now better at spotting scam-related content before users encounter it. For example, the company says it has reduced scams involving fake airline customer service agents by over 80 per cent, thanks to its enhanced detection tools.

Meanwhile, Android users are beginning to see stronger safeguards as well. Chrome on Android now warns users about suspicious website notifications, offering the choice to unsubscribe or review them safely.

Google has confirmed plans to extend these protections even further in the coming months, aiming to cover a broader range of online threats.

Meta blocks Muslim news page on Instagram in India at government request

Meta has restricted access to the prominent Instagram news account @Muslim for users in India at the request of the Indian government, the account’s founder said on Wednesday.

The move comes as hostilities intensify between India and Pakistan, following the deadliest military exchanges between the nuclear-armed neighbours in two decades.

Instagram users in India attempting to access the account, which has 6.7 million followers, were met with a message stating: ‘Account not available in India. This is because we complied with a legal request to restrict this content.’

Ameer Al-Khatahtbeh, founder and editor-in-chief of the page, described the restriction as censorship. ‘Meta has blocked the @Muslim account by legal request of the Indian government,’ he said in a statement. ‘This is censorship.’

Meta declined to comment, but directed AFP to a company page explaining its policy to comply with local laws when requested by governments.

The restriction follows a wave of similar bans on Pakistani public figures and media. Social media accounts of Pakistani cricketers, actors, and even former Prime Minister Imran Khan have also been blocked in India in recent days.

The situation unfolds amid escalating conflict in Kashmir, where India blamed Pakistan for a deadly attack on tourists earlier this month. In retaliation, India launched air strikes, prompting artillery exchanges along the contested border. At least 43 deaths have been reported, and Pakistan has vowed to respond.

@Muslim, one of the most-followed Muslim news sources on Instagram, is known for covering political and social justice issues.

Al-Khatahtbeh apologised to Indian followers and urged Meta to restore access, stating, ‘When platforms and countries try to silence media, it tells us we are doing our job in holding those in power accountable.’

The conflict has also seen a sharp rise in online misinformation, including deepfake videos and misleading content circulated across social media platforms. On Wednesday, US President Donald Trump called for both countries to halt the violence and offered assistance in mediating peace talks.
