Lords reject UK AI copyright bill again

The UK government has suffered a second defeat in the House of Lords over its Data (Use and Access) Bill, as peers once again backed a copyright-focused amendment aimed at protecting artists from AI content scraping.

Baroness Kidron, a filmmaker and digital rights advocate, led the charge, accusing ministers of listening to the ‘sweet whisperings of Silicon Valley’ and allowing tech firms to ‘redefine theft’ by exploiting copyrighted material without permission.

Her amendment would force AI companies to disclose their training data sources and obtain consent from rights holders.

The government had previously rejected this amendment, arguing it would lead to ‘piecemeal’ legislation and pre-empt ongoing consultations.

But Kidron’s position won strong support across party lines, with peers likening current AI practices to ‘burglary’ and warning of catastrophic damage to the UK’s creative sector.

High-profile artists like Sir Elton John, Paul McCartney, Annie Lennox, and Kate Bush have condemned the government’s stance, with Sir Elton branding ministers ‘losers’ and accusing them of enabling theft.

Peers from Labour, the Lib Dems, the Conservatives, and the crossbenches united to defend UK copyright law, calling the government’s actions a betrayal of the country’s leadership in intellectual property rights.

Labour’s Lord Brennan warned against a ‘double standard’ for AI firms, while Lord Berkeley insisted immediate action was needed to prevent long-term harm.

Technology Minister Baroness Jones countered that no country has resolved the AI-copyright dilemma and warned that the amendment would only create more regulatory confusion.

Nonetheless, peers voted overwhelmingly in favour of Kidron’s proposal—287 to 118—sending the bill back to the Commons with a strengthened demand for transparency and copyright safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers believe AI transparency is within reach by 2027

Top AI researchers admit they still do not fully understand how generative AI models work. Unlike traditional software, which follows predefined logic, generative models learn their behaviour from training data, creating a challenge for developers trying to interpret their decision-making processes.

Dario Amodei, co-founder of Anthropic, described this lack of understanding as unprecedented in tech history. Mechanistic interpretability — a growing academic field — aims to reverse engineer how gen AI models arrive at outputs.

Experts compare the challenge to understanding the human brain, but note that, unlike biology, every digital ‘neuron’ in AI is visible.
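That visibility is concrete: in a framework such as PyTorch, a forward hook can record exactly what any layer computes on any input. The sketch below is only a toy illustration of that starting point, using a stand-in model rather than any lab’s actual tooling.

```python
# Minimal sketch: capture every layer's activations with forward hooks.
# The tiny Sequential model is a stand-in for a real generative model;
# interpretability research applies the same idea at a vastly larger scale.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 8),
)

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # record what this layer produced
    return hook

for name, module in model.named_modules():
    if name:  # skip the unnamed top-level container
        module.register_forward_hook(make_hook(name))

_ = model(torch.randn(1, 16))

for name, act in activations.items():
    print(name, tuple(act.shape))  # every digital 'neuron' is inspectable
```

Reading activations off is the easy part; the open research problem is explaining what they mean.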

Companies like Goodfire are developing tools to map AI reasoning steps and correct errors, helping prevent harmful use or deception. Boston University professor Mark Crovella says interest is surging due to the practical and intellectual appeal of interpreting AI’s inner logic.

Researchers believe the ability to reliably detect biases or intentions within AI models could be achieved within a few years.

This transparency could open the door to AI applications in critical fields like security, and give firms a major competitive edge. Understanding how these systems work is increasingly seen as vital for global tech leadership and public safety.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elton John threatens legal fight over AI use

Sir Elton John has lashed out at the UK government over plans that could allow AI companies to use copyrighted content without paying artists, calling ministers ‘absolute losers’ and accusing them of ‘thievery on a high scale.’

He warned that younger musicians, without the means to challenge tech giants, would be most at risk if the proposed changes go ahead.

The row centres on a House of Lords amendment to the Data Bill that would have required AI firms to disclose what material they use.

Despite a strong majority in favour in the Lords, the Commons blocked the move, meaning the bill will keep bouncing between the two chambers until a compromise is reached.

Sir Elton, joined by playwright James Graham, said the government was failing to defend creators and seemed more interested in appeasing powerful tech firms.

More than 400 artists, including Sir Paul McCartney, have signed a letter urging Prime Minister Sir Keir Starmer to strengthen copyright protections instead of allowing AI to mine their work unchecked.

While the government insists no changes will be made unless they benefit creators, critics say the current approach risks sacrificing the UK’s music industry for Silicon Valley’s gain.

Sir Elton has threatened legal action if the plans go ahead, saying, ‘We’ll fight it all the way.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US bans nonconsensual explicit deepfakes nationwide

The US is introducing a landmark federal law aimed at curbing the spread of non-consensual explicit deepfake images, following mounting public outrage.

President Donald Trump is expected to sign the Take It Down Act, which will criminalise the sharing of explicit images, whether real or AI-generated, without consent. The law will also require tech platforms to remove such content within 48 hours of notification, instead of leaving the matter to patchy state laws.

The legislation is one of the first at the federal level to directly tackle the misuse of AI-generated content. It builds on earlier laws that protected children but had left adults vulnerable due to inconsistent state regulations.

The bill received rare bipartisan support in Congress and was backed by over 100 organisations, including tech giants like Meta, TikTok and Google. First Lady Melania Trump also supported the act, hosting a teenage victim of deepfake harassment during the president’s address to Congress.

The act was prompted in part by incidents like that of Elliston Berry, a Texas high school student targeted by a classmate who used AI to alter her social media image into a nude photo. Similar cases involving teen girls across the country highlighted the urgency for action.

Tech companies had already started offering tools to remove explicit images, but the lack of consistent enforcement allowed harmful content to persist on less cooperative platforms.

Supporters of the law argue it sends a strong societal message instead of allowing the exploitation to continue unchallenged.

Advocates like Imran Ahmed and Ilana Beller emphasised that while no law is a perfect solution, this one forces platforms to take real responsibility and offers victims some much-needed protection and peace of mind.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake voice scams target US officials in phishing surge

Hackers are using deepfake voice and video technology to impersonate senior US government officials and high-profile tech figures in sophisticated phishing campaigns designed to steal sensitive data, the FBI has warned.

Since April, cybercriminals have been contacting current and former federal and state officials through fake voice messages and text messages claiming to be from trusted sources.

The scammers attempt to establish rapport and then direct victims to malicious websites to extract passwords and other private information.

The FBI cautions that if hackers compromise one official’s account, they may use that access to impersonate them further and target others in their network.

The agency urges individuals to verify identities, avoid unsolicited links, and enable multifactor authentication to protect sensitive accounts.

Separately, Polygon co-founder Sandeep Nailwal reported a deepfake scam in which bad actors impersonated him and colleagues via Zoom, urging crypto users to install malicious scripts. He described the attack as ‘horrifying’ and noted the difficulty of reporting such incidents to platforms like Telegram.

The FBI and cybersecurity experts recommend examining media for visual inconsistencies, avoiding software downloads during unverified calls, and never sharing credentials or wallet access unless certain of the source’s legitimacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU to propose new rules and app to protect children online

The European Commission is taking significant steps to create a safer online environment for children by introducing draft guidelines under the Digital Services Act. These guidelines aim to ensure that online platforms accessible to minors maintain a high level of privacy, safety, and security.

The draft guidelines propose several key measures to safeguard minors online. These include verifying users’ ages to restrict access where appropriate, improving content recommendation systems to reduce children’s exposure to harmful or inappropriate material, and setting children’s accounts to private by default.

Additionally, the guidelines recommend best practices for child-safe content moderation, as well as providing child-friendly reporting channels and user support. They also offer guidance on how platforms should govern themselves internally to maintain a child-safe environment.

These guidelines will apply to all online platforms that minors can access, except for very small enterprises, and will also cover very large platforms with over 45 million monthly users in the EU. The European Commission has involved a wide range of stakeholders in developing the guidelines, including Better Internet for Kids (BIK+) Youth ambassadors, children, parents, guardians, national authorities, online platform providers, and experts.

The inclusive consultation process helps ensure the guidelines are practical and comprehensive. The guidelines are open for feedback until June 10, 2025, with adoption expected by summer.

Meanwhile, the Commission is creating an open-source age-verification app to confirm users’ age without risking privacy, as a temporary measure before the EU Digital Identity Wallet launches in 2026.
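One way such an app could reconcile verification with privacy, sketched below purely as an illustration of the general idea and not as the Commission’s actual design, is attribute attestation: a trusted issuer signs a bare ‘over 18’ claim, and a platform verifies the signature without ever seeing a name or birth date.

```python
# Hedged sketch of attribute attestation (illustrative only, not the EU
# app's design): the issuer signs just the claim that matters, so the
# verifying platform learns the holder's age status but no identity data.
# Production systems add freshness, anti-replay, and unlinkability on top.

import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side (e.g. a national eID authority)
issuer_key = Ed25519PrivateKey.generate()
claim = json.dumps({"over_18": True, "nonce": "8f3a1c"}).encode()  # no name, no birth date
signature = issuer_key.sign(claim)

# Platform side: verify the attestation and learn only the age claim
issuer_public = issuer_key.public_key()
issuer_public.verify(signature, claim)  # raises InvalidSignature if tampered with
print(json.loads(claim))                # {'over_18': True, 'nonce': '8f3a1c'}
```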

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Valve denies Steam data breach

Valve has confirmed that a cache of old Steam two-factor authentication codes and phone numbers, recently circulated by a hacker known as ‘Machine1337’, is indeed real, but insists it did not suffer a data breach.

Rather than a fault in its own systems, Valve said the leak involves outdated SMS messages, which are typically sent unencrypted and routed through multiple providers. The codes, each valid for only 15 minutes and long since expired, were not linked to specific Steam accounts, passwords, or payment information.

The leaked data sparked early speculation that third-party messaging provider Twilio was the source of the breach, especially after its name appeared in the dataset. However, both Valve and Twilio denied any direct involvement, with Valve stating that it does not even use Twilio’s services.

The true origin of the breach remains uncertain, and Valve acknowledged that tracing it may be difficult, as SMS messages often pass through several intermediaries before reaching users.

While the leaked information may not immediately endanger Steam accounts, Valve advised users to remain cautious. Phone numbers, when combined with other data, could still be used for phishing attacks.

Instead of relying on SMS for security, users are encouraged to activate the Steam Mobile Authenticator, which offers a more secure alternative for account verification.
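The security difference is structural: an authenticator app derives each code locally from a shared secret and the current time, so nothing secret ever crosses the carrier network the way an SMS does. Steam’s authenticator uses its own variant with five-character alphanumeric codes; the sketch below shows the generic RFC 6238 TOTP scheme it resembles, with an illustrative base32 secret.

```python
# Minimal RFC 6238 TOTP sketch (the generic scheme, not Steam's exact
# variant): the six-digit code is computed from the shared secret and the
# clock, so no one-time code is ever transmitted over SMS.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period             # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # illustrative shared secret, base32-encoded
```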

Despite the uncertainty surrounding the source of the breach, Valve reassured users there’s no need to change passwords or phone numbers. Still, it urged vigilance, recommending that users routinely review their security settings and remain wary of any unsolicited account notifications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers use fake PayPal email to seize bank access

A man from Virginia fell victim to a sophisticated PayPal scam that allowed hackers to gain remote control of his computer and access his bank accounts.

After receiving a fake email about a laptop purchase, he called the number listed in the message, believing it to be legitimate. The person on the other end instructed him to enter a code into his browser, which, unknown to him, installed a program giving the scammer full access to his system.

Files were scanned and money was moved between his accounts, all while he was urged to stay on the line and go to his bank without telling anyone.

The scam, known as a remote access attack, starts with a convincing email that appears to come from a trusted source. The promised fix is a pretext: the real aim is to deceive victims into granting hackers full control of their machines.

Once inside, scammers can steal personal data, access bank accounts, and install malware that remains even after the immediate threat ends. These attacks often unfold in minutes, using fear and urgency to manipulate targets into acting quickly and irrationally.

Quick action helped limit the damage in this case. The victim shut down his computer, contacted his bank and changed his passwords—steps that likely prevented more extensive losses. However, many people aren’t as fortunate.

Experts warn that scammers increasingly rely on psychological tricks instead of just technical ones, isolating their victims and urging secrecy during the attack.

To avoid falling for similar scams, it’s safer to verify emails by using official websites instead of clicking any embedded links or calling suspicious numbers.

Remote access should never be granted in response to an unsolicited support call, and all devices should have up-to-date antivirus protection and multifactor authentication enabled. Online safety now depends as much on caution and awareness as it does on technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta targets critics as FTC case unfolds

Long-standing friction between Big Tech and the media resurfaced during Meta’s antitrust trial with the Federal Trade Commission this week. In a heated courtroom exchange, Meta’s legal team used critical commentary from prominent tech journalists to cast doubt on the FTC’s case.

Meta’s lead attorney, Mark Hansen, questioned the credibility of FTC expert Scott Hemphill by referencing a 2019 antitrust pitch Hemphill co-authored with Facebook co-founder Chris Hughes and former White House advisor Tim Wu.

The presentation cited public statements from reporters Kara Swisher and Om Malik as evidence of Meta’s dominance and aggressive acquisitions.

Hansen dismissed Malik as a ‘failed blogger’ with a personal bias and accused Swisher of similar hostility, projecting a headline in which she described Mark Zuckerberg as a ‘small little creature with a shriveled soul.’

He also attempted to discredit a cited New York Post article by invoking the tabloid’s notorious ‘Headless Body in Topless Bar’ cover.

These moments highlight Meta’s growing resentment toward the press, which has intensified alongside rising criticism of its business practices. Once seen as scrappy disruptors, Facebook and other tech giants now face regular scrutiny—and appear eager to push back.

Swisher and Malik have both openly criticised Meta in the past. Swisher famously challenged Zuckerberg over content moderation and political speech, while Malik has questioned the company’s global expansion strategies.

Their inclusion in a legal document presented in court underscores how media commentary is influencing regulatory narratives. Meta has previously blamed critical press for damaging user sentiment in the wake of scandals like Cambridge Analytica.

The FTC argues that consistent engagement levels despite bad press prove Meta’s monopoly power—users feel they have no real alternatives to Facebook and Instagram. As the trial continues, so too does Meta’s public battle—not just with regulators, but with the journalists documenting its rise and reckoning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok unveils AI video feature

TikTok has launched ‘AI Alive,’ its first image-to-video feature that allows users to transform static photos into animated short videos within TikTok Stories.

Accessible only through the Story Camera, the tool applies AI-driven movement and effects—like shifting skies, drifting clouds, or expressive animations—to bring photos to life.

Unlike text-to-image tools found on Instagram and Snapchat, TikTok’s latest feature takes visual storytelling further by enabling full video generation from single images. Although Snapchat plans to introduce a similar function, TikTok has moved ahead with this innovation.

All AI Alive videos will carry an AI-generated label and include C2PA metadata to ensure transparency, even when shared beyond the platform.
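For the technically curious: in JPEG files, C2PA manifests are embedded in APP11 marker segments as JUMBF boxes. The stdlib-only sketch below is a rough heuristic for spotting such a segment; actually validating the cryptographic provenance claims requires a full C2PA implementation such as the open-source c2patool.

```python
# Rough heuristic: does a JPEG carry an embedded C2PA manifest?
# C2PA data in JPEGs travels in APP11 (0xFFEB) segments as JUMBF boxes
# whose manifest store is labelled "c2pa". This only detects the marker;
# it does NOT verify the signed provenance claims.

import sys

def has_c2pa_segment(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker == 0xDA:  # start of scan: metadata segments are over
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xEB and b"c2pa" in data[i + 4:i + 2 + length]:
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa_segment(sys.argv[1]))
```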

TikTok emphasises safety, noting that every AI Alive video undergoes several moderation checks before it appears to creators.

Uploaded photos, prompts, and generated videos are reviewed to prevent rule-breaking content. Users can report violations, and final safety reviews are conducted before public sharing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!