US bans nonconsensual explicit deepfakes nationwide

The US is introducing a landmark federal law aimed at curbing the spread of non-consensual explicit deepfake images, following mounting public outrage.

President Donald Trump is expected to sign the Take It Down Act, which will criminalise the sharing of explicit images, whether real or AI-generated, without consent. The law will also require tech platforms to remove such content within 48 hours of notification, instead of leaving the matter to patchy state laws.

The legislation is one of the first at the federal level to directly tackle the misuse of AI-generated content. It builds on earlier laws that protected children but had left adults vulnerable due to inconsistent state regulations.

The bill received rare bipartisan support in Congress and was backed by over 100 organisations, including tech giants like Meta, TikTok and Google. First Lady Melania Trump also supported the act, hosting a teenage victim of deepfake harassment during the president’s address to Congress.

The act was prompted in part by incidents like that of Elliston Berry, a Texas high school student targeted by a classmate who used AI to alter her social media image into a nude photo. Similar cases involving teen girls across the country highlighted the urgency for action.

Tech companies had already started offering tools to remove explicit images, but the lack of consistent enforcement allowed harmful content to persist on less cooperative platforms.

Supporters of the law argue it sends a strong societal message instead of allowing the exploitation to continue unchallenged.

Advocates like Imran Ahmed and Ilana Beller emphasised that while no law is a perfect solution, this one forces platforms to take real responsibility and offers victims some much-needed protection and peace of mind.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Coinbase hit by breach and SEC probe ahead of S&P 500 entry

Cryptocurrency exchange Coinbase has disclosed a potential financial impact of $180 million to $400 million following a cyberattack that compromised customer data, according to a regulatory filing on Thursday.

The company said it received an email from an unidentified threat actor on Sunday, claiming to possess internal documents and account data for a limited number of customers.

Although hackers gained access to personal information such as names, addresses, and email addresses, Coinbase confirmed that no login credentials or passwords were compromised.

Coinbase stated it would reimburse users who were deceived into transferring funds to the attackers. It also revealed that multiple contractors and support staff outside the US had provided information to the hackers. Those involved have been terminated, the company said.

In parallel, the US Securities and Exchange Commission (SEC) is reportedly investigating whether Coinbase previously misrepresented its verified user figures.

Two sources familiar with the matter told Reuters that the SEC inquiry is ongoing, though it does not focus on know-your-customer (KYC) compliance or Bank Secrecy Act obligations. Coinbase has denied any such investigation into its compliance practices.

The SEC declined to comment. Coinbase’s chief legal officer, Paul Grewal, characterised the probe as a continuation of a past investigation into a user metric the company stopped reporting over two years ago. He said Coinbase is cooperating with the SEC but believes the inquiry should be closed.

The news comes ahead of Coinbase’s addition to the S&P 500 index, potentially overshadowing what had been viewed as a major milestone for the industry. Shares fell 7.2% following the disclosure.

Coinbase has rejected a $20 million ransom demand from the attackers and is cooperating with law enforcement. It has also offered a $20 million reward for information leading to the identification of the hackers.

The firm is opening a new US-based support hub and taking further measures to strengthen its cybersecurity framework.

The cyberattack adds to broader concerns about the vulnerability of digital asset platforms. Hacks resulted in over $2.2 billion in stolen funds in 2024, according to Chainalysis, and Bybit alone reported a $1.5 billion theft in February 2025, the largest on record.

Coinbase is also facing a lawsuit filed in the Southern District of New York, alleging the company failed to protect personal data belonging to millions of current and former customers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok AI glitch reignites debate on trust and safety in AI tools

Elon Musk’s AI chatbot, Grok, has caused a stir by injecting unsolicited claims about ‘white genocide’ in South Africa into unrelated user queries. These remarks, widely regarded as part of a debunked conspiracy theory, appeared across various innocuous prompts before being quickly removed.

The strange behaviour led to speculation that Grok’s system prompt had been tampered with, possibly by someone inside xAI. Although Grok briefly claimed it had been instructed to mention the topic, xAI has yet to issue a full technical explanation.

Rival AI leaders, including OpenAI’s Sam Altman, joined public criticism on X, calling the episode a concerning sign of possible editorial manipulation. While Grok’s responses returned to normal within hours, the incident reignited concerns about control and transparency in large AI models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canva merges data and storytelling

Canva has introduced Sheets, a new spreadsheet platform combining data, design, and AI to simplify and visualise analytics. Announced at the Canva Create: Uncharted event, it redefines spreadsheets by enabling users to turn raw data into charts, reports and content without leaving the Canva interface.

With built-in tools like Magic Formulas, Magic Insights, and Magic Charts, Canva Sheets supports automated analysis and visual storytelling. Users can generate dynamic charts and branded content across platforms in seconds, thanks to Canva AI and features like bulk editing and multilingual translation.

Data Connectors allow seamless integration with platforms such as Google Analytics and HubSpot, ensuring live updates across all connected visuals. The platform is designed to reduce manual tasks in recurring reports and keep teams synchronised in real time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake voice scams target US officials in phishing surge

Hackers are using deepfake voice and video technology to impersonate senior US government officials and high-profile tech figures in sophisticated phishing campaigns designed to steal sensitive data, the FBI has warned.

Since April, cybercriminals have been contacting current and former federal and state officials through fake voice messages and text messages claiming to be from trusted sources.

The scammers attempt to establish rapport and then direct victims to malicious websites to extract passwords and other private information.

The FBI cautions that if hackers compromise one official’s account, they may use that access to impersonate them further and target others in their network.

The agency urges individuals to verify identities, avoid unsolicited links, and enable multifactor authentication to protect sensitive accounts.

Separately, Polygon co-founder Sandeep Nailwal reported a deepfake scam in which bad actors impersonated him and colleagues via Zoom, urging crypto users to install malicious scripts. He described the attack as ‘horrifying’ and noted the difficulty of reporting such incidents to platforms like Telegram.

The FBI and cybersecurity experts recommend examining media for visual inconsistencies, avoiding software downloads during unverified calls, and never sharing credentials or wallet access unless certain of the source’s legitimacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU to propose new rules and app to protect children online

The European Commission is taking significant steps to create a safer online environment for children by introducing draft guidelines under the Digital Services Act. These guidelines aim to ensure that online platforms accessible to minors maintain a high level of privacy, safety, and security.

The draft guidelines propose several key measures to safeguard minors online. These include verifying users’ ages to restrict access where appropriate, improving content recommendation systems to reduce children’s exposure to harmful or inappropriate material, and setting children’s accounts to private by default.

Additionally, the guidelines recommend best practices for child-safe content moderation, as well as providing child-friendly reporting channels and user support. They also offer guidance on how platforms should govern themselves internally to maintain a child-safe environment.

These guidelines will apply to all online platforms that minors can access, except for very small enterprises, and will also cover very large platforms with over 45 million monthly users in the EU. The European Commission has involved a wide range of stakeholders in developing the guidelines, including Better Internet for Kids (BIK+) Youth ambassadors, children, parents, guardians, national authorities, online platform providers, and experts.

The inclusive consultation process helps ensure the guidelines are practical and comprehensive. The guidelines are open for feedback until June 10, 2025, with adoption expected by summer.

Meanwhile, the Commission is creating an open-source age-verification app to confirm users’ age without risking privacy, as a temporary measure before the EU Digital Identity Wallet launches in 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japan approves preemptive cyberdefence law

Japan’s parliament has passed a new law enabling active cyberdefence measures, allowing authorities to legally monitor communications data during peacetime and neutralise foreign servers if cyberattacks occur.

Instead of reacting only after incidents, this law lets the government take preventive steps to counter threats before they escalate.

Operators of vital infrastructure, such as electricity and railway companies, must now report cyber breaches directly to the government. The shift follows recent cyber incidents targeting banks and an airline, prompting Japan to put a full framework in place by 2027.

Although the law permits monitoring of IP addresses in communications crossing Japanese borders, it explicitly bans surveillance of domestic messages and their contents.

A new independent panel will authorise all monitoring and response actions beforehand, instead of leaving decisions solely to security agencies.

Police will handle initial countermeasures, while the Self-Defense Forces will act only when attacks are highly complex or planned. The law, revised to address opposition concerns, includes safeguards to ensure personal rights are protected and that government surveillance remains accountable.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FBI warns against AI-powered text scams

The FBI has issued a fresh warning urging the public not to trust unsolicited texts or voice messages, even if they appear to come from senior officials. A new wave of AI-powered attacks is reportedly so convincing that traditional signs of fraud are almost impossible to spot.

These campaigns involve voice and text messages crafted with AI, mimicking the voices of known individuals and spoofing phone numbers of trusted contacts or organisations. US victims are lured into clicking malicious links, often under the impression that the messages are urgent or official.

The FBI advises users to verify all communications independently, avoid clicking links or downloading attachments from unknown sources, and listen for unnatural speech patterns or visual anomalies in videos and images.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok adds AI tool to animate photos with realistic effects

TikTok has launched a new feature called AI Alive, allowing users to turn still images into dynamic, short videos. Instead of needing advanced editing skills, creators can now use AI to generate movement and effects with a few taps.

By accessing the Story Camera and selecting a static photo, users can simply type how they want the image to change — such as making the subject smile, dance, or tilt forward. AI Alive then animates the photo, using creative effects to produce a more engaging story.

TikTok says its moderation systems review the original image, the AI prompt, and the final video before it’s shown to the user. A second check occurs before a post is shared publicly, and every video made with AI Alive will include an ‘AI-generated’ label and C2PA metadata to ensure transparency.

The feature stands out as one of the first built-in AI image-to-video tools on a major platform. Snapchat and Instagram already offer AI image generation from text, and Snapchat is reportedly developing a similar image-to-video feature.

Meanwhile, TikTok is also said to be working on adding support for sending photos and voice messages via direct message — something rival apps have long supported.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NatWest hit by 100 million cyber attacks every month

NatWest is defending itself against an average of 100 million cyber attacks each month, according to the bank’s head of cybersecurity.

Speaking to Holyrood’s Criminal Justice Committee, Chris Ulliott outlined the ‘staggering’ scale of digital threats targeting the bank’s systems. Around a third of all incoming emails are blocked before reaching staff, as they are suspected to be the start of an attack.

Instead of relying on basic filters, NatWest analyses every email for malicious content and has a cybersecurity team of hundreds, supported by a multi-million-pound budget.

Mr Ulliott also warned that cybercriminals are increasingly using AI to make scams more convincing, for example by altering their appearance during video calls to build trust with victims.

Police Scotland reported that cybercrime has more than doubled since 2020, with recorded incidents rising from 7,710 in 2020 to 18,280 in 2024. Officials highlighted the threat posed by groups like Scattered Spider, believed to consist of young hackers sharing techniques online.

MSP Rona Mackay called the figures ‘absolutely staggering,’ while Ben Macpherson said he had even been impersonated by fraudsters.

Law enforcement agencies, including the FBI, are now working together to tackle online crime. Meanwhile, Age Scotland warned that many older people lack confidence online, making them especially vulnerable to scams that can lead to financial ruin and emotional distress.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!