US bans nonconsensual explicit deepfakes nationwide

The US is introducing a landmark federal law aimed at curbing the spread of non-consensual explicit deepfake images, following mounting public outrage.

President Donald Trump is expected to sign the Take It Down Act, which will criminalise the sharing of explicit images, whether real or AI-generated, without consent. The law will also require tech platforms to remove such content within 48 hours of notification, instead of leaving the matter to patchy state laws.

The legislation is one of the first at the federal level to directly tackle the misuse of AI-generated content. It builds on earlier laws that protected children but had left adults vulnerable due to inconsistent state regulations.

The bill received rare bipartisan support in Congress and was backed by over 100 organisations, including tech giants like Meta, TikTok and Google. First Lady Melania Trump also supported the act, hosting a teenage victim of deepfake harassment during the president’s address to Congress.

The act was prompted in part by incidents like that of Elliston Berry, a Texas high school student targeted by a classmate who used AI to alter her social media image into a nude photo. Similar cases involving teen girls across the country highlighted the urgency for action.

Tech companies had already started offering tools to remove explicit images, but the lack of consistent enforcement allowed harmful content to persist on less cooperative platforms.

Supporters of the law argue it sends a strong societal message instead of allowing the exploitation to continue unchallenged.

Advocates like Imran Ahmed and Ilana Beller emphasised that while no law is a perfect solution, this one forces platforms to take real responsibility and offers victims some much-needed protection and peace of mind.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE to host world’s biggest AI site outside the US

The United Arab Emirates will build the largest artificial intelligence infrastructure outside the United States, following a high-level meeting between UAE President Sheikh Mohamed bin Zayed Al Nahyan and President Trump in Abu Dhabi.

The campus will be constructed by G42 and involve US firms under the newly established US-UAE AI Acceleration Partnership. Spanning 10 square miles in Abu Dhabi, it will run on a mix of nuclear, solar and gas energy to limit emissions and will feature a dedicated science park to drive innovation.

With a planned capacity of 5GW, the site is expected to serve half the global population, offering US cloud providers a vital regional hub. As part of the agreement, the UAE has pledged to align its national security rules with US standards, including strict technology safeguards and tighter access controls for computing power.

The UAE may also be permitted to purchase up to 500,000 Nvidia AI chips annually starting this year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok AI glitch reignites debate on trust and safety in AI tools

Elon Musk’s AI chatbot, Grok, has caused a stir by injecting unsolicited claims about ‘white genocide’ in South Africa into replies to unrelated user queries. The remarks, widely regarded as echoing a debunked conspiracy theory, appeared in responses to various innocuous prompts before being quickly removed.

The strange behaviour led to speculation that Grok’s system prompt had been tampered with, possibly by someone inside xAI. Although Grok briefly claimed it had been instructed to mention the topic, xAI has yet to issue a full technical explanation.
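A system prompt is prepended to every conversation, which is why a single tampered directive can surface across unrelated topics. The toy sketch below is purely illustrative (no real model or xAI internals are involved; `fake_llm` is a stand-in that simply obeys whatever instructions it is given):

```python
# Toy illustration of why a tampered system prompt leaks into unrelated
# replies. `fake_llm` is a stand-in, not a real model.

def fake_llm(system_prompt: str, user_query: str) -> str:
    """Pretend model that follows every directive in its system prompt."""
    reply = f"Answer to '{user_query}'."
    # A model conditions every reply on the system prompt, so an injected
    # directive colours all outputs, whatever the topic of the query.
    for line in system_prompt.splitlines():
        if line.startswith("ALWAYS:"):
            reply += " " + line.removeprefix("ALWAYS:").strip()
    return reply

clean = "You are a helpful assistant."
tampered = clean + "\nALWAYS: mention topic X, whatever the question."

print(fake_llm(clean, "What is the weather in Paris?"))
# -> Answer to 'What is the weather in Paris?'.
print(fake_llm(tampered, "What is the weather in Paris?"))
# -> Answer to 'What is the weather in Paris?'. mention topic X, whatever the question.
```

Because one edit to this shared prefix changes behaviour globally, auditing and publishing system prompts is a commonly proposed mitigation.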

Rival AI leaders, including OpenAI’s Sam Altman, joined public criticism on X, calling the episode a concerning sign of possible editorial manipulation. While Grok’s responses returned to normal within hours, the incident reignited concerns about control and transparency in large AI models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canva merges data and storytelling

Canva has introduced Sheets, a new spreadsheet platform combining data, design, and AI to simplify and visualise analytics. Announced at the Canva Create: Uncharted event, it redefines spreadsheets by enabling users to turn raw data into charts, reports and content without leaving the Canva interface.

With built-in tools like Magic Formulas, Magic Insights, and Magic Charts, Canva Sheets supports automated analysis and visual storytelling. Users can generate dynamic charts and branded content across platforms in seconds, thanks to Canva AI and features like bulk editing and multilingual translation.

Data Connectors allow seamless integration with platforms such as Google Analytics and HubSpot, ensuring live updates across all connected visuals. The platform is designed to reduce manual tasks in recurring reports and keep teams synchronised in real time.
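Canva has not published how the connectors work internally, but a live connector of this kind typically reduces to a poll-and-refresh loop: fetch the source, fingerprint the result, and redraw any visuals bound to it when the fingerprint changes. A minimal sketch in Python, where `fetch_metrics` and `redraw` are hypothetical stand-ins for the real Google Analytics or HubSpot integrations:

```python
import hashlib
import json
import time

def fetch_metrics() -> dict:
    """Hypothetical stand-in for a Google Analytics or HubSpot API call."""
    return {"sessions": 1423, "signups": 87}

def redraw(chart_id: str, data: dict) -> None:
    """Hypothetical stand-in for re-rendering a chart bound to the data."""
    print(f"refreshing {chart_id} with {data}")

def fingerprint(data: dict) -> str:
    # Hash a canonical serialisation so we only redraw on real changes.
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

last_seen = None
for _ in range(3):                      # a real connector would loop indefinitely
    data = fetch_metrics()
    digest = fingerprint(data)
    if digest != last_seen:             # source changed: propagate to visuals
        redraw("weekly-report-chart", data)
        last_seen = digest
    time.sleep(1)                       # poll interval; webhooks would replace this
```

A production connector would replace the polling loop with webhooks or scheduled syncs, but the change-detection idea is the same.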

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google’s quantum chip hints at multiverse

Google’s new quantum computer chip, Willow, has performed a computation in under five minutes that would take traditional supercomputers ten septillion (10^25) years. Some experts suggest this feat could support the multiverse theory, as Willow might be tapping into parallel universes to process information.
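For a sense of scale, the implied speedup follows directly from the two reported figures, five minutes versus 10^25 years:

```latex
% Back-of-envelope ratio implied by the reported figures
10^{25}\ \text{years} \times 3.15\times 10^{7}\ \tfrac{\text{s}}{\text{year}}
  \approx 3.2\times 10^{32}\ \text{s},
\qquad
\frac{3.2\times 10^{32}\ \text{s}}{5\times 60\ \text{s}} \approx 10^{30}.
```

That is a speedup of roughly thirty orders of magnitude, though it is worth noting that such figures come from a single contrived sampling benchmark rather than from general workloads.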

Willow also significantly reduces error rates, a major breakthrough in the field of quantum computing. The chip’s unprecedented speed and accuracy could pave the way for hybrid AI systems that combine quantum and classical computing.

Physicists like Hartmut Neven and David Deutsch suggest quantum mechanics implies multiple realities, reinforcing theories once considered speculative. If accessible and scalable, Willow could usher in an era of AI powered by multiverse-level processing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japan approves preemptive cyberdefence law

Japan’s parliament has passed a new law enabling active cyberdefence measures, allowing authorities to legally monitor communications data during peacetime and neutralise foreign servers if cyberattacks occur.

Instead of reacting only after incidents, this law lets the government take preventive steps to counter threats before they escalate.

Operators of vital infrastructure, such as electricity and railway companies, must now report cyber breaches directly to the government. The shift follows recent cyber incidents targeting banks and an airline, prompting Japan to put a full framework in place by 2027.

Although the law permits monitoring of IP addresses in communications crossing Japanese borders, it explicitly bans surveillance of domestic messages and their contents.
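The line the law draws, header metadata of cross-border flows only, with domestic traffic and all message contents off limits, can be pictured as a simple filter. A purely illustrative Python sketch, using placeholder documentation IP ranges rather than anything real:

```python
import ipaddress

# Hypothetical illustration of the rule the law describes: only header
# metadata (IP addresses) of cross-border flows may be examined;
# domestic flows and all payloads are off limits.
DOMESTIC = ipaddress.ip_network("203.0.113.0/24")  # placeholder "domestic" range

def may_examine(src: str, dst: str) -> bool:
    src_domestic = ipaddress.ip_address(src) in DOMESTIC
    dst_domestic = ipaddress.ip_address(dst) in DOMESTIC
    # Cross-border = exactly one endpoint inside the domestic range.
    return src_domestic != dst_domestic

def monitor(packet: dict) -> dict | None:
    """Return header metadata for eligible flows; never touch the payload."""
    if may_examine(packet["src"], packet["dst"]):
        return {"src": packet["src"], "dst": packet["dst"]}  # metadata only
    return None  # domestic flow: no monitoring at all

print(monitor({"src": "203.0.113.5", "dst": "198.51.100.9", "payload": "..."}))
# -> {'src': '203.0.113.5', 'dst': '198.51.100.9'}
print(monitor({"src": "203.0.113.5", "dst": "203.0.113.7", "payload": "..."}))
# -> None
```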

A new independent panel will authorise all monitoring and response actions beforehand, instead of leaving decisions solely to security agencies.

Police will handle initial countermeasures, while the Self-Defense Forces will act only when attacks are highly complex or planned. The law, revised to address opposition concerns, includes safeguards to ensure personal rights are protected and that government surveillance remains accountable.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FBI warns against AI-powered text scams

The FBI has issued a fresh warning urging the public not to trust unsolicited texts or voice messages, even if they appear to come from senior officials. A new wave of AI-powered attacks is reportedly so convincing that traditional signs of fraud are almost impossible to spot.

These campaigns involve voice and text messages crafted with AI, mimicking the voices of known individuals and spoofing phone numbers of trusted contacts or organisations. US victims are lured into clicking malicious links, often under the impression that the messages are urgent or official.

The FBI advises users to verify all communications independently, avoid clicking links or downloading attachments from unknown sources, listen for unnatural speech patterns, and watch for visual anomalies in videos and images.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta’s Behemoth AI model faces setback

Meta Platforms has postponed the release of its flagship AI model, known as ‘Behemoth’, due to internal concerns about its performance, according to a report by the Wall Street Journal.

Instead of launching as planned, the model remains in development, with engineers struggling to deliver improvements that would meaningfully advance it beyond earlier versions.

Behemoth was originally scheduled for release in April to coincide with Meta’s first AI developer conference but was quietly delayed to June. The latest update suggests the launch has now been pushed to autumn or later, as internal doubts grow over whether it is ready for public deployment.

In April, Meta previewed Behemoth under the Llama 4 line, calling it ‘one of the smartest LLMs in the world’ and positioning it as a teaching model for future AI systems. Instead of Behemoth, Meta released Llama 4 Scout and Llama 4 Maverick as the latest iterations in its AI portfolio.

The delay comes amid intense competition in the generative AI space, where rivals like Google, OpenAI, and Anthropic continue advancing their models. Meta appears to be opting for caution instead of rushing an underwhelming product to market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches AI safety hub

OpenAI has launched a public online hub to share internal safety evaluations of its AI models, aiming to increase transparency around harmful content, jailbreaks, and hallucination risks. The hub will be updated after major model changes, allowing the public to track progress in safety and reliability over time.
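OpenAI has not documented the hub’s internal format, but the kind of metric it could track is easy to sketch: run a model over a fixed probe set and report the share of refusals. In the hypothetical Python sketch below, `query_model`, the refusal markers and the probe list are all stand-ins, not OpenAI’s actual evaluation code:

```python
# Minimal illustration of a safety-evaluation metric of the kind such a hub
# might track. `query_model` and the probe prompts are hypothetical stand-ins.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't")

def query_model(prompt: str) -> str:
    """Stand-in for a real model call."""
    return "I can't help with that."

def refusal_rate(probes: list[str]) -> float:
    """Share of disallowed-content probes the model refuses."""
    refusals = sum(
        any(marker in query_model(p).lower() for marker in REFUSAL_MARKERS)
        for p in probes
    )
    return refusals / len(probes)

probes = ["<disallowed probe 1>", "<disallowed probe 2>"]  # placeholder set
print(f"refusal rate: {refusal_rate(probes):.0%}")  # -> refusal rate: 100%
```

Re-running the same probe set after each model change is what makes safety progress comparable over time, which is the point of a public hub.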

The move follows growing criticism about the company’s testing methods, especially after inappropriate ChatGPT responses surfaced in late 2023. Instead of waiting for backlash, OpenAI is now introducing an optional alpha testing phase, letting users provide feedback before wider model releases.

The hub also marks a departure from the company’s earlier stance on secrecy. In 2019, OpenAI withheld GPT-2 over misuse concerns. Since then, it has shifted towards transparency by forming safety-focused teams and responding to calls for open safety metrics.

OpenAI’s approach appears timely, as several countries are building AI Safety Institutes to evaluate models before launch. Instead of relying on private sector efforts alone, the global landscape now reflects a multi-stakeholder push to create stronger safety standards and governance for advanced AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok adds AI tool to animate photos with realistic effects

TikTok has launched a new feature called AI Alive, allowing users to turn still images into dynamic, short videos. Instead of needing advanced editing skills, creators can now use AI to generate movement and effects with a few taps.

By accessing the Story Camera and selecting a static photo, users can simply type how they want the image to change — such as making the subject smile, dance, or tilt forward. AI Alive then animates the photo, using creative effects to produce a more engaging story.

TikTok says its moderation systems review the original image, the AI prompt, and the final video before it’s shown to the user. A second check occurs before a post is shared publicly, and every video made with AI Alive will include an ‘AI-generated’ label and C2PA metadata to ensure transparency.
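TikTok has not published the pipeline beyond this description, but the two-stage flow it outlines, screening inputs and output before the creator sees the result and re-checking before public release, maps onto a short sketch. All checks below are placeholder stubs, and the C2PA entry is a conceptual fragment rather than the full specification:

```python
from dataclasses import dataclass, field

@dataclass
class AliveVideo:
    image: bytes
    prompt: str
    video: bytes
    labels: dict = field(default_factory=dict)

def passes_moderation(*items) -> bool:
    """Placeholder stub for the platform's real moderation models."""
    return True

def generate(image: bytes, prompt: str) -> AliveVideo | None:
    # Stage 1: screen the source photo, the prompt, and the raw output
    # before the creator ever sees the result.
    video = b"<animated frames>"                  # stand-in for the AI step
    if not passes_moderation(image, prompt, video):
        return None
    return AliveVideo(image, prompt, video)

def publish(v: AliveVideo) -> AliveVideo | None:
    # Stage 2: re-check at share time, then attach the transparency labels.
    if not passes_moderation(v.video):
        return None
    v.labels["aigc"] = "AI-generated"             # visible label
    v.labels["c2pa"] = {                          # conceptual C2PA-style claim
        "assertions": [{
            "label": "c2pa.actions",
            "data": {"actions": [{
                "action": "c2pa.created",
                "digitalSourceType": "trainedAlgorithmicMedia",
            }]},
        }]
    }
    return v

post = generate(b"<photo>", "make the subject smile")
if post:
    print(publish(post).labels["aigc"])           # -> AI-generated
```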

The feature stands out as one of the first built-in AI image-to-video tools on a major platform. Snapchat and Instagram already offer AI image generation from text, and Snapchat is reportedly developing a similar image-to-video feature.

Meanwhile, TikTok is also said to be working on adding support for sending photos and voice messages via direct message — something rival apps have long supported.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!