UK National Cyber Security Centre calls for strategic cybersecurity policy agenda

The United Kingdom’s National Cyber Security Centre (NCSC), part of GCHQ, has called for the adoption of a long-term, strategic policy agenda to address increasing cybersecurity risks. That appeal follows prolonged delays in the introduction of updated cybersecurity legislation by the UK government.

In a blog post co-authored by Ollie Whitehouse, the NCSC’s Chief Technology Officer, and Paul W., its Principal Technical Director, the agency underscored the need for greater political engagement in shaping the country’s cybersecurity landscape. Although the NCSC does not possess policymaking powers, its latest message highlights its growing concern over the UK’s limited progress in implementing comprehensive cybersecurity reforms.

Whitehouse has previously argued that the current technology market fails to incentivise the development and maintenance of secure digital products. He asserts that while the technical community knows how to build secure systems, commercial pressures and market conditions often favour speed, cost-cutting, and short-term gains over security. That, he notes, is a structural issue that cannot be resolved through voluntary best practices alone and likely requires legislative and regulatory measures.

The UK government has yet to introduce the long-anticipated Cyber Security and Resilience Bill to Parliament. Initially described by the previous government as a step toward modernising the country’s cyber legislation, the bill remains unpublished. Another delayed effort is a consultation led by the Home Office on ransomware response policy, which was postponed due to the snap election and is still awaiting an official government response.

The agency’s call mirrors similar debates in the United States, where former Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly advocated for holding software vendors accountable for product security. The Biden administration’s national cybersecurity strategy introduced early steps toward vendor liability, a concept that has gained traction among experts like Whitehouse.

However, the current US administration under President Trump has since rolled back some of these requirements, most notably through a recent executive order eliminating obligations for government contractors to attest to their products’ security.

By contrast, the European Union has advanced several legislative initiatives aimed at strengthening digital security, including the Cyber Resilience Act. Yet, these efforts face challenges of their own, such as reconciling economic priorities with cybersecurity requirements and adapting EU-wide standards to national legal systems.

In its blog post, the NCSC reiterated that the financial and societal burden of cybersecurity failures is currently borne by consumers, governments, insurers, and other downstream actors. The agency argues that addressing these issues requires a reassessment of underlying market dynamics—particularly those that do not reward secure development practices or long-term resilience.

While the NCSC lacks the authority to enforce regulations, its increasingly direct communications reflect a broader shift within parts of the UK’s cybersecurity community toward advocating for more comprehensive policy intervention.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta sues AI firm over fake nude images created without consent

Meta has filed a lawsuit against Joy Timeline HK Ltd in Hong Kong, accusing the firm of using its platforms to promote a generative AI app called CrushAI.

The app allows users to digitally strip clothes from images of people, often without consent. Meta said the company repeatedly attempted to bypass ad review systems to push harmful content, advertising phrases like ‘see anyone naked’ on Facebook and Instagram.

The lawsuit follows Meta’s broader investigation into ‘nudify’ apps, which are increasingly being used to create sexualised deepfakes. Despite bans on nonconsensual explicit content, the company said such apps evade detection by disguising ads or rotating domain names after bans.

According to research by Cornell Tech, over 8,000 ads linked to CrushAI appeared on Meta platforms in recent months. Meta responded by updating its detection systems with a broader range of flagged terms and emojis.

While many of the manipulated images target celebrities, concerns are growing about the use of such technology to exploit minors. In one case in Florida, two teenagers used similar AI tools to create sexualised images of classmates.

The issue has sparked legal action in the US, where the Take It Down Act, signed into law earlier this year, criminalises the publication of nonconsensual deepfake imagery and simplifies removal processes for victims.

Trump highlights crypto plans at Coinbase summit

US President Donald Trump sent a prerecorded message to Coinbase’s State of Crypto Summit, reaffirming his commitment to advancing crypto regulation in the US.

The administration is working with Congress to pass the GENIUS Act, which would support dollar-backed stablecoins and establish clear market frameworks.

Congress is preparing to vote on the GENIUS Act in the Senate, while the House is moving forward with the CLARITY Act. The latter seeks to clarify the regulatory roles of the SEC and the Commodity Futures Trading Commission concerning digital assets.

Both bills form part of a broader effort to create a clear legal environment for the crypto industry.

Some Democrats oppose Trump’s crypto ties, especially the family-backed stablecoin from World Liberty Financial. Despite tensions, Trump continues promoting his crypto agenda through conferences and videos.

Meta and TikTok contest the EU’s compliance charges

Meta and TikTok have taken their fight against the EU supervisory fee to Europe’s second-highest court, arguing that the charges are disproportionate and based on flawed calculations.

The fee, introduced under the Digital Services Act (DSA), requires major online platforms to pay 0.05% of their annual global net income to cover the European Commission’s oversight costs.

Meta questioned the Commission’s methodology, claiming the levy was based on the entire group’s revenue instead of the specific EU-based subsidiary.

The company’s lawyer told judges it still lacked clarity on how the fee was calculated, describing the process as opaque and inconsistent with the spirit of the law.

TikTok also criticised the charge, alleging inaccurate and discriminatory data inflated its payment.

Its legal team argued that user numbers were double-counted when people switched between devices, and that the Commission had wrongly calculated fees based on group profits rather than platform-specific earnings.

The Commission defended its approach, saying group resources should bear the cost when consolidated accounts are used. A ruling is expected from the General Court sometime next year.

AI startup faces lawsuit from Disney and Universal

Two of Hollywood’s most powerful studios, Disney and Universal, have launched a copyright infringement lawsuit against the AI firm Midjourney, accusing it of illegally replicating iconic characters.

The studios claim the San Francisco-based company copied their creative works without permission, describing it as a ‘bottomless pit of plagiarism’.

Characters such as Darth Vader, Elsa, and the Minions were cited in the 143-page complaint, which alleges Midjourney used these images to train its AI system and generate similar content.

Disney and Universal argue that the AI firm failed to invest in the creative process, yet profited heavily from the output, reportedly earning US$300 million in paid subscriptions last year.

Despite early attempts by the studios to raise concerns and propose safeguards already adopted by other AI developers, Midjourney allegedly ignored them and pressed ahead with further product releases. The company, which calls itself a small, self-funded team of 11, has declined to comment on the lawsuit directly but insists it has a long future ahead.

Disney’s legal chief, Horacio Gutierrez, stressed the importance of protecting creative works that result from decades of investment. While supporting AI as a tool for innovation, he maintained that ‘piracy is piracy’, regardless of whether humans or machines carry it out.

The studios are seeking damages and a court order to stop the AI firm from continuing its alleged copyright violations.

Durov questions motives behind French arrest

Telegram founder Pavel Durov says he remains baffled by his detention in France, describing the incident as politically charged and unjustified. In his first interview since his August 2024 arrest, Durov said French prosecutors treated Telegram’s operations as a mystery.

Durov was indicted on six charges, including complicity in criminal activity, money laundering, and failing to respond to legal requests. He denied the accusations, stating that Telegram is audited by a top-tier accounting firm and spends millions on compliance each quarter.

‘We did nothing wrong,’ he said, accusing French authorities of failing to follow due legal process.

The interviewer, Tucker Carlson, criticised the arrest as an attempt to humiliate Durov and questioned why civil liberties advocates were silent.

In response, Durov pointed out that over nine million Telegram users have signed a letter demanding his release. He also emphasised that Telegram is prepared to leave countries that oppose its values.

Telegram’s global user base continues to grow rapidly, reaching one billion monthly active users as of March 2025.

Reddit targets AI firm over scraped sports posts

Reddit has taken legal action against AI company Anthropic, accusing it of scraping content from the platform’s sports-focused communities.

The lawsuit claims Anthropic violated Reddit’s user agreement by collecting posts without permission, particularly from fan-driven discussions that are central to how sports content is shared online.

Reddit argues the scraping undermines its obligations to over 100 million daily users, especially around privacy and user control. According to the filing, Anthropic’s actions undercut assurances that users can manage or delete their content as they see fit.

The platform emphasises that users gain no benefit from technology built using their contributions.

These online sports communities are rich sources of original fan commentary and analysis. On a large scale, such content could enable AI models to imitate sports fan behaviour with impressive accuracy.

While teams or platforms might use such models to enhance engagement or communication, Reddit warns that unauthorised use brings serious ethical and legal risks.

The case could influence how AI companies handle user-generated content across the internet, not just in sports. As web scraping grows more common, the outcome of the dispute may shape future standards for AI training practices and online content rights.

Trump Executive Order revises US cyber policy and sanctions scope

US President Donald J. Trump signed a new Executive Order (EO) aimed at amending existing federal cybersecurity policies. The EO modifies selected provisions of previous executive orders signed by former Presidents Barack Obama and Joe Biden, introducing updates to sanctions policy, digital identity initiatives, and secure technology practices.

One of the main changes involves narrowing the scope of sanctions related to malicious cyber activity. The new EO limits the applicability of such sanctions to foreign individuals or entities involved in cyberattacks against US critical infrastructure. It also states that sanctions do not apply to election-related activities, though this clarification is included in a White House fact sheet rather than the EO text itself.

The order revokes provisions from the Biden-era EO that proposed expanding the use of federal digital identity documents, including mobile driver’s licenses. According to the fact sheet, this revocation is based on concerns regarding implementation and potential for misuse. Some analysts have expressed concerns about the implications of this reversal on broader digital identity strategies.

In addition to these policy revisions, the EO outlines technical measures to strengthen cybersecurity capabilities across federal agencies. These include:

  • Developing new encryption standards to prepare for advances in quantum computing, with implementation targets set for 2030.
  • Directing the National Security Agency (NSA) and Office of Management and Budget (OMB) to issue updated federal encryption requirements.
  • Refocusing artificial intelligence (AI) and cybersecurity initiatives on identifying and mitigating vulnerabilities.
  • Assigning the National Institute of Standards and Technology (NIST) responsibility for updating and guiding secure software development practices. This includes the establishment of an industry consortium and a preliminary update to its secure software development framework.

The EO also includes provisions for improving vulnerability tracking and mitigation in AI systems, with coordination required among the Department of Defense, the Department of Homeland Security, and the Office of the Director of National Intelligence.

UK judges issue warning on unchecked AI use by lawyers

A senior UK judge has warned that lawyers may face prosecution if they continue citing fake legal cases generated by AI without verifying their accuracy.

High Court Justice Victoria Sharp called the misuse of AI a threat to justice and public trust, after lawyers in two recent cases relied on false material created by generative tools.

In one £90 million lawsuit involving Qatar National Bank, a lawyer submitted 18 cases that did not exist. The client later admitted to supplying the false information, but Justice Sharp criticised the lawyer for depending on the client’s research instead of conducting proper legal checks.

In another case, five fabricated cases were used in a housing claim against the London Borough of Haringey. The barrister denied using AI but failed to provide a clear explanation.

Both incidents have been referred to professional regulators. Sharp warned that submitting false information could amount to contempt of court or, in severe cases, perverting the course of justice — an offence that can lead to life imprisonment.

While recognising AI as a useful legal tool, Sharp stressed the need for oversight and regulation. She said AI’s risks must be managed with professional discipline if public confidence in the legal system is to be preserved.

Odyssey presents immersive AI-powered streaming

Odyssey, a startup founded by self-driving veterans Oliver Cameron and Jeff Hawke, has unveiled an AI model that allows users to interact with streaming video in real time.

The technology generates video frames every 40 milliseconds, enabling users to move through scenes like a 3D video game instead of passively watching. A demo is currently available online, though it is still in its early stages.

The system relies on a new kind of ‘world model’ that predicts future visual states based on previous actions and environments. Odyssey claims its model can maintain spatial consistency, learn motion from video, and sustain coherent video output for five minutes or more.

Unlike models trained solely on internet data, Odyssey captures real-world environments using a custom 360-degree, backpack-mounted camera to build higher-fidelity simulations.

Tech giants and AI startups are exploring world models to power next-generation simulations and interactive media. Yet creative professionals remain wary. A 2024 study commissioned by the Animation Guild predicted significant job disruptions across film and animation.

Game studios like Activision Blizzard have been scrutinised for using AI while cutting staff.

Odyssey, however, insists its goal is collaboration instead of replacement. The company is also developing software to let creators edit scenes using tools like Unreal Engine and Blender.

Backed by $27 million in funding and supported by Pixar co-founder Ed Catmull, Odyssey aims to transform video content across entertainment, education, and advertising through on-demand interactivity.
