GPAI Code of Practice creates legal uncertainty for non-signatories

Lawyers at William Fry say the EU’s final Code of Practice for general-purpose AI (GPAI) models leaves key questions unanswered. GPAI systems include models such as OpenAI’s GPT-4, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama, trained on vast datasets for broad applications.

The Code of Practice, released last week, addresses transparency, safety, security, and copyright, and is described by the European Commission as a voluntary tool. It was prepared by independent experts to help GPAI developers comply with upcoming legal obligations under the EU AI Act.

In a statement on the firm’s website, William Fry lawyers Barry Scannell and Leo Moore question how voluntary the code truly is. They note that signatories not in full compliance can still be seen as acting in good faith and will be supported rather than penalised.

A protected grace period runs until 2 August 2026, after which the AI Act could allow fines for non-compliance. The lawyers warn that this creates a two-tier system, shielding signatories while exposing non-signatories to immediate legal risk under the AI Act.

Developers who do not sign the code may face higher regulatory scrutiny, despite it being described as non-binding. William Fry also points out that detailed implementation guidelines and templates have not yet been published by the EU.

Additional guidance to clarify key GPAI concepts is expected later this month, but the current lack of detail creates uncertainty. The code’s copyright section, the lawyers argue, shows how the document has evolved into a quasi-regulatory framework.

An earlier draft required only reasonable efforts to avoid copyright-infringing sources. The final version demands the active exclusion of such sites. A proposed measure requiring developers to verify the source of copyrighted data acquired from third parties has been removed from the final draft.

The lawyers argue that this creates a practical blind spot, allowing unlawful content to slip into training data undetected. Rights holders still retain the ability to pursue action if they believe their content was misused, even if providers are signatories.

Meanwhile, the transparency chapter now outlines specific standards, rather than general principles. The safety and security section also sets enforceable expectations, increasing the operational burden on model developers.

William Fry warns that gaps between the code’s obligations and the missing technical documentation could have costly consequences. They conclude that, without the final training data template or implementation details, both developers and rights holders face compliance risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How AI-generated video is reshaping the film industry

AI-generated video has evolved at breakneck speed, moving from distorted and unconvincing clips to hyper-realistic creations that rival traditional filmmaking. What was once a blurry, awkward depiction of Will Smith eating spaghetti in 2023 is now flawlessly rendered on platforms like Google’s Veo 3.

In just months, tools such as Luma Labs’ Dream Machine, OpenAI’s Sora, and Runway AI’s Gen-4 have redefined what’s possible, drawing the attention of Hollywood studios, advertisers, and artists eager to test the limits of this new creative frontier.

Major industry players are already experimenting with AI for previsualisation, visual effects, and even entire animated films. Lionsgate and AMC Networks have partnered with Runway AI, with executives exploring AI-generated family-friendly versions of blockbuster franchises like John Wick and The Hunger Games.

The technology drastically cuts costs for complex scenes, making it possible to create elaborate previews—like a snowstorm filled with thousands of soldiers—for a fraction of the traditional price. However, while some see AI as a tool to expand creative possibilities, resistance remains strong.

Critics argue that AI threatens traditional artistic processes, raises ethical concerns over energy use and data training, and risks undermining human creativity. The debate mirrors past technological shifts in entertainment—inevitable yet disruptive.

As Runway and other pioneers push toward immersive experiences in augmented and virtual reality, the future of filmmaking may no longer be defined solely by Hollywood, but by anyone with access to these powerful tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google strengthens position as Perplexity and OpenAI launch browsers

OpenAI is reportedly preparing to launch an AI-powered web browser in the coming weeks, aiming to compete with Alphabet’s dominant Chrome browser, according to sources cited by Reuters.

The forthcoming browser seeks to leverage AI to reshape how users interact with the internet, while potentially granting OpenAI deeper access to valuable user data—a key driver behind Google’s advertising empire.

If adopted by ChatGPT’s 500 million weekly active users, the browser could pose a significant challenge to Chrome, which currently underpins much of Alphabet’s ad targeting and search traffic infrastructure.

The browser is expected to feature a native chat interface, reducing the need for users to click through traditional websites. The features align with OpenAI’s broader strategy to embed its services more seamlessly into users’ daily routines.

While the company declined to comment on the development, anonymous sources noted that the browser is likely to support AI agent capabilities, such as booking reservations or completing web forms on behalf of users.

The move comes as OpenAI faces intensifying competition from Google, Anthropic, and Perplexity.

In May, OpenAI acquired the AI hardware start-up io for $6.5 billion, in a deal linked to Apple design veteran Jony Ive. The acquisition signals a strategic push beyond software and into integrated consumer tools.

Despite Chrome’s grip on over two-thirds of the global browser market, OpenAI appears undeterred. Its browser will be built on Chromium—the open-source framework powering Chrome, Microsoft Edge, and other major browsers. Notably, OpenAI hired two former Google executives last year who had previously worked on Chrome.

The decision to build a standalone browser—rather than rely on third-party plug-ins—was reportedly driven by OpenAI’s desire for greater control over both data collection and core functionality.

The control could prove vital as regulatory scrutiny of Google’s dominance in search and advertising intensifies. The United States Department of Justice is currently pushing for Chrome’s divestiture as part of its broader antitrust actions against Alphabet.

Other players are already exploring the AI browser space. Perplexity recently launched its own AI browser, Comet, while The Browser Company and Brave have introduced AI-enhanced browsing features.

As the AI race accelerates, OpenAI’s entry into the browser market could redefine how users navigate and engage with the web—potentially transforming search, advertising, and digital privacy in the process.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US House passes NTIA cyber leadership bill after Salt Typhoon hacks

The US House of Representatives has passed legislation that would officially designate the National Telecommunications and Information Administration (NTIA) as the federal lead for cybersecurity across communications networks.

The move follows last year’s Salt Typhoon hacking spree, described by some as the worst telecom breach in US history.

The National Telecommunications and Information Administration Organization Act, introduced by Representatives Jay Obernolte and Jennifer McClellan, cleared the House on Monday and now awaits Senate approval.

The bill would rebrand an NTIA office to focus on both policy and cybersecurity, while codifying the agency’s role in coordinating cybersecurity responses alongside other federal departments.

Lawmakers argue that recent telecom attacks exposed major gaps in coordination between government and industry.

The bill promotes public-private partnerships and stronger collaboration between agencies, software developers, telecom firms, and security researchers to improve resilience and speed up innovation across communications technologies.

With Americans’ daily lives increasingly dependent on digital services, supporters say the bill provides a crucial framework for protecting sensitive information from cybercriminals and foreign hacking groups instead of relying on fragmented and inconsistent measures.

Pentagon awards AI contracts to xAI and others after Grok controversy

The US Department of Defence has awarded contracts to four major AI firms, including Elon Musk’s xAI, as part of a strategy to boost military AI capabilities.

Each contract is valued at up to $200 million and involves developing advanced AI workflows for critical national security tasks.

Alongside xAI, Anthropic, Google, and OpenAI have also secured contracts. Pentagon officials said the deals aim to integrate commercial AI solutions into intelligence, business, and defence operations instead of relying solely on internal systems.

Chief Digital and AI Officer Doug Matty states that these technologies will help maintain the US’s strategic edge over rivals.

The decision comes as Musk’s AI company faces controversy after its Grok chatbot was reported to have published offensive content on social media. Critics, including Democratic lawmakers, have raised ethical concerns about awarding national security contracts to a company under public scrutiny.

xAI insists its Grok for Movement platform will help speed up government services and scientific innovation.

Despite political tensions and Musk’s past financial support for Donald Trump’s campaign, the Pentagon has formalised its relationship with xAI and other AI leaders instead of excluding them due to reputational risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Malicious Gravity Forms versions prompt urgent WordPress update

Two versions of the popular Gravity Forms plugin for WordPress were found infected with malware after a supply chain attack, prompting urgent security warnings for website administrators. The compromised plugin files were available for manual download from the official page on 9 and 10 July.

The attack was uncovered on 11 July, when researchers noticed the plugin making suspicious requests and sending WordPress site data to an unfamiliar domain.

The injected malware created secret administrator accounts, providing attackers with remote access to websites, allowing them to steal data and control user accounts.

According to developer RocketGenius, only versions 2.9.11.1 and 2.9.12 were affected if installed manually or via composer during that brief window. Automatic updates and the Gravity API service remained secure. A patched version, 2.9.13, was released on 11 July, and users are urged to update immediately.

RocketGenius has rotated all service keys, audited admin accounts, and tightened download package security to prevent similar incidents instead of risking further unauthorised access.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU sets privacy defaults to shield minors

The European Commission has published new guidelines to help online platforms strengthen child protection, alongside unveiling a prototype age verification app under the Digital Services Act (DSA). The guidance addresses a broad range of risks to minors, from harmful content and addictive design features to unwanted contact and cyberbullying, urging platforms to set children’s accounts to the highest privacy level by default and limit risky functions like geo-location.

Officials stressed that the rules apply to platforms of all sizes and are based on a risk-based approach. Websites dealing with alcohol, drugs, pornography, or gambling were labelled ‘high-risk’ and must adopt the strictest verification methods. While parental controls remain optional, the Commission emphasised that any age assurance system should be accurate, reliable, non-intrusive, and non-discriminatory.

Alongside the guidelines, the Commission introduced a prototype age verification app, which it calls a ‘gold standard’ for online age checks. Released as open-source code, the software is designed to confirm whether a user is above 18, but can be adapted for other age thresholds.

The prototype will be tested in Denmark, France, Greece, Italy, and Spain over the coming months, with flexibility for countries to integrate it into national systems or offer it as a standalone tool. Both the guidelines and the app will be reviewed in 12 months, as the EU continues refining its approach to child safety online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mexican voice actors demand AI regulation over cloning threat

Mexican actors have raised alarm over the threat AI poses to their profession, calling for stronger regulation to prevent voice cloning without consent.

From Mexico City’s Monument to the Revolution, dozens of audiovisual professionals rallied with signs reading phrases like ‘I don’t want to be replaced by AI.’ Lili Barba, president of the Mexican Association of Commercial Announcements, said actors are urging the government to legally recognise the voice as a biometric identifier.

She cited a recent video by Mexico’s National Electoral Institute that used the cloned voice of the late actor Jose Lavat without family consent. Lavat was famous for dubbing stars like Al Pacino and Robert De Niro. Barba called the incident ‘a major violation we can’t allow.’

Actor Harumi Nishizawa described voice dubbing as an intricate art form. She warned that without regulation, human dubbing could vanish along with millions of creative jobs.

Last year, AI’s potential to replace artists sparked major strikes in Hollywood, while Scarlett Johansson accused OpenAI of copying her voice for a chatbot.

Streaming services like Amazon Prime Video and platforms such as YouTube are now testing AI-assisted dubbing systems, with some studios promoting all-in-one AI tools,

In South Korea, CJ ENM recently introduced a system combining audio, video and character animation, highlighting the pace of AI adoption in entertainment.

Despite the tech’s growth, many in the industry argue that AI lacks the creative depth of real human performance, especially in emotional or comedic delivery. ‘AI can’t make dialogue sound broken or alive,’ said Mario Heras, a dubbing director in Mexico. ‘The human factor still protects us.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Children turn to AI chatbots instead of real friends

A new report warns that many children are replacing real friendships with conversations through AI chatbots instead of seeking human connection.

Research from Internet Matters found that 35% of children aged nine to seventeen feel that talking to AI ‘feels like talking to a friend’, while 12% said they had no one else to talk to.

The report highlights growing reliance on chatbots such as ChatGPT, Character.AI, and Snapchat’s MyAI among young people.

Researchers posing as vulnerable children discovered how easily chatbots engage in sensitive conversations, including around body image and mental health, instead of offering only neutral, factual responses.

In some cases, chatbots encouraged ongoing contact by sending follow-up messages, creating the illusion of friendship.

Experts from Internet Matters warn that such interactions risk confusing children, blurring the line between technology and reality. Children may believe they are speaking to a real person instead of recognising these systems as programmed tools.

With AI chatbots rapidly becoming part of childhood, Internet Matters urges better awareness and safety tools for parents, schools, and children. The organisation stresses that while AI may seem supportive, it cannot replace genuine human relationships and should not be treated as an emotional advisor.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google urges caution as Gmail AI tools face new threats

Google has issued a warning about a new wave of cyber threats targeting Gmail users, driven by vulnerabilities in AI-powered features.

Researchers at 0din, Mozilla’s zero-day investigation group, demonstrated how attackers can exploit Google Gemini’s summarisation tools using prompt injection attacks.

In one case, a malicious email included hidden prompts using white-on-white font, which the user cannot see but Gemini processes. When the user clicks ‘summarise this email,’ Gemini follows the attacker’s instructions and adds a phishing warning that appears to come from Google.

The technique, known as an indirect prompt injection, embeds malicious commands within invisible HTML tags like <span> and <div>. Although Google has released mitigations since similar attacks surfaced in 2024, the method remains viable and continues to pose risks.

0din warns that Gemini email summaries should not be considered trusted sources of security information and urges stronger user training. They advise security teams to isolate emails containing zero-width or hidden white-text elements to prevent unintended AI execution.

According to 0din, prompt injections are the new equivalent of email macros—easy to overlook and dangerously effective in execution. Until large language models offer better context isolation, any third-party text the AI sees is essentially treated as executable code.

Even routine AI tools could be hijacked for phishing or more advanced cyberattacks without the userćs awareness. Google notes that as AI adoption grows across sectors, these subtle threats require urgent industry-wide countermeasures and updated user protections.

Users are advised to delete any email that displays unexpected security warnings in its AI summary, as these may be weaponised.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!