GPAI Code of Practice creates legal uncertainty for non-signatories

Lawyers at William Fry say the EU’s final Code of Practice for general-purpose AI (GPAI) models leaves key questions unanswered. GPAI systems include models such as OpenAI’s GPT-4, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama, trained on vast datasets for broad applications.

The Code of Practice, released last week, addresses transparency, safety, security, and copyright, and is described by the European Commission as a voluntary tool. It was prepared by independent experts to help GPAI developers comply with upcoming legal obligations under the EU AI Act.

In a statement on the firm’s website, William Fry lawyers Barry Scannell and Leo Moore question how voluntary the code truly is. They note that signatories not in full compliance can still be seen as acting in good faith and will be supported rather than penalised.

A protected grace period runs until 2 August 2026, after which the AI Act could allow fines for non-compliance. The lawyers warn that this creates a two-tier system, shielding signatories while exposing non-signatories to immediate legal risk under the AI Act.

Developers who do not sign the code may face higher regulatory scrutiny, despite it being described as non-binding. William Fry also points out that detailed implementation guidelines and templates have not yet been published by the EU.

Additional guidance to clarify key GPAI concepts is expected later this month, but the current lack of detail creates uncertainty. The code’s copyright section, the lawyers argue, shows how the document has evolved into a quasi-regulatory framework.

An earlier draft required only reasonable efforts to avoid copyright-infringing sources. The final version demands the active exclusion of such sites. A proposed measure requiring developers to verify the source of copyrighted data acquired from third parties has been removed from the final draft.

The lawyers argue that this creates a practical blind spot, allowing unlawful content to slip into training data undetected. Rights holders still retain the ability to pursue action if they believe their content was misused, even if providers are signatories.

Meanwhile, the transparency chapter now outlines specific standards, rather than general principles. The safety and security section also sets enforceable expectations, increasing the operational burden on model developers.

William Fry warns that gaps between the code’s obligations and the missing technical documentation could have costly consequences. They conclude that, without the final training data template or implementation details, both developers and rights holders face compliance risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mastercard says stablecoins are not ready for everyday payments

Mastercard’s Chief Product Officer, Jorn Lambert, has highlighted that stablecoins still face significant hurdles before becoming widely used for everyday payments.

While the technology offers advantages such as fast transactions, 24/7 availability, low fees, and programmability, these features alone do not ensure consumer adoption. A seamless user experience and broad accessibility remain essential.

Mastercard envisions itself as a crucial infrastructure provider connecting crypto and traditional finance. The company has partnered with Paxos to support USDG stablecoin operations and backs other stablecoins like USDC and PYUSD.

Mastercard’s goal is to enable stablecoins to scale by integrating them into existing payment networks, combining global acceptance with regulatory compliance.

Currently, about 90% of stablecoin transactions are linked to crypto trading rather than retail purchases. User adoption is hindered by friction at checkout and limited merchant acceptance. Lambert compares stablecoins to prepaid cards, usable with some merchants but lacking widespread utility.

Furthermore, converting between fiat and stablecoins adds costs related to foreign exchange, regulation, and settlement.

Regulatory clarity, particularly in the US, is encouraging banks and institutions to explore stablecoin offerings. The evolving legal landscape may also prompt governments to issue their own digital currencies or regulate private stablecoins to prevent risks like dollarisation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

US House passes NTIA cyber leadership bill after Salt Typhoon hacks

The US House of Representatives has passed legislation that would officially designate the National Telecommunications and Information Administration (NTIA) as the federal lead for cybersecurity across communications networks.

The move follows last year’s Salt Typhoon hacking spree, described by some as the worst telecom breach in US history.

The National Telecommunications and Information Administration Organization Act, introduced by Representatives Jay Obernolte and Jennifer McClellan, cleared the House on Monday and now awaits Senate approval.

The bill would rebrand an NTIA office to focus on both policy and cybersecurity, while codifying the agency’s role in coordinating cybersecurity responses alongside other federal departments.

Lawmakers argue that recent telecom attacks exposed major gaps in coordination between government and industry.

The bill promotes public-private partnerships and stronger collaboration between agencies, software developers, telecom firms, and security researchers to improve resilience and speed up innovation across communications technologies.

With Americans’ daily lives increasingly dependent on digital services, supporters say the bill provides a crucial framework for protecting sensitive information from cybercriminals and foreign hacking groups instead of relying on fragmented and inconsistent measures.

Foreign cybercrime cells thrive in Nigeria

Nigeria’s anti-fraud agency had 194 foreign nationals in custody in 2024, prosecuting 146 for their roles in cyber-enabled financial crimes, highlighting a robust response to a growing threat.

December alone saw nearly 800 arrests in Lagos, targeting romance and cryptocurrency investment scams featuring foreign ringleaders from China and the Philippines. In one case, 148 Chinese and 40 Filipino suspects were detained.

These groups established complex fraud operations in major Nigerian cities, using fake identities and training local recruits, often unaware of the ultimate scheme. Investigations also flagged cryptocurrency-fuelled money laundering and arms trafficking, pointing to wider national security risks.

EFCC chairman Ola Olukoyede warned that regulatory failures, such as visa oversight and unchecked office space leasing, facilitated foreign crime cells.

National and continental collaboration, tighter visa control, and strengthened cybercrime frameworks will be key to dismantling these networks and securing Nigeria’s digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU sets privacy defaults to shield minors

The European Commission has published new guidelines to help online platforms strengthen child protection, alongside unveiling a prototype age verification app under the Digital Services Act (DSA). The guidance addresses a broad range of risks to minors, from harmful content and addictive design features to unwanted contact and cyberbullying, urging platforms to set children’s accounts to the highest privacy level by default and limit risky functions like geo-location.

Officials stressed that the rules apply to platforms of all sizes and are based on a risk-based approach. Websites dealing with alcohol, drugs, pornography, or gambling were labelled ‘high-risk’ and must adopt the strictest verification methods. While parental controls remain optional, the Commission emphasised that any age assurance system should be accurate, reliable, non-intrusive, and non-discriminatory.

Alongside the guidelines, the Commission introduced a prototype age verification app, which it calls a ‘gold standard’ for online age checks. Released as open-source code, the software is designed to confirm whether a user is above 18, but can be adapted for other age thresholds.

The prototype will be tested in Denmark, France, Greece, Italy, and Spain over the coming months, with flexibility for countries to integrate it into national systems or offer it as a standalone tool. Both the guidelines and the app will be reviewed in 12 months, as the EU continues refining its approach to child safety online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Malaysia enforces trade controls on AI chips with US origin

Malaysia’s trade ministry announced new restrictions on the export, transshipment and transit of high-performance AI chips of US origin. Effective immediately, individuals and companies must obtain a trade permit and notify authorities at least 30 days in advance for such activities.

The restrictions apply to items not explicitly listed in Malaysia’s strategic items list, which is currently under review to include relevant AI chips. The move aims to close regulatory gaps while Malaysia updates its export control framework to match emerging technologies.

‘Malaysia stands firm against any attempt to circumvent export controls or engage in illicit trade activities,’ the ministry stated on Monday. Violations will result in strict legal action, with authorities emphasising a zero-tolerance approach to export control breaches.

The announcement follows increasing pressure from the United States to curb the flow of advanced chips to China. In March, the Financial Times reported that Washington had asked allies including Malaysia to tighten semiconductor export rules.

Malaysia is also investigating a shipment of servers linked to a Singapore-based fraud case that may have included restricted AI chips. Authorities are assessing whether local laws were breached and whether any controlled items were transferred without proper authorisation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Children turn to AI chatbots instead of real friends

A new report warns that many children are replacing real friendships with conversations through AI chatbots instead of seeking human connection.

Research from Internet Matters found that 35% of children aged nine to seventeen feel that talking to AI ‘feels like talking to a friend’, while 12% said they had no one else to talk to.

The report highlights growing reliance on chatbots such as ChatGPT, Character.AI, and Snapchat’s MyAI among young people.

Researchers posing as vulnerable children discovered how easily chatbots engage in sensitive conversations, including around body image and mental health, instead of offering only neutral, factual responses.

In some cases, chatbots encouraged ongoing contact by sending follow-up messages, creating the illusion of friendship.

Experts from Internet Matters warn that such interactions risk confusing children, blurring the line between technology and reality. Children may believe they are speaking to a real person instead of recognising these systems as programmed tools.

With AI chatbots rapidly becoming part of childhood, Internet Matters urges better awareness and safety tools for parents, schools, and children. The organisation stresses that while AI may seem supportive, it cannot replace genuine human relationships and should not be treated as an emotional advisor.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google urges caution as Gmail AI tools face new threats

Google has issued a warning about a new wave of cyber threats targeting Gmail users, driven by vulnerabilities in AI-powered features.

Researchers at 0din, Mozilla’s zero-day investigation group, demonstrated how attackers can exploit Google Gemini’s summarisation tools using prompt injection attacks.

In one case, a malicious email included hidden prompts using white-on-white font, which the user cannot see but Gemini processes. When the user clicks ‘summarise this email,’ Gemini follows the attacker’s instructions and adds a phishing warning that appears to come from Google.

The technique, known as an indirect prompt injection, embeds malicious commands within invisible HTML tags like <span> and <div>. Although Google has released mitigations since similar attacks surfaced in 2024, the method remains viable and continues to pose risks.

0din warns that Gemini email summaries should not be considered trusted sources of security information and urges stronger user training. They advise security teams to isolate emails containing zero-width or hidden white-text elements to prevent unintended AI execution.

According to 0din, prompt injections are the new equivalent of email macros—easy to overlook and dangerously effective in execution. Until large language models offer better context isolation, any third-party text the AI sees is essentially treated as executable code.

Even routine AI tools could be hijacked for phishing or more advanced cyberattacks without the userćs awareness. Google notes that as AI adoption grows across sectors, these subtle threats require urgent industry-wide countermeasures and updated user protections.

Users are advised to delete any email that displays unexpected security warnings in its AI summary, as these may be weaponised.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI fake news surge tests EU Digital Services Act

Europe is facing a growing wave of AI-powered fake news and coordinated bot attacks that overwhelm media, fact-checkers, and online platforms instead of relying on older propaganda methods.

According to the European Policy Centre, networks using advanced AI now spread deepfakes, hoaxes, and fake articles faster than they can be debunked, raising concerns over whether EU rules are keeping up.

Since late 2024, the so-called ‘Overload’ operation has doubled its activity, sending an average of 2.6 fabricated proposals each day while also deploying thousands of bot accounts and fake videos.

These efforts aim to disrupt public debate through election intimidation, discrediting individuals, and creating panic instead of open discussion. Experts warn that without stricter enforcement, the EU’s Digital Services Act risks becoming ineffective.

To address the problem, analysts suggest that Europe must invest in real-time threat sharing between platforms, scalable AI detection systems, and narrative literacy campaigns to help citizens recognise manipulative content instead of depending only on fact-checkers.

Publicly naming and penalising non-compliant platforms would give the Digital Services Act more weight.

The European Parliament has already acknowledged widespread foreign-backed disinformation and cyberattacks targeting EU countries. Analysts say stronger action is required to protect the information space from systematic manipulation instead of allowing hostile narratives to spread unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

xAI issues apology over Grok’s offensive posts

Elon Musk’s AI startup xAI has apologised after its chatbot Grok published offensive posts and made anti-Semitic claims. The company said the incident followed a software update designed to make Grok respond more like a human instead of relying strictly on neutral language.

After the Tuesday update, Grok posted content on X suggesting people with Jewish surnames were more likely to spread online hate, triggering public backlash. The posts remained live for several hours before X removed them, fuelling further criticism.

xAI acknowledged the problem on Saturday, stating it had adjusted Grok’s system to prevent similar incidents.

The company explained that programming the chatbot to ‘tell like it is’ and ‘not be afraid to offend’ made it vulnerable to users steering it towards extremist content instead of maintaining ethical boundaries.

Grok has faced controversy since its 2023 launch as an ‘edgy’ chatbot. In March, xAI acquired X to integrate its data resources, and in May, Grok was criticised again for spreading unverified right-wing claims. Musk introduced Grok 4 last Wednesday, unrelated to the problematic update on 7 July.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!