EU reaffirms commitment to Digital Markets Act enforcement

European Commission Executive Vice President Teresa Ribera has stated that the EU has a constitutional obligation under its treaties to uphold its digital rulebook, including the Digital Markets Act (DMA).

Speaking at a competition law conference, Ribera framed enforcement as a duty to protect fair competition and market balance across the bloc.

Her comments arrive amid growing criticism from US technology companies and political pressure from Washington, where enforcement of EU digital rules has been portrayed as discriminatory towards American firms.

Several designated gatekeepers have argued that the DMA restricts innovation and challenges existing business models.

Ribera acknowledged the right of companies to challenge enforcement through the courts, while emphasising that designation decisions are based on lengthy and open consultation processes. The Commission, she said, remains committed to applying the law effectively rather than retreating under external pressure.

Apple and Meta have already announced plans to appeal fines imposed in 2025 for alleged breaches of DMA obligations, reinforcing expectations that legal disputes around EU digital regulation will continue in parallel with enforcement efforts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Billions in data protection fines remain unpaid

Ireland’s Data Protection Commission is owed more than €4 billion in fines imposed on companies, primarily Big Tech firms. Most of the penalties remain unpaid due to ongoing legal challenges.

Figures released under Freedom of Information laws show the watchdog collected only €125,000 from over €530 million in fines issued last year. Similar patterns have persisted across several previous years.

Since 2020, the commission has levied €4.04 billion in data protection penalties. Just €20 million has been paid, while the remaining balance is tied up in appeals before Irish and EU courts.

The regulator states that legislation prevents enforcement until the court proceedings conclude. Several cases hinge on a landmark WhatsApp ruling at the EU’s top court, expected to shape future collections.

Next-generation Siri will use Google’s Gemini AI model

Apple and Google have confirmed a multi-year partnership that will see Google’s Gemini models powering Siri and future Apple Intelligence features. The collaboration will underpin Apple’s next-generation AI models, with updates coming later this year.

The move follows delays to Siri upgrades first unveiled at WWDC 2024. While most Apple Intelligence features have already launched, the redesigned Siri was postponed because development took longer than anticipated.

According to reports, Apple will continue using its own models for specific tasks, while Gemini is expected to handle summarisation, planning, and other advanced functions.

Bloomberg reports the upcoming Siri will be structured around three layers: query planning, knowledge retrieval, and summarisation. Gemini will handle planning and summarisation, helping Siri structure responses and create clear summaries.

Knowledge retrieval may also benefit from Gemini, potentially broadening Siri’s general knowledge capabilities beyond its current hand-off system.

All AI processing will operate on Apple’s Private Cloud Compute platform, ensuring user privacy and keeping data secure. Analysts suggest this integration will embed Gemini more deeply into Siri’s core functionality, rather than serving as a supplementary tool.

Australia raises concerns over AI misuse on X

The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.

Although overall report numbers remain low, Australian authorities have observed an increase in recent weeks.

The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.

X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.

eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.

Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.

Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.

Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.

AI gap reflects China’s growing technological ambitions

China’s AI sector could narrow the technological gap with the United States through growing risk-taking and innovation, according to leading researchers. Despite export controls on advanced chipmaking tools, Chinese firms are accelerating development across multiple AI fields.

Yao Shunyu, a former senior researcher at ChatGPT maker OpenAI and now an AI scientist at Tencent, said a Chinese company could become the world’s leading AI firm within three to five years. He pointed to China’s strengths in electricity supply and infrastructure as key advantages.

Yao said the main bottlenecks remain production capacity, including access to advanced lithography machines, and the maturity of the software ecosystem. These limits still restrict China’s ability to manufacture the most advanced semiconductors and, in turn, to close the AI gap with the US.

China has developed a working prototype of an extreme-ultraviolet lithography machine that could eventually rival Western technology. However, Reuters reported the system has not yet produced functioning chips.

Sources familiar with the project said commercial chip production using the machine may not begin until around 2030. Until then, Chinese AI ambitions are likely to remain constrained by hardware limitations.

Google brings AI to personalised shopping

Google is working with major retailers to use AI in guiding customers from product discovery to checkout. The company has launched the Universal Commerce Protocol, an open standard for seamless agentic commerce that keeps retailers in control of customer relationships.

The Universal Commerce Protocol works with existing systems and partners, including Shopify, Etsy, Wayfair, Target, and Walmart.

Customers can receive personalised offers, loyalty rewards, and recommendations in Google Search or Gemini, completing purchases via Google Pay without leaving the platform.

To support retailers, Google has launched Gemini Enterprise for Customer Experience, which unifies search, commerce, and service touchpoints across all channels.

Early partners, such as The Home Depot and McDonald’s, are already using AI-powered agents to enhance service, provide proactive recommendations, and improve customer engagement.

Logistics also feature prominently, with Wing expanding delivery capabilities alongside Walmart, doubling operations in existing markets, and rolling out to Houston, Orlando, Tampa, Charlotte, and other cities.

Google aims to create an end-to-end shopping ecosystem where AI, agentic protocols, and seamless delivery elevate both customer and retailer experiences.

Indonesia and Malaysia restrict access to Grok AI over content safeguards

Malaysia and Indonesia have restricted access to Grok, the AI chatbot available through the X platform, following concerns about its image generation capabilities.

Authorities said the tool had been used to create manipulated images depicting real individuals in sexually explicit contexts.

Regulatory bodies in Malaysia and Indonesia stated that the decision was based on the absence of sufficient safeguards to prevent misuse.

Requests for additional risk mitigation measures were communicated to the platform operator, with access expected to remain limited until further protections are introduced.

The move has drawn attention from regulators in other regions, where online safety frameworks allow intervention when digital services fail to address harmful content. Discussions have focused on platform responsibility, content moderation standards, and compliance with existing legal obligations.

AI race shows diverging paths for China and the US

The US administration’s new AI action plan frames global development as an AI race with a single winner. Officials argue AI dominance brings economic, military, and geopolitical advantages. Experts say competition is unfolding across multiple domains.

The United States continues to lead in the development of advanced large language and multimodal models by firms such as OpenAI, Google, and Anthropic. American companies also dominate global computing infrastructure. Control over high-end AI chips and data-centre capacity remains concentrated in US firms.

Chinese companies are narrowing the gap in the practical applications of AI. Models from Alibaba, DeepSeek, and Moonshot AI perform well in tasks such as translation, coding, and customer service. Performance at the cutting edge still lags behind US systems.

Washington’s decision to allow limited exports of Nvidia’s H200 AI chips to China reflects a belief that controlled sales can preserve US leadership. Critics argue the move risks weakening America’s computing advantage. Concerns persist over long-term strategic consequences.

Rather than a decisive victory for either side, analysts foresee an era of asymmetric competition: the United States may dominate advanced AI services, while China is expected to lead in large-scale industrial deployment.

UK outlines approval process for crypto firms

The UK’s Financial Conduct Authority has confirmed that all regulated crypto firms must obtain authorisation under the Financial Services and Markets Act. Both new market entrants and existing operators will be required to comply.

No automatic transition will be available for firms currently registered under anti-money laundering rules. Companies already authorised for other financial services must apply to extend permissions to cover crypto activities and ensure compliance with upcoming regulations.

Pre-application meetings and information sessions will be offered to help firms understand regulatory expectations and enhance the quality of their applications.

An official application window is expected to open in September 2026 and remain active for at least 28 days. Applications submitted during that period are intended to be assessed before the regime formally begins, with further procedural details to be confirmed by the FCA.

EU instructs X to keep all Grok chatbot records

The European Commission has ordered X to retain all internal documents and data on its AI chatbot Grok until the end of 2026. The order was issued under the Digital Services Act after concerns that Grok’s ‘spicy’ mode enabled sexualised deepfakes of minors.

The move continues EU oversight of X, recalling a January 2025 order to preserve documents on the platform’s recommender system amid claims it amplified far-right content during German elections. EU regulators emphasised that platforms must manage content generated by their AI responsibly.

Earlier this week, X submitted responses to the Commission regarding Grok’s outputs following concerns over Holocaust denial content. While the deepfake scandal has prompted calls for further action, the Commission has not launched a formal investigation into Grok.

Regulators reiterated that it remains X’s responsibility to ensure the chatbot’s outputs meet European standards, and retention of all internal records is crucial for ongoing monitoring and accountability.
