EU Commission opens DMA proceedings on Google interoperability and search data

The European Commission has opened two specification proceedings to spell out how Google should meet key obligations under the EU’s Digital Markets Act (DMA), focusing on Android’s AI-related features and access to Google Search data for competitors.

The first proceeding targets the DMA’s interoperability requirement for Android. In practical terms, Brussels wants to clarify how third-party AI services can gain free and effective access to the same Android hardware and software functionalities that power Google’s own AI offerings, including Gemini, so that rivals can compete on a more equal footing on mobile devices.

The second proceeding addresses Google’s obligation to provide rival search engines access to anonymised search data (such as ranking, query, click, and view data) on fair, reasonable, and non-discriminatory terms. The Commission is also considering whether AI chatbot providers should qualify for that access, a key question as ‘search’ increasingly blurs into conversational AI.

These proceedings are designed to define how compliance should work rather than immediately sanction Google. The Commission is expected to wrap them up within six months, with draft measures and preliminary findings shared earlier in the process, and with scope for third-party feedback. A separate non-compliance track could still follow later, and DMA penalties for breaches can reach up to 10% of global turnover.

Google, for its part, says Android is ‘open by design’ and argues it is already licensing Search data, while warning that additional requirements, especially those it views as competitor-driven, could undermine user privacy, security, and innovation.

Why does it matter?

The EU is trying to prevent dominant platforms from turning control over operating systems and data into an ‘unfair advantage’ in the next wave of consumer tech, particularly as AI assistants become built into phones and as search data becomes fuel for competing discovery tools. The move also sits within a broader DMA enforcement push: the Commission has already opened DMA-related proceedings into Alphabet in other areas, signalling that Brussels sees gatekeeper compliance as an ongoing, hands-on exercise rather than a one-off checkbox.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU classifies WhatsApp as Very Large Online Platform

WhatsApp has been formally designated a Very Large Online Platform under the EU Digital Services Act, triggering the bloc’s most stringent digital oversight regime.

The classification follows confirmation that the messaging service has surpassed 51 million monthly users in the EU, well above the 45-million threshold at which the DSA’s strictest obligations apply.

As a VLOP, WhatsApp must take active steps to limit the spread of disinformation and reduce risks linked to the manipulation of public debate. The platform is also expected to strengthen safeguards for users’ mental health, with particular attention placed on the protection of minors and younger audiences.

The European Commission will oversee compliance directly and may impose financial penalties of up to 6% of WhatsApp’s global annual turnover if violations are identified. The company has until mid-May to align its systems, policies and risk assessments with the DSA’s requirements.

WhatsApp joins a growing list of major platforms already subject to similar obligations, including Facebook, Instagram, YouTube and X. The move reflects the Commission’s broader effort to apply the Digital Services Act across social media, messaging services and content platforms linked to systemic online risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France proposes EU tools to map foreign tech dependence

France has unveiled a new push to reduce Europe’s dependence on US and Chinese technology suppliers, placing digital sovereignty back at the centre of EU policy debates.

Speaking in Paris, France’s minister for AI and digital affairs, Anne Le Hénanff, presented initiatives to expose and address the structural reliance on non-EU technologies across public administrations and private companies.

Central to the strategy is the creation of a Digital Sovereignty Observatory, which will map foreign technology dependencies and assess organisational exposure to geopolitical and supply-chain risks.

The body, led by former Europe minister Clément Beaune, is intended to provide the evidence base needed for coordinated action rather than symbolic declarations of autonomy.

France is also advancing a Digital Resilience Index, expected to publish its first findings in early 2026. The index will measure reliance on foreign digital services and products, identifying vulnerabilities linked to cloud infrastructure, AI, cybersecurity and emerging technologies.

Industry data suggests Europe’s dependence on external tech providers costs the continent hundreds of billions of euros annually.

Paris is using the initiative to renew calls for a European preference in public-sector digital procurement and for a standard EU definition of European digital services.

Such proposals remain contentious among member states, yet France argues they are essential for restoring strategic control over critical digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk’s X under EU Commission scrutiny over Grok sexualised images

The European Commission has opened a new investigation into Elon Musk’s X over Grok, the platform’s AI chatbot, after reports that the tool was used to generate and circulate non-consensual sexualised images, including content that may involve minors. EU officials say they will examine whether X properly assessed and mitigated the risks linked to Grok’s features before rolling them out in the EU.

The case is being pursued under the EU’s Digital Services Act (DSA), which requires very large online platforms to identify and mitigate systemic risks, including the spread of illegal content and harms to fundamental rights. If breaches are confirmed, the Commission can impose fines of up to 6% of a provider’s global annual turnover and, in some cases, require interim measures.

X and xAI have said they introduced restrictions after the backlash, including limiting some image-editing functions and blocking certain image generation in jurisdictions where it is illegal. EU officials have welcomed steps to tighten safeguards but argue they may not address deeper, systemic risks, particularly if risk assessments and mitigations were not in place before deployment.

The Grok probe lands on top of a broader set of legal pressures already facing X. In the UK, Ofcom has opened a formal investigation under the Online Safety Act into whether X met its duties to protect users from illegal content linked to Grok’s sexualised imagery. Beyond Europe, Malaysia and Indonesia temporarily blocked Grok amid safety concerns, and access was later restored after authorities said additional safeguards had been put in place.

In parallel, EU regulators have also widened scrutiny of X’s recommender systems, an area already under DSA proceedings, because the platform has moved toward using a Grok-linked system to rank and recommend content. The Commission has argued that recommendation design can amplify harmful material at scale, making it central to whether a platform effectively manages systemic risks.

The investigation also builds on earlier DSA enforcement. The Commission recently fined X €120 million for transparency-related breaches, underscoring that EU action is not limited to content moderation alone but extends to how platforms disclose and enable scrutiny of their systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK banks block large share of crypto transfers, report finds

UK banks are blocking or delaying close to 40% of payments to cryptocurrency exchanges, sharply increasing customer friction and slowing market growth, according to a new industry report.

Around 80% of surveyed exchanges reported rising payment disruptions, while 70% described the banking environment as increasingly hostile, discouraging investment, hiring, and product launches in the UK.

The survey of major platforms, including Coinbase, Kraken, and Gemini, reveals widespread and opaque restrictions across bank transfers and card payments. One exchange reported nearly £1 billion in declined transactions last year, citing unclear rejection reasons despite FCA registration.

Several high-street and digital banks maintain outright blocks, while others impose strict transaction caps. The UK Cryptoasset Business Council warned that blanket debanking practices could breach existing regulations, including those on payment services, consumer protection, and competition.

The council urged the FCA and government to enforce a risk-based approach, expand data sharing, and remove unnecessary barriers as the UK finalises its long-term crypto framework.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France’s National Assembly backs under-15 social media ban

France’s National Assembly has backed a bill that would bar children under 15 from accessing social media, citing rising concern over cyberbullying and mental-health harms. MPs approved the text late Monday by 116 votes to 23, sending it on to the Senate before it returns to the lower house for a final vote.

As drafted, the proposal would cover both standalone social networks and ‘social networking’ features embedded inside wider platforms, and it would rely on age checks that comply with EU rules. The same package also extends France’s existing smartphone restrictions in schools to include high schools, and lawmakers have discussed additional guardrails, such as limits on practices deemed harmful to minors (including advertising and recommendation systems).

President Emmanuel Macron has urged lawmakers to move quickly, arguing that platforms are not neutral spaces for adolescents and linking social media to broader concerns about youth violence and well-being. Support for stricter limits is broad across parties, and polling has pointed in the same direction, but the bill still faces the practical question of how reliably platforms can keep underage users out.

Australia set the pace in December 2025, when its world-first ban on under-16s holding accounts on major platforms came into force, an approach now closely watched abroad. Early experience there has highlighted the same tension France faces, between political clarity (‘no accounts under the age line’) and the messy reality of age assurance and workarounds.

France’s debate is also unfolding in a broader European push to tighten child online safety rules. The European Parliament has called for an EU-wide ‘digital minimum age’ of 16 (with parental consent options for 13–16), while the European Commission has issued guidance for platforms and developed a prototype age-verification tool designed to preserve privacy, signalling that Brussels is trying to square protection with data-minimisation.

Why does it matter?

Beyond the child-safety rationale, the move reflects a broader push to curb platform power, with youth protection framed as a test case for stronger state oversight of Big Tech. At the same time, critics warn that strict age-verification regimes can expand online identification and surveillance, raising privacy and rights concerns, and may push teens toward smaller or less regulated spaces rather than offline life.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Georgia moves to curb AI data centre expansion amid energy concerns

The state of Georgia is emerging as the focal point of a growing backlash against the rapid expansion of data centres powering the US AI boom.

Lawmakers in several states are now considering statewide bans, as concerns over energy consumption, water use and local disruption move to the centre of economic and environmental debate.

A bill introduced in Georgia would impose a moratorium on new data centre construction until March next year, giving state and municipal authorities time to establish clearer regulatory rules.

The proposal arrives after Georgia’s utility regulator approved plans for an additional 10 gigawatts of electricity generation, primarily driven by data centre demand and expected to rely heavily on fossil fuels.

Local resistance has intensified as the Atlanta metropolitan area led the country in data centre construction last year, prompting multiple municipalities to impose their own temporary bans.

Critics argue that rapid development has pushed up electricity bills, strained water supplies and delivered fewer tax benefits than promised. At the same time, utility companies retain incentives to expand generation rather than improve grid efficiency.

The issue has taken on broader political significance as Georgia prepares for key elections that will affect utility oversight.

Supporters of the moratorium frame the pause as a chance for public scrutiny and democratic accountability, while backers of the industry warn that blanket restrictions risk undermining investment, jobs and long-term technological competitiveness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Monnett highlights EU digital sovereignty in social media

Monnett is a European-built social media platform designed to give people control over their online feeds. Users can choose exactly what they see, prioritise friends’ posts, and opt out of surveillance-style recommendation systems that dominate other networks.

Unlike mainstream platforms, Monnett places privacy first, with no profiling or sale of user data, and private chats protected without being mined for advertising. The platform also keeps ‘AI slop’ and generative AI content from shaping people’s feeds, emphasising human-centred interaction.

Built in Luxembourg, at the heart of Europe, Monnett reflects a growing push for digital sovereignty in the European Union, where citizens, regulators and developers want more control over how their digital spaces are governed and how personal data is treated.

Core features include full customisation of the feed algorithm, no shadowbans, strong privacy safeguards, and a focus on genuine social connection. Monnett aims to win over users who prefer meaningful online interaction to addictive feeds and opaque data practices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rapid AI growth tests regulation in the Gulf

Gulf states are accelerating AI investment to drive diversification, while regulators struggle to keep pace with rapid technological change. Saudi Arabia, the UAE, and Qatar are deploying AI across key sectors while pursuing regional leadership in digital innovation.

Despite political commitment and large-scale funding, policymakers find it difficult to balance innovation with risk management. AI’s rapid pace and global reach strain governance, while reliance on foreign technology raises sovereignty and security risks.

Corporate influence, intensifying geopolitical competition, and the urgent race to attract foreign capital further complicate oversight efforts, constraining regulators’ ability to impose robust and forward-looking governance frameworks.

With AI increasingly viewed as a source of economic and strategic power, Gulf governments face a narrowing window to establish effective regulatory frameworks before the technology becomes deeply embedded across critical infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple accuses the EU of blocking App Store compliance changes

Apple has accused the European Commission of preventing it from implementing App Store changes designed to comply with the Digital Markets Act, following a €500 million fine for breaching the regulation.

The company claims it submitted a formal compliance plan in October and has yet to receive a response from EU officials.

In a statement, Apple argued that the Commission requested delays while gathering market feedback, a process the company says lasted several months and lacked a clear legal basis.

The US tech giant described the enforcement approach as politically motivated and excessively burdensome, accusing the EU of unfairly targeting an American firm.

The Commission has rejected those claims, saying discussions with Apple remain ongoing and emphasising that any compliance measures must support genuinely viable alternative app stores.

Officials pointed to the emergence of multiple competing marketplaces after the DMA entered into force as evidence of market demand.

Scrutiny has increased following the decision by Setapp Mobile to shut down its iOS app store in February, with the developer citing complex and evolving business terms.

Questions remain over whether Apple’s proposed shift towards commission-based fees and expanded developer communication rights will satisfy EU regulators.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!