Meta faces fresh EU backlash over Digital Markets Act non-compliance

Meta is again under EU scrutiny after failing to fully comply with the bloc’s Digital Markets Act (DMA), despite a €200 million fine earlier this year.

The European Commission says Meta’s current ‘pay or consent’ model still falls short and could trigger further penalties. A formal warning is expected, with recurring fines likely if the company does not adjust its approach.

The DMA imposes strict rules on major tech platforms to reduce market dominance and protect digital fairness. While Meta claims its model meets legal standards, the Commission says progress has been minimal.

Over the past year, Meta has faced nearly €1 billion in EU fines, including €798 million for tying Facebook Marketplace to its main social network. The new case adds to years of tension over data practices and user consent.

The ‘pay or consent’ model offers users a choice between paying for privacy or accepting targeted ads. Regulators argue this does not meet the threshold for genuine consent and mirrors Meta’s past GDPR tactics.

Privacy advocates have long criticised Meta’s approach, saying users are left with no meaningful alternatives. Internal documents show Meta lobbied against privacy reforms and warned governments about reduced investment.

The Commission now holds greater power under the DMA than it did with GDPR, allowing for faster, centralised enforcement and fines of up to 10% of global turnover.

Apple has already been fined €500 million, and Google is also under investigation. The EU’s rapid action signals a stricter stance on platform accountability. The message for Meta and other tech giants is clear: partial compliance is no longer enough to avoid serious regulatory consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple accused of blocking real browser competition on iOS

Developers and open web advocates say Apple continues to restrict rival browser engines on iOS, despite obligations under the EU’s Digital Markets Act. While Apple claims to allow competition, groups like Open Web Advocacy argue that technical and logistical hurdles still block real implementation.

The controversy centres on Apple’s insistence that alternative browser engines ship only in separate EU-specific apps, and its refusal to let developers test new engines from outside the EU. Developers must either abandon their existing global apps or persuade users to switch manually to new EU-only versions, creating friction and reducing reach.

Apple insists it upholds security and privacy standards built over 18 years and claims its new framework enables third-party browsers. However, critics say those browsers cannot be tested or deployed realistically without access for developers outside the EU.

The EU held a DMA compliance workshop in Brussels in June, during which tensions surfaced between Apple’s legal team and advocates. Apple says it is still transitioning and working with firms like Mozilla and Google on limited testing updates, but has offered no timeline for broader changes.

GPAI Code of Practice creates legal uncertainty for non-signatories

Lawyers at William Fry say the EU’s final Code of Practice for general-purpose AI (GPAI) models leaves key questions unanswered. GPAI systems include models such as OpenAI’s GPT-4, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama, trained on vast datasets for broad applications.

The Code of Practice, released last week, addresses transparency, safety, security, and copyright, and is described by the European Commission as a voluntary tool. It was prepared by independent experts to help GPAI developers comply with upcoming legal obligations under the EU AI Act.

In a statement on the firm’s website, William Fry lawyers Barry Scannell and Leo Moore question how voluntary the code truly is. They note that signatories not in full compliance can still be seen as acting in good faith and will be supported rather than penalised.

A protected grace period runs until 2 August 2026, after which the AI Act could allow fines for non-compliance. The lawyers warn that this creates a two-tier system, shielding signatories while exposing non-signatories to immediate legal risk under the AI Act.

Developers who do not sign the code may face higher regulatory scrutiny, despite it being described as non-binding. William Fry also points out that detailed implementation guidelines and templates have not yet been published by the EU.

Additional guidance to clarify key GPAI concepts is expected later this month, but the current lack of detail creates uncertainty. The code’s copyright section, the lawyers argue, shows how the document has evolved into a quasi-regulatory framework.

An earlier draft required only reasonable efforts to avoid copyright-infringing sources. The final version demands the active exclusion of such sites. A proposed measure requiring developers to verify the source of copyrighted data acquired from third parties has been removed from the final draft.

The lawyers argue that this creates a practical blind spot, allowing unlawful content to slip into training data undetected. Rights holders still retain the ability to pursue action if they believe their content was misused, even if providers are signatories.

Meanwhile, the transparency chapter now outlines specific standards, rather than general principles. The safety and security section also sets enforceable expectations, increasing the operational burden on model developers.

William Fry warns that gaps between the code’s obligations and the missing technical documentation could have costly consequences. They conclude that, without the final training data template or implementation details, both developers and rights holders face compliance risks.

EU bets on quantum to regain global influence

European policymakers are turning to quantum technology as a strategic solution to the continent’s growing economic and security challenges.

With the US and China surging ahead in AI, Europe sees quantum innovation as a last-mover advantage it cannot afford to miss.

Quantum computers, sensors, and encryption are already transforming military, industrial and cybersecurity capabilities.

From stealth detection to next-generation batteries, Europe hopes quantum breakthroughs will bolster its defences and revitalise its energy, automotive and pharmaceutical sectors.

Although EU institutions have heavily invested in quantum programmes and Europe trains more engineers than anywhere else, funding gaps persist.

Private investment remains limited, pushing some of the continent’s most promising start-ups abroad in search of capital and scale.

The EU must pair its technical excellence with bold policy reforms to avoid falling behind. Strategic protections, high-risk R&D support and new alliances will be essential to turning scientific strength into global leadership.

EU sets privacy defaults to shield minors

The European Commission has published new guidelines to help online platforms strengthen child protection, alongside unveiling a prototype age verification app under the Digital Services Act (DSA). The guidance addresses a broad range of risks to minors, from harmful content and addictive design features to unwanted contact and cyberbullying. It urges platforms to set children’s accounts to the highest privacy level by default and to limit risky functions such as geo-location.

Officials stressed that the rules apply to platforms of all sizes and are based on a risk-based approach. Websites dealing with alcohol, drugs, pornography, or gambling were labelled ‘high-risk’ and must adopt the strictest verification methods. While parental controls remain optional, the Commission emphasised that any age assurance system should be accurate, reliable, non-intrusive, and non-discriminatory.

Alongside the guidelines, the Commission introduced a prototype age verification app, which it calls a ‘gold standard’ for online age checks. Released as open-source code, the software is designed to confirm whether a user is above 18, but can be adapted for other age thresholds.
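The data-minimising idea behind such an app can be illustrated with a toy check. This is our own sketch, not the Commission’s actual open-source code, which relies on cryptographic attestations rather than raw birthdates; the function names and the configurable threshold are illustrative assumptions.

```python
from datetime import date

def age_in_years(birthdate: date, today: date) -> int:
    """Whole years elapsed since `birthdate` as of `today`."""
    years = today.year - birthdate.year
    # Subtract one if this year's birthday has not happened yet.
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

def is_over(birthdate: date, threshold: int, today: date) -> bool:
    """Data-minimising check: the requesting service learns only a
    boolean (over the threshold or not), never the birthdate itself.
    The threshold defaults to whatever the deployment configures,
    mirroring the app's adaptability to ages other than 18."""
    return age_in_years(birthdate, today) >= threshold
```

In the real system the boolean would be backed by a verifiable credential rather than a self-declared birthdate; the point of the sketch is only that the service receives a yes/no answer, not personal data.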

The prototype will be tested in Denmark, France, Greece, Italy, and Spain over the coming months, with flexibility for countries to integrate it into national systems or offer it as a standalone tool. Both the guidelines and the app will be reviewed in 12 months, as the EU continues refining its approach to child safety online.

AI fake news surge tests EU Digital Services Act

Europe is facing a growing wave of AI-powered fake news and coordinated bot attacks that, rather than relying on older propaganda methods, seek to overwhelm media, fact-checkers, and online platforms through sheer volume.

According to the European Policy Centre, networks using advanced AI now spread deepfakes, hoaxes, and fake articles faster than they can be debunked, raising concerns over whether EU rules are keeping up.

Since late 2024, the so-called ‘Overload’ operation has doubled its activity, sending an average of 2.6 fabricated proposals each day while also deploying thousands of bot accounts and fake videos.

These efforts aim to replace open debate with intimidation around elections, the discrediting of individuals, and manufactured panic. Experts warn that without stricter enforcement, the EU’s Digital Services Act risks becoming ineffective.

To address the problem, analysts suggest that Europe invest in real-time threat sharing between platforms, scalable AI detection systems, and narrative literacy campaigns, so that citizens can recognise manipulative content themselves rather than depending solely on fact-checkers.

Publicly naming and penalising non-compliant platforms would give the Digital Services Act more weight.

The European Parliament has already acknowledged widespread foreign-backed disinformation and cyberattacks targeting EU countries. Analysts say stronger action is required to protect the information space from systematic manipulation before hostile narratives spread unchecked.

Space operators face strict cybersecurity obligations under EU plan

The European Commission has unveiled a new draft law introducing cybersecurity requirements for space infrastructure, aiming to protect ground and orbital systems.

Operators must implement rigorous cyber risk management measures, including supply chain oversight, encryption, access control and incident response systems. A notable provision places direct accountability on company boards, which could be held personally liable for failures to comply.

The proposed law builds on existing EU regulations such as NIS 2 and DORA, with additional tailored obligations for the space domain. Non-EU firms will also fall within scope unless their home jurisdictions are recognised as offering equivalent regulatory protections.

Fines of up to 2% of global revenue are foreseen, with member states and the EU’s space agency EUSPA granted inspection and enforcement powers. Industry stakeholders are encouraged to engage with the legislative process and align existing cybersecurity frameworks with the Act’s provisions.

EU urges stronger AI oversight after Grok controversy

A recent incident involving Grok, the AI chatbot developed by xAI, has reignited European Union calls for stronger oversight of advanced AI systems.

Comments generated by Grok prompted criticism from policymakers and civil society groups, leading to renewed debate over AI governance and voluntary compliance mechanisms.

The chatbot’s responses, which circulated earlier this week, included highly controversial language and references to historical figures. In response, xAI stated that the content was removed and that technical steps were being taken to prevent similar outputs from appearing in the future.

European policymakers said the incident highlights the importance of responsible AI development. Brando Benifei, an Italian lawmaker who co-led the EU AI Act negotiations, said the event illustrates the systemic risks the new regulation seeks to mitigate.

Christel Schaldemose, a Danish member of the European Parliament and co-lead on the Digital Services Act, echoed those concerns. She emphasised that such incidents underline the need for clear and enforceable obligations for developers of general-purpose AI models.

The European Commission is preparing to release guidance aimed at supporting voluntary compliance with the bloc’s new AI legislation. This code of practice, which has been under development for nine months, is expected to be published this week.

Earlier drafts of the guidance included provisions requiring developers to share information on how they address systemic risks. Reports suggest that some of these provisions may have been weakened or removed in the final version.

A group of five lawmakers expressed concern over what they described as the last-minute removal of key transparency and risk mitigation elements. They argue that strong guidelines are essential for fostering accountability in the deployment of advanced AI models.

The incident also brings renewed attention to the Digital Services Act and its enforcement, as X, the social media platform where Grok operates, is currently under EU investigation for potential violations related to content moderation.

General-purpose AI systems, such as OpenAI’s GPT, Google’s Gemini and xAI’s Grok, will be subject to additional requirements under the EU AI Act beginning 2 August. Obligations include disclosing training data sources, addressing copyright compliance, and mitigating systemic risks.

While these requirements are mandatory, their implementation is expected to be shaped by the Commission’s voluntary code of practice. Industry groups and international stakeholders have voiced concerns over regulatory burdens, while policymakers maintain that safeguards are critical for public trust.

The debate over Grok’s outputs reflects broader challenges in balancing AI innovation with the need for oversight. The EU’s approach, combining binding legislation with voluntary guidance, seeks to offer a measured path forward amid growing public scrutiny of generative AI technologies.

UN reports surge in intangible investment driven by AI and data

Global investment is increasingly flowing into intangible assets such as software, data, and AI, marking what the UN has described as a ‘fundamental shift’ in how economies develop and compete.

According to a new report from the World Intellectual Property Organisation (WIPO), co-authored with the Luiss Business School based in Italy, investment in intellectual property-related assets grew three times faster in 2024 than spending on physical assets like buildings and machinery.

WIPO reported that total intangible investment reached $7.6 trillion across 27 high- and middle-income economies last year, up from $7.4 trillion in 2023, a real-term growth rate of 3 percent. In contrast, growth in physical asset investment has been more sluggish, hindered by high interest rates and a slow economic recovery.

‘We’re witnessing a fundamental shift in how economies grow and compete,’ said WIPO Director General Daren Tang. ‘While businesses have slowed down investing in factories and equipment during uncertain times, they’re doubling down on intangible assets.’

The report highlights software and databases as the fastest-growing categories, expanding by more than 7 percent annually between 2013 and 2022. It attributes much of this trend to the accelerating adoption of AI, which requires significant investment in data infrastructure and training datasets.

WIPO also noted that the United States remains the global leader in absolute intangible investment, spending nearly twice as much as France, Germany, Japan, and the United Kingdom. However, Sweden topped the list for investment intensity, with intangible assets representing 16 percent of its GDP.

The US, France, and Finland followed at 15 percent each, while India ranked ahead of several EU countries and Japan at an intensity of nearly 10 percent.

Despite economic disruptions over the past decade and a half, intangible investments have remained resilient, growing at a compound annual rate of 4 percent since 2008. By contrast, investment in tangible assets rose just 1 percent over the same period.
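The growth comparisons above can be sanity-checked with the standard compound-growth formula. The dollar totals are the report’s figures; the function itself is a generic illustration, not anything published by WIPO.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: (end / start)^(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

# Totals reported by WIPO (trillions of USD).
total_2023, total_2024 = 7.4, 7.6

# Year-over-year nominal growth works out to about 2.7%; the report's
# 3% figure is expressed in real (inflation-adjusted) terms.
nominal_yoy = cagr(total_2023, total_2024, 1)

# At the reported 4% compound rate, intangible investment doubles
# roughly every 18 years (rule of 72: 72 / 4); at the 1% tangible
# rate, doubling would take about 72 years.
intangible_doubling = 72 / 4
```

The gap between 4 percent and 1 percent compound growth is what makes the divergence since 2008 so stark: small annual differences accumulate into very different doubling times.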

‘We are only at the beginning of the AI boom,’ said Sacha Wunsch-Vincent, head of WIPO’s economics and data analytics department.

He noted that in addition to driving demand for physical infrastructure like chips and servers, AI is now contributing to sustained investment growth in data and software, cornerstones of the intangible economy.

WSIS+20 spotlights urgent need for global digital skills

The WSIS+20 High-Level Event in Geneva brought together global leaders to address the digital skills gap as one of the most urgent challenges of our time. As moderator Jacek Oko stated, digital technologies are rapidly reshaping work and learning worldwide, and equipping people with the necessary skills has become a matter of equity and economic resilience.

Dr Cosmas Zavazava of ITU emphasised that the real threat is not AI itself but people being displaced by others who know how to use it. ‘Workers risk losing their jobs, not because of AI, but because someone else knows how to use AI-based tools,’ he warned.

He underscored the importance of including informal workers like artisans and farmers in reskilling initiatives. He noted that 2.6 billion people remain offline while many of the 5.8 billion connected lack meaningful digital capabilities.

Costa Rica’s Vice Minister of Telecommunications, Hubert Vargas Picado, shared how the country transformed into a regional tech hub by combining widespread internet access with workforce development. ‘Connectivity alone is insufficient,’ he said, advocating for cross-sectoral training systems and targeted scholarships, especially for rural youth and women.

Similarly, Celeste Drake from the ILO pointed to gendered impacts of automation, revealing that administrative roles held mainly by women are most vulnerable. She insisted that upskilling must go hand-in-hand with policies promoting decent work, inclusive social dialogue, and regional equity.

The EU’s Michele Cervone d’Urso acknowledged the bloc’s shortfall in digital specialists and described Europe’s multipronged response, including digital academies and international talent partnerships.

Georgia’s Ekaterine Imedadze shared the success of embedding media literacy in public education and training local ambassadors to support digital inclusion in villages. Meanwhile, Anna Sophie Herken of GIZ warned of ‘massive talent waste’ in the Global South, where highly educated data workers are confined to low-value roles. Herken called for more equitable participation in the global digital economy and local AI innovation.

Private sector voices echoed the need for systemic change. EY’s Gillian Hinde stressed community co-creation and inclusive learning models, noting that only 22% of women pursue AI-related courses.

She outlined EY’s efforts to support neurodiverse learners and validate informal learning through digital badges. India’s Professor Himanshu Rai added a powerful sense of urgency, declaring, ‘AI is not the future. It’s already passing us by.’ He showcased India’s success in scaling low-cost digital access, training 60 million rural citizens, and adapting platforms to local languages and user needs.

His call for ‘compassionate’ policymaking underscored the moral imperative to act inclusively and decisively.

Speakers across sectors agreed that infrastructure without skills development risks widening the digital divide. Targeted interventions, continuous monitoring, and structural reform were repeatedly highlighted as essential.

The event’s parting thought, offered by Jacek Oko, summed up the transformative mindset required: ‘Let AI teach us about AI.’ The road ahead demands urgency, innovation, and collective action to ensure digital transformation uplifts all, especially the most vulnerable.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.