EU member states clash over the future of encrypted private messaging

The ongoing controversy around the EU’s proposed mandatory scanning of private messages has escalated, with the European Parliament intensifying pressure on the Council to reach a formal agreement.

A leaked memo reveals that the Parliament is threatening to block the extension of the current voluntary scanning rules unless mandatory chat control is agreed.

Denmark, leading the EU Council Presidency, has pushed a more stringent version of the so-called Chat Control law that could become binding as soon as 14 October 2025.

While the Parliament argues the law is essential for protecting children online, many legal experts and rights groups warn the proposal still violates fundamental human rights, particularly the right to privacy and secure communication.

The Council’s Legal Service has repeatedly noted that the draft infringes on these rights since it mandates scanning all private communications, undermining end-to-end encryption that most messaging apps rely on.

Some governments, including Germany and Belgium, remain hesitant or opposed, citing these serious concerns.

Supporters like Italy, Spain, and Hungary have openly backed Denmark’s proposal, signalling a shift in political will towards stricter measures. France’s position has also become more favourable, though internal debate continues.

Opponents warn that weakening encryption could open the door to cyber attacks and foreign interference, while proponents emphasise the urgent need to prevent abuse and close loopholes in existing law.

The next Council meeting in September will be critical in shaping the final form of the regulation.

The dispute highlights the persistent tension between digital privacy and security, reflecting broader European challenges in regulating encrypted communications.

As the October deadline approaches, the EU faces a defining moment in balancing child protection with protecting the confidentiality of citizens’ communications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU proposal to scan private messages gains support

The European Union’s ‘Chat Control’ proposal is gaining traction, with 19 member states now supporting a plan to scan all private messages on encrypted apps. If adopted, apps such as WhatsApp, Signal, and Telegram would be required from October to scan all messages, photos, and videos on users’ devices before encryption is applied.

France, Denmark, Belgium, Hungary, Sweden, Italy, and Spain back the measure, while Germany has yet to decide. The proposal could pass by mid-October under the EU’s qualified majority voting system if Germany joins.

The initiative aims to prevent child sexual abuse material (CSAM) but has sparked concerns over mass surveillance and the erosion of digital privacy.

In addition to scanning, the proposal would introduce mandatory age verification, which could remove anonymity on messaging platforms. Critics argue the plan amounts to real-time surveillance of private conversations and threatens fundamental freedoms.

Telegram founder Pavel Durov recently warned that censorship and regulatory pressure in France risk societal collapse. He disclosed attempts by French officials to censor political content on his platform, requests he refused to comply with.


US urges Asia-Pacific to embrace open AI innovation over strict regulation

A senior White House official has urged Asia-Pacific economies to support an AI future built on US technology, warning against adopting Europe’s heavily regulated model. Michael Kratsios made the remarks during the APEC Digital and AI Ministerial Meeting in Incheon.

Kratsios said countries now face a choice between embracing American-led innovation and falling behind under regulatory burdens. He framed the US approach as one driven by freedom and open-source innovation rather than centralised control.

The US is offering partnerships with South Korea that respect data concerns while enabling shared progress. Kratsios noted that open-weight models could soon shape industry standards worldwide.

He met South Korea’s science minister in bilateral talks to discuss AI cooperation. The US reaffirmed its commitment to supporting nations in building trustworthy AI systems based on mutual economic benefit.


Creative industries raise concerns over the EU AI Act

Organisations representing creative sectors have issued a joint statement expressing concerns over the current implementation of the EU AI Act, particularly its provisions for general-purpose AI systems.

The response focuses on recent documents, including the General Purpose AI Code of Practice, accompanying guidelines, and the template for training data disclosure under Article 53.

The signatories, drawn from music and broader creative industries, said they had engaged extensively throughout the consultation process. They now argue that the outcomes do not fully reflect the issues raised during those discussions.

According to the statement, the result does not provide the level of intellectual property protection that some had expected from the regulation.

The group has called on the European Commission to reconsider the implementation package and is encouraging the European Parliament and member states to review the process.

The original EU AI Act was widely acknowledged as a landmark regulation, with technology firms and creative industries closely watching its rollout across member countries.

Elsewhere, Google confirmed that it will sign the General Purpose AI Code of Practice. The company said the latest version supports Europe’s broader innovation goals more effectively than earlier drafts, but it also noted ongoing concerns.

These include the potential impact of specific requirements on competitiveness and handling trade secrets.


DW Weekly #223 – AI race heats up: The US AI Action Plan, China’s push for a global AI cooperation organisation, and the EU’s regulatory response


25 July – 1 August 2025



Dear readers,

Over the past week, the White House has launched a sweeping AI initiative through its new publication Winning the Race: America’s AI Action Plan, an ambitious strategy to dominate global AI leadership by promoting open-source technology and streamlining regulatory frameworks. America’s ‘open-source gambit’, analysed in detail by Dr Jovan Kurbalija in Diplo’s blog, signals a significant shift in digital policy, intending to democratise AI innovation to outpace competitors, particularly China.

Supporting this bold direction, major tech giants have endorsed President Trump’s AI deregulation plans, despite widespread public concerns regarding potential societal impacts. Trump’s policies notably include an explicit push for ‘anti-woke’ AI frameworks within US government contracts, raising contentious debates about the ideological neutrality and ethical implications of AI systems in governance.

In parallel, China has responded with its own global AI governance plan, proposing the establishment of an international AI cooperation organisation to enhance worldwide coordination and standard-setting. Thus, it is not hard to conclude that there is an escalating AI governance competition between the two technological superpowers, each advocating distinctly different visions for the future of global AI development.

On the multilateral stage, the UN’s Economic and Social Council (ECOSOC) adopted a resolution: ‘Assessment of the progress made in the implementation of and follow-up to the outcomes of the World Summit on the Information Society’, through the Commission on Science and Technology for Development (CSTD), reaffirming commitments to implement the outcomes of the World Summit on the Information Society (WSIS).

Corporate strategies have also reflected these geopolitical undercurrents. Samsung Electronics has announced a landmark $16.5 billion chip manufacturing deal with Tesla, generating optimism about Samsung’s capability to revive its semiconductor foundry business. Yet, execution risks remain substantial, prompting Samsung’s Chairman Jay Y. Lee to promptly travel to Washington to solidify bilateral trade relations and secure the company’s position amid potential trade tensions.

Similarly, Nvidia has placed a strategic order for 300,000 chipsets from Taiwanese giant TSMC, driven by robust Chinese demand and shifting US trade policies.

Meanwhile, the EU has intensified regulatory scrutiny, accusing e-commerce platform Temu of failing mandatory Digital Services Act (DSA) checks, citing serious risks related to counterfeit and unsafe goods.

In the USA, similar scrutiny arose as Senator Maggie Hassan urged Elon Musk to take decisive action against Southeast Asian criminal groups using Starlink services to defraud American citizens.

Finally, the EU’s landmark AI Act commenced its implementation phase this week, despite considerable pushback from tech firms concerned about regulatory compliance burdens.

Diplo Blog – The open-source gambit: How America plans to outpace AI rivals by democratising tech

On 23 July, the US unveiled an AI Action Plan featuring 103 recommendations focused on winning the AI race against China. Key themes include promoting open-source AI to establish global standards, reducing regulations to support tech firms, and emphasising national security. The plan addresses labour displacement, AI biases, and cybersecurity threats, advocating for reskilling workers and maintaining tech leadership through private sector flexibility. Additionally, it aims to align US allies within an AI framework while expressing scepticism toward multilateral regulations. Overall, the plan positions open-source AI as a strategic asset amid geopolitical competition. Read the full blog!

For the main updates, reflections and events, consult the RADAR, the READING CORNER and the UPCOMING EVENTS section below.

Join us as we connect the dots, from daily updates to main weekly developments, to bring you a clear, engaging monthly snapshot of worldwide digital trends.

DW Team


RADAR

Highlights from the week of 25 July – 1 August 2025


Worries rise as many free VPNs exploit users or carry hidden malware.


From December, YouTube must block accounts for Australians under 16 or face massive fines.


Belarusian and Ukrainian hackers claim responsibility for strategic cyber sabotage of Aeroflot.


A NATO policy brief warns that civilian ports across Europe face increasing cyber threats from state-linked actors and calls for updated maritime strategies to strengthen cybersecurity and civil–military coordination.


AGCM says Meta may have harmed competition by embedding AI features into WhatsApp.


The EU AI Code could add €1.4 trillion to Europe’s economy, Google says.


Tether and Circle dominate the fiat-backed stablecoin market, now valued at over $227 billion combined.


Brussels updates Microsoft terms to curb risky data transfers


AI use in schools is weakening the connection between students and teachers by permitting students to bypass genuine effort through shortcuts.


Use of AI surveillance, including monitoring software, intensifies burnout, micromanagement feelings, and disengagement.


A majority of Fortune 500 companies now mention AI in their annual reports as a risk factor instead of citing its benefits.


Crypto platforms lost more than $3.1 billion in the first half of 2025, with AI-powered hacks and phishing scams leading the surge.


AI jobs now span marketing, finance, and HR—not just tech.


Google and Microsoft lead investment in advanced AI and quantum infrastructure.


READING CORNER

On 23 July, the US unveiled an AI Action Plan featuring 103 recommendations focused on winning the AI race against China. Key themes include promoting open-source AI to establish global standards, reducing regulations to support tech firms, and emphasising national security.


Tracking technologies shape our online experience in often invisible ways, yet profoundly impactful, raising important questions about transparency, control, and accountability in the digital age.

EU AI Act oversight and fines begin this August

A new phase of the EU AI Act takes effect on 2 August, requiring member states to appoint oversight authorities and enforce penalties. While the legislation has been in force for a year, this marks the beginning of real scrutiny for AI providers across Europe.

Under the new provisions, countries must notify the European Commission of which market surveillance authorities will monitor compliance. But many are expected to miss the deadline. Experts warn that without well-resourced and competent regulators, the risks to rights and safety could grow.

The complexity is significant. Member states must align enforcement with other regulations, such as the GDPR and Digital Services Act, raising concerns regarding legal fragmentation and inconsistent application. Some fear a repeat of the patchy enforcement seen under data protection laws.

Companies that violate the EU AI Act could face fines of up to €35 million or 7% of global turnover. Smaller firms may face reduced penalties, but enforcement will vary by country.
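As an illustration of how the penalty ceiling works, the AI Act applies whichever amount is higher for the most serious violations, so the 7% figure dominates for large firms. A minimal sketch, using an illustrative turnover figure:

```python
def ai_act_fine_cap(global_turnover_eur: float) -> float:
    """Ceiling for the most serious EU AI Act violations:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# For a firm with EUR 1 billion in annual turnover, 7% (EUR 70 million)
# exceeds the flat EUR 35 million cap.
print(ai_act_fine_cap(1_000_000_000))  # → 70000000.0
```

For smaller firms the flat EUR 35 million figure is the binding ceiling, which is one reason the Act provides for reduced penalties below that level.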

Rules regarding general-purpose AI models such as ChatGPT, Gemini, and Grok also take effect. A voluntary Code of Practice introduced in July aims to guide compliance, but only some firms, such as Google and OpenAI, have agreed to sign. Meta has refused, arguing the rules stifle innovation.

Existing AI tools have until 2027 to comply fully, but any launched after 2 August must meet the new requirements immediately. With implementation now underway, the AI Act is shifting from legislation to enforcement.


EU to launch digital age verification system by 2026

The European Union will roll out digital age verification across all member states by 2026. The mandate, introduced under the Digital Services Act, requires platforms to verify user age using the new EU Digital Identity Wallet (EUDIW). Non-compliance could lead to fines of up to €18 million or 10% of global turnover.

Initially, five countries will pilot the system designed to protect minors and promote online safety. The EUDIW uses privacy-preserving cryptographic proofs, allowing users to prove they are over 18 without uploading personal IDs.
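The idea behind such wallets is to prove a predicate, not an identity. The toy below is not the EUDIW protocol (which relies on asymmetric signatures and zero-knowledge-style techniques); it only illustrates selective disclosure, with a hypothetical HMAC key standing in for the issuer’s signing infrastructure:

```python
import hmac, hashlib, json

# Toy sketch: the issuer attests only to the predicate "over_18",
# so the verifier never sees a birthdate or an identity document.
ISSUER_KEY = b"demo-issuer-key"  # hypothetical shared secret for this sketch

def issue_age_credential(over_18: bool) -> dict:
    """Issuer signs the minimal claim rather than the full identity record."""
    claim = json.dumps({"over_18": over_18}, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "tag": tag}

def verify_age_credential(cred: dict) -> bool:
    """Verifier checks the issuer's tag and the predicate, nothing more."""
    expected = hmac.new(ISSUER_KEY, cred["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, cred["tag"])
            and json.loads(cred["claim"])["over_18"])

cred = issue_age_credential(True)
print(verify_age_credential(cred))  # → True
```

A tampered claim or forged tag fails verification, while a valid credential reveals only the yes/no answer, which is the data-minimisation property the EU model is built around.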

Unlike the UK’s ID-upload approach, which triggered a rise in VPN usage, the EU model prioritises user anonymity and data minimisation. The system is being developed by Scytales and T-Systems.

Despite its benefits, privacy advocates have flagged concerns: although the checks are anonymised, telecom providers could potentially analyse network-level signals to infer user behaviour.

Beyond age checks, the EUDIW will store and verify other credentials, including diplomas, licences, and health records. That initiative aims to create a trusted, cross-border digital identity ecosystem across Europe.

As a result, platforms and marketers must adapt. Behavioural tracking and personalised ads may become harder to implement. Smaller businesses might struggle with technical integration and rising compliance costs.

However, centralised control also raises risks. These include potential phishing attacks, service disruptions, and increased government visibility over online activity.

If successful, the EU’s digital identity model could inspire global adoption. It offers a privacy-first alternative to commercial or surveillance-heavy systems and marks a major leap forward in digital trust and safety.


Italy investigates Meta over AI integration in WhatsApp

Italy’s antitrust watchdog has opened an investigation into Meta Platforms over allegations that the company may have abused its dominant position by integrating its AI assistant directly into WhatsApp.

The Rome-based authority, formally known as the Autorità Garante della Concorrenza e del Mercato (AGCM), announced the probe on Wednesday, stating that Meta may have breached European Union competition regulations.

The regulator claims that the introduction of the Meta AI assistant into WhatsApp was carried out without obtaining prior user consent, potentially distorting market competition.

Meta AI, the company’s virtual assistant designed to provide chatbot-style responses and other generative AI functions, has been embedded in WhatsApp since March 2025. It is accessible through the app’s search bar and is intended to offer users conversational AI services directly within the messaging interface.

The AGCM is concerned that this integration may unfairly favour Meta’s AI services by leveraging the company’s dominant position in the messaging market. It warned that such a move could steer users toward Meta’s products, limit consumer choice, and disadvantage competing AI providers.

‘By pairing Meta AI with WhatsApp, Meta appears to be able to steer its user base into the new market not through merit-based competition, but by ‘forcing’ users to accept the availability of two distinct services,’ the authority said.

It argued that this strategy may undermine rival offerings and entrench Meta’s position across adjacent digital services. In a statement, Meta confirmed it is cooperating fully with the Italian authorities.

The company defended the rollout of its AI features, stating that their inclusion in WhatsApp aimed to improve the user experience. ‘Offering free access to our AI features in WhatsApp gives millions of Italians the choice to use AI in a place they already know, trust and understand,’ a Meta spokesperson said via email.

The company maintains that its approach benefits users by making advanced technology widely available through familiar platforms. The AGCM clarified that its inquiry is being conducted in close cooperation with the European Commission’s relevant offices.

The cross-border collaboration reflects the growing scrutiny Meta faces from regulators across the EU over its market practices and the use of its extensive user base to promote new services.

If the authority finds Meta in breach of EU competition law, the company could face a fine of up to 10 percent of its global annual turnover. Under Article 102 of the Treaty on the Functioning of the European Union, abusing a dominant market position is prohibited, particularly if it affects trade between member states or restricts competition.

To gather evidence, AGCM officials inspected the premises of Meta’s Italian subsidiary, accompanied by the Guardia di Finanza’s special antitrust unit, Italy’s financial police.

The inspections were part of preliminary investigative steps to assess the impact of Meta AI’s deployment within WhatsApp. Regulators fear that embedding AI assistants into dominant platforms could lead to unfair advantages in emerging AI markets.

By relying on its established user base and platform integration, Meta may effectively foreclose competition by making alternative AI services harder to access or less visible to consumers. Such a case would not be the first time Meta has faced regulatory scrutiny in Europe.

The company has been the subject of multiple investigations across the EU concerning data protection, content moderation, advertising practices, and market dominance. The current probe adds to a growing list of regulatory pressures facing the tech giant as it expands its AI capabilities.

The AGCM’s investigation comes amid broader EU efforts to ensure fair competition in digital markets. With the Digital Markets Act and the AI Act now in force, regulators are becoming more proactive in addressing potential risks associated with integrating advanced technologies into consumer platforms.

As the investigation continues, Meta’s use of AI within WhatsApp will remain under close watch. The outcome could set an important precedent for how dominant tech firms release AI products within widely used communication tools.


Google backs EU AI Code but warns against slowing innovation

Google has confirmed it will sign the European Union’s General Purpose AI Code of Practice, joining other companies, including major US model developers.

The tech giant hopes the Code will support access to safe and advanced AI tools across Europe, where rapid adoption could add up to €1.4 trillion annually to the continent’s economy by 2034.

Kent Walker, Google and Alphabet’s President of Global Affairs, said the final Code better aligns with Europe’s economic ambitions than earlier drafts, noting that Google had submitted feedback during its development.

However, he warned that parts of the Code and the broader AI Act might hinder innovation by introducing rules that stray from EU copyright law, slow product approvals or risk revealing trade secrets.

Walker explained that such requirements could restrict Europe’s ability to compete globally in AI. He highlighted the need to balance regulation with the flexibility required to keep pace with technological advances.

Google stated it will work closely with the EU’s new AI Office to help shape a proportionate, future-facing approach.


EU AI Act begins as tech firms push back

Europe’s AI crackdown has officially begun, as the EU enforces its first rules targeting developers of generative AI models like ChatGPT.

Under the AI Act, firms must now assess systemic risks, conduct adversarial testing, ensure cybersecurity, report serious incidents, and even disclose energy usage. The goal is to prevent harms related to bias, misinformation, manipulation, and lack of transparency in AI systems.

Although the legislation was passed last year, the EU only released developer guidance on 10 July, leaving tech giants with little time to adapt.

Meta, which developed the Llama AI model, has refused to sign the voluntary code of practice, arguing that it introduces legal uncertainty. Other developers have expressed concerns over how vague and generic the guidance remains, especially around copyright and practical compliance.

The EU approach also stands in contrast to the US, where the Trump administration has launched a far looser AI Action Plan. While Washington supports minimal restrictions to encourage innovation, Brussels is focused on safety and transparency.

Trade tensions may grow, but experts warn that developers should not count on future political deals and should instead take immediate steps toward compliance.

The AI Act’s rollout will continue into 2026, with the next phase focusing on high-risk AI systems in healthcare, law enforcement, and critical infrastructure.

Meanwhile, questions remain over whether AI-generated content qualifies for copyright protection and how companies should handle AI in marketing or supply chains. For now, Europe’s push for safer AI is accelerating—whether Big Tech likes it or not.
