Meta has announced that third-party AI chatbots will again be allowed to operate through WhatsApp in Europe, reversing restrictions introduced earlier this year.
The decision follows pressure from the European Commission, which had warned it could impose interim competition measures.
Earlier in 2026, Meta limited access to rival chatbot services on the messaging platform, prompting regulators to examine whether the move unfairly restricted competition in the rapidly expanding AI market.
WhatsApp remains one of the most widely used messaging applications across European countries, making platform access critical for emerging AI services.
Under the new arrangement, companies will be able to distribute general-purpose AI chatbots via the WhatsApp Business API for 12 months.
The change is intended to give European regulators time to complete their investigation while allowing competing AI services to operate within the platform ecosystem.
Meta has also indicated that businesses offering chatbots through WhatsApp will be required to pay fees to access the system.
The European Commission is now assessing whether these adjustments sufficiently address competition concerns surrounding the integration of AI services inside major digital platforms.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Pressure is growing in New Zealand to strengthen the Privacy Act following several high-profile data breaches. Debate intensified after a cyberattack exposed medical records from the Manage My Health patient portal.
The breach affected about 120,000 patients and involved threats to release documents on the dark web. Another incident forced the MediMap medication platform offline after unauthorised changes were detected in patient records.
Privacy specialists argue that current enforcement powers are too weak to deter serious failures. The Privacy Act allows only limited financial penalties, with fines generally capped at NZD10,000.
Officials are now considering reforms, including stronger penalties for privacy violations. Policymakers also warn that failure to strengthen the law could threaten the country’s EU data adequacy status.
The European Commission has convened a new expert panel tasked with examining how children can be better protected across digital platforms, including social media, gaming environments and AI tools.
The initiative reflects growing concern across Europe regarding the psychological and safety risks associated with young users’ online behaviour.
Specialists from health, computer science, child rights and digital literacy will work alongside youth representatives to assess current research and policy responses.
Discussions during the first meeting centred on platform responsibility, including age-appropriate safety-by-design features, algorithmic amplification and addictive product design.
The initiative also addresses digital literacy for children, parents and educators, while considering how regulatory measures can reduce risks without undermining the benefits of online participation.
The panel’s work complements the enforcement of the Digital Services Act and related European policies designed to strengthen protections for minors online.
Among the tools under development is an EU age-verification application currently being tested in several member states, intended to support privacy-preserving checks compatible with the future EU digital identity framework.
The panel is expected to deliver policy recommendations to the Commission by summer 2026.
The European Union’s data protection watchdog has urged stronger safeguards as negotiations continue with the US over access to biometric databases. European Data Protection Supervisor Wojciech Wiewiórowski said limits must ensure Europeans’ data is used only for agreed purposes.
Talks between the EU and the US involve potential arrangements that would allow US authorities to query national biometric systems. Databases across the EU contain sensitive information, including fingerprints and facial recognition data.
Past transatlantic data-sharing agreements have faced legal challenges over insufficient safeguards. European regulators are closely monitoring the Data Privacy Framework amid ongoing concerns about oversight.
Officials also warned that emerging AI technologies could create new surveillance risks linked to US data access. European authorities said they must negotiate as a unified bloc when dealing with the US.
The European Commission has registered a European Citizens’ Initiative proposing the creation of a public social media platform operating at the European level, rather than relying exclusively on private technology companies.
The initiative, titled the European Public Social Network, calls for legislation establishing a publicly funded digital platform designed to serve societal interests.
Organisers argue that a publicly owned network could function independently from commercial incentives and political pressure while guaranteeing equal rights for users across the EU. The proposed platform would operate as a public service overseen by society rather than private corporations.
Registration confirms that the proposal meets the legal requirements of the European Citizens’ Initiative framework. The Commission has not yet assessed the substance of the idea, and registration does not imply support for the proposal.
Supporters must now gather 1 million signatures from citizens across at least seven EU member states within 12 months. If the threshold is reached, the Commission will be required to formally examine the initiative and decide whether legislative action is appropriate.
European regulators are examining whether Roblox should fall under the Digital Services Act’s most stringent obligations, rather than remaining outside the bloc’s most demanding platform rules.
The European Commission began analysing the gaming platform’s reported user figures after the company disclosed roughly 48 million monthly users across the EU.
Figures above the DSA’s threshold of 45 million monthly active users in the EU could qualify Roblox as a Very Large Online Platform. Such a designation would mark the first time a gaming platform enters the category alongside social media services already subject to heightened oversight.
Platforms receiving the label must conduct regular risk assessments, submit mitigation reports and demonstrate stronger safeguards for minors.
Regulatory pressure has already begun at the national level. The Dutch Authority for Consumers and Markets launched an investigation in January after concerns that children could encounter violent or sexually explicit content within Roblox games or interact with harmful actors through online features.
Designation at the EU level would transfer supervisory authority to the European Commission, enabling wider investigations and potential fines if violations occur. Officials are still verifying user data before making a formal decision, and no deadline has been announced for the process.
The European Commission is preparing more stringent requirements for ageing data centres rather than allowing legacy infrastructure to operate under looser rules.
A draft strategy tied to the EU’s tech sovereignty package signals that older sites will face higher efficiency expectations and stricter sustainability checks as part of an effort to modernise the digital backbone of the EU.
The proposal outlines minimum performance standards for new data centres by 2030, aiming to align the entire sector with the bloc’s climate and resilience goals. Officials want to reduce energy waste and improve monitoring across facilities that have long operated without uniform benchmarks.
The draft points to an expanded role for the Cloud and AI Development Act, which is expected to frame future obligations for cloud providers instead of relying on fragmented national measures.
Brussels sees consistent rules as essential for supporting secure cloud services, AI infrastructure and cross-border digital operations.
The strategy underscores that modernisation is central to the EU’s vision of tech sovereignty. Older centres would need upgrades to maintain compliance, ensuring that Europe’s digital infrastructure remains competitive, efficient and less dependent on external providers.
Europe is building a federated cloud and AI infrastructure intended to reduce reliance on US and Chinese technology providers and address an ongoing strategic vulnerability.
The project, known as EURO-3C, was announced in Barcelona by Telefónica and is backed by the European Commission. More than seventy organisations across telecommunications, technology and emerging companies have joined the effort.
Architects of the scheme argue that linking national infrastructures into a shared network of nodes offers a realistic path forward, particularly as Europe cannot easily create a hyperscale cloud provider from scratch.
The initiative follows a series of US cloud outages that exposed the risks of excessive dependence on external infrastructure and raised questions about sovereignty, resilience and long-term competitiveness.
Commission officials described the programme as a way to build a secure cross-border digital ecosystem that supports industries such as automotive, e-health, public administration and sovereign government cloud.
Telefónica stressed that agentic AI, capable of taking autonomous actions, will play a central role in enabling Europe to develop technology rather than import it.
The partners view the project as a foundation for a unified and independent digital environment that strengthens industrial supply chains and prepares European sectors for the next phase of cloud and AI adoption.
They present the initiative as a significant step toward reducing strategic exposure while stimulating domestic innovation.
The civil liberties committee failed to secure majority backing for its amended report on extending the EU’s temporary chat-scanning rules, leaving Parliament without a clear negotiating position.
Members of Parliament reviewed the amendments on Monday, but the final text did not garner sufficient support, leaving the proposal without endorsement as the adoption deadline approaches.
The report concerned a proposal to extend the current derogation that allows tech companies to voluntarily scan their services for Child Sexual Abuse Material (CSAM).
The existing regime expires in April 2026 and was intended only as a stopgap while a permanent Child Sexual Abuse Regulation was developed. Years of stalled negotiations have led to the temporary rules being extended twice since 2021.
The Council has already approved its position without changes to the Commission proposal, creating a tight timeline for Parliament.
With trilogue talks finally underway, institutions would need to conclude discussions unusually quickly to prevent the legal basis from expiring. If no agreement is reached by April, companies would lose their ability to scan services under the EU law.
The committee confirmed that the file will now move to plenary in the week of 9–12 March, where political groups may table new amendments. The outcome will determine whether the temporary regime remains in place while negotiations on the permanent system continue.
The European Commission has unveiled a new counterterrorism agenda under the ProtectEU initiative, outlining measures to strengthen the EU’s response to evolving security threats. Officials say the strategy aims to improve preparedness, reinforce cooperation and protect citizens and businesses from emerging forms of terrorism and violent extremism.
Authorities warn that technological change is reshaping the threat landscape. Terrorist groups increasingly exploit digital tools such as social media, AI and encrypted platforms for recruitment, propaganda and fundraising.
New risks also include the potential misuse of drones, crypto-assets and 3D-printed weapons, while radicalisation of minors online has become a growing concern across Europe.
The agenda proposes stronger capabilities for anticipating threats through expanded intelligence analysis and enhanced support for Europol, including greater use of open-source intelligence. Additional research funding will explore the security implications of emerging technologies, while new initiatives aim to strengthen early prevention efforts and community engagement to counter radicalisation, particularly among young people.
Online safety forms another key priority. The Commission plans to intensify cooperation with digital platforms to remove extremist content more quickly and to strengthen enforcement of the Digital Services Act. A new EU Online Crisis Response Framework is also proposed to improve coordination between authorities and technology companies during security incidents.
Measures targeting the physical environment will focus on protecting public spaces and critical infrastructure, including investments in security projects and stronger monitoring of individuals suspected of terrorism.
The strategy also seeks to improve the tracking of terrorist financing, including through cryptocurrencies, and to expand cooperation with international partners, such as countries in the Western Balkans and the Mediterranean region.