EU watchdogs launch GDPR transparency sweep

The European Data Protection Board has launched a Europe-wide enforcement initiative to examine transparency and information obligations under the GDPR. The programme forms part of its Coordinated Enforcement Framework for 2026.

Twenty-five national data protection authorities will assess how organisations inform people about the processing of their personal data. Reviews will involve formal investigations and fact-finding exercises across multiple sectors.

Authorities plan to exchange findings later in the year to build a shared picture of compliance trends. A consolidated report will guide follow-up measures at both the national and EU levels.

The framework supports closer regulatory cooperation and consistent GDPR enforcement. Previous coordinated actions examined cloud services, data protection officers, access rights and the right to erasure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Malaysia launches AI platform Rakan Tani to support farmers and stabilise incomes

The National AI Office (NAIO), through its NAIO Lab, is advancing Malaysia’s AI-driven development by building an ecosystem that supports innovation, collaboration, and startups. NAIO Lab aims to position the country as a hub for AI innovation where developers can experiment and create practical solutions.

Rakan Tani, the first project under NAIO Lab, is an AI-powered digital platform designed to transform the agricultural sector. It connects farmers with buyers early in the crop cycle and uses AI-driven order matching to help secure competitive prices and improve financial predictability.

The platform integrates multiple AI-driven features, including pre-harvest commerce, subsidy access via national ID systems, agriculture financing using pre-harvest orders as collateral, real-time cash payouts through digital banking, and logistics coordination with distributors and providers. It is delivered via WhatsApp and supports both Malay and English, with a pilot planned in Terengganu in May 2025.
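
The order-matching step at the heart of the platform can be illustrated with a toy sketch. Everything below is an illustrative assumption, not Rakan Tani's actual logic (which the announcement does not detail): the `Listing`/`Order` data model, the greedy matching rule, and the midpoint pricing are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Listing:
    farmer: str
    crop: str
    volume_kg: int    # expected harvest volume still unsold
    min_price: float  # lowest acceptable price per kg

@dataclass
class Order:
    buyer: str
    crop: str
    volume_kg: int
    max_price: float  # highest price the buyer will pay per kg

def match_orders(listings, orders):
    """Greedy pre-harvest matching: pair each buyer order with the cheapest
    compatible farmer listing (same crop, overlapping price range, enough
    volume), settling at the midpoint of the price overlap."""
    matches = []
    available = sorted(listings, key=lambda l: l.min_price)
    for order in orders:
        for listing in available:
            if (listing.crop == order.crop
                    and listing.min_price <= order.max_price
                    and listing.volume_kg >= order.volume_kg):
                price = round((listing.min_price + order.max_price) / 2, 2)
                matches.append((order.buyer, listing.farmer, order.crop, price))
                listing.volume_kg -= order.volume_kg
                break
    return matches
```

Under this toy rule, a rice buyer willing to pay up to 2.60 per kg is matched with the cheapest rice listing whose reserve price is at or below that, and both sides get a price inside their acceptable range before harvest.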

NAIO Lab also provides AI startups with resources, mentorship, and funding, enabling collaboration between experts, researchers, and entrepreneurs. The initiative is supported by partnerships across government, academia, and industry, including the Ministry of Digital, Ministry of Agriculture and Food Security, GAIV, UPM, and Segi Fresh, with the goal of accelerating AI adoption and supporting sustainable economic growth.

Quantum readiness gains momentum according to OECD report

The OECD (Organisation for Economic Co-operation and Development) highlights how businesses are preparing for quantum computing, recognising it as a transformative technology rather than relying solely on conventional computing methods.

Quantum readiness is framed as a long-term capability-building effort in which firms gradually develop skills, infrastructure, and partnerships to explore commercial applications while navigating uncertainty.

Drawing on research, surveys, and interviews with public and private organisations across 10 countries, the OECD identifies both the practical steps companies take to build readiness and the barriers that slow adoption.

Early efforts focus on low-cost awareness and exploration, including attending workshops, training sessions, and industry events, allowing firms to familiarise themselves with emerging opportunities instead of waiting for fully mature systems.

Despite growing interest, companies face significant challenges. Technological immaturity complicates pilots and feasibility studies, while many firms lack a clear understanding of potential business applications.

Access to quantum resources, research and development, and staff training is expensive, particularly for small- and medium-sized enterprises. Furthermore, there is a shortage of talent with both quantum computing expertise and domain-specific knowledge.

As a result, readiness tends to be concentrated among large, R&D-intensive firms, while smaller companies often recognise quantum computing’s potential but delay action.

Such uneven adoption risks creating a divide in the digital economy, with early adopters pulling ahead while other firms fall behind rather than engaging proactively.

To address these challenges, the OECD notes that public and private support mechanisms are critical. Networking and collaboration platforms connect firms with researchers, technology providers, and industry peers, fostering knowledge exchange and collective experimentation.

Business advisory and technology extension services help companies assess capabilities, test solutions, and access specialised facilities.

Grants for research and development lower the costs of experimentation and encourage collaboration, while stakeholder consultations ensure that support measures remain aligned with business needs.

Many companies are also establishing internal quantum labs and innovation hubs to trial applications and build expertise in a controlled environment, combining research with practical exploration instead of relying solely on external guidance.

Looking ahead, the OECD recommends expanding education and skills pipelines, strengthening industry-academic partnerships, and designing policies that support broader participation in quantum adoption.

Hybrid approaches that integrate quantum computing with AI and high-performance computing may offer practical commercial entry points for early applications.
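
As a rough illustration of such a hybrid workflow, the sketch below runs a classical optimisation loop around a cost function that stands in for a quantum measurement. The mock energy function and the finite-difference gradient are assumptions made for the example, not a real quantum integration; in practice the inner call would go to quantum hardware or a simulator while the outer loop stays classical.

```python
import math

def mock_quantum_energy(theta):
    """Stand-in for the expectation value measured on a parameterised
    quantum circuit; a real workflow would call hardware or a simulator
    here while the surrounding optimisation loop remains classical."""
    return math.cos(theta[0]) + 0.5 * math.cos(theta[1])

def hybrid_minimise(energy_fn, theta, lr=0.2, steps=200, eps=1e-4):
    """Classical finite-difference gradient descent wrapped around the
    quantum evaluation, as in a typical variational algorithm."""
    for _ in range(steps):
        base = energy_fn(theta)
        grad = []
        for i in range(len(theta)):
            shifted = list(theta)
            shifted[i] += eps
            grad.append((energy_fn(shifted) - base) / eps)
        theta = [t - lr * g for t, g in zip(theta, grad)]
    return theta, energy_fn(theta)

# The classical loop steers the (mocked) quantum cost toward its minimum.
theta, energy = hybrid_minimise(mock_quantum_energy, [0.3, 0.4])
```

The design point is the division of labour: the quantum device only evaluates a cost for given parameters, while all the search logic runs on conventional (or high-performance) computing, which is why such hybrid setups are seen as practical early entry points.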

Policymakers are encouraged to balance near-term exploratory pilots with forward-looking support for software development, interoperability, and workforce growth, enabling firms to move from experimentation to deployment effectively.

By following OECD guidance, companies can enhance innovation, improve competitiveness, and ensure that readiness efforts span sectors and geographies rather than remain limited to a few early adopters.

Mongolia anti-corruption authority adopts AI system for decision monitoring

The Independent Authority Against Corruption (IAAC) in Mongolia has started using the tuss.io platform to monitor orders, decisions, rules and regulations adopted by state organisations and officials.

The platform, developed by Tus Solution company, is used to check whether decisions, orders, rules or regulations meet legal requirements, create unnecessary procedural steps, or establish conflicts of interest or conflicts with the law.
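
As a simplified illustration of this kind of automated screening, a rule-based check might flag decision texts against known risk phrases. The flag names and trigger phrases below are invented for the example; the actual tuss.io checks are described as AI-powered and their internals are not public.

```python
# Illustrative only: the flag names, trigger phrases and the rule-based
# approach are hypothetical stand-ins for tuss.io's AI-driven analysis.
FLAG_RULES = {
    "possible conflict of interest": ["relative", "family member", "own company"],
    "unnecessary procedural step": ["additional approval", "extra permit", "repeat submission"],
}

def screen_decision(text):
    """Return every flag whose trigger phrases appear in the decision text."""
    lowered = text.lower()
    return [flag for flag, phrases in FLAG_RULES.items()
            if any(phrase in lowered for phrase in phrases)]
```

A decision requiring "additional approval from the director's own company" would trigger both flags, queueing the document for human review rather than deciding its legality automatically.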

According to IAAC, a total of 388 orders, decisions, rules and regulations have been monitored. Out of these, 152 have been revised, amended or invalidated over the past three years.

Why does it matter?

The initiative reflects broader efforts of Mongolia to strengthen transparency and accountability in public administration through digital tools. By integrating AI-powered analysis and compliance monitoring, platforms like tuss.io can more efficiently identify regulatory inconsistencies and support evidence-based decision-making, reducing opportunities for corruption and improving the overall quality of governance.

UK Digital Inclusion Action Plan delivers devices, funding and online access support

The UK Department for Science, Innovation and Technology said more than one million people have been helped online through its Digital Inclusion Action Plan. The update was published in a one-year progress report on the government strategy.

The department said over 22,000 devices were donated through government schemes and industry partnerships. It also confirmed £11.9 million in funding that supported more than 80 local digital inclusion programmes.

According to the report, the plan aims to improve access to devices, connectivity and digital skills. The government said all commitments in the strategy have either been delivered or remain on track.

The department added that partnerships with industry and charities helped expand access to broadband and mobile services, including more affordable connectivity. The programme also supported training and local initiatives to improve digital participation.

Secretary of State for Science, Innovation and Technology, Liz Kendall, said the programme is intended to expand access to online services, employment opportunities and communication tools. She added that the government plans to continue developing the initiative.

The department also confirmed it will take over the Essential Digital Skills Framework from Lloyds Banking Group and update it to reflect current needs, including online safety and the growing role of AI.

EU privacy bodies back cybersecurity overhaul

The European Data Protection Board and the European Data Protection Supervisor have backed proposals to strengthen the EU cybersecurity law while safeguarding personal data. Their joint opinion addresses reforms to the Cybersecurity Act and updates to the NIS2 Directive.

Regulators support plans to reinforce the mandate of the European Union Agency for Cybersecurity and expand cybersecurity certification across digital supply chains. Clearer coordination between ENISA and privacy authorities is seen as essential for consistent oversight.

Advice also calls for limits on the processing of personal data and for prior consultation on technical rules affecting privacy. Certification schemes should align with the GDPR and help organisations demonstrate compliance.

Additional recommendations include broader cybersecurity skills training and a single EU entry point for personal data breach notifications. Proposed changes would also classify digital identity wallet providers as essential entities under the EU security rules.

EDPB summarises conference on cross-regulatory cooperation in the EU

The European Data Protection Board has published a summary of its 17 March conference in Brussels on cross-regulatory interplay and cooperation in the EU from a data protection perspective. According to the EDPB, the event brought together representatives of the EU institutions, European Data Protection Authorities, academia, and industry.

Three panels structured the conference discussion. One focused on data protection and competition, another on the Digital Markets Act and the General Data Protection Regulation (GDPR), and a third on the Digital Services Act and the GDPR.

Discussion in the first panel centred on cooperation between regulatory bodies in data protection and competition, including lessons from the aftermath of the Bundeskartellamt ruling. The EDPB said speakers emphasised the need for regulators to align their approaches and recognise synergies between the two fields. Speakers also said data protection should be considered in competition analysis only when relevant and on a case-by-case basis. The EDPB added that it had recently agreed with the European Commission to develop joint guidelines on the interplay between competition law and data protection.

The second panel focused on joint guidelines on the Digital Markets Act and the GDPR, developed by the European Commission and the EDPB and recently opened to public consultation. According to the EDPB, speakers described the guidelines as an example of regulatory cooperation aimed at developing a coherent and compatible interpretation of the two frameworks while respecting regulatory competences. The Board said participants linked the guidelines to stronger consistency, legal clarity, and easier compliance. Some speakers also suggested changes to the final version, including points related to proportionality and the relationship between DMA obligations and the GDPR.

The final panel examined the interaction between the Digital Services Act and the GDPR. The EDPB said panellists referred to the protection of minors as one example, arguing that age verification should be effective while remaining fully in line with data protection legislation. Speakers also highlighted the need for coordination between the two frameworks, including cooperation involving the EU institutions such as the European Board for Digital Services, the European Commission, the EDPB, and national authorities. Emerging technologies such as AI were also mentioned in the discussion.

The event also featured keynote speeches from European Commission Executive Vice President Henna Virkkunen and European Parliament LIBE Committee Chair Javier Zarzalejos. According to the EDPB, Virkkunen said the Commission remained committed to cooperation between different frameworks and highlighted the need to support compliance through stronger coordination among regulators. Zarzalejos said close cross-regulatory cooperation was essential for consistency, effective enforcement, and trust, and pointed to the intersections among data protection law, competition law, the DMA, and the DSA.

EDPB Chair Anu Talus closed the conference by reiterating that the EDPB and European Data Protection Authorities are committed to supporting stakeholders in navigating what the Board described as a new cross-regulatory landscape. The EDPB said future work will include continued cooperation with the Commission on joint guidelines on the interplay between the AI Act and the GDPR, finalisation of the joint guidelines on the interplay between the DMA and the GDPR, and work on the recently announced Joint Guidelines on the interplay between data protection and competition law.

Digital Services Act disinformation signatories publish first 2026 reports

Signatories to the EU Code of Conduct on Disinformation have published new transparency reports describing the measures they say they are taking to reduce the spread of disinformation online. According to the European Commission, the reports are the first ones submitted since the Code was recognised as a code of conduct under the Digital Services Act.

The reports are available through the Code’s Transparency Centre and come from a broad group of signatories, including online platforms such as Google, Meta, Microsoft, and TikTok, as well as fact-checkers, research organisations, civil society bodies, and representatives of the advertising industry. The European Commission says the reporting round covers the period from 1 July to 31 December 2025 and marks the first full reporting cycle linked to the Digital Services Act.

Dedicated sections in the reports cover responses to ongoing crises, notably the conflict in Ukraine, as well as measures intended to safeguard the integrity of elections. Data on the implementation of disinformation-related measures is also included, alongside developments in signatories’ policies, tools, and partnerships under the Digital Services Act framework.

The reporting cycle carries greater significance because of the Code’s changed legal and regulatory position. The Commission says the Code was endorsed on 13 February 2025 by the Commission and the European Board for Digital Services, at the request of the signatories, as a code of conduct within the meaning of the Digital Services Act. From 1 July 2025, the Code became part of the co-regulatory framework under the Digital Services Act.

The Code now plays a more formal role than under its earlier voluntary setup. According to the Commission, signatories’ adherence to its commitments is subject to independent annual auditing, and the Code serves as a relevant benchmark for determining compliance with Article 35 of the Digital Services Act. The Commission also says the Code has become a ‘significant and meaningful benchmark of DSA compliance’ for providers of very large online platforms and very large online search engines that adhere to its commitments.

Reporting obligations differ depending on the type of signatory. Under the Code, providers of very large online platforms and very large online search engines commit to reporting, every six months, on the actions taken by their subscribed services. The Commission lists Google Search, YouTube, Google Ads, Facebook, Instagram, Messenger, WhatsApp, Bing, LinkedIn, and TikTok among the covered services, while other non-platform signatories report once per year under the Digital Services Act structure.

Broader policy relevance lies in the EU’s attempt to connect platform self-reporting to a more formal oversight structure. By placing the disinformation Code inside the Digital Services Act framework, the Commission and the Board are using voluntary commitments, transparency reporting, and auditing as part of a co-regulatory approach to systemic online risks. The reports themselves do not prove compliance, but they now carry greater weight within the wider Digital Services Act architecture for platform governance.

One further point is that the Commission notice focuses on publication of the reports rather than evaluating their quality or effectiveness. The notice says the reports describe measures, data, and policy developments, but it does not assess whether the actions taken by signatories were sufficient. Such a distinction matters in politically sensitive areas such as election integrity and crisis-related disinformation, especially where transparency under the Digital Services Act may shape future scrutiny.

Taken together, the first reporting round shows how the EU is using the Digital Services Act not only to impose direct legal obligations on large platforms and search engines, but also to anchor voluntary commitments within a more structured regulatory environment. Continued reporting, auditing, and review will determine how much practical weight the Code carries within the Digital Services Act and how effectively the Digital Services Act supports oversight of disinformation risks online.

Google expands into neutral atom quantum computing

Google Quantum AI is broadening its quantum computing research to include neutral atom technology alongside its established superconducting qubits. Neutral atoms offer high connectivity and flexibility, while superconducting qubits provide fast cycles and deep circuit performance.

By pursuing both approaches, Google aims to accelerate progress and deliver versatile platforms for different computational challenges.

The neutral atom programme is focused on three pillars: quantum error correction adapted for atom arrays, modelling and simulation of hardware architectures, and experimental hardware development to manipulate atomic qubits at scale.

The initiative is led by Dr Adam Kaufman, who joins Google from CU Boulder, bringing expertise in atomic, molecular, and optical physics to advance neutral atom hardware.

Google is leveraging the Boulder quantum ecosystem, collaborating with institutions such as JILA, CU Boulder, NIST, and QuEra to strengthen research and innovation. These partnerships give access to top talent, facilities, and federal programmes, strengthening the US role in global quantum research.

By combining superconducting and neutral-atom approaches, Google aims to address critical physics and engineering challenges on the path to large-scale, fault-tolerant quantum computers, with commercial relevance expected by the end of the decade.

UNESCO, UNICEF and ITU publish Charter for Public Digital Learning Platforms

The United Nations Educational, Scientific and Cultural Organization (UNESCO), the United Nations Children’s Fund (UNICEF), and the International Telecommunication Union (ITU) have published a Charter for Public Digital Learning Platforms, which sets out principles to guide governments in developing and governing digital learning systems.

The Charter states that education is a human right and a public good, and emphasises that digital learning platforms should support public education systems rather than replace in-person schooling. It describes such platforms as components of broader education systems that bring together content, technology, and users to support teaching and learning.

According to the Charter, governments are encouraged to establish and maintain public digital learning platforms as part of the national education infrastructure. The document notes that, in many contexts, the absence or limited quality of such platforms has led to increased reliance on private-sector solutions, which may not always align with public education objectives.

The Charter outlines seven principles for public digital learning platforms, covering areas including:

  • public governance and financing, with oversight by public authorities;
  • inclusion, including accessibility, multilingual support, and cultural relevance;
  • pedagogical design, with a focus on teacher-led learning;
  • integration with education systems and public digital infrastructure;
  • open standards and interoperability;
  • user-focused development based on educational needs;
  • trustworthiness, including data protection, safety, and reliability.

The document also highlights the importance of data governance, stating that data generated through such platforms should remain under public control and be managed in accordance with applicable laws, with safeguards for privacy and security.

The Charter was developed under the UNESCO–UNICEF Gateways to Public Digital Learning Initiative, with input from governments and international organisations. It was released on the occasion of the International Day of Digital Learning 2026.
