Digital Services Act disinformation signatories publish first 2026 reports

Signatories to the EU Code of Conduct on Disinformation have published new transparency reports describing the measures they say they are taking to reduce the spread of disinformation online. According to the European Commission, the reports are the first ones submitted since the Code was recognised as a code of conduct under the Digital Services Act.

The reports are available through the Code’s Transparency Centre and come from a broad group of signatories, including online platforms such as Google, Meta, Microsoft, and TikTok, as well as fact-checkers, research organisations, civil society bodies, and representatives of the advertising industry. The European Commission says the reporting round covers the period from 1 July to 31 December 2025 and marks the first full reporting cycle linked to the Digital Services Act.

Dedicated sections in the reports cover responses to ongoing crises, notably the conflict in Ukraine, as well as measures intended to safeguard the integrity of elections. Data on the implementation of disinformation-related measures is also included, alongside developments in signatories’ policies, tools, and partnerships under the Digital Services Act framework.

The reporting cycle carries greater significance because of the Code’s changed legal and regulatory position. The Commission says the Code was endorsed on 13 February 2025 by the Commission and the European Board for Digital Services, at the request of the signatories, as a code of conduct within the meaning of the Digital Services Act. From 1 July 2025, the Code became part of the co-regulatory framework under the Digital Services Act.

The Code now plays a more formal role than under its earlier voluntary setup. According to the Commission, signatories’ adherence to its commitments is subject to independent annual auditing, and the Code serves as a relevant benchmark for determining compliance with Article 35 of the Digital Services Act. The Commission also says the Code has become a ‘significant and meaningful benchmark of DSA compliance’ for providers of very large online platforms and very large online search engines that adhere to its commitments under the Digital Services Act.

Reporting obligations differ depending on the type of signatory. Under the Code, providers of very large online platforms and very large online search engines commit to reporting, every six months, on the actions taken by their subscribed services. The Commission lists Google Search, YouTube, Google Ads, Facebook, Instagram, Messenger, WhatsApp, Bing, LinkedIn, and TikTok among the covered services, while other non-platform signatories report once per year under the Digital Services Act structure.

Broader policy relevance lies in the EU’s attempt to connect platform self-reporting to a more formal oversight structure. By placing the disinformation Code inside the Digital Services Act framework, the Commission and the Board are using voluntary commitments, transparency reporting, and auditing as part of a co-regulatory approach to systemic online risks. The reports themselves do not prove compliance, but they now carry greater weight within the wider Digital Services Act architecture for platform governance.

One further point is that the Commission notice focuses on publication of the reports rather than evaluating their quality or effectiveness. The notice says the reports describe measures, data, and policy developments, but it does not assess whether the actions taken by signatories were sufficient. Such a distinction matters in politically sensitive areas such as election integrity and crisis-related disinformation, especially where transparency under the Digital Services Act may shape future scrutiny.

Taken together, the first reporting round shows how the EU is using the Digital Services Act not only to impose direct legal obligations on large platforms and search engines, but also to anchor voluntary commitments within a more structured regulatory environment. Continued reporting, auditing, and review will determine how much practical weight the Code carries within the Digital Services Act and how effectively the Digital Services Act supports oversight of disinformation risks online.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google expands into neutral atom quantum computing

Google Quantum AI is broadening its quantum computing research to include neutral atom technology alongside its established superconducting qubits. Neutral atoms offer high connectivity and flexibility, while superconducting qubits provide fast cycles and deep circuit performance.

By pursuing both approaches, Google aims to accelerate progress and deliver versatile platforms for different computational challenges.

The neutral atom programme is focused on three pillars: quantum error correction adapted for atom arrays, modelling and simulation of hardware architectures, and experimental hardware development to manipulate atomic qubits at scale.

The initiative is led by Dr Adam Kaufman, who joins Google from CU Boulder, bringing expertise in atomic, molecular, and optical physics to advance neutral atom hardware.

Google is leveraging the Boulder quantum ecosystem, collaborating with institutions such as JILA, CU Boulder, NIST, and QuEra to strengthen research and innovation. These partnerships give access to top talent, facilities, and federal programmes, strengthening the US role in global quantum research.

By combining superconducting and neutral-atom approaches, Google aims to address critical physics and engineering challenges on the path to large-scale, fault-tolerant quantum computers, with commercial relevance expected by the end of the decade.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UNESCO, UNICEF and ITU publish Charter for Public Digital Learning Platforms

The United Nations Educational, Scientific and Cultural Organization (UNESCO), the United Nations Children’s Fund (UNICEF), and the International Telecommunication Union (ITU) have published a Charter for Public Digital Learning Platforms, which sets out principles to guide governments in developing and governing digital learning systems.

The Charter states that education is a human right and a public good, and emphasises that digital learning platforms should support public education systems rather than replace in-person schooling. It describes such platforms as components of broader education systems that bring together content, technology, and users to support teaching and learning.

According to the Charter, governments are encouraged to establish and maintain public digital learning platforms as part of the national education infrastructure. The document notes that, in many contexts, the absence or limited quality of such platforms has led to increased reliance on private-sector solutions, which may not always align with public education objectives.

The Charter outlines seven principles for public digital learning platforms, covering areas including:

  • public governance and financing, with oversight by public authorities;
  • inclusion, including accessibility, multilingual support, and cultural relevance;
  • pedagogical design, with a focus on teacher-led learning;
  • integration with education systems and public digital infrastructure;
  • open standards and interoperability;
  • user-focused development based on educational needs;
  • trustworthiness, including data protection, safety, and reliability.

The document also highlights the importance of data governance, stating that data generated through such platforms should remain under public control and be managed in accordance with applicable laws, with safeguards for privacy and security.

The Charter was developed under the UNESCO–UNICEF Gateways to Public Digital Learning Initiative, with input from governments and international organisations. It was released on the occasion of the International Day of Digital Learning 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI Foundation expands investment strategy to shape AI benefits and resilience

The OpenAI Foundation has outlined a major expansion of its activities, signalling a broader effort to ensure AI delivers tangible benefits while addressing emerging risks.

The organisation plans to invest at least $1 billion over the next year, forming part of a wider $25 billion commitment focused on disease research and AI resilience.

AI is increasingly reshaping healthcare, scientific discovery and economic productivity, offering pathways to faster medical breakthroughs and more efficient public services.

OpenAI Foundation frames such potential as central to its mission, while recognising that more capable systems introduce complex societal and safety challenges that require coordinated responses.

Initial programmes prioritise life sciences, including research into Alzheimer’s disease, expanded access to public health data, and accelerated progress on high-mortality conditions.

Parallel efforts examine the economic impact of automation, with engagement across policymakers, labour groups and businesses aimed at developing practical responses to labour market disruption.

A dedicated resilience strategy addresses risks linked to advanced AI systems, including safety standards, biosecurity concerns and the protection of children and young users.

Alongside community-focused funding, the OpenAI Foundation’s initiative reflects a dual objective: enabling innovation while protecting societies from technological disruption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI sunsets Sora app after 6 months of scrutiny

OpenAI is moving to shut down the Sora app, its consumer-facing AI video platform, according to an official X post on 24 March. The move follows months of scrutiny around AI-generated video, including concerns over deepfakes, copyright, and harmful synthetic media.

The reported shutdown comes shortly after OpenAI retired Sora 1 in the United States on 13 March 2026 and replaced it with Sora 2 as the default experience. OpenAI’s help documentation says the older version remains available only in countries where the newer one has not yet launched, while support pages for the standalone Sora app are still live. The product changes also follow the announcement of new copyright settings for the latest video generation model.

That makes the current picture more complex than a simple sunset. Public OpenAI help pages still describe tools on iOS, Android, and the web, while news reports say the company has now decided to wind down the app itself. OpenAI had also recently indicated that it plans to integrate Sora video generation into ChatGPT, which could help explain why the standalone product is being reconsidered.

Sora became one of OpenAI’s most visible consumer media products, but it also drew sustained scrutiny over deepfakes, non-consensual content, and copyrighted characters. Such concerns remained central even as OpenAI added additional controls to the platform, including new consent and traceability measures to enhance AI video safety. AP reported that pressure from advocacy groups, scholars, and entertainment-sector voices formed part of the backdrop to the shutdown decision.

For users, the immediate issue is preservation of existing content. OpenAI’s Sora 1 sunset FAQ says some legacy material may be exportable for a limited period before deletion, but the company has not yet published a detailed standalone help document explaining the full shutdown. Based on the information now available, the clearest distinction is that OpenAI first retired one legacy version in some markets and is now reportedly ending the standalone app more broadly.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

IWF report reveals a rapid growth of synthetic child abuse material online

A surge in AI-generated child sexual abuse material has raised urgent concerns across Europe, with the Internet Watch Foundation reporting record levels of harmful content online.

Findings of the IWF report indicate that AI is accelerating both the scale and severity of abuse, transforming how offenders create and distribute illicit material.

Data from 2025 reveals a sharp increase in AI-generated imagery and video, with over 8,000 cases identified and a dramatic rise in highly severe content.

Synthetic videos have grown at an unprecedented rate, reflecting how emerging tools are being used to produce increasingly realistic and extreme scenarios rather than traditional formats.

Analysis of offender behaviour highlights a disturbing trend toward automation and accessibility.

Discussions on dark web forums suggest that future agentic AI systems may enable the creation of fully produced abusive content with minimal technical skill. The integration of audio and image manipulation further deepens risks, particularly where real children’s likenesses are involved.

Calls for regulatory action are intensifying as policymakers in the EU debate reforms to the Child Sexual Abuse Directive.

Advocacy groups emphasise the need for comprehensive criminalisation, alongside stronger safety-by-design requirements, arguing that technological innovation must not outpace child protection frameworks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Edge AI advantages and challenges shaping the future of digital systems

Over the past few years, we have witnessed a rapid shift in the way data is stored and processed across businesses, organisations, and digital systems.

Increasingly, AI itself is changing form as computation shifts away from centralised cloud environments to the network edge. This shift has come to be known as edge AI.

Edge AI refers to the deployment of machine learning models directly on local devices such as smartphones, sensors, industrial machines, and autonomous systems.

Instead of transmitting data to remote servers for processing, analysis is performed on the device itself, enabling faster responses and greater control over sensitive information.
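The on-device pattern can be sketched in a few lines. Everything below is purely illustrative: the tiny anomaly "model", the threshold, and the sensor readings are assumptions for the sake of the example, not any real deployment.

```python
# Minimal sketch of the edge pattern: infer locally, transmit only the result.
# The "model", threshold, and readings are illustrative assumptions.

def local_anomaly_score(readings: list[float]) -> float:
    """Tiny on-device 'model': deviation of the latest reading from the mean."""
    mean = sum(readings) / len(readings)
    return abs(readings[-1] - mean)

def process_on_device(readings: list[float], threshold: float = 5.0) -> dict:
    """Run inference on the device; only a compact insight leaves it."""
    score = local_anomaly_score(readings)
    # The raw readings never leave the device -- only this small payload
    # would be sent upstream, reducing bandwidth use and data exposure.
    return {"anomaly": score > threshold, "score": round(score, 2)}

payload = process_on_device([20.1, 20.3, 19.9, 31.0])
print(payload)
```

In a cloud-centric design, all four raw readings would be transmitted for remote analysis; here only the small result dictionary would cross the network.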

Such a transition marks a significant departure from earlier models of AI deployment, where cloud infrastructure dominated both processing and storage.

From centralised AI to edge intelligence

Traditional AI systems used to rely heavily on centralised architectures. Data collected from users or devices would be transmitted to large-scale data centres, where powerful servers would perform computations and generate outputs.

Such a model offered efficiency, scalability, and easier security management, as protection efforts could be concentrated within controlled environments.

Centralisation allowed organisations to enforce uniform security policies, deploy updates rapidly, and monitor threats from a single vantage point. However, reliance on cloud infrastructure also introduced latency, bandwidth constraints, and increased exposure of sensitive data during transmission.

Edge AI introduces a fundamentally different paradigm. Moving computation closer to the data source reduces the reliance on continuous connectivity and enables real-time decision-making.

Such decentralisation represents not merely a technical shift but a reconfiguration of the way digital systems operate and interact with their environments.

Advantages of edge AI

Reduced latency and real-time processing

Latency is significantly reduced when computation occurs locally. Edge systems are particularly valuable in time-sensitive applications such as autonomous vehicles, healthcare monitoring, and industrial automation, where delays can have critical consequences.

Enhanced privacy and data control

Privacy improves when sensitive data remains on-device instead of being transmitted across networks. Such an approach aligns with growing concerns around data protection, regulatory compliance, and user trust.

Operational resilience

Edge systems can continue functioning even when network connectivity is limited or unavailable. In remote environments or critical infrastructure, independence from central servers ensures service continuity.

Bandwidth efficiency and cost reduction

Bandwidth consumption is decreased because only processed insights are transmitted, not raw data. Such efficiency can translate into reduced operational costs and improved system performance.

Personalisation and context awareness

Devices can adapt to user behaviour in real time, learning from local data without exposing sensitive information externally. In healthcare, personalised diagnostics can be performed directly on wearable devices, while in manufacturing, predictive maintenance can occur on-site.

The dark side of edge AI

However, the shift towards edge computing introduces profound cybersecurity challenges. The most significant of these is the expansion of the attack surface.

Instead of a limited number of well-protected data centres, organisations must secure vast networks of distributed devices. Each endpoint represents a potential entry point for malicious actors.

The scale and diversity of edge deployments complicate efforts to maintain consistent security standards. Security is no longer centralised but dispersed, increasing the likelihood of vulnerabilities and misconfigurations.

Let’s take a closer look at some other challenges of edge AI.

Physical vulnerabilities and device exposure

Edge devices often operate in uncontrolled environments, making physical access a major risk. Attackers may tamper with hardware, extract sensitive information, or reverse engineer AI models.

Model extraction attacks allow adversaries to replicate proprietary algorithms, undermining intellectual property and enabling further exploitation. Such risks are significantly more pronounced compared to cloud systems, where physical access is tightly controlled.

Software constraints and patch management challenges

Many edge devices rely on embedded systems with limited computational resources. Such constraints make it difficult to implement robust security measures, including advanced encryption and intrusion detection.

Patch management becomes increasingly complex in decentralised environments. Ensuring that millions of devices receive timely updates is a significant challenge, particularly when connectivity is inconsistent or when devices operate in remote locations.

Breakdown of traditional security models

The decentralised nature of edge AI undermines conventional perimeter-based security frameworks. Without a clearly defined boundary, traditional approaches to network defence lose effectiveness.

Each device must be treated as an independent security domain, requiring authentication, authorisation, and continuous monitoring. Identity management becomes more complex as the number of devices grows, increasing the risk of misconfiguration and unauthorised access.

Data integrity and adversarial threats

As we mentioned before, edge devices rely heavily on local data inputs to make decisions. As a result, manipulated inputs can lead to compromised outcomes. Adversarial attacks, in which inputs are deliberately altered to deceive machine learning models, represent a significant threat.

In safety-critical systems, such manipulation can lead to severe consequences. Altered sensor data in industrial environments may disrupt operations, while compromised vision systems in autonomous vehicles may produce dangerous behaviour.
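The idea behind adversarial manipulation can be shown with a toy example. The linear "classifier" below, its weights, and the perturbation size are all hypothetical; real attacks target learned models with gradient-based methods, but the principle is the same: a small, targeted change to the input flips the decision.

```python
# Illustrative sketch of an adversarial perturbation against a toy linear
# classifier. Weights, bias, and inputs are hypothetical, chosen so that a
# small nudge per input channel crosses the decision boundary.

WEIGHTS = [0.8, -0.5]  # toy "sensor fusion" model
BIAS = 0.1

def classify(x: list[float]) -> str:
    """Label the input by the sign of a linear score."""
    score = sum(w * v for w, v in zip(WEIGHTS, x)) + BIAS
    return "safe" if score >= 0 else "alert"

def perturb(x: list[float], eps: float) -> list[float]:
    """FGSM-style step: nudge each input against the sign of its weight,
    pushing the score toward the decision boundary and beyond."""
    return [v - eps * (1 if w > 0 else -1) for v, w in zip(x, WEIGHTS)]

x = [0.3, 0.2]                  # clean input, classified "safe"
x_adv = perturb(x, eps=0.25)    # small change per input channel
print(classify(x), classify(x_adv))
```

A change of 0.25 per channel is enough to flip the toy model's output, which is why edge devices that cannot validate their inputs are an attractive target.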

Supply chain risks in edge AI

Edge AI systems depend on a combination of hardware, software, and pre-trained models sourced from multiple vendors. Each component introduces potential vulnerabilities.

Attackers may compromise supply chains by inserting backdoors during manufacturing, distributing malicious updates, or exploiting third-party software dependencies. The global nature of technology supply chains complicates efforts to ensure trust and accountability.

Energy constraints and security trade-offs

Edge devices are often designed with efficiency in mind, prioritising performance and power consumption. Security mechanisms such as encryption and continuous monitoring require computational resources that may be limited.

As a result, security features may be simplified or omitted, increasing exposure to cyber threats. Balancing efficiency with robust protection remains a persistent challenge.

Cyber-physical risks and real-world impact

The integration of edge AI into cyber-physical systems elevates the consequences of security breaches. Digital manipulation can directly influence physical outcomes, affecting safety and infrastructure.

Compromised healthcare devices may produce incorrect diagnoses, while disrupted transportation systems may lead to accidents. In energy networks, attacks could impact entire regions, highlighting the broader societal implications of edge AI vulnerabilities.

Regulatory and governance challenges

Existing regulatory frameworks have been largely designed for centralised systems and do not fully address the complexities of decentralised architectures. Questions regarding liability, accountability, and enforcement remain unresolved.

Organisations may struggle to implement effective security practices without clear standards. Policymakers face the challenge of developing regulations that reflect the distributed nature of edge AI systems.

Towards a secure edge AI ecosystem

Addressing all these challenges requires a multi-layered and adaptive approach that reflects the complexity of edge AI environments.

Hardware-level protections, such as secure enclaves and trusted execution environments, play a critical role in safeguarding sensitive operations from physical tampering and low-level attacks.

Encryption and secure boot processes further strengthen device integrity, ensuring that both data and models remain protected and that unauthorised modifications are prevented from the outset.

At the software level, continuous monitoring and anomaly detection are essential for identifying threats in real time, particularly in distributed systems where central oversight is limited.

Secure update mechanisms must also be prioritised, ensuring that patches and security improvements can be deployed efficiently and reliably across large networks of devices, even in conditions of intermittent connectivity.

Without such mechanisms, vulnerabilities can persist and spread across the ecosystem.
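A minimal form of such a mechanism is integrity verification before an update is applied. The sketch below, with hypothetical payload and naming, checks a SHA-256 digest delivered out of band; real deployments would rely on signed manifests and asymmetric signatures rather than a bare hash.

```python
import hashlib

# Hedged sketch: accept an update payload only if its digest matches a
# trusted value obtained out of band. Payload and names are illustrative;
# production systems would verify vendor signatures, not just a hash.

def verify_update(payload: bytes, expected_sha256: str) -> bool:
    """Return True only if the payload's digest matches the trusted value."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

firmware = b"edge-agent v2.1"
trusted = hashlib.sha256(firmware).hexdigest()  # published by the vendor

assert verify_update(firmware, trusted)                # intact update
assert not verify_update(firmware + b"\x00", trusted)  # tampered update
```

Even this simple check prevents a corrupted or tampered package from being installed silently, which matters most on devices that update over unreliable links.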

At the same time, many enterprises are increasingly adopting a hybrid approach that combines edge and cloud capabilities.

Rather than relying entirely on decentralised or centralised models, organisations are distributing workloads strategically, keeping latency-sensitive and privacy-critical processes on the edge while maintaining centralised oversight, analytics, and security coordination in the cloud.

Such an approach allows organisations to balance performance and control, while enabling more effective threat detection and response through aggregated intelligence.

Security must also be embedded into system design from the outset, rather than treated as an additional layer to be applied after deployment. A proactive approach to risk assessment, combined with secure development practices, can significantly reduce vulnerabilities before systems are operational.

Furthermore, collaboration between industry, governments, and research institutions will be crucial in establishing common standards, improving interoperability, and ensuring that security practices evolve alongside technological advancements.

In conclusion, we have seen how the rise of edge AI represents a pivotal shift in both AI and cybersecurity. Decentralisation enables faster, more private, and more resilient systems, yet it also creates a fragmented and dynamic attack surface.

The advantages we have outlined are compelling, but they also introduce additional layers of complexity and risk. Addressing these challenges requires a comprehensive approach that combines technological innovation, regulatory development, and organisational awareness.

Only through such coordinated efforts can the benefits of edge AI be realised while ensuring that security, trust, and safety remain intact in an increasingly decentralised digital landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quantum readiness as a strategic priority for firms

Businesses are beginning to prepare for the commercial potential of quantum computing, a technology that leverages quantum mechanics to solve problems beyond the capabilities of classical computers.

Early engagement focuses on awareness, training, and workshops to explore possible applications across sectors such as pharmaceuticals, energy, finance, and advanced materials.

Companies face several barriers to readiness, including limited technological maturity, unclear business implications, high costs for access and staff training, and a shortage of talent with both quantum and industry expertise.

These obstacles mean that most readiness initiatives remain concentrated in large, research-intensive firms, leaving smaller companies at risk of falling behind.

Support mechanisms are helping firms navigate these challenges. Networking, advisory services, technology centres, R&D grants, and stakeholder consultations help firms access resources and partnerships to accelerate readiness and link research with commercial use.

Building quantum readiness will require ongoing investment in skills, infrastructure, and partnerships, alongside policies that combine exploratory pilots with long-term workforce and software support.

Hybrid approaches integrating quantum computing with AI and high-performance computing offer practical entry points for early adoption, strengthening competitiveness and innovation across industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI for Good Global Summit 2026 puts Geneva at centre of global AI policy

Geneva is set to become a focal point of global AI discussions this July, as innovation, governance, and international cooperation converge in a single, tightly packed week of events. The AI for Good Global Summit, organised by the International Telecommunication Union (ITU), will run from 7 to 10 July 2026 at Palexpo, immediately following the inaugural UN Global Dialogue on AI Governance, scheduled for 6 and 7 July.

The timing and co-location of these events signal a broader shift in how AI is being approached globally. Technical development, policy design, and international coordination are no longer progressing on separate tracks. In Geneva, they are unfolding in parallel.

Live demonstrations of emerging technologies such as agentic AI, edge AI, robotics, brain-computer interfaces, and quantum systems will take place alongside multistakeholder discussions on standards, safety, misinformation, infrastructure, and the growing energy demands of AI systems.

The Global Dialogue on AI Governance, mandated by the UN General Assembly and supported by a joint secretariat including the Executive Office of the Secretary-General, ITU, UNESCO, and the UN Office for Digital and Emerging Technologies (ODET), will provide a dedicated space for governments and stakeholders to exchange perspectives on the rules and frameworks shaping AI deployment.

Running back-to-back with AI for Good, the dialogue reflects the growing recognition that governance cannot follow innovation at a distance but must evolve alongside it.

Meanwhile, the AI for Good Global Summit will focus on translating technological advances into practical applications. The programme will feature global innovation competitions, startup showcases, and an extensive exhibition floor with national pavilions and UN-led initiatives.

Demonstrations will highlight AI use cases across healthcare, education, food security, disaster risk reduction, and misinformation, with particular emphasis on solutions relevant to developing countries.

Capacity-building efforts will also play a central role, with training sessions, workshops, and youth-focused initiatives delivered in partnership with organisations such as the AI Skills Coalition.

Co-convened by Switzerland and supported by more than 50 UN partners, the events build on Geneva’s longstanding position as a hub for international dialogue. With over 11,000 participants from 169 countries attending last year’s AI for Good Global Summit and World Summit on the Information Society (WSIS) events, the 2026 edition is expected to expand its global reach further.

More importantly, it reflects an emerging model of AI diplomacy, where innovation, governance, and development priorities are addressed together, shaping not only how AI is built but also how it is understood, governed, and integrated into societies worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Robots and AI transform end-to-end supply chains

AI is transforming supply chains and logistics, moving operations from reactive, manual processes to autonomous, agent-driven systems. Enterprises are using AI agents to optimise and manage workflows, boosting efficiency in warehousing, distribution, and transportation.

Simulation tools and digital twins allow teams to predict disruptions, optimise performance, and test solutions in virtual environments before implementing changes on the ground.

Physical AI is taking automation a step further by embedding intelligence directly into robots and machinery.

Humanoid and industrial robots are now capable of handling tasks such as pallet sorting, last-mile deliveries, and inspection with increasing autonomy, guided by AI systems trained in cloud-connected simulation environments.

Companies are combining cloud, edge computing, and robotics frameworks to accelerate deployment and scale operations safely.

AI, robotics, and enterprise systems work together to channel sensor and machine data to predictive models and decision-making agents. Integrating simulations, AI agents, and robotics helps firms optimise inventory, cut risks, and boost productivity while preparing for autonomous supply chains.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot