Quantum readiness gains momentum according to OECD report

The OECD (Organisation for Economic Co-operation and Development) highlights how businesses are preparing for quantum computing, recognising it as a transformative technology instead of relying solely on conventional computing methods.

Quantum readiness is framed as a long-term capability-building effort in which firms gradually develop skills, infrastructure, and partnerships to explore commercial applications while navigating uncertainty.

Drawing on research, surveys, and interviews with public and private organisations across 10 countries, the OECD identifies both the practical steps companies take to build readiness and the barriers that slow adoption.

Early efforts focus on low-cost awareness and exploration, including attending workshops, training sessions, and industry events, allowing firms to familiarise themselves with emerging opportunities instead of waiting for fully mature systems.

Despite growing interest, companies face significant challenges. Technological immaturity complicates pilots and feasibility studies, while many firms lack a clear understanding of potential business applications.

Access to quantum resources, funding for research and development, and staff training are expensive, particularly for small- and medium-sized enterprises. Furthermore, there is a shortage of talent with both quantum computing expertise and domain-specific knowledge.

As a result, readiness tends to be concentrated among large, R&D-intensive firms, while smaller companies often recognise quantum computing’s potential but delay action.

Such uneven adoption risks creating a divide in the digital economy, with early adopters moving ahead and other firms falling behind instead of engaging proactively.

To address these challenges, the OECD notes that public and private support mechanisms are critical. Networking and collaboration platforms connect firms with researchers, technology providers, and industry peers, fostering knowledge exchange and collective experimentation.

Business advisory and technology extension services help companies assess capabilities, test solutions, and access specialised facilities.

Grants for research and development lower the costs of experimentation and encourage collaboration, while stakeholder consultations ensure that support measures remain aligned with business needs.

Many companies are also establishing internal quantum labs and innovation hubs to trial applications and build expertise in a controlled environment, combining research with practical exploration instead of relying solely on external guidance.

Looking ahead, the OECD recommends expanding education and skills pipelines, strengthening industry-academic partnerships, and designing policies that support broader participation in quantum adoption.

Hybrid approaches that integrate quantum computing with AI and high-performance computing may offer practical commercial entry points for early applications.

Policymakers are encouraged to balance near-term exploratory pilots with forward-looking support for software development, interoperability, and workforce growth, enabling firms to move from experimentation to deployment effectively.

By following OECD guidance, companies can enhance innovation, improve competitiveness, and ensure that readiness efforts span sectors and geographies rather than remain limited to a few early adopters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Explora chatbot offers personalised travel planning in New Brunswick

GuideGeek, the AI travel technology from Matador Network, has partnered with ExploreNB to launch Explora, an AI-powered travel chatbot for New Brunswick. The tool is designed to help visitors plan trips through conversational interactions.

Explora was piloted in early 2025 and has now been rolled out more broadly, providing instant answers to travel and tourism questions. Powered by GuideGeek’s platform, it has generated thousands of online conversations with prospective visitors.

The chatbot delivers personalised travel tips and itinerary ideas, connecting users to local businesses, beaches, hiking trails and cultural sites. Its responses are based on destination data from New Brunswick and more than 1,000 integrated travel information sources.

Ross Borden, CEO of Matador Network, said AI helps travellers discover destinations and build trips based on their interests. Isabelle Thériault, Minister of Tourism, Heritage and Culture of New Brunswick, said the tool also provides insight into traveller interests and supports the tourism sector.

Explora is available directly on the TourismNewBrunswick.ca website via the chat icon.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK Digital Inclusion Action Plan delivers devices, funding and online access support

The UK Department for Science, Innovation and Technology said more than one million people have been helped online through its Digital Inclusion Action Plan. The update was published in a one-year progress report on the government strategy.

The department said over 22,000 devices were donated through government schemes and industry partnerships. It also confirmed £11.9 million in funding that supported more than 80 local digital inclusion programmes.

According to the report, the plan aims to improve access to devices, connectivity and digital skills. The government said all commitments in the strategy have either been delivered or remain on track.

The department added that partnerships with industry and charities helped expand access to broadband and mobile services, including more affordable connectivity. The programme also supported training and local initiatives to improve digital participation.

Secretary of State for Science, Innovation and Technology, Liz Kendall, said the programme is intended to expand access to online services, employment opportunities and communication tools. She added that the government plans to continue developing the initiative.

The department also confirmed it will take over the Essential Digital Skills Framework from Lloyds Banking Group and update it to reflect current needs, including online safety and the growing role of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EDPB summarises conference on cross-regulatory cooperation in the EU

The European Data Protection Board has published a summary of its 17 March conference in Brussels on cross-regulatory interplay and cooperation in the EU from a data protection perspective. According to the EDPB, the event brought together representatives of the EU institutions, European Data Protection Authorities, academia, and industry.

Three panels structured the conference discussion. One focused on data protection and competition, another on the Digital Markets Act and the General Data Protection Regulation (GDPR), and a third on the Digital Services Act and the GDPR.

Discussion in the first panel centred on cooperation between regulatory bodies in data protection and competition, including lessons from the aftermath of the Bundeskartellamt ruling. The EDPB said speakers emphasised the need for regulators to align their approaches and recognise synergies between the two fields. Speakers also said data protection should be considered in competition analysis only when relevant and on a case-by-case basis. The EDPB added that it had recently agreed with the European Commission to develop joint guidelines on the interplay between competition law and data protection.

The second panel focused on joint guidelines on the Digital Markets Act and the GDPR, developed by the European Commission and the EDPB and recently opened to public consultation. According to the EDPB, speakers described the guidelines as an example of regulatory cooperation aimed at developing a coherent and compatible interpretation of the two frameworks while respecting regulatory competences. The Board said participants linked the guidelines to stronger consistency, legal clarity, and easier compliance. Some speakers also suggested changes to the final version, including points related to proportionality and the relationship between DMA obligations and the GDPR.

The final panel examined the interaction between the Digital Services Act and the GDPR. The EDPB said panellists referred to the protection of minors as one example, arguing that age verification should be effective while remaining fully in line with data protection legislation. Speakers also highlighted the need for coordination between the two frameworks, including cooperation involving the EU institutions such as the European Board for Digital Services, the European Commission, the EDPB, and national authorities. Emerging technologies such as AI were also mentioned in the discussion.

The event also featured keynote speeches from European Commission Executive Vice President Henna Virkkunen and European Parliament LIBE Committee Chair Javier Zarzalejos. According to the EDPB, Virkkunen said the Commission remained committed to cooperation between different frameworks and highlighted the need to support compliance through stronger coordination among regulators. Zarzalejos said close cross-regulatory cooperation was essential for consistency, effective enforcement, and trust, and pointed to the intersections among data protection law, competition law, the DMA, and the DSA.

EDPB Chair Anu Talus closed the conference by reiterating that the EDPB and European Data Protection Authorities are committed to supporting stakeholders in navigating what the Board described as a new cross-regulatory landscape. The EDPB said future work will include continued cooperation with the Commission on joint guidelines on the interplay between the AI Act and the GDPR, finalisation of the joint guidelines on the interplay between the DMA and the GDPR, and work on the recently announced Joint Guidelines on the interplay between data protection and competition law.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Digital Services Act disinformation signatories publish first 2026 reports

Signatories to the EU Code of Conduct on Disinformation have published new transparency reports describing the measures they say they are taking to reduce the spread of disinformation online. According to the European Commission, the reports are the first ones submitted since the Code was recognised as a code of conduct under the Digital Services Act.

The reports are available through the Code’s Transparency Centre and come from a broad group of signatories, including online platforms such as Google, Meta, Microsoft, and TikTok, as well as fact-checkers, research organisations, civil society bodies, and representatives of the advertising industry. The European Commission says the reporting round covers the period from 1 July to 31 December 2025 and marks the first full reporting cycle linked to the Digital Services Act.

Dedicated sections in the reports cover responses to ongoing crises, notably the conflict in Ukraine, as well as measures intended to safeguard the integrity of elections. Data on the implementation of disinformation-related measures is also included, alongside developments in signatories’ policies, tools, and partnerships under the Digital Services Act framework.

The reporting cycle carries greater significance because of the Code’s changed legal and regulatory position. The Commission says the Code was endorsed on 13 February 2025 by the Commission and the European Board for Digital Services, at the request of the signatories, as a code of conduct within the meaning of the Digital Services Act. From 1 July 2025, the Code became part of the co-regulatory framework under the Digital Services Act.

The Code now plays a more formal role than under its earlier voluntary setup. According to the Commission, signatories’ adherence to its commitments is subject to independent annual auditing, and the Code serves as a relevant benchmark for determining compliance with Article 35 of the Digital Services Act. The Commission also says the Code has become a ‘significant and meaningful benchmark of DSA compliance’ for providers of very large online platforms and very large online search engines that adhere to its commitments under the Digital Services Act.

Reporting obligations differ depending on the type of signatory. Under the Code, providers of very large online platforms and very large online search engines commit to reporting, every six months, on the actions taken by their subscribed services. The Commission lists Google Search, YouTube, Google Ads, Facebook, Instagram, Messenger, WhatsApp, Bing, LinkedIn, and TikTok among the covered services, while other non-platform signatories report once per year under the Digital Services Act structure.

Broader policy relevance lies in the EU’s attempt to connect platform self-reporting to a more formal oversight structure. By placing the disinformation Code inside the Digital Services Act framework, the Commission and the Board are using voluntary commitments, transparency reporting, and auditing as part of a co-regulatory approach to systemic online risks. The reports themselves do not prove compliance, but they now carry greater weight within the wider Digital Services Act architecture for platform governance.

One further point is that the Commission notice focuses on publication of the reports rather than evaluating their quality or effectiveness. The notice says the reports describe measures, data, and policy developments, but it does not assess whether the actions taken by signatories were sufficient. Such a distinction matters in politically sensitive areas such as election integrity and crisis-related disinformation, especially where transparency under the Digital Services Act may shape future scrutiny.

Taken together, the first reporting round shows how the EU is using the Digital Services Act not only to impose direct legal obligations on large platforms and search engines, but also to anchor voluntary commitments within a more structured regulatory environment. Continued reporting, auditing, and review will determine how much practical weight the Code carries within the Digital Services Act and how effectively the Digital Services Act supports oversight of disinformation risks online.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI Foundation expands investment strategy to shape AI benefits and resilience

The OpenAI Foundation has outlined a major expansion of its activities, signalling a broader effort to ensure AI delivers tangible benefits while addressing emerging risks.

The organisation plans to invest at least $1 billion over the next year, forming part of a wider $25 billion commitment focused on disease research and AI resilience.

AI is increasingly reshaping healthcare, scientific discovery and economic productivity, offering pathways to faster medical breakthroughs and more efficient public services.

OpenAI Foundation frames such potential as central to its mission, while recognising that more capable systems introduce complex societal and safety challenges that require coordinated responses.

Initial programmes prioritise life sciences, including research into Alzheimer’s disease, expanded access to public health data, and accelerated progress on high-mortality conditions.

Parallel efforts examine the economic impact of automation, with engagement across policymakers, labour groups and businesses aimed at developing practical responses to labour market disruption.

A dedicated resilience strategy addresses risks linked to advanced AI systems, including safety standards, biosecurity concerns and the protection of children and young users.

Alongside community-focused funding, the OpenAI Foundation’s initiative reflects a dual objective: enabling innovation while protecting societies from technological disruption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada’s watchdog highlights surge in AI impersonation scams

A growing wave of AI-driven scams is prompting warnings from Competition Bureau Canada, as fraudsters increasingly impersonate government officials through deepfake technology and fake websites.

Authorities report a steady rise in complaints linked to deceptive schemes designed to exploit public trust.

Scammers are using synthetic media to mimic well-known political figures, including senior government officials, to extract personal information and spread misleading narratives.

Such tactics demonstrate how AI tools are being weaponised for social engineering rather than for legitimate communication.

The trend reflects a broader shift in digital fraud, where increasingly sophisticated techniques blur the line between authentic and fabricated content. As synthetic identities become more convincing, individuals find it harder to verify the legitimacy of online interactions and official communications.

In response, authorities in Canada are intensifying awareness efforts during Fraud Prevention Month, offering expert guidance on identifying and avoiding scams.

The development underscores the urgent need for stronger safeguards and public education to counter evolving AI-enabled threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI sunsets Sora app after 6 months of scrutiny

OpenAI is moving to shut down the Sora app, its consumer-facing AI video platform, according to an official X post on 24 March. The move follows months of scrutiny around AI-generated video, including concerns over deepfakes, copyright, and harmful synthetic media.

The reported shutdown comes shortly after OpenAI retired Sora 1 in the United States on 13 March 2026 and replaced it with Sora 2 as the default experience. OpenAI’s help documentation says the older version remains available only in countries where the newer one has not yet launched, while support pages for the standalone Sora app are still live. The product changes also follow the announcement of new copyright settings for the latest video generation model.

That makes the current picture more complex than a simple sunset. Public OpenAI help pages still describe tools on iOS, Android, and the web, while news reports say the company has now decided to wind down the app itself. OpenAI had also recently indicated that it plans to integrate Sora video generation into ChatGPT, which could help explain why the standalone product is being reconsidered.

Sora became one of OpenAI’s most visible consumer media products, but it also drew sustained scrutiny over deepfakes, non-consensual content, and copyrighted characters. Such concerns remained central even as OpenAI added additional controls to the platform, including new consent and traceability measures to enhance AI video safety. AP reported that pressure from advocacy groups, scholars, and entertainment-sector voices formed part of the backdrop to the shutdown decision.

For users, the immediate issue is preservation of existing content. OpenAI’s Sora 1 sunset FAQ says some legacy material may be exportable for a limited period before deletion, but the company has not yet published a detailed standalone help document explaining the full shutdown. Based on the information now available, the clearest distinction is that OpenAI first retired one legacy version in some markets and is now reportedly ending the standalone app more broadly.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

IWF report reveals a rapid growth of synthetic child abuse material online

A surge in AI-generated child sexual abuse material has raised urgent concerns across Europe, with the Internet Watch Foundation reporting record levels of harmful content online.

Findings of the IWF report indicate that AI is accelerating both the scale and severity of abuse, transforming how offenders create and distribute illicit material.

Data from 2025 reveals a sharp increase in AI-generated imagery and video, with over 8,000 cases identified and a dramatic rise in highly severe content.

Synthetic videos have grown at an unprecedented rate, reflecting how emerging tools are being used to produce increasingly realistic and extreme scenarios rather than traditional formats.

Analysis of offender behaviour highlights a disturbing trend toward automation and accessibility.

Discussions on dark web forums suggest that future agentic AI systems may enable the creation of fully produced abusive content with minimal technical skill. The integration of audio and image manipulation further deepens risks, particularly where real children’s likenesses are involved.

Calls for regulatory action are intensifying as policymakers in the EU debate reforms to the Child Sexual Abuse Directive.

Advocacy groups emphasise the need for comprehensive criminalisation, alongside stronger safety-by-design requirements, arguing that technological innovation must not outpace child protection frameworks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU and Australia deepen strategic partnership through trade and security agreements

The European Commission and Australia have announced the adoption of a Security and Defence Partnership alongside the conclusion of negotiations for a free trade agreement.

They have also agreed to launch formal negotiations for Australia’s association with Horizon Europe, the European Union’s research and innovation funding programme.

The Security and Defence Partnership establishes a framework for cooperation on shared strategic priorities. It includes coordination on crisis management, maritime security, cybersecurity, and countering hybrid threats and foreign information manipulation.

The partnership also covers cooperation on emerging and disruptive technologies, including AI, as well as space security, non-proliferation, and disarmament.

The free trade agreement provides for the removal of over 99% of tariffs on EU goods exports to Australia and expands access to services, government procurement, and investment opportunities.

It includes provisions on data flows that prohibit data localisation requirements and supports supply chain resilience through improved access to critical raw materials.

EU exports are expected to increase by up to 33% over the next decade.

The agreement incorporates commitments on trade and sustainable development, including labour rights, environmental standards, and climate obligations aligned with the Paris Agreement.

The negotiated texts will undergo EU internal procedures before submission to the Council for signature and conclusion, followed by European Parliament consent and ratification by Australia before entry into force.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!