AI and quantum computing reshape the global cybersecurity landscape

Cybersecurity risks are increasing as digital connectivity expands across governments, businesses and households.

According to Thales Group, a growing number of connected devices and digital services has significantly expanded the potential entry points for cyberattacks.

AI is reshaping the cybersecurity landscape by enabling attackers to identify vulnerabilities at unprecedented speed.

Security specialists increasingly describe the environment as a contest in which defensive systems must deploy AI to counter adversaries using similar technologies to exploit weaknesses in digital infrastructure.

Security concerns also extend beyond large institutions. Connected devices in homes, including smart cameras and speakers, often lack robust security protections, increasing exposure for individuals and networks.

Policymakers in Europe are responding through measures such as the Cyber Resilience Act, which will introduce mandatory security requirements for connected products sold in the EU.

Long-term risks are also emerging from advances in quantum computing.

Experts warn that powerful future machines could eventually break widely used encryption systems that currently protect communications, financial data and government networks, prompting organisations to adopt quantum-resistant security methods.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU lawmakers call for stronger copyright safeguards in AI training

The European Parliament has adopted a report urging policymakers to establish a long-term framework protecting copyrighted works used in AI training.

These recommendations aim to ensure that creative industries retain transparency and fair treatment as generative AI technologies expand.

Among the central proposals is the creation of a European register managed by the European Union Intellectual Property Office. The database would list copyrighted works used to train AI systems and identify creators who have chosen to exclude their content from such use.

Lawmakers in the EU are also calling for greater transparency from AI developers, including disclosure of the websites from which training data has been collected. According to the report, failing to meet transparency requirements could raise questions about compliance with existing copyright rules.

The recommendations have received mixed reactions from industry stakeholders.

Organisations representing creators argue that stronger safeguards are necessary to ensure fair remuneration and legal clarity, while technology sector groups caution that additional requirements could create complexity for companies developing AI systems.

The report is not legally binding but signals the political direction of ongoing European discussions on copyright and AI governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Dutch firms rank among EU leaders in sustainable ICT

Businesses in the Netherlands rank among the leading adopters of sustainable ICT practices in the EU, according to data from Statistics Netherlands and Eurostat. Around one quarter of companies use digital tools to reduce material consumption and improve resource efficiency.

The Netherlands ranked fourth in the EU for the use of technology to reduce waste and improve sustainability. Sectors including energy, water and waste management showed the strongest adoption of these ICT solutions.

Sustainable disposal of electronic equipment is also widespread among businesses in the Netherlands. About 9 in 10 companies recycle or return obsolete ICT equipment through approved e-waste collection systems.

Across the EU, more than three-quarters of businesses now dispose of outdated technology in environmentally responsible ways. Analysts say this progress highlights growing corporate efforts to integrate e-waste sustainability into digital operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU explores AI image generation safeguards

The Council of the European Union is examining a compromise proposal that could introduce restrictions on certain AI systems capable of generating sensitive synthetic images.

The discussions form part of ongoing adjustments to the EU AI Act.

The proposed measure would primarily address AI tools that generate illegal material, particularly content involving the exploitation of minors.

Policymakers are considering ways to prevent the development or deployment of systems that could produce such material while maintaining proportionate rules for legitimate AI applications.

Early indications suggest the proposal may not apply to images depicting people in standard clothing contexts, such as swimwear. The distinction reflects policymakers’ effort to define the scope of restrictions without imposing unnecessary limits on common image-generation uses.

The debate highlights broader regulatory challenges linked to generative AI technologies. European institutions are seeking to strengthen protections against harmful uses of AI while preserving space for innovation and lawful digital services.

Further negotiations among the EU institutions are expected as lawmakers continue refining how these provisions could fit within the broader European framework governing AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU draft regulation aims to create new legal framework for startups

A draft initiative from the European Commission seeks to introduce a new legal structure designed to simplify how companies operate across the EU.

The proposal, often referred to as the ‘EU Inc’ initiative, explores the creation of a so-called ’28th regime’ that would exist alongside national corporate frameworks used by member states.

The concept aims to provide startups and technology firms with a single legal structure that applies across the EU.

Instead of navigating different national rules in each country, companies could operate under a unified regulatory model intended to reduce administrative barriers and encourage cross-border innovation.

According to the draft, the initiative may rely on an EU regulation rather than separate national legislation. Such an approach could enable faster implementation, as EU regulations apply directly in all member states without requiring domestic transposition.

However, the proposal’s legal basis could raise institutional concerns. Using a regulation as the primary mechanism may be seen as an unconventional shortcut in EU lawmaking, potentially sparking debate among policymakers over the approach’s scope and legitimacy.

The initiative reflects broader efforts within the Union to simplify regulatory frameworks and strengthen the competitiveness of European startups. If adopted, the ‘EU Inc’ model could reshape how young companies expand across the single market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU launches AI platform to detect food fraud and contamination

Food safety monitoring across the EU is receiving a technological upgrade with the launch of TraceMap, a new AI platform designed to detect food fraud, contamination and disease outbreaks more quickly.

The European Commission introduced the tool as part of efforts to strengthen consumer protection and improve oversight of the agri-food supply chain.

TraceMap helps authorities analyse large volumes of data related to food production, distribution and trade. By identifying connections between operators, shipments and supply chains, the system allows investigators to spot suspicious activity and potential safety risks earlier.

National authorities in the EU member states can already access the platform, enabling them to conduct more targeted inspections and investigations without requiring additional resources.

The platform draws on data from existing EU systems such as the Rapid Alert System for Food and Feed (RASFF) and the Trade Control and Expert System (TRACES). Using AI to structure and interpret information, TraceMap can reveal patterns in production and trade flows that may indicate contamination, fraud, or other irregularities in the food supply chain.

Early testing of the platform has already demonstrated its practical value. A pilot version of TraceMap helped authorities identify and recall infant milk formula produced with contaminated ARA oil originating from China.

European officials say the system will strengthen the EU’s ability to respond rapidly to food safety risks while improving monitoring of both domestic production and imported products.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU considers stronger child protection in Digital Fairness Act

Capitals across the EU are being asked to discuss how stronger child protection measures should be incorporated into the upcoming Digital Fairness Act (DFA).

The initiative comes as policymakers attempt to address growing concerns about how online platforms expose minors to harmful content, manipulative design practices, and unsafe digital environments.

According to a document circulated during Cyprus’s Council presidency of the European Union, member states are expected to debate which concrete safeguards should be introduced as part of the broader consumer protection framework.

Officials are exploring whether new rules should require platforms to adopt stricter safeguards when designing digital services used by children.

The discussions are part of the European Union’s broader effort to strengthen digital governance and consumer protection across online platforms. Policymakers are increasingly focusing on how platform design, recommendation algorithms, and monetisation models may affect younger users.

The proposals could complement existing EU regulations targeting large digital platforms, while expanding protections specifically focused on minors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The EU faces growing AI copyright disputes

Courts across Europe are examining how copyright law applies to AI systems trained on large datasets. Judges are reviewing whether existing rules allow AI developers to use copyrighted books, music and journalism without permission.

One closely watched dispute in Luxembourg involves a publisher challenging Google over summaries produced by its Gemini chatbot. The case before the EU court in Luxembourg could test how press publishers’ rights apply to AI-generated outputs.

Legal experts warn the ruling in Luxembourg may not resolve wider questions about AI training data. Many disputes in Europe focus on the EU copyright directive and its text and data mining exception.

Additional lawsuits across Europe involving music rights group GEMA and OpenAI are expected to continue for years. Policymakers in Europe are also considering updates to copyright rules as AI technology expands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU and Canada begin negotiations on a digital trade agreement

The European Commission and Canada have launched negotiations on a new Digital Trade Agreement to strengthen the rules governing cross-border digital commerce.

The initiative was announced in Toronto by EU Trade Commissioner Maroš Šefčovič and Canadian International Trade Minister Maninder Sidhu.

The agreement would expand the digital dimension of the existing Comprehensive Economic and Trade Agreement, which has already increased trade in goods and services between the two partners.

Officials say the new negotiations aim to create clearer rules for businesses and consumers engaging in cross-border digital transactions.

Proposals under discussion include promoting paperless trade systems, recognising electronic signatures and digital contracts, and prohibiting customs duties on electronic transmissions.

The agreement between the EU and Canada will also seek to prevent protectionist practices such as unjustified data localisation requirements or forced transfers of software source code.

European officials argue that the negotiations reflect a broader effort to develop international standards for digital trade governance while preserving governments’ ability to regulate emerging challenges in the digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU Commission’s new guidance to support the Cyber Resilience Act

The EU Commission has opened a public consultation on draft guidance to help companies apply the EU’s Cyber Resilience Act (CRA), a regulation that sets baseline cybersecurity requirements for hardware and software ‘products with digital elements’ to reduce vulnerabilities and improve security throughout a product’s life cycle. The guidance is framed as practical help, especially for microenterprises and SMEs, and the consultation runs until 31 March 2026.

The CRA is designed to make ‘secure by design’ the default for connected products people use every day, from consumer devices to business software, while giving users clearer information about a product’s security properties. In timeline terms, the Act entered into force on 10 December 2024. The incident reporting duties start on 11 September 2026, and the main obligations apply from 11 December 2027, giving industry a runway but also a clear countdown.

What the Commission is trying to nail down now are the parts companies have found hardest to interpret: how the rules apply to remote data processing solutions (cloud-linked features), how they treat free and open-source software, what ‘support periods’ mean in practice (i.e. how long security upkeep is expected), and how the CRA fits alongside other EU laws. In other words, this is less about announcing new rules and more about reducing legal grey zones before enforcement ramps up.

The guidance push also lands amid a broader policy drive, as on 20 January 2026, the Commission proposed a new EU cybersecurity package, built around a revised Cybersecurity Act and targeted NIS2 amendments. The package aims to harden ICT supply chains, including a framework to jointly identify and mitigate risks across 18 critical sectors, and would enable mandatory ‘de-risking’ of EU mobile telecom networks away from high‑risk third‑country suppliers. It also proposes a revamped EU cybersecurity certification system with simpler procedures, giving a default 12‑month timeline to develop certification schemes, while cutting red tape for tens of thousands of firms and strengthening ENISA’s role, including early warnings, ransomware support, and a major budget boost.

Taken together, the EU is moving from strategy documents to operational details: product security on one side (CRA) and ecosystem-level resilience on the other (supply chains, certification, incident reporting and supervision). For companies, that can be both reassuring and demanding: clearer guidance should reduce uncertainty, but the compliance reality may still be layered, especially for businesses spanning devices, software, cloud features, and cross-border operations. The Commission’s stakeholder feedback window is essentially a test of whether these rules can be made workable without diluting their bite.

Why does it matter?

Beyond technical risk, this is increasingly about sovereignty: who sets the rules for digital products, who can be trusted in supply chains, and how much dependency is acceptable in critical infrastructure. Digital governance expert Jovan Kurbalija argues that full ‘stack’ digital sovereignty (control over infrastructure, services, data, and AI knowledge) is concentrated in very few states, while most countries must balance openness with autonomy. The EU’s current wave of cybersecurity governance fits that pattern: it is an attempt to turn security standards, certification, and supply-chain choices into a practical form of strategic control, not just to prevent hacks but to protect democratic institutions, economic competitiveness, and trust in the digital tools people rely on.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!