Executive Vice-President Henna Virkkunen will host a high-level dialogue in Brussels to assess the implementation of the European Chips Act Regulation and gather industry feedback ahead of its planned revision.
Stakeholders from across the semiconductor ecosystem are expected to exchange views and present recommendations to shape future policy direction.
The initiative forms part of the European Commission's broader strategy to reinforce technological sovereignty and competitiveness and to reduce heavy reliance on external suppliers.
The Chips Act seeks to strengthen Europe’s semiconductor ecosystem, improve supply chain resilience, and reduce strategic dependencies in critical technologies.
The dialogue follows a public consultation and call for evidence conducted in autumn 2025, with findings set to inform the upcoming legislative revision.
Industry representatives will provide direct input through a report outlining challenges, opportunities, and proposed policy adjustments, contributing to a more targeted and effective framework for semiconductor development.
Looking ahead, the revision of the Chips Act will be integrated into a wider Technological Sovereignty package designed to boost the capacity of Europe’s digital industries.
By combining stakeholder engagement with policy reform, the European Commission aims to ensure that semiconductor innovation and production can expand across the EU rather than remain constrained by reliance on external suppliers.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The OECD (Organisation for Economic Co-operation and Development) highlights how businesses are preparing for quantum computing, recognising it as a transformative technology rather than relying solely on conventional computing methods.
Quantum readiness is framed as a long-term capability-building effort in which firms gradually develop skills, infrastructure, and partnerships to explore commercial applications while navigating uncertainty.
Drawing on research, surveys, and interviews with public and private organisations across 10 countries, the OECD identifies both the practical steps companies take to build readiness and the barriers that slow adoption.
Early efforts focus on low-cost awareness and exploration, including attending workshops, training sessions, and industry events, allowing firms to familiarise themselves with emerging opportunities instead of waiting for fully mature systems.
Despite growing interest, companies face significant challenges. Technological immaturity complicates pilots and feasibility studies, while many firms lack a clear understanding of potential business applications.
Access to quantum resources, funding for research and development, and staff training are expensive, particularly for small- and medium-sized enterprises. Furthermore, there is a shortage of talent with both quantum computing expertise and domain-specific knowledge.
As a result, readiness tends to be concentrated among large, R&D-intensive firms, while smaller companies often recognise quantum computing’s potential but delay action.
Such uneven adoption risks creating a divide in the digital economy, with early adopters pulling ahead while other firms fall behind.
To address these challenges, the OECD notes that public and private support mechanisms are critical. Networking and collaboration platforms connect firms with researchers, technology providers, and industry peers, fostering knowledge exchange and collective experimentation.
Business advisory and technology extension services help companies assess capabilities, test solutions, and access specialised facilities.
Grants for research and development lower the costs of experimentation and encourage collaboration, while stakeholder consultations ensure that support measures remain aligned with business needs.
Many companies are also establishing internal quantum labs and innovation hubs to trial applications and build expertise in a controlled environment, combining research with practical exploration instead of relying solely on external guidance.
Looking ahead, the OECD recommends expanding education and skills pipelines, strengthening industry-academic partnerships, and designing policies that support broader participation in quantum adoption.
Hybrid approaches that integrate quantum computing with AI and high-performance computing may offer practical commercial entry points for early applications.
Policymakers are encouraged to balance near-term exploratory pilots with forward-looking support for software development, interoperability, and workforce growth, enabling firms to move from experimentation to deployment effectively.
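The hybrid approaches mentioned above typically pair a quantum subroutine with a classical optimiser. A minimal sketch of that loop, with the quantum step simulated classically for illustration (a single-qubit RY rotation, whose Z expectation is simply cos θ — an assumption made here so the example runs without any quantum SDK):

```python
import math

def expectation_z(theta: float) -> float:
    """Simulated 'quantum' evaluation: <Z> after RY(theta) on |0> is cos(theta)."""
    return math.cos(theta)

def hybrid_optimise(steps: int = 200, lr: float = 0.2) -> float:
    """Classical optimiser driving the (simulated) quantum subroutine."""
    theta = 0.1  # arbitrary initial parameter
    for _ in range(steps):
        # Parameter-shift rule: d<Z>/dtheta = (<Z>(t + pi/2) - <Z>(t - pi/2)) / 2
        grad = (expectation_z(theta + math.pi / 2)
                - expectation_z(theta - math.pi / 2)) / 2
        theta -= lr * grad  # gradient descent toward the minimum <Z> = -1
    return theta

theta_opt = hybrid_optimise()
print(round(expectation_z(theta_opt), 3))  # close to -1.0
```

The classical outer loop never needs to know how the expectation value was produced, which is why such variational schemes are seen as a practical entry point for firms experimenting with early quantum hardware.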
Over the past few years, we have witnessed a rapid shift in the way data is stored and processed across businesses, organisations, and digital systems.
Increasingly, AI itself is changing form as computation shifts away from centralised cloud environments to the network edge, a shift that has come to be known as edge AI.
Edge AI refers to the deployment of machine learning models directly on local devices such as smartphones, sensors, industrial machines, and autonomous systems.
Instead of transmitting data to remote servers for processing, analysis is performed on the device itself, enabling faster responses and greater control over sensitive information.
Such a transition marks a significant departure from earlier models of AI deployment, where cloud infrastructure dominated both processing and storage.
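The on-device pattern described above can be sketched in a few lines. The model weights, device identifier, and feature layout here are hypothetical; the point is that raw sensor values are scored locally and only the resulting label leaves the device:

```python
# Hypothetical pre-trained linear model deployed to the device.
WEIGHTS = [0.8, -0.5, 0.3]
BIAS = -0.2

def infer_locally(features):
    """Run inference on the device itself instead of a remote server."""
    score = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return "anomaly" if score > 0 else "normal"

def report(features):
    # Only the decision is transmitted upstream, never the raw readings.
    label = infer_locally(features)
    return {"device_id": "sensor-01", "label": label}

print(report([1.0, 0.2, 0.5]))
```

In a real deployment the linear model would be replaced by a compressed neural network, but the control over sensitive information comes from the same structure: inference and raw data stay on the endpoint.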
From centralised AI to edge intelligence
Traditional AI systems used to rely heavily on centralised architectures. Data collected from users or devices would be transmitted to large-scale data centres, where powerful servers would perform computations and generate outputs.
Such a model offered efficiency, scalability, and easier security management, as protection efforts could be concentrated within controlled environments.
Centralisation allowed organisations to enforce uniform security policies, deploy updates rapidly, and monitor threats from a single vantage point. However, reliance on cloud infrastructure also introduced latency, bandwidth constraints, and increased exposure of sensitive data during transmission.
Edge AI introduces a fundamentally different paradigm. Moving computation closer to the data source reduces the reliance on continuous connectivity and enables real-time decision-making.
Such decentralisation represents not merely a technical shift but a reconfiguration of the way digital systems operate and interact with their environments.
Advantages of edge AI
Reduced latency and real-time processing
Latency is significantly reduced when computation occurs locally. Edge systems are particularly valuable in time-sensitive applications such as autonomous vehicles, healthcare monitoring, and industrial automation, where delays can have critical consequences.
Enhanced privacy and data control
Privacy improves when sensitive data remains on-device instead of being transmitted across networks. Such an approach aligns with growing concerns around data protection, regulatory compliance, and user trust.
Operational resilience
Edge systems can continue functioning even when network connectivity is limited or unavailable. In remote environments or critical infrastructure, independence from central servers ensures service continuity.
Bandwidth efficiency and cost reduction
Bandwidth consumption is decreased because only processed insights are transmitted, not raw data. Such efficiency can translate into reduced operational costs and improved system performance.
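A rough sketch of why this saves bandwidth, using made-up sensor values: a window of raw samples is reduced on-device to a small summary, and only the summary is serialised for transmission:

```python
import json

# One second of (synthetic) sensor samples that would otherwise be uploaded raw.
raw_readings = [20.0 + 0.01 * i for i in range(1000)]
raw_payload = json.dumps(raw_readings).encode()

# On-device aggregation: transmit only the processed insight.
summary = {
    "min": min(raw_readings),
    "max": max(raw_readings),
    "mean": sum(raw_readings) / len(raw_readings),
}
summary_payload = json.dumps(summary).encode()

print(len(raw_payload), len(summary_payload))  # summary is far smaller
```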
Personalisation and context awareness
Devices can adapt to user behaviour in real time, learning from local data without exposing sensitive information externally. In healthcare, personalised diagnostics can be performed directly on wearable devices, while in manufacturing, predictive maintenance can occur on-site.
The dark side of edge AI
However, the shift towards edge computing introduces profound cybersecurity challenges. The most significant of these is the expansion of the attack surface.
Instead of a limited number of well-protected data centres, organisations must secure vast networks of distributed devices. Each endpoint represents a potential entry point for malicious actors.
The scale and diversity of edge deployments complicate efforts to maintain consistent security standards. Security is no longer centralised but dispersed, increasing the likelihood of vulnerabilities and misconfigurations.
Let’s take a closer look at some other challenges of edge AI.
Physical vulnerabilities and device exposure
Edge devices often operate in uncontrolled environments, making physical access a major risk. Attackers may tamper with hardware, extract sensitive information, or reverse engineer AI models.
Model extraction attacks allow adversaries to replicate proprietary algorithms, undermining intellectual property and enabling further exploitation. Such risks are far more pronounced than in cloud systems, where physical access is tightly controlled.
Software constraints and patch management challenges
Many edge devices rely on embedded systems with limited computational resources. Such constraints make it difficult to implement robust security measures, including advanced encryption and intrusion detection.
Patch management becomes increasingly complex in decentralised environments. Ensuring that millions of devices receive timely updates is a significant challenge, particularly when connectivity is inconsistent or when devices operate in remote locations.
Breakdown of traditional security models
The decentralised nature of edge AI undermines conventional perimeter-based security frameworks. Without a clearly defined boundary, traditional approaches to network defence lose effectiveness.
Each device must be treated as an independent security domain, requiring authentication, authorisation, and continuous monitoring. Identity management becomes more complex as the number of devices grows, increasing the risk of misconfiguration and unauthorised access.
Data integrity and adversarial threats
As we mentioned before, edge devices rely heavily on local data inputs to make decisions. As a result, manipulated inputs can lead to compromised outcomes. Adversarial attacks, in which inputs are deliberately altered to deceive machine learning models, represent a significant threat.
In safety-critical systems, such manipulation can lead to severe consequences. Altered sensor data in industrial environments may disrupt operations, while compromised vision systems in autonomous vehicles may produce dangerous behaviour.
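To make the mechanism concrete, here is a minimal fast-gradient-sign-style perturbation against a toy logistic model (the weights and inputs are invented for illustration). Each input feature is nudged in the direction that increases the model's loss, so the model's confidence in the correct class drops even though the change per feature is small:

```python
import math

# Hypothetical pre-trained logistic model: p(class 1) = sigmoid(w.x + b)
W = [2.0, -1.5, 0.5]
B = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(sum(w * xi for w, xi in zip(W, x)) + B)

def fgsm_perturb(x, eps=0.3):
    """Step each feature along the sign of the loss gradient w.r.t. the input.

    For a logistic model and true class 1, that gradient has the sign of -W,
    so the perturbation is x - eps * sign(W)."""
    return [xi - eps * math.copysign(1.0, w) for xi, w in zip(x, W)]

x = [1.0, 0.0, 0.5]
x_adv = fgsm_perturb(x)
print(predict(x), predict(x_adv))  # confidence drops after the perturbation
```

Real attacks target deep networks and physical inputs (stickers on road signs, crafted audio), but the principle is the same: small, targeted input changes that flip or degrade the model's decision.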
Supply chain risks in edge AI
Edge AI systems depend on a combination of hardware, software, and pre-trained models sourced from multiple vendors. Each component introduces potential vulnerabilities.
Attackers may compromise supply chains by inserting backdoors during manufacturing, distributing malicious updates, or exploiting third-party software dependencies. The global nature of technology supply chains complicates efforts to ensure trust and accountability.
Energy constraints and security trade-offs
Edge devices are often designed with efficiency in mind, prioritising performance and power consumption. Security mechanisms such as encryption and continuous monitoring require computational resources that may be limited.
As a result, security features may be simplified or omitted, increasing exposure to cyber threats. Balancing efficiency with robust protection remains a persistent challenge.
Cyber-physical risks and real-world impact
The integration of edge AI into cyber-physical systems elevates the consequences of security breaches. Digital manipulation can directly influence physical outcomes, affecting safety and infrastructure.
Compromised healthcare devices may produce incorrect diagnoses, while disrupted transportation systems may lead to accidents. In energy networks, attacks could impact entire regions, highlighting the broader societal implications of edge AI vulnerabilities.
Regulatory and governance challenges
Existing regulatory frameworks have been largely designed for centralised systems and do not fully address the complexities of decentralised architectures. Questions regarding liability, accountability, and enforcement remain unresolved.
Organisations may struggle to implement effective security practices without clear standards. Policymakers face the challenge of developing regulations that reflect the distributed nature of edge AI systems.
Towards a secure edge AI ecosystem
Addressing all these challenges requires a multi-layered and adaptive approach that reflects the complexity of edge AI environments.
Hardware-level protections, such as secure enclaves and trusted execution environments, play a critical role in safeguarding sensitive operations from physical tampering and low-level attacks.
Encryption and secure boot processes further strengthen device integrity, ensuring that both data and models remain protected and that unauthorised modifications are prevented from the outset.
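The integrity check at the heart of secure boot can be sketched with a simple digest allowlist. This is a simplification: production systems verify vendor signatures with public keys in hardware rather than comparing raw hashes, and the artefact names here are invented:

```python
import hashlib

# Hypothetical allowlist of trusted firmware/model digests baked into the device.
TRUSTED_SHA256 = {
    hashlib.sha256(b"model-v1 weights").hexdigest(),
}

def verify_before_load(blob: bytes) -> bool:
    """Secure-boot style gate: refuse to load anything with an unknown digest."""
    return hashlib.sha256(blob).hexdigest() in TRUSTED_SHA256

print(verify_before_load(b"model-v1 weights"))   # True: untampered artefact
print(verify_before_load(b"model-v1 weightz"))   # False: a single-byte change
```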
At the software level, continuous monitoring and anomaly detection are essential for identifying threats in real time, particularly in distributed systems where central oversight is limited.
Secure update mechanisms must also be prioritised, ensuring that patches and security improvements can be deployed efficiently and reliably across large networks of devices, even in conditions of intermittent connectivity.
Without such mechanisms, vulnerabilities can persist and spread across the ecosystem.
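On-device anomaly detection of the kind described above can be as simple as flagging readings that drift far from recent history. A minimal sketch (window size, warm-up length, and threshold are arbitrary choices, not recommendations):

```python
import statistics

class DriftMonitor:
    """Minimal on-device monitor: flag readings far from the recent mean."""

    def __init__(self, window=50, threshold=3.0):
        self.window, self.threshold, self.history = window, threshold, []

    def check(self, value: float) -> bool:
        """Return True if the reading looks anomalous against recent history."""
        if len(self.history) >= 10:  # require a short warm-up period
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.threshold
        else:
            anomalous = False
        self.history.append(value)
        self.history = self.history[-self.window:]  # keep a rolling window
        return anomalous

mon = DriftMonitor()
for v in [10.0, 10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 10.0, 9.9, 10.1]:
    mon.check(v)
print(mon.check(25.0))  # a spiked sensor value is flagged
```

Because it needs only a rolling window and two statistics, a check like this fits on constrained hardware where full intrusion-detection suites do not.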
Rather than relying entirely on decentralised or centralised models, organisations are distributing workloads strategically, keeping latency-sensitive and privacy-critical processes on the edge while maintaining centralised oversight, analytics, and security coordination in the cloud.
Such an approach allows organisations to balance performance and control, while enabling more effective threat detection and response through aggregated intelligence.
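The routing decision in such a hybrid setup often reduces to a small policy function. The task descriptor fields below are hypothetical; the sketch only shows the shape of the rule, namely that privacy-critical or latency-sensitive work stays local:

```python
def route(task: dict) -> str:
    """Keep latency-sensitive or privacy-critical work on the edge;
    send everything else to the cloud for aggregated analytics."""
    if task.get("contains_personal_data") or task.get("max_latency_ms", 1000) < 50:
        return "edge"
    return "cloud"

print(route({"name": "brake-control", "max_latency_ms": 10}))             # edge
print(route({"name": "patient-vitals", "contains_personal_data": True}))  # edge
print(route({"name": "fleet-report", "max_latency_ms": 5000}))            # cloud
```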
Security must also be embedded into system design from the outset, rather than treated as an additional layer to be applied after deployment. A proactive approach to risk assessment, combined with secure development practices, can significantly reduce vulnerabilities before systems are operational.
In conclusion, we have seen how the rise of edge AI represents a pivotal shift in both AI and cybersecurity. Decentralisation enables faster, more private, and more resilient systems, yet it also creates a fragmented and dynamic attack surface.
The advantages we have outlined are compelling, but they also introduce additional layers of complexity and risk. Addressing these challenges requires a comprehensive approach that combines technological innovation, regulatory development, and organisational awareness.
Only through such coordinated efforts can the benefits of edge AI be realised while ensuring that security, trust, and safety remain intact in an increasingly decentralised digital landscape.
The European Commission and Australia have announced the adoption of a Security and Defence Partnership alongside the conclusion of negotiations for a free trade agreement.
They have also agreed to launch formal negotiations for Australia’s association with Horizon Europe, the European Union’s research and innovation funding programme.
The Security and Defence Partnership establishes a framework for cooperation on shared strategic priorities. It includes coordination on crisis management, maritime security, cybersecurity, and countering hybrid threats and foreign information manipulation.
The partnership also covers cooperation on emerging and disruptive technologies, including AI, as well as space security, non-proliferation, and disarmament.
The free trade agreement provides for the removal of over 99% of tariffs on EU goods exports to Australia and expands access to services, government procurement, and investment opportunities.
It includes provisions on data flows that prohibit data localisation requirements and supports supply chain resilience through improved access to critical raw materials.
EU exports are expected to increase by up to 33% over the next decade.
The agreement incorporates commitments on trade and sustainable development, including labour rights, environmental standards, and climate obligations aligned with the Paris Agreement.
The negotiated texts will undergo the EU's internal procedures before submission to the Council for signature and conclusion, followed by European Parliament consent and ratification by Australia before entry into force.
OpenShell, an open-source runtime introduced by NVIDIA, is designed to support the secure deployment of autonomous AI agents within enterprise environments.
According to NVIDIA, OpenShell applies security controls at the infrastructure level rather than within the model or application layer. The runtime ensures that each agent operates inside an isolated sandbox, where system-level policies define and enforce permissions, resource access, and operational constraints.
The company states that such an approach separates agent behaviour from policy enforcement, preventing agents from overriding security controls or accessing restricted data.
OpenShell enables organisations to define and monitor a unified policy layer governing how autonomous systems interact with files, tools, and enterprise workflows.
Additionally, OpenShell forms part of the NVIDIA Agent Toolkit and is complemented by NemoClaw, a reference stack designed to support the deployment of continuously operating AI assistants.
NVIDIA indicates that the system can run across cloud, on-premises, and local computing environments, while maintaining consistent policy enforcement.
The company also reports collaboration with industry partners, including Cisco, CrowdStrike, Google Cloud, and Microsoft Security, to align security practices for AI agent deployment. Both OpenShell and NemoClaw are currently in early preview.
Meta has reversed its earlier decision to discontinue virtual reality support for Horizon Worlds, allowing the platform to remain available on VR headsets despite previous plans to prioritise mobile and web access.
The decision follows an internal reassessment of user engagement trends, which indicate limited adoption of VR-based social platforms.
While Horizon Worlds was once positioned as central to the company’s metaverse ambitions, demand has remained relatively low, raising questions about the long-term viability of immersive social environments.
Financial pressures also continue to shape strategy.
Meta’s Reality Labs division has recorded substantial losses since 2021, reflecting high investment in virtual and augmented reality technologies without corresponding commercial returns.
Industry data further suggests declining headset sales, reinforcing uncertainty around VR as a mainstream consumer platform.
In contrast, mobile usage of Horizon Worlds is growing faster. Increasing downloads point to broader accessibility and improved product-market alignment, though revenue generation remains limited.
As a result, Meta is prioritising mobile development instead of fully abandoning VR, maintaining a dual approach while seeking more sustainable engagement models.
Broadcom is facing increased regulatory pressure in the EU following a formal antitrust complaint concerning changes to VMware licensing practices.
The complaint highlights growing tensions between large technology providers and European cloud infrastructure firms.
The filing, submitted by Cloud Infrastructure Services Providers in Europe, raises concerns that revised licensing models could significantly alter market dynamics.
European providers argue that the changes may limit flexibility, increase costs, and affect their ability to compete effectively in the cloud services sector.
At the centre of the dispute lies the broader issue of market concentration and control over critical digital infrastructure.
Industry stakeholders suggest that restrictive licensing conditions could reshape access to essential virtualisation technologies, which underpin a wide range of cloud and enterprise services across the EU.
Regulatory attention is expected to focus on whether such practices align with EU competition rules, particularly regarding fair access and market neutrality.
The case emerges at a time when European policymakers are intensifying oversight of dominant technology firms and seeking to strengthen digital sovereignty across strategic sectors.
Malaysia has quietly restricted new data centre approvals to projects linked to AI, signalling a strategic shift in its digital economy. Authorities confirmed that non-AI development has been halted for nearly two years.
The policy reflects mounting pressure on energy and water resources as demand for data centres accelerates. Officials aim to ensure infrastructure supports high-value AI projects rather than lower-impact investments.
Rapid growth has positioned Malaysia as a key regional hub, attracting major global technology firms. Concerns remain over whether the country risks hosting infrastructure without building local innovation capacity.
Leaders say future efforts will focus on balancing investment with domestic benefits and energy sustainability. Plans include expanding power supply and strengthening national AI capabilities to secure long-term gains.
Two researchers have been awarded the Turing Award for pioneering work in quantum cryptography. Their research laid the foundations for a new form of secure communication based on quantum physics.
The method, developed in the 1980s, enables encryption keys that cannot be copied without detection. Any attempt to intercept the data alters its physical properties, revealing interference.
Experts say the approach could become vital as quantum computing advances. Traditional encryption methods may become vulnerable as computing power increases.
The award highlights the growing importance of secure data transmission in a digital world. Researchers believe quantum cryptography could play a central role in encrypting and protecting future communications.
UAE operator e& (formerly Etisalat) has partnered with Khalifa University to outline a new vision for AI-native 6G networks. Their joint whitepaper presents a framework in which intelligence is embedded at the core of the network architecture rather than added as a feature.
The proposal introduces a dedicated AI plane alongside existing network layers to enable continuous learning and automation. This approach supports sensing, reasoning and autonomous decision-making across radio, core and edge systems.
The framework includes distributed AI agents, digital twin integration and closed-loop automation models. It is designed to support multi-vendor environments while enabling scalable and coordinated intelligence across networks.
Five core pillars underpin the model, including AI frameworks, cloud-edge computing and sustainability-focused design. Together, these elements position 6G as a cognitive infrastructure capable of predictive optimisation and advanced service delivery.
The whitepaper also defines measurable performance indicators such as latency, learning accuracy and energy efficiency. The initiative aims to contribute to global standards while strengthening the UAE’s role in shaping future telecom systems.