Lawmakers urge EU to curb Huawei’s role in solar inverters over security risks

Lawmakers and security officials are increasingly worried that Huawei’s dominant role in solar inverters could create a new supply-chain vulnerability for Europe’s power grids. Two MEPs have written to the European Commission urging immediate steps to limit ‘high-risk’ vendors in energy systems.

Inverters convert the direct current produced by solar panels into the alternating current fed into the power network; many are internet-connected so vendors can perform remote maintenance. Cyber experts warn that remote access to large numbers of inverters could be abused to shut devices down or change settings en masse, creating surges, drops or wider instability across the grid.

Chinese firms, led by Huawei and Sungrow, supply a large share of Europe’s installed inverter capacity. SolarPower Europe estimates Chinese companies account for roughly 65 per cent of the market. Some member states are already acting: Lithuania has restricted remote access to sizeable Chinese installations, while agencies in the Czech Republic and Germany have flagged specific Huawei components for further scrutiny.

The European Commission is preparing an ICT supply-chain toolbox to de-risk critical sectors, with solar inverters listed among priority areas. Suspicion of Chinese technology has surged in recent years. Beijing, under President Xi Jinping, requires domestic firms to comply with government requests for data sharing and to report software vulnerabilities, raising Western fears of potential surveillance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Netherlands and China in talks to resolve Nexperia dispute

The Dutch Economy Minister has spoken with his Chinese counterpart to ease tensions following the Netherlands’ recent seizure of Nexperia, a major Dutch semiconductor firm.

China, where most of Nexperia’s chips are produced and sold, reacted by blocking exports, creating concern among European carmakers reliant on its components.

Vincent Karremans said he had discussed ‘further steps towards reaching a solution’ with Chinese Minister of Commerce Wang Wentao.

Both sides emphasised the importance of finding an outcome that benefits Nexperia, as well as the Chinese and European economies.

Meanwhile, Nexperia’s China division has begun asserting its independence, telling employees they may reject ‘external instructions’.

The firm remains a subsidiary of Shanghai-listed Wingtech, which has faced growing scrutiny from European regulators over national security and strategic technology supply chains.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US greenlights Nvidia chip exports to UAE under new AI pact

The US has approved its first export licences for Nvidia’s advanced AI chips destined for the United Arab Emirates, marking a concrete step in the bilateral AI partnership announced earlier in 2025.

These licences come under the oversight of the US Commerce Department’s Bureau of Industry and Security, aligned with a formal agreement between the two nations signed in May.

In return, the UAE has committed to investing in the United States, making this a two-way deal. The licences do not cover every project yet: some entities, such as the AI firm G42, are currently excluded from the approved shipments.

The UAE sees the move as crucial to its AI push under Vision 2031, particularly for funding data centre expansion and advancing research in robotics and intelligent systems. Nvidia already collaborates with Abu Dhabi’s Technology Innovation Institute (TII) in a joint AI and robotics lab.

Challenges remain. Some US officials cite national security risks, particularly the possibility that advanced chips could reach third countries through the UAE's commercial and technology ties.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California enacts first state-level AI safety law

In the US, California Governor Gavin Newsom has signed SB 53, a landmark law establishing transparency and safety requirements for large AI companies.

The legislation obliges major AI developers such as OpenAI, Anthropic, Meta, and Google DeepMind to disclose their safety protocols. It also introduces whistle-blower protections and a reporting mechanism for safety incidents, including cyberattacks and autonomous AI behaviour not covered by the EU AI Act.

Reactions across the industry have been mixed. Anthropic supported the law, while Meta and OpenAI lobbied against it, with OpenAI publishing an open letter urging Newsom not to sign. Tech firms have warned that state-level measures could create a patchwork of regulation that stifles innovation.

Despite resistance, the law positions California as a national leader in AI governance. Newsom said the state had demonstrated that it was possible to safeguard communities without stifling growth, calling AI ‘the new frontier in innovation’.

Similar legislation is under consideration in New York, while California lawmakers are also debating SB 243, a separate bill that would regulate AI companion chatbots.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CISA warns of advanced campaign exploiting Cisco appliances in federal networks

US cybersecurity officials have issued an emergency directive after hackers breached a federal agency by exploiting critical flaws in Cisco appliances. CISA warned the campaign poses a severe risk to government networks.

Experts told CNN they believe the hackers are state-backed and operating out of China, raising alarm among officials. CISA said hundreds of the compromised devices are in use across the federal government and issued a directive requiring agencies to rapidly assess the scope of the breach.

Cisco confirmed it was urgently alerted to the breaches by US government agencies in May and quickly assigned a specialised team to investigate. The company provided advanced detection tools, worked intensely to analyse compromised environments, and examined firmware from infected devices.

Cisco stated that the attackers exploited multiple zero-day flaws and employed advanced evasion techniques. It suspects a link to the ArcaneDoor campaign reported in early 2024.

CISA has withheld details about which agencies were affected or the precise nature of the breaches, underscoring the gravity of the situation. Investigations are currently underway to contain the ongoing threat and prevent further exploitation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN Secretary-General warns humanity cannot rely on algorithms

UN Secretary-General António Guterres has urged world leaders to act swiftly to ensure AI serves humanity rather than threatens it. Speaking at a UN Security Council debate, he warned that while AI can help anticipate food crises, support de-mining efforts, and prevent violence, it is equally capable of fuelling conflict through cyberattacks, disinformation, and autonomous weapons.

‘Humanity’s fate cannot be left to an algorithm,’ he stressed.

Guterres outlined four urgent priorities. First, he called for strict human oversight in all military uses of AI, repeating his demand for a global ban on lethal autonomous weapons systems. He insisted that life-and-death decisions, including any involving nuclear weapons, must never be left to machines.

Second, he pressed for coherent international regulations to ensure AI complies with international law at every stage, from design to deployment. He highlighted the dangers of AI lowering barriers to acquiring prohibited weapons and urged states to build transparency, trust, and safeguards against misuse.

His final two priorities were protecting information integrity and closing the global AI capacity gap. He warned that AI-driven disinformation could destabilise peace processes and elections, while unequal access risks leaving developing countries behind.

The UN has already launched initiatives, including a new international scientific panel and an annual AI governance dialogue, to foster cooperation and accountability.

‘The window is closing to shape AI for peace, justice, and humanity,’ he concluded.

For more information from the 80th session of the UN General Assembly, visit our dedicated page.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US military unveils automated cybersecurity construct for modern warfare

The US Department of War has unveiled a new Cybersecurity Risk Management Construct (CSRMC), a framework designed to deliver real-time cyber defence and strengthen the military’s digital resilience.

The model replaces outdated, checklist-driven processes with automated, continuously monitored systems capable of adapting to rapidly evolving threats.

The CSRMC shifts from static, compliance-heavy assessments to dynamic and operationally relevant defence. Its five-phase lifecycle embeds cybersecurity into system design, testing, deployment, and operations, ensuring digital systems remain hardened and actively defended throughout use.

Continuous monitoring and automated authorisation replace periodic reviews, giving commanders real-time visibility of risks.

Built on ten core principles, including automation, DevSecOps, cyber survivability, and threat-informed testing, the framework represents a cultural change in military cybersecurity.

It seeks to cut duplication through enterprise services, accelerate secure capability delivery, and enable defence systems to survive in contested environments.

According to acting CIO Katie Arrington, the construct is intended to institutionalise resilience across all domains, from land and sea to space and cyberspace. The goal is to provide US forces with the technological edge to counter increasingly sophisticated adversaries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN urges global rules to ensure AI benefits humanity

The UN Security Council debated AI, noting its potential to boost development but warning of risks, particularly in military use. Secretary-General António Guterres called AI a ‘double-edged sword,’ supporting development but posing threats if left unregulated.

He urged legally binding restrictions on lethal autonomous weapons and insisted nuclear decisions remain under human control.

Experts and leaders emphasised the urgent need for global regulation, equitable access, and trustworthy AI systems. Yoshua Bengio of Université de Montréal warned of risks from misaligned AI, cyberattacks, and economic concentration, calling for greater oversight.

Stanford’s Yejin Choi highlighted the concentration of AI expertise in a few countries and companies, stressing that democratising AI and reducing bias is key to ensuring global benefits.

Representatives warned that AI could deepen digital inequality in developing regions, especially Africa, due to limited access to data and infrastructure.

Delegates from Guyana, Somalia, Sierra Leone, Algeria, and Panama called for international rules to ensure transparency, fairness, and prevent dominance by a few countries or companies. Others, including the United States, cautioned that overregulation could stifle innovation and centralise power.

Delegates stressed AI’s risks to security: Yemen, Poland, and the Netherlands called for responsible use in conflict, with human oversight and ethical accountability. Leaders from Portugal and the Netherlands said AI frameworks must promote innovation and security while serving humanity and peace.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta offers Llama AI to US allies amid global tech race

Meta will provide its Llama AI model to key European institutions, NATO, and several allied countries as part of efforts to strengthen national security capabilities.

The company confirmed that France, Germany, Italy, Japan, South Korea, and the EU will gain access to the open-source model. US defence and security agencies and partners in Australia, Canada, New Zealand, and the UK already use Llama.

Meta stated that the aim is to ensure democratic allies have the most advanced AI tools for decision-making, mission planning, and operational efficiency.

Although Llama’s terms bar use for direct military or espionage applications, the company argued that supporting the defence strategies of allied nations serves their shared interests.

The move highlights the strategic importance of AI models in global security. Meta has positioned Llama as a counterweight to rival models developed abroad, after reports that researchers in China adapted earlier versions of the model for military purposes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Emerging AI trends that will define 2026

AI is set to reshape daily life in 2026, with innovations moving beyond software to influence the physical world, work environments, and international relations.

Autonomous agents will increasingly manage household and workplace tasks, coordinating projects, handling logistics, and interacting with smart devices so that people no longer need to handle these chores themselves.

Synthetic content will become ubiquitous, potentially comprising up to 90 per cent of online material. While it can accelerate data analysis and insight generation, the challenge will be to ensure genuine human creativity and experience remain visible instead of being drowned out by generic AI outputs.

The workplace will see both opportunity and disruption. Routine and administrative work will increasingly be offloaded to AI, creating roles such as prompt engineers and AI ethics specialists, while some traditional positions face redundancy.

Similarly, AI will expand into healthcare, autonomous transport, and industrial automation, becoming a tangible presence in everyday life instead of remaining a background technology.

Governments and global institutions will grapple with AI’s geopolitical and economic impact. From trade restrictions to synthetic propaganda, world leaders will attempt to control AI’s spread and underlying data instead of allowing a single country or corporation to have unchecked dominance.

Energy efficiency and sustainability will also rise to the fore, as AI’s growing power demands require innovative solutions to reduce environmental impact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!