US greenlights Nvidia chip exports to UAE under new AI pact

The US has approved its first export licences for Nvidia’s advanced AI chips destined for the United Arab Emirates, marking a concrete step in the bilateral AI partnership announced earlier in 2025.

The licences were issued under the oversight of the US Commerce Department’s Bureau of Industry and Security and align with a formal agreement the two nations signed in May.

In return, the UAE has committed to investing in the United States, making this a two-way deal. The licences do not cover every project yet: some entities, such as the AI firm G42, are currently excluded from the approved shipments.

The UAE sees the move as crucial to its AI push under Vision 2031, particularly for funding data centre expansion and advancing research in robotics and intelligent systems. Nvidia already collaborates with Abu Dhabi’s Technology Innovation Institute (TII) in a joint AI and robotics lab.

Challenges remain. Some US officials cite national security risks, pointing to the UAE’s external ties and the potential for the technology to reach third countries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California enacts first state-level AI safety law

In the US, California Governor Gavin Newsom has signed SB 53, a landmark law establishing transparency and safety requirements for large AI companies.

The legislation obliges major AI developers such as OpenAI, Anthropic, Meta, and Google DeepMind to disclose their safety protocols. It also introduces whistle-blower protections and a reporting mechanism for safety incidents, including cyberattacks and autonomous AI behaviour, categories not covered by the EU AI Act.

Reactions across the industry have been mixed. Anthropic supported the law, while Meta and OpenAI lobbied against it, with OpenAI publishing an open letter urging Newsom not to sign. Tech firms have warned that state-level measures could create a patchwork of regulation that stifles innovation.

Despite resistance, the law positions California as a national leader in AI governance. Newsom said the state had demonstrated that it was possible to safeguard communities without stifling growth, calling AI ‘the new frontier in innovation’.

Similar legislation is under consideration in New York, while California lawmakers are also debating SB 243, a separate bill that would regulate AI companion chatbots.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CISA warns of advanced campaign exploiting Cisco appliances in federal networks

US cybersecurity officials have issued an emergency directive after hackers breached a federal agency by exploiting critical flaws in Cisco appliances. CISA warned the campaign poses a severe risk to government networks.

Experts told CNN they believe the hackers are state-backed and operating out of China, raising alarm among officials. CISA said hundreds of compromised devices are reportedly in use across the federal government and issued a directive requiring agencies to rapidly assess the scope of the breach.

Cisco confirmed it was urgently alerted to the breaches by US government agencies in May and quickly assigned a specialised team to investigate. The company provided advanced detection tools, worked intensely to analyse compromised environments, and examined firmware from infected devices.

Cisco stated that the attackers exploited multiple zero-day flaws and employed advanced evasion techniques. It suspects a link to the ArcaneDoor campaign reported in early 2024.

CISA has withheld details about which agencies were affected or the precise nature of the breaches, underscoring the gravity of the situation. Investigations are currently underway to contain the ongoing threat and prevent further exploitation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN Secretary-General warns humanity cannot rely on algorithms

UN Secretary-General António Guterres has urged world leaders to act swiftly to ensure AI serves humanity rather than threatens it. Speaking at a UN Security Council debate, he warned that while AI can help anticipate food crises, support de-mining efforts, and prevent violence, it is equally capable of fuelling conflict through cyberattacks, disinformation, and autonomous weapons.

‘Humanity’s fate cannot be left to an algorithm,’ he stressed.

Guterres outlined four urgent priorities. First, he called for strict human oversight in all military uses of AI, repeating his demand for a global ban on lethal autonomous weapons systems. He insisted that life-and-death decisions, including any involving nuclear weapons, must never be left to machines.

Second, he pressed for coherent international regulations to ensure AI complies with international law at every stage, from design to deployment. He highlighted the dangers of AI lowering barriers to acquiring prohibited weapons and urged states to build transparency, trust, and safeguards against misuse.

His third and fourth priorities were protecting information integrity and closing the global AI capacity gap. He warned that AI-driven disinformation could destabilise peace processes and elections, while unequal access risks leaving developing countries behind.

The UN has already launched initiatives, including a new international scientific panel and an annual AI governance dialogue, to foster cooperation and accountability.

‘The window is closing to shape AI for peace, justice, and humanity,’ he concluded.

For more information from the 80th session of the UN General Assembly, visit our dedicated page.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US military unveils automated cybersecurity construct for modern warfare

The US Department of War has unveiled a new Cybersecurity Risk Management Construct (CSRMC), a framework designed to deliver real-time cyber defence and strengthen the military’s digital resilience.

The model replaces outdated checklist-driven processes with automated, continuously monitored systems capable of adapting to rapidly evolving threats.

The CSRMC shifts from static, compliance-heavy assessments to dynamic and operationally relevant defence. Its five-phase lifecycle embeds cybersecurity into system design, testing, deployment, and operations, ensuring digital systems remain hardened and actively defended throughout use.

Continuous monitoring and automated authorisation replace periodic reviews, giving commanders real-time visibility of risks.

Built on ten core principles, including automation, DevSecOps, cyber survivability, and threat-informed testing, the framework represents a cultural change in military cybersecurity.

It seeks to cut duplication through enterprise services, accelerate secure capability delivery, and enable defence systems to survive in contested environments.

According to acting CIO Katie Arrington, the construct is intended to institutionalise resilience across all domains, from land and sea to space and cyberspace. The goal is to provide US forces with the technological edge to counter increasingly sophisticated adversaries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN urges global rules to ensure AI benefits humanity

The UN Security Council debated AI, noting its potential to boost development but warning of risks, particularly in military use. Secretary-General António Guterres called AI a ‘double-edged sword,’ supporting development but posing threats if left unregulated.

He urged legally binding restrictions on lethal autonomous weapons and insisted nuclear decisions remain under human control.

Experts and leaders emphasised the urgent need for global regulation, equitable access, and trustworthy AI systems. Yoshua Bengio of Université de Montréal warned of risks from misaligned AI, cyberattacks, and economic concentration, calling for greater oversight.

Stanford’s Yejin Choi highlighted the concentration of AI expertise in a few countries and companies, stressing that democratising AI and reducing bias are key to ensuring global benefits.

Representatives warned that AI could deepen digital inequality in developing regions, especially Africa, due to limited access to data and infrastructure.

Delegates from Guyana, Somalia, Sierra Leone, Algeria, and Panama called for international rules to ensure transparency and fairness and to prevent dominance by a few countries or companies. Others, including the United States, cautioned that overregulation could stifle innovation and centralise power.

Delegates stressed AI’s security risks: Yemen, Poland, and the Netherlands called for responsible use in conflict, with human oversight and ethical accountability. Leaders from Portugal and the Netherlands said AI frameworks must promote innovation and security while serving humanity and peace.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta offers Llama AI to US allies amid global tech race

Meta will provide its Llama AI model to key European institutions, NATO, and several allied countries as part of efforts to strengthen national security capabilities.

The company confirmed that France, Germany, Italy, Japan, South Korea, and the EU will gain access to the open-source model. US defence and security agencies and partners in Australia, Canada, New Zealand, and the UK already use Llama.

Meta stated that the aim is to ensure democratic allies have the most advanced AI tools for decision-making, mission planning, and operational efficiency.

Although its terms bar use for direct military or espionage applications, the company emphasised that supporting allied defence strategies is in the interest of democratic nations.

The move highlights the strategic importance of AI models in global security. Meta has positioned Llama as a counterweight to other countries’ developments, after allegations that researchers adapted earlier versions of the model for military purposes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Emerging AI trends that will define 2026

AI is set to reshape daily life in 2026, with innovations moving beyond software to influence the physical world, work environments, and international relations.

Autonomous agents will increasingly manage household and workplace tasks, coordinating projects, handling logistics, and interacting with smart devices instead of relying solely on humans.

Synthetic content will become ubiquitous, potentially comprising up to 90 percent of online material. While it can accelerate data analysis and insight generation, the challenge will be to ensure genuine human creativity and experience remain visible instead of being drowned out by generic AI outputs.

The workplace will see both opportunity and disruption. Routine and administrative work will increasingly be offloaded to AI, creating roles such as prompt engineers and AI ethics specialists, while some traditional positions face redundancy.

Similarly, AI will expand into healthcare, autonomous transport, and industrial automation, becoming a tangible presence in everyday life instead of remaining a background technology.

Governments and global institutions will grapple with AI’s geopolitical and economic impact. From trade restrictions to synthetic propaganda, world leaders will attempt to control AI’s spread and underlying data instead of allowing a single country or corporation to have unchecked dominance.

Energy efficiency and sustainability will also rise to the fore, as AI’s growing power demands require innovative solutions to reduce environmental impact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattack disrupts major European airports

Airports across Europe faced severe disruption after a cyberattack on check-in software used by several major airlines.

Heathrow, Brussels, Berlin and Dublin all reported delays, with some passengers left waiting hours as staff reverted to manual processes instead of automated systems.

Brussels Airport asked airlines to cancel half of Monday’s departures after Collins Aerospace, the US-based supplier of check-in technology, could not provide a secure update. Heathrow said most flights were expected to operate but warned travellers to check their flight status.

Berlin and Dublin also reported long delays, although Dublin said it planned to run a full schedule.

Collins, a subsidiary of aerospace and defence group RTX, confirmed that its Muse software had been targeted by a cyberattack and said it was working to restore services. The UK’s National Cyber Security Centre is coordinating with airports and law enforcement to assess the impact.

Experts warned that aviation is particularly vulnerable because airlines and airports rely on shared platforms. They said stronger backup systems, regular updates and greater cross-border cooperation are needed instead of siloed responses, as cyberattacks rarely stop at national boundaries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft seizes 338 sites tied to phishing service

Microsoft has disrupted RaccoonO365, a fast-growing phishing service used by cybercriminals to steal Microsoft 365 login details.

Using a court order from the US District Court for the Southern District of New York, Microsoft’s Digital Crimes Unit seized 338 websites linked to the operation. The takedown cut off infrastructure that enabled criminals to mimic Microsoft branding and trick victims into sharing their credentials.

Since mid-2024, RaccoonO365 has been used in at least 94 countries and has stolen more than 5,000 credentials. The kits were marketed on Telegram to hundreds of paying subscribers, including campaigns that targeted healthcare providers in the US.

Microsoft identified the group’s alleged leader as Joshua Ogundipe, based in Nigeria, who is accused of creating and promoting the service. The company has referred the case to international law enforcement while continuing efforts to dismantle any rebuilt networks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!