CISA warns of advanced campaign exploiting Cisco appliances in federal networks

US cybersecurity officials have issued an emergency directive after hackers breached a federal agency by exploiting critical flaws in Cisco appliances. CISA warned the campaign poses a severe risk to government networks.

Experts told CNN they believe the hackers are state-backed and operating out of China, raising alarm among officials. Hundreds of the compromised devices are reportedly in use across the federal government, and CISA’s directive orders agencies to rapidly assess the scope of the breach.

Cisco confirmed it was urgently alerted to the breaches by US government agencies in May and quickly assigned a specialised team to investigate. The company provided advanced detection tools, worked intensely to analyse compromised environments, and examined firmware from infected devices.

Cisco stated that the attackers exploited multiple zero-day flaws and employed advanced evasion techniques. It suspects a link to the ArcaneDoor campaign reported in early 2024.

CISA has withheld details about which agencies were affected or the precise nature of the breaches, underscoring the gravity of the situation. Investigations are currently underway to contain the ongoing threat and prevent further exploitation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK government considers supplier aid after JLR cyberattack

Jaguar Land Rover (JLR) is recovering from a disruptive cyberattack, gradually bringing its systems back online. The company is focused on rebuilding its operations, aiming to regain confidence and momentum as key digital functions return.

JLR said it has boosted its IT processing capacity for invoicing to clear its payment backlog. The Global Parts Logistics Centre is also resuming full operations, restoring parts distribution to retailers.

The financial system used for processing vehicle wholesales has been restored, allowing the company to resume car sales and registration. JLR is collaborating with the UK’s NCSC and law enforcement to ensure a secure restart of operations.

Production remains suspended at JLR’s three UK factories in Halewood, Solihull, and Wolverhampton. The company typically produces around 1,000 cars a day, but staff have been instructed to stay at home since the August cyberattack.

The government is considering support packages for the company’s suppliers, some of which are under financial pressure. A group identifying itself as Scattered Lapsus$ Hunters has claimed responsibility for the incident.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN Secretary-General launches call for candidates for AI Scientific Panel

The UN Secretary-General has launched an open call for candidates to serve on the Independent International Scientific Panel on Artificial Intelligence.

The Panel was agreed by UN member states in September 2024 as part of the Global Digital Compact; its terms of reference were later defined in a UN General Assembly resolution adopted in August 2025. The 40-member Panel will provide evidence-based scientific assessments on AI’s opportunities, risks, and impacts. Its work will culminate in an annual, policy-relevant – but non-prescriptive – summary report presented to the Global Dialogue on AI Governance, along with up to two updates per year to engage with the General Assembly plenary.

Candidates with expertise in the following fields are invited to apply:

  • AI, including foundation models & generative AI, machine learning methods, core AI subfields (e.g. vision, language, speech/audio, robotics, planning & scheduling, knowledge representation), reliability, safety & alignment, cognitive & neuroscience links, human–AI interaction, AI security and infrastructure;
  • Applied AI, including science (foundational and applied in health, climate, life sciences, physics, social sciences, agriculture), engineering, industry and mobility (e.g. materials, drugs, transportation, smart cities, IoT, satellite, navigation), digital society (e.g. misinformation & disinformation, online harms, social networks, software engineering, web);
  • Related fields, including AI opportunity, risk and impact assessment, AI impacts on society, technology, economy, and environment, AI security and infrastructure, data, ethics, and rights, governance (e.g. public policy, international law, standards, oversight, compliance, foresight and scenario-building).

Following the call for nominations (open until 31 October 2025), the Secretary-General will recommend 40 members for appointment by the General Assembly.

For more information from the 80th session of the UN General Assembly, visit our dedicated page.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google and Flo Health settle health data privacy suit for $56 million

Google has agreed to pay $48 million, and Flo Health, a menstrual tracking app, has agreed to pay $8 million to resolve claims that the app shared users’ health data without their consent.

The lawsuit alleged that Flo used third-party tools to transmit personal information, including menstruation and pregnancy details, to companies like Google, Meta, and analytics firm Flurry.

The class-action case, filed in 2021 by plaintiff Erica Frasco and later consolidated with similar complaints, accused Flo of violating privacy laws by allowing user data to be intercepted via embedded software development kits (SDKs).

Google’s settlement, disclosed this week, covers users who entered reproductive health data into the app between November 2016 and February 2019.

While neither Flo nor Google admitted wrongdoing, the settlement avoids the uncertainty of a trial. A notice to claimants stated the resolution helps sidestep the costs and risks of prolonged litigation.

Meta, a co-defendant, opted to go to trial and was found liable in August for violating California’s Invasion of Privacy Act. A judge recently rejected Meta’s attempt to overturn that verdict.

According to The Record, the case has drawn significant attention from privacy advocates and the tech industry, highlighting the potential legal risks of data-sharing practices tied to ad-tracking technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global Dialogue on AI Governance officially launched

On 25 September 2025, the President of the UN General Assembly chaired a high-level multistakeholder informal meeting to launch the Global Dialogue on AI Governance.

The creation of the Dialogue was agreed by UN member states in September 2024, with the adoption of the Global Digital Compact. In August 2025, the General Assembly adopted a resolution outlining the terms of reference and modalities for this new global mechanism.

The Global Dialogue on AI Governance is tasked with facilitating open, transparent and inclusive discussions on AI governance. Issues to focus on will include safe, trustworthy AI; bridging capacity and digital divides; social, ethical, and technical implications; interoperability of governance approaches; human rights; transparency and accountability; and open-source AI development.

The Dialogue will meet annually for up to two days alongside UN conferences in Geneva or New York, featuring high-level government participation, thematic discussions, and an annual report presentation. Initially, it will be held in the margins of the International Telecommunication Union Artificial Intelligence for Good Global Summit in Geneva in 2026, and of the multistakeholder forum on science, technology and innovation for the SDGs in New York in 2027.

Speaking at the launch, the UN Secretary-General noted that the Dialogue is ‘about creating a space where governments, industry and civil society can advance common solutions together. Where innovation can thrive — guided by shared standards and common purpose.’

For more information from the 80th session of the UN General Assembly, visit our dedicated page.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN Secretary-General warns humanity cannot rely on algorithms

UN Secretary-General António Guterres has urged world leaders to act swiftly to ensure AI serves humanity rather than threatens it. Speaking at a UN Security Council debate, he warned that while AI can help anticipate food crises, support de-mining efforts, and prevent violence, it is equally capable of fuelling conflict through cyberattacks, disinformation, and autonomous weapons.

‘Humanity’s fate cannot be left to an algorithm,’ he stressed.

Guterres outlined four urgent priorities. First, he called for strict human oversight in all military uses of AI, repeating his demand for a global ban on lethal autonomous weapons systems. He insisted that life-and-death decisions, including any involving nuclear weapons, must never be left to machines.

Second, he pressed for coherent international regulations to ensure AI complies with international law at every stage, from design to deployment. He highlighted the dangers of AI lowering barriers to acquiring prohibited weapons and urged states to build transparency, trust, and safeguards against misuse.

For his third and fourth priorities, Guterres emphasised protecting information integrity and closing the global AI capacity gap. He warned that AI-driven disinformation could destabilise peace processes and elections, while unequal access risks leaving developing countries behind.

The UN has already launched initiatives, including a new international scientific panel and an annual AI governance dialogue, to foster cooperation and accountability.

‘The window is closing to shape AI for peace, justice, and humanity,’ he concluded.

For more information from the 80th session of the UN General Assembly, visit our dedicated page.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare’s Content Signals Policy lets websites signal data use preferences

Cloudflare has announced the launch of its Content Signals Policy, a new extension to robots.txt that allows websites to express their preferences for how their data is used after access. The policy is designed to help creators maintain open content while preventing misuse by data scrapers and AI trainers.

The new tool enables website owners to specify, in a machine-readable format, whether they permit search indexing, AI input, or AI model training. Operators can set each signal to ‘yes,’ ‘no,’ or leave it blank to indicate no stated preference, giving them fine-grained control over how their content may be used.
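
As an illustration, a robots.txt file carrying such signals might look like the sketch below. The directive name (Content-Signal) and the signal keys (search, ai-input, ai-train) are inferred from the three categories described above; treat the exact syntax as an approximation of, not a substitute for, Cloudflare’s published policy text.

    # Content signals state preferences for how this site’s content may be used:
    #   search   – building a search index and linking to results
    #   ai-input – feeding content into an AI system (e.g. retrieval)
    #   ai-train – training or fine-tuning AI models
    User-Agent: *
    Content-Signal: search=yes, ai-input=yes, ai-train=no
    Allow: /

Omitting a key corresponds to the blank, no-stated-preference case, and the usual Allow and Disallow rules in the file continue to apply.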

Cloudflare says the policy tackles the free-rider problem, where scraped content is reused without credit. With bot traffic set to surpass human traffic by 2029, the company is calling for clear, standard rules to protect creators and keep the web open.

Customers already using Cloudflare’s managed robots.txt will have the policy automatically applied, with a default setting that allows search but blocks AI training. Sites without a robots.txt file can opt in to publish the human-readable policy text and add their own preferences when ready.

Cloudflare emphasises that content signals are not enforcement mechanisms but a means of communicating expectations. It is releasing the policy under a CC0 licence to encourage broad adoption and is working with standards bodies to ensure the rules are recognised across the industry.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK sets up expert commission to speed up NHS adoption of AI

Doctors, researchers and technology leaders will work together to accelerate the safe adoption of AI in the NHS, under a new commission launched by the Medicines and Healthcare products Regulatory Agency (MHRA).

The body will draft recommendations to modernise healthcare regulation, ensuring patients gain faster access to innovations while maintaining safety and public trust.

The MHRA stressed that clear rules are vital as AI spreads across healthcare, already helping to diagnose conditions such as lung cancer and strokes in hospitals across the UK.

Backed by ministers, the initiative aims to position Britain as a global hub for health tech investment. Companies including Google and Microsoft will join clinicians, academics, and patient advocates to advise on the framework, expected to be published next year.

The commission will also review the regulatory barriers slowing adoption of tools such as AI-driven note-taking systems, which early trials suggest can significantly boost efficiency in clinical care.

Officials say the framework will provide much-needed clarity for AI in radiology, pathology, and virtual care, supporting the digital transformation of the NHS.

MHRA chief executive Lawrence Tallon called the commission a ‘cultural shift’ in regulation, while Technology Secretary Liz Kendall said it will ensure patients benefit from life-saving technologies ‘quickly and safely’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Expanded AI model support arrives in Microsoft 365 Copilot

Microsoft is expanding the AI models powering Microsoft 365 Copilot by adding Anthropic’s Claude Sonnet 4 and Claude Opus 4.1. Customers can now choose between OpenAI and Anthropic models for research, deep reasoning, and agent building across Microsoft 365 tools.

The Researcher agent can now run on Anthropic’s Claude Opus 4.1, giving users a choice of models for in-depth analysis. The Researcher draws on web sources, trusted third-party data, and internal work content, including emails, chats, meetings, and files, to deliver tailored, multistep reasoning.

Claude Sonnet 4 and Opus 4.1 are also available in Copilot Studio, enabling the creation of enterprise-grade agents with flexible model selection. Users can mix Anthropic, OpenAI, and Azure Model Catalogue models to power multi-agent workflows, automate tasks, and manage agents efficiently.

Claude in Researcher is rolling out today to Microsoft 365 Copilot-licensed customers through the Frontier Program. Customers can also use Claude models in Copilot Studio to build and orchestrate agents.

Microsoft says this launch is part of its strategy to bring the best AI innovation across the industry to Copilot. More Anthropic-powered features will roll out soon, strengthening Copilot’s role as a hub for enterprise AI and workflow transformation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

More social media platforms could face under-16 ban in Australia

Australia is set to expand its under-16 social media ban, with platforms such as WhatsApp, Reddit, Twitch, Roblox, Pinterest, Steam, Kick, and Lego Play potentially joining the list. The eSafety Commissioner, Julie Inman Grant, has written to 16 companies asking them to self-assess whether they fall under the ban.

The current ban already includes Facebook, TikTok, YouTube, and Snapchat, making it a world-first policy. The focus will be on platforms with large youth user bases, where risks of harm are highest.

Despite the bold move, experts warn the legislation may be largely symbolic without concrete enforcement mechanisms. Age verification remains a significant hurdle, with Canberra acknowledging that companies will likely need to self-regulate. An independent study found that age checks can be done ‘privately, efficiently and effectively,’ but noted there is no one-size-fits-all solution.

Firms failing to comply could face fines of up to AU$49.5 million (US$32.6 million). Some companies have called the law ‘vague’ and ‘rushed.’ Meanwhile, new rules will soon take effect to limit access to harmful but legal content, including online pornography and AI chatbots capable of sexually explicit dialogue. Roblox has already agreed to strengthen safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!