CISA warns of advanced campaign exploiting Cisco appliances in federal networks

US cybersecurity officials have issued an emergency directive after hackers breached a federal agency by exploiting critical flaws in Cisco appliances. CISA warned the campaign poses a severe risk to government networks.

Experts told CNN they believe the hackers are state-backed and operating out of China, raising alarm among officials. Hundreds of the compromised devices are reportedly in use across the federal government, and CISA has issued a directive requiring agencies to rapidly assess the scope of the breach.

Cisco confirmed it was urgently alerted to the breaches by US government agencies in May and quickly assigned a specialised team to investigate. The company provided advanced detection tools, worked intensely to analyse compromised environments, and examined firmware from infected devices.

Cisco stated that the attackers exploited multiple zero-day flaws and employed advanced evasion techniques. It suspects a link to the ArcaneDoor campaign reported in early 2024.

CISA has withheld details about which agencies were affected or the precise nature of the breaches, underscoring the gravity of the situation. Investigations are currently underway to contain the ongoing threat and prevent further exploitation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK government considers supplier aid after JLR cyberattack

Jaguar Land Rover (JLR) is recovering from a disruptive cyberattack, gradually bringing its systems back online. The company is focused on rebuilding its operations, aiming to restore confidence and momentum as key digital functions come back online.

JLR said it has boosted its IT processing capacity for invoicing to clear its payment backlog. The Global Parts Logistics Centre is also resuming full operations, restoring parts distribution to retailers.

The financial system used for processing vehicle wholesales has been restored, allowing the company to resume car sales and registration. JLR is collaborating with the UK’s National Cyber Security Centre (NCSC) and law enforcement to ensure a secure restart of operations.

Production remains suspended at JLR’s three UK factories in Halewood, Solihull, and Wolverhampton. The company typically produces around 1,000 cars a day, but staff have been instructed to stay at home since the August cyberattack.

The government is considering support packages for the company’s suppliers, some of which are under financial pressure. A group identifying itself as Scattered Lapsus$ Hunters has claimed responsibility for the incident.

Google and Flo Health settle health data privacy suit for $56 million

Google has agreed to pay $48 million, and Flo Health, a menstrual tracking app, has agreed to pay $8 million to resolve claims that the app shared users’ health data without their consent.

The lawsuit alleged that Flo used third-party tools to transmit personal information, including menstruation and pregnancy details, to companies like Google, Meta, and analytics firm Flurry.

The class-action case, filed in 2021 by plaintiff Erica Frasko and later consolidated with similar complaints, accused Flo of violating privacy laws by allowing user data to be intercepted via embedded software development kits (SDKs).

Google’s settlement, disclosed this week, covers users who entered reproductive health data between November 2016 and February 2019.

While neither Flo nor Google admitted wrongdoing, the settlement avoids the uncertainty of a trial. A notice to claimants stated the resolution helps sidestep the costs and risks of prolonged litigation.

Meta, a co-defendant, opted to go to trial and was found liable in August for violating California’s Invasion of Privacy Act. A judge recently rejected Meta’s attempt to overturn that verdict.

According to The Record, the case has drawn significant attention from privacy advocates and the tech industry, highlighting the potential legal risks of data-sharing practices tied to ad-tracking technology.

Brazil to host massive AI-ready data centre by RT-One

RT-One plans to build Latin America’s largest AI data centre after securing land in Uberlândia, Minas Gerais, Brazil. The US$1.2bn project will span more than one million square metres, with 300,000 square metres reserved as protected green space.

The site will support high-performance computing, sovereign cloud services, and AI workloads, launching with 100MW capacity and scaling to 400MW. It will run on 100% renewable energy and utilise advanced cooling systems to minimise its environmental impact.

RT-One states that the project will prepare Brazil to compete globally, generate skilled jobs, and train new talent for the digital economy. A wide network of partners, including Hitachi, Siemens, WEG, and Schneider Electric, is collaborating on the development, aiming to ensure resilience and sustainability at scale.

The project is expected to stimulate regional growth, with jobs, training programmes, and opportunities for collaboration between academia and industry. Local officials, including the mayor of Uberlândia, attended the launch event to underline government support for the initiative.

Once complete, the Uberlândia facility will provide sovereign cloud capacity, high-density compute, and AI-ready infrastructure for Brazil and beyond. RT-One says the development will position the city as a hub for digital innovation and strengthen Latin America’s role in the global AI economy.

UN Secretary-General warns humanity cannot rely on algorithms

UN Secretary-General António Guterres has urged world leaders to act swiftly to ensure AI serves humanity rather than threatens it. Speaking at a UN Security Council debate, he warned that while AI can help anticipate food crises, support de-mining efforts, and prevent violence, it is equally capable of fuelling conflict through cyberattacks, disinformation, and autonomous weapons.

‘Humanity’s fate cannot be left to an algorithm,’ he stressed.

Guterres outlined four urgent priorities. First, he called for strict human oversight in all military uses of AI, repeating his demand for a global ban on lethal autonomous weapons systems. He insisted that life-and-death decisions, including any involving nuclear weapons, must never be left to machines.

Second, he pressed for coherent international regulations to ensure AI complies with international law at every stage, from design to deployment. He highlighted the dangers of AI lowering barriers to acquiring prohibited weapons and urged states to build transparency, trust, and safeguards against misuse.

His third and fourth priorities were protecting information integrity and closing the global AI capacity gap. He warned that AI-driven disinformation could destabilise peace processes and elections, while unequal access risks leaving developing countries behind.

The UN has already launched initiatives, including a new international scientific panel and an annual AI governance dialogue, to foster cooperation and accountability.

‘The window is closing to shape AI, for peace, justice, and humanity,’ he concluded.

For more information from the 80th session of the UN General Assembly, visit our dedicated page.

Meta expands global rollout of teen accounts for Facebook and Messenger

US tech giant Meta is expanding its dedicated teen accounts to Facebook and Messenger users worldwide, extending a safety system first introduced on Instagram. The move introduces more parental controls and restrictions to protect younger users on Meta’s platforms.

The accounts, now mandatory for teens, include stricter privacy settings that limit contact with unknown adults. Parents can supervise how their children use the apps, monitor screen time, and view who their teens are messaging.

For younger users aged 13 to 15, parental permission is required before adjusting safety-related settings. Meta is also deploying AI tools to detect teens lying about their age.

Alongside the global rollout, Instagram is expanding a school partnership programme in the US, allowing middle and high schools to report bullying and problematic behaviour directly.

The company says early feedback from participating schools has been positive, and the scheme is now open to all schools nationwide.

The expansion comes as Meta faces lawsuits and investigations over its record on child safety. By strengthening parental controls and school-based reporting, the company aims to address growing criticism while tightening protections for its youngest users.

Uzbekistan positions itself as Central Asia’s new AI and technology hub

Uzbekistan is using its largest-ever ICT Week to showcase its ambitions to become a regional centre for AI and digital transformation.

More than 20,000 participants, 300 companies, and delegations from over 50 countries gathered in Tashkent, signalling Central Asia’s growing role in the global technology landscape.

The country is investing in AI projects across sectors including education, healthcare, banking, and industry, with more than 100 initiatives underway.

Officials emphasise that digitalisation must serve people directly by improving services and creating jobs for Uzbekistan’s young and expanding population.

The demographic advantage is shaping a vision of AI that prioritises dignity, opportunity, and inclusive growth.

International recognition has followed. The UN’s International Telecommunication Union described Uzbekistan as ‘leading the way’ in the region, praising high connectivity, supportive policies, and progress in youth participation and gender equality.

Infrastructure is also advancing, with global investors like DataVolt building one of Central Asia’s most advanced data centres in Tashkent.

Uzbekistan’s private sector is also drawing attention. Fintech and e-commerce unicorn Uzum recently secured significant investment from Tencent and VR Capital, reaching a valuation above €1.3 billion.

Public policy and private investment are positioning the country as a credible AI hub connecting Europe, Asia, and the Middle East.

Content Signals Policy by Cloudflare lets websites signal data use preferences

Cloudflare has announced the launch of its Content Signals Policy, a new extension to robots.txt that allows websites to express their preferences for how their data is used after access. The policy is designed to help creators maintain open content while preventing misuse by data scrapers and AI trainers.

The new tool enables website owners to specify, in a machine-readable format, whether they permit search indexing, AI input, or AI model training. Operators can set each signal to ‘yes’ or ‘no’, or leave it blank to indicate no stated preference, giving them fine-grained control over how their content may be used.
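As an illustrative sketch only, the directive name and values below follow Cloudflare’s announced format, and site owners should check the current specification before deploying. A site that allows search indexing and AI input but opts out of AI training might publish a robots.txt like this:

```
# robots.txt — content signals sit alongside standard crawl rules
User-Agent: *
Content-Signal: search=yes, ai-input=yes, ai-train=no
Allow: /
```

Omitting a signal entirely (for example, leaving out ai-input) indicates no stated preference rather than a refusal.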

Cloudflare says the policy tackles the free-rider problem, where scraped content is reused without credit. With bot traffic set to surpass human traffic by 2029, it calls for clear, standard rules to protect creators and keep the web open.

Customers already using Cloudflare’s managed robots.txt will have the policy automatically applied, with a default setting that allows search but blocks AI training. Sites without a robots.txt file can opt in to publish the human-readable policy text and add their own preferences when ready.

Cloudflare emphasises that content signals are not enforcement mechanisms but a means of communicating expectations. It is releasing the policy under a CC0 licence to encourage broad adoption and is working with standards bodies to ensure the rules are recognised across the industry.

UK sets up expert commission to speed up NHS adoption of AI

Doctors, researchers and technology leaders will work together to accelerate the safe adoption of AI in the NHS, under a new commission launched by the Medicines and Healthcare products Regulatory Agency (MHRA).

The body will draft recommendations to modernise healthcare regulation, ensuring patients gain faster access to innovations while maintaining safety and public trust.

The MHRA stressed that clear rules are vital as AI spreads across healthcare, already helping to diagnose conditions such as lung cancer and strokes in hospitals across the UK.

Backed by ministers, the initiative aims to position Britain as a global hub for health tech investment. Companies including Google and Microsoft will join clinicians, academics, and patient advocates to advise on the framework, expected to be published next year.

The commission will also review the regulatory barriers slowing adoption of tools such as AI-driven note-taking systems, which early trials suggest can significantly boost efficiency in clinical care.

Officials say the framework will provide much-needed clarity for AI in radiology, pathology, and virtual care, supporting the digital transformation of the NHS.

MHRA chief executive Lawrence Tallon called the commission a ‘cultural shift’ in regulation, while Technology Secretary Liz Kendall said it will ensure patients benefit from life-saving technologies ‘quickly and safely’.

YouTube rolls back rules on Covid-19 and 2020 election misinformation

Google’s YouTube has announced it will reinstate accounts previously banned for repeatedly posting misinformation about Covid-19 and the 2020 US presidential election. The decision marks another rollback of moderation rules that once targeted health and political falsehoods.

The platform said the move reflects a broader commitment to free expression and follows similar changes at Meta and Elon Musk’s X.

YouTube had already scrapped policies barring repeated claims about Covid-19 and election outcomes, rules that had led to actions against figures such as Robert F. Kennedy Jr.’s Children’s Health Defense and Senator Ron Johnson.

The announcement came in a letter to House Judiciary Committee Chair Jim Jordan, amid a Republican-led investigation into whether the Biden administration pressured tech firms to remove certain content.

YouTube claimed the White House created a political climate aimed at shaping its moderation, though it insisted its policies were enforced independently.

The company said that US conservative creators have a significant role in civic discourse and will be allowed to return under the revised rules. The move highlights Silicon Valley’s broader trend of loosening restrictions on speech, especially under pressure from right-leaning critics.
