YouTube rolls back rules on Covid-19 and 2020 election misinformation

Google’s YouTube has announced it will reinstate accounts previously banned for repeatedly posting misinformation about Covid-19 and the 2020 US presidential election. The decision marks another rollback of moderation rules that once targeted health and political falsehoods.

The platform said the move reflects a broader commitment to free expression and follows similar changes at Meta and Elon Musk’s X.

YouTube had already scrapped policies barring repeat claims about Covid-19 and election outcomes, rules that had led to actions against figures such as Robert F. Kennedy Jr.’s Children’s Health Defense and Senator Ron Johnson.

The announcement came in a letter to House Judiciary Committee Chair Jim Jordan, amid a Republican-led investigation into whether the Biden administration pressured tech firms to remove certain content.

YouTube claimed the White House created a political climate aimed at shaping its moderation, though it insisted its policies were enforced independently.

The company said that US conservative creators have a significant role in civic discourse and will be allowed to return under the revised rules. The move highlights Silicon Valley’s broader trend of loosening restrictions on speech, especially under pressure from right-leaning critics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US military unveils automated cybersecurity construct for modern warfare

The US Department of War has unveiled a new Cybersecurity Risk Management Construct (CSRMC), a framework designed to deliver real-time cyber defence and strengthen the military’s digital resilience.

The model replaces outdated checklist-driven processes with automated, continuously monitored systems capable of adapting to rapidly evolving threats.

The CSRMC shifts from static, compliance-heavy assessments to dynamic and operationally relevant defence. Its five-phase lifecycle embeds cybersecurity into system design, testing, deployment, and operations, ensuring digital systems remain hardened and actively defended throughout use.

Continuous monitoring and automated authorisation replace periodic reviews, giving commanders real-time visibility of risks.

Built on ten core principles, including automation, DevSecOps, cyber survivability, and threat-informed testing, the framework represents a cultural change in military cybersecurity.

It seeks to cut duplication through enterprise services, accelerate secure capability delivery, and enable defence systems to survive in contested environments.

According to acting CIO Katie Arrington, the construct is intended to institutionalise resilience across all domains, from land and sea to space and cyberspace. The goal is to provide US forces with the technological edge to counter increasingly sophisticated adversaries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn default AI data sharing faces Dutch privacy watchdog scrutiny

The Dutch privacy watchdog, Autoriteit Persoonsgegevens (AP), is warning LinkedIn users in the Netherlands to review their settings to prevent their data from being used for AI training.

LinkedIn plans to use names, job titles, education history, locations, skills, photos, and public posts from European users to train its systems. Private messages will not be included; however, the sharing option is enabled by default.

AP Deputy Chair Monique Verdier said the move poses significant risks. She warned that once personal data is used to train a model, it cannot be removed, and its future uses are unpredictable.

LinkedIn, headquartered in Dublin, falls under the jurisdiction of the Data Protection Commission in Ireland, which will determine whether the plan can proceed. The AP said it is working with Irish and EU counterparts and has already received complaints.

Users must opt out by 3 November if they do not wish to have their data used. They can disable the setting via the AP’s link or manually in LinkedIn under ‘settings & privacy’ → ‘data privacy’ → ‘data for improving generative AI’.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN urges global rules to ensure AI benefits humanity

The UN Security Council debated AI, noting its potential to boost development but warning of risks, particularly in military use. Secretary-General António Guterres called AI a ‘double-edged sword,’ supporting development but posing threats if left unregulated.

He urged legally binding restrictions on lethal autonomous weapons and insisted nuclear decisions remain under human control.

Experts and leaders emphasised the urgent need for global regulation, equitable access, and trustworthy AI systems. Yoshua Bengio of Université de Montréal warned of risks from misaligned AI, cyberattacks, and economic concentration, calling for greater oversight.

Stanford’s Yejin Choi highlighted the concentration of AI expertise in a few countries and companies, stressing that democratising AI and reducing bias is key to ensuring global benefits.

Representatives warned that AI could deepen digital inequality in developing regions, especially Africa, due to limited access to data and infrastructure.

Delegates from Guyana, Somalia, Sierra Leone, Algeria, and Panama called for international rules to ensure transparency and fairness and to prevent dominance by a few countries or companies. Others, including the United States, cautioned that overregulation could stifle innovation and centralise power.

Delegates also stressed AI’s security risks. Yemen, Poland, and the Netherlands called for responsible use in conflict, with human oversight and ethical accountability, while leaders from Portugal and the Netherlands said AI frameworks must promote innovation and security and serve humanity and peace.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattack on Jaguar Land Rover exposes UK supply chain risks

UK ministers are considering an unprecedented intervention after a cyberattack forced Jaguar Land Rover to halt production, leaving thousands of suppliers exposed to collapse.

A late August hack shut down JLR’s IT networks and forced the suspension of its UK factories. Industry experts estimate losses of more than £50m a week, with full operations unlikely to restart until October or later.

JLR, owned by India’s Tata Motors, had not finalised cyber insurance before the breach, which left it particularly vulnerable.

Officials are weighing whether to buy and stockpile car parts from smaller firms that depend on JLR, though logistical difficulties make the plan complex. Government-backed loans are also under discussion.

Cybersecurity agencies, including the National Cyber Security Centre and the National Crime Agency, are now supporting the investigation.

The attack is part of a wider pattern of major breaches targeting UK institutions and retailers, with a group calling itself Scattered Lapsus$ Hunters claiming responsibility.

The growing threat highlights how the country’s critical industries remain exposed to sophisticated cybercriminals, raising questions about resilience and the need for stronger digital defences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple escalates fight against EU digital law

US tech giant Apple has called for the repeal of the EU’s Digital Markets Act, claiming the rules undermine user privacy, disrupt services, and erode product quality.

The company urged the Commission to replace the legislation with a ‘fit for purpose’ framework, or hand enforcement to an independent agency insulated from political influence.

Apple argued that the Act’s interoperability requirements had delayed the rollout of features in the EU, including Live Translation on AirPods and iPhone mirroring. Additionally, the firm accused the Commission of adopting extreme interpretations that created user vulnerabilities instead of protecting them.

Brussels has dismissed those claims. A Commission spokesperson stressed that DMA compliance is an obligation, not an option, and said the rules guarantee fair competition by forcing dominant platforms to open access to rivals.

The dispute intensifies long-running friction between US tech firms and EU regulators.

Apple has already appealed to the courts, with a public hearing scheduled in October, while Washington has criticised the bloc’s wider digital policy.

The clash has deepened transatlantic trade tensions, with the White House recently threatening tariffs after fresh fines against another American tech company.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CISA highlights failures after US agency cyber breach

The US Cybersecurity and Infrastructure Security Agency (CISA) has published lessons from its response to a federal agency breach.

Hackers exploited an unpatched vulnerability in GeoServer software, gaining access to multiple systems. CISA noted that the flaw had been disclosed weeks earlier and added to its Known Exploited Vulnerabilities catalogue, but the agency had not patched it in time.

Investigators also found that incident response plans were outdated and had not been tested. The lack of clear procedures delayed third-party support and restricted access to vital security tools during the investigation.

CISA added that endpoint detection alerts were not continuously reviewed and that some public-facing systems had no endpoint protection at all, leaving attackers free to install web shells and move laterally through the network.

The agency urged all organisations to prioritise patching, maintain and rehearse incident response plans, and ensure comprehensive logging to strengthen resilience against future cybersecurity attacks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Secrets sprawl flagged as top software supply chain risk in Australia

Avocado Consulting urges Australian organisations to boost software supply chain security after a high-alert warning from the Australian Cyber Security Centre (ACSC). The alert flagged threats, including social engineering, stolen tokens, and manipulated software packages.

Dennis Baltazar of Avocado Consulting said attackers combine social engineering with living-off-the-land techniques, making attacks appear routine. He warned that secrets left across systems can turn small slips into major breaches.

Baltazar advised immediate audits to find unmanaged privileged accounts and non-human identities. He urged organisations to embed security into workflows by using short-lived credentials, policy-as-code, and default secret detection, reducing incidents without slowing development.
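The default secret detection mentioned above can be sketched in a few lines. This is a minimal illustration, not a production scanner: the rule names and regex patterns below are simplified assumptions standing in for the far larger rule sets that real tools such as gitleaks or GitHub push protection ship with.

```python
import re

# Hypothetical patterns illustrating common secret formats; real scanners
# use much larger, continually updated rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"
    ),
    "private_key_header": re.compile(
        r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"
    ),
}


def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_fragment) pairs for suspected secrets."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings
```

Wired into a pre-commit hook or CI pipeline, a check like this makes secret detection the default rather than an afterthought, which is the point Baltazar makes about embedding security into workflows.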

Avocado Consulting advises organisations to eliminate secrets from code and pipelines, rotate tokens frequently, and validate every software dependency by default using version pinning, integrity checks, and provenance verification. Monitoring CI/CD activity for anomalies can also help detect attacks early.

Failing to act could expose cryptographic keys, facilitate privilege escalation, and result in reputational and operational damage. Avocado Consulting states that secure development practices must become the default, with automated scanning and push protection integrated into the software development lifecycle.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK government AI tool recovers £500m lost to fraud

A new AI system developed by the UK Cabinet Office has helped reclaim nearly £500m in fraudulent payments, marking the government’s most significant recovery of public funds in a single year.

The Fraud Risk Assessment Accelerator analyses data across government departments to identify weaknesses and prevent scams before they occur.

It uncovered unlawful council tax claims, social housing subletting, and pandemic-related fraud, including £186m linked to Covid support schemes. Ministers stated the savings would be redirected to fund nurses, teachers, and police officers.

Officials confirmed the tool will be licensed internationally, with the US, Canada, Australia, and New Zealand among the first partners expected to adopt it.

The UK announced the initiative at an anti-fraud summit with these countries, describing it as a step toward global cooperation in securing public finances through AI.

However, civil liberties groups have raised concerns about bias and oversight. Previous government AI systems used to detect welfare fraud were found to produce disparities based on age, disability, and nationality.

Campaigners warned that the expanded use of AI in fraud detection risks embedding unfair outcomes if left unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Arrest made in Heathrow airport cyberattack case

A 40-year-old man has been arrested in West Sussex in connection with a cyberattack that caused major disruption across several European airports, including London’s Heathrow. The arrest was confirmed by the UK’s National Crime Agency (NCA), which is leading the investigation.

The incident targeted Collins Aerospace, a key provider of airline baggage and check-in software. The attack triggered system failures that forced staff at multiple airports to revert to manual check-in processes, resulting in hundreds of flight delays and frustration for passengers.

The NCA described the case as being in its early stages, with inquiries ongoing into the scale of the attack and the suspect’s potential role. Authorities have not yet confirmed whether others may be involved or what the broader motives behind the cyberattack were.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!