YouTube rolls back rules on Covid-19 and 2020 election misinformation

Google’s YouTube has announced it will reinstate accounts previously banned for repeatedly posting misinformation about Covid-19 and the 2020 US presidential election. The decision marks another rollback of moderation rules that once targeted health and political falsehoods.

The platform said the move reflects a broader commitment to free expression and follows similar changes at Meta and Elon Musk’s X.

YouTube had already scrapped policies barring repeated claims about Covid-19 and election outcomes, rules that had led to actions against figures such as Robert F. Kennedy Jr.’s Children’s Health Defense and Senator Ron Johnson.

The announcement came in a letter to House Judiciary Committee Chair Jim Jordan, amid a Republican-led investigation into whether the Biden administration pressured tech firms to remove certain content.

YouTube claimed the White House created a political climate aimed at shaping its moderation, though it insisted its policies were enforced independently.

The company said that US conservative creators have a significant role in civic discourse and will be allowed to return under the revised rules. The move highlights Silicon Valley’s broader trend of loosening restrictions on speech, especially under pressure from right-leaning critics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US military unveils automated cybersecurity construct for modern warfare

The US Department of War has unveiled a new Cybersecurity Risk Management Construct (CSRMC), a framework designed to deliver real-time cyber defence and strengthen the military’s digital resilience.

The model replaces outdated checklist-driven processes with automated, continuously monitored systems capable of adapting to rapidly evolving threats.

The CSRMC shifts from static, compliance-heavy assessments to dynamic and operationally relevant defence. Its five-phase lifecycle embeds cybersecurity into system design, testing, deployment, and operations, ensuring digital systems remain hardened and actively defended throughout use.

Continuous monitoring and automated authorisation replace periodic reviews, giving commanders real-time visibility of risks.
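To make that contrast concrete, here is a minimal sketch in Python of how a continuously monitored, automatically authorised system differs from one reviewed against a periodic checklist. The metric names, scoring weights, and threshold are assumptions chosen for illustration only; they are not drawn from the CSRMC itself.

```python
# A minimal sketch of continuous monitoring with automated authorisation.
# All names, weights, and thresholds are illustrative assumptions,
# not the actual CSRMC framework.
import time
from dataclasses import dataclass


@dataclass
class RiskReport:
    unpatched_criticals: int      # count of unpatched critical CVEs
    failed_logins_per_min: float  # live authentication telemetry
    config_drift: bool            # deviation from the approved baseline


def assess(report: RiskReport) -> float:
    """Reduce live telemetry to a single risk score (0 = clean)."""
    score = report.unpatched_criticals * 2.0
    score += report.failed_logins_per_min / 10.0
    if report.config_drift:
        score += 5.0
    return score


def authorised(report: RiskReport, threshold: float = 10.0) -> bool:
    """Automated authorisation: the system keeps its authority to operate
    only while its current risk score stays below the threshold."""
    return assess(report) < threshold


def monitor(telemetry_feed, interval_s: float = 60.0):
    """Continuous monitoring loop: re-evaluate on every telemetry sample
    instead of waiting for the next scheduled review."""
    for report in telemetry_feed:
        print("authorised" if authorised(report) else "revoke and alert")
        time.sleep(interval_s)
```

The point of the pattern is that authorisation becomes a live property of the system’s current telemetry rather than the outcome of a periodic paperwork cycle.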

Built on ten core principles, including automation, DevSecOps, cyber survivability, and threat-informed testing, the framework represents a cultural change in military cybersecurity.

It seeks to cut duplication through enterprise services, accelerate secure capability delivery, and enable defence systems to survive in contested environments.

According to acting CIO Katie Arrington, the construct is intended to institutionalise resilience across all domains, from land and sea to space and cyberspace. The goal is to provide US forces with the technological edge to counter increasingly sophisticated adversaries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn default AI data sharing faces Dutch privacy watchdog scrutiny

The Dutch privacy watchdog, Autoriteit Persoonsgegevens (AP), is warning LinkedIn users in the Netherlands to review their settings to prevent their data from being used for AI training.

LinkedIn plans to use names, job titles, education history, locations, skills, photos, and public posts from European users to train its systems. Private messages will not be included; however, the sharing option is enabled by default.

AP Deputy Chair Monique Verdier said the move poses significant risks. She warned that once personal data is used to train a model, it cannot be removed, and its future uses are unpredictable.

LinkedIn, headquartered in Dublin, falls under the jurisdiction of the Data Protection Commission in Ireland, which will determine whether the plan can proceed. The AP said it is working with Irish and EU counterparts and has already received complaints.

Users must opt out by 3 November if they do not wish to have their data used. They can disable the setting via the link provided by the AP or manually in LinkedIn under ‘settings & privacy’ → ‘data privacy’ → ‘data for improving generative AI’.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN urges global rules to ensure AI benefits humanity

The UN Security Council debated AI, noting its potential to boost development but warning of risks, particularly in military use. Secretary-General António Guterres called AI a ‘double-edged sword,’ supporting development but posing threats if left unregulated.

He urged legally binding restrictions on lethal autonomous weapons and insisted nuclear decisions remain under human control.

Experts and leaders emphasised the urgent need for global regulation, equitable access, and trustworthy AI systems. Yoshua Bengio of Université de Montréal warned of risks from misaligned AI, cyberattacks, and economic concentration, calling for greater oversight.

Stanford’s Yejin Choi highlighted the concentration of AI expertise in a few countries and companies, stressing that democratising AI and reducing bias are key to ensuring global benefits.

Representatives warned that AI could deepen digital inequality in developing regions, especially Africa, due to limited access to data and infrastructure.

Delegates from Guyana, Somalia, Sierra Leone, Algeria, and Panama called for international rules to ensure transparency and fairness and to prevent dominance by a few countries or companies. Others, including the United States, cautioned that overregulation could stifle innovation and centralise power.

Delegates stressed AI’s security risks: Yemen, Poland, and the Netherlands called for responsible use in conflict, with human oversight and ethical accountability. Leaders from Portugal and the Netherlands said AI frameworks must promote innovation and security while serving humanity and peace.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattack on Jaguar Land Rover exposes UK supply chain risks

The UK’s ministers are considering an unprecedented intervention after a cyberattack forced Jaguar Land Rover to halt production, leaving thousands of suppliers exposed to collapse.

A late August hack shut down JLR’s IT networks and forced the suspension of its UK factories. Industry experts estimate losses of more than £50m a week, with full operations unlikely to restart until October or later.

JLR, owned by India’s Tata Motors, had not finalised cyber insurance before the breach, which left it particularly vulnerable.

Officials are weighing whether to buy and stockpile car parts from smaller firms that depend on JLR, though logistical difficulties make the plan complex. Government-backed loans are also under discussion.

Cybersecurity agencies, including the National Cyber Security Centre and the National Crime Agency, are now supporting the investigation.

The attack is part of a wider pattern of major breaches targeting UK institutions and retailers, with a group calling itself Scattered Lapsus$ Hunters claiming responsibility.

The growing threat highlights how the country’s critical industries remain exposed to sophisticated cybercriminals, raising questions about resilience and the need for stronger digital defences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic models join Microsoft Copilot Studio for enhanced AI flexibility

Microsoft has added Anthropic’s Claude Sonnet 4 and Claude Opus 4.1 to Copilot Studio, giving users more control over model selection for orchestration, workflow automation, and reasoning tasks.

The integration allows customers to design and optimise AI agents with either Anthropic or OpenAI models, or even coordinate across both. Administrators can manage access through the Microsoft 365 Admin Center, with automatic fallback to OpenAI GPT-4o if Anthropic models are disabled.
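As a rough illustration of that fallback behaviour, the sketch below shows a generic model-routing function in Python. The function name and model identifiers are assumptions for this example only and do not represent Copilot Studio’s actual configuration interface.

```python
# Hypothetical illustration of multi-model routing with automatic fallback.
# Names are assumptions for this sketch, not Copilot Studio's real API.

CLAUDE_MODELS = {"claude-sonnet-4", "claude-opus-4.1"}
FALLBACK_MODEL = "gpt-4o"


def select_model(preferred: str, anthropic_enabled: bool) -> str:
    """Return the preferred Anthropic model when the tenant admin has
    enabled Anthropic access; otherwise fall back to OpenAI GPT-4o,
    mirroring the automatic-fallback behaviour described above."""
    if preferred in CLAUDE_MODELS and anthropic_enabled:
        return preferred
    return FALLBACK_MODEL


# Example: an agent configured for Claude Sonnet 4 in a tenant where the
# administrator has disabled Anthropic models routes to GPT-4o instead.
print(select_model("claude-sonnet-4", anthropic_enabled=False))  # gpt-4o
```

The design choice being illustrated is that model preference lives with the agent while access policy lives with the administrator, so a disabled model degrades gracefully rather than breaking the workflow.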

Anthropic’s models are available in early release environments now, with preview access across all environments expected within two weeks and full production readiness by the end of the year.

Microsoft said the move empowers businesses to tailor AI agents more precisely to industry-specific needs, from HR onboarding to compliance management.

By enabling multi-model orchestration, Copilot Studio extends its versatility for enterprises seeking to match the right AI model to each task, underlining Microsoft’s push to position Copilot as a flexible platform for agentic AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple escalates fight against EU digital law

US tech giant Apple has called for the repeal of the EU’s Digital Markets Act, claiming the rules undermine user privacy, disrupt services, and erode product quality.

The company urged the Commission to replace the legislation with a ‘fit for purpose’ framework, or hand enforcement to an independent agency insulated from political influence.

Apple argued that the Act’s interoperability requirements had delayed the rollout of features in the EU, including Live Translation on AirPods and iPhone mirroring. Additionally, the firm accused the Commission of adopting extreme interpretations that created user vulnerabilities instead of protecting them.

Brussels has dismissed those claims. A Commission spokesperson stressed that DMA compliance is an obligation, not an option, and said the rules guarantee fair competition by forcing dominant platforms to open access to rivals.

The dispute intensifies long-running friction between US tech firms and EU regulators.

Apple has already appealed to the courts, with a public hearing scheduled in October, while Washington has criticised the bloc’s wider digital policy.

The clash has deepened transatlantic trade tensions, with the White House recently threatening tariffs after fresh fines against another American tech company.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gatik and Loblaw to deploy 50 self-driving trucks in Canada

Autonomous logistics firm Gatik is set to expand its partnership with Loblaw, deploying 50 new self-driving trucks across North America over the next year. The move marks the largest autonomous truck deployment in the region to date.

The slow rollout of self-driving technology has frustrated supply chain watchers, with most firms still testing limited fleets. Gatik’s large-scale deployment signals a shift toward commercial adoption, with 20 trucks to be added by the end of 2025 and an additional 30 by 2026.

The partnership was enabled by Ontario’s Autonomous Commercial Motor Vehicle Pilot Program, a ten-year initiative allowing approved operators to test automated commercial trucks on public roads. Officials hope it will boost road safety and support the trucking sector.

Industry analysts note that North America’s truck driver shortage is one of the most pressing logistics challenges facing the region. Nearly 70% of logistics firms report that driver shortages hinder their ability to meet freight demand, making automation a viable solution to address this issue.

Gatik, operating in the US and Canada, says the deployment could ease labour pressure and improve efficiency, but safety remains a key concern. Experts caution that striking a balance between rapid rollout and robust oversight will be crucial for establishing trust in autonomous freight operations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New Stargate sites create jobs and boost AI capacity across the US

OpenAI, Oracle, and SoftBank are expanding their Stargate AI infrastructure with five new US data centre sites. The addition brings nearly 7 gigawatts of capacity and $400 billion in investment, putting the partners on track to meet the $500 billion, 10-gigawatt commitment by the end of 2025.

Three of the new sites (Shackelford County, Texas; Doña Ana County, New Mexico; and a forthcoming Midwest location) are expected to deliver over 5.5 gigawatts of capacity. The developments are expected to create over 25,000 onsite jobs and tens of thousands more nationwide.

A potential 600-megawatt expansion near the flagship site in Abilene, Texas, is also under consideration.

The remaining two sites, in Lordstown, Ohio, and Milam County, Texas, will scale to 1.5 gigawatts over 18 months. SoftBank and SB Energy are providing advanced design and infrastructure to enable faster, more scalable, and cost-efficient AI compute.
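Restating only the figures cited above, the headline capacity number reconciles as follows:

```latex
\underbrace{5.5\,\text{GW}}_{\text{three sites named above}}
\;+\;
\underbrace{1.5\,\text{GW}}_{\text{Lordstown and Milam County}}
\;\approx\; 7\,\text{GW of new capacity}
```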

The new sites follow a rigorous nationwide selection process involving over 300 proposals from more than 30 states. Early workloads at the Abilene flagship site are already advancing next-generation AI research, supported by Oracle Cloud Infrastructure and NVIDIA GB200 racks.

The expansion underscores the partners’ commitment to building the physical infrastructure necessary for AI breakthroughs and long-term US leadership in AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The UK’s invisible AI workforce is reshaping industries

According to a new analysis from Multiverse, the UK’s AI workforce is expanding far beyond traditional tech roles. Nurses, lecturers, librarians, surveyors, and other non-tech professionals increasingly apply AI, forming what experts call an ‘invisible AI workforce.’

Over two-thirds of AI apprentices are in roles without tech-related job titles, highlighting the widespread adoption of AI across industries.

An analysis of more than 2,500 Multiverse apprentices shows that AI is being applied across the healthcare, education, government administration, financial services, and construction sectors. AI hotspots are emerging beyond London, with clusters in Trafford, Cheshire West and Chester, Leeds, and Birmingham.

Croydon leads among London boroughs for AI apprentices, followed by Tower Hamlets, Lewisham, and Wandsworth.

The UK’s AI workforce is also demographically diverse. Apprentices range in age from 19 to 71, with near-equal gender representation (45% female, 54% male), compared with just 22% female representation in AI roles nationwide.

Workers at all career stages are reskilling with AI, using the technology to address real-world problems, such as improving patient care or streamlining charity services.

Multiverse has trained over 20,000 apprentices in AI, data, and digital skills since 2016 and aims to train another 15,000 in the next two years. With 1,500 companies involved, the platform is helping non-tech workers use AI to boost productivity and innovation across the UK.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!