Content Signals Policy by Cloudflare lets websites signal data use preferences

Cloudflare has announced the launch of its Content Signals Policy, a new extension to robots.txt that allows websites to express their preferences for how their data is used after access. The policy is designed to help creators maintain open content while preventing misuse by data scrapers and AI trainers.

The new tool enables website owners to specify, in a machine-readable format, whether they permit search indexing, AI input, or AI model training. Operators can set each signal to ‘yes’ or ‘no’, or leave it blank to indicate no stated preference, giving them fine-grained control over each use case.

Cloudflare says the policy tackles the free-rider problem, where scraped content is reused without credit. With bot traffic projected to surpass human traffic by 2029, the company argues that clear, standardised rules are needed to protect creators and keep the web open.

Customers already using Cloudflare’s managed robots.txt will have the policy automatically applied, with a default setting that allows search but blocks AI training. Sites without a robots.txt file can opt in to publish the human-readable policy text and add their own preferences when ready.
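For illustration, a robots.txt file carrying content signals might look like the sketch below. The `Content-Signal` line and the `search`, `ai-input`, and `ai-train` names follow Cloudflare’s published examples, but the exact syntax should be checked against the policy text before use:

```
# Content signals state preferences for how content may be used after access.
# search   = building a search index and showing links or snippets
# ai-input = using content as input to an AI system (e.g. retrieval)
# ai-train = using content to train or fine-tune AI models
# A signal that is left out means no stated preference.
Content-Signal: search=yes, ai-train=no

User-Agent: *
Allow: /
```

This mirrors the managed default described above, allowing search while opting out of AI training; `ai-input` is omitted here to signal no stated preference.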

Cloudflare emphasises that content signals are not enforcement mechanisms but a means of communicating expectations. It is releasing the policy under a CC0 licence to encourage broad adoption and is working with standards bodies to ensure the rules are recognised across the industry.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK sets up expert commission to speed up NHS adoption of AI

Doctors, researchers and technology leaders will work together to accelerate the safe adoption of AI in the NHS, under a new commission launched by the Medicines and Healthcare products Regulatory Agency (MHRA).

The body will draft recommendations to modernise healthcare regulation, ensuring patients gain faster access to innovations while maintaining safety and public trust.

MHRA stressed that clear rules are vital as AI spreads across healthcare, already helping to diagnose conditions such as lung cancer and strokes in hospitals across the UK.

Backed by ministers, the initiative aims to position Britain as a global hub for health tech investment. Companies including Google and Microsoft will join clinicians, academics, and patient advocates to advise on the framework, expected to be published next year.

The commission will also review the regulatory barriers slowing adoption of tools such as AI-driven note-taking systems, which early trials suggest can significantly boost efficiency in clinical care.

Officials say the framework will provide much-needed clarity for AI in radiology, pathology, and virtual care, supporting the digital transformation of the NHS.

MHRA chief executive Lawrence Tallon called the commission a ‘cultural shift’ in regulation, while Technology Secretary Liz Kendall said it will ensure patients benefit from life-saving technologies ‘quickly and safely’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube rolls back rules on Covid-19 and 2020 election misinformation

Google’s YouTube has announced it will reinstate accounts previously banned for repeatedly posting misinformation about Covid-19 and the 2020 US presidential election. The decision marks another rollback of moderation rules that once targeted health and political falsehoods.

The platform said the move reflects a broader commitment to free expression and follows similar changes at Meta and Elon Musk’s X.

YouTube had already scrapped policies barring repeat claims about Covid-19 and election outcomes, rules that had led to actions against figures such as Robert F. Kennedy Jr.’s Children’s Health Defense Fund and Senator Ron Johnson.

The announcement came in a letter to House Judiciary Committee Chair Jim Jordan, amid a Republican-led investigation into whether the Biden administration pressured tech firms to remove certain content.

YouTube claimed the White House created a political climate aimed at shaping its moderation, though it insisted its policies were enforced independently.

The company said that US conservative creators have a significant role in civic discourse and will be allowed to return under the revised rules. The move highlights Silicon Valley’s broader trend of loosening restrictions on speech, especially under pressure from right-leaning critics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US military unveils automated cybersecurity construct for modern warfare

The US Department of War has unveiled a new Cybersecurity Risk Management Construct (CSRMC), a framework designed to deliver real-time cyber defence and strengthen the military’s digital resilience.

The model replaces outdated checklist-driven processes with automated, continuously monitored systems capable of adapting to rapidly evolving threats.

The CSRMC shifts from static, compliance-heavy assessments to dynamic and operationally relevant defence. Its five-phase lifecycle embeds cybersecurity into system design, testing, deployment, and operations, ensuring digital systems remain hardened and actively defended throughout use.

Continuous monitoring and automated authorisation replace periodic reviews, giving commanders real-time visibility of risks.

Built on ten core principles, including automation, DevSecOps, cyber survivability, and threat-informed testing, the framework represents a cultural change in military cybersecurity.

It seeks to cut duplication through enterprise services, accelerate secure capability delivery, and enable defence systems to survive in contested environments.

According to acting CIO Katie Arrington, the construct is intended to institutionalise resilience across all domains, from land and sea to space and cyberspace. The goal is to provide US forces with the technological edge to counter increasingly sophisticated adversaries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

More social media platforms could face under-16 ban in Australia

Australia is set to expand its under-16 social media ban, with platforms such as WhatsApp, Reddit, Twitch, Roblox, Pinterest, Steam, Kick, and Lego Play potentially joining the list. The eSafety Commissioner, Julie Inman Grant, has written to 16 companies asking them to self-assess whether they fall under the ban.

The current ban already includes Facebook, TikTok, YouTube, and Snapchat, making it a world-first policy. The focus will be on platforms with large youth user bases, where risks of harm are highest.

Despite the bold move, experts warn the legislation may be largely symbolic without concrete enforcement mechanisms. Age verification remains a significant hurdle, with Canberra acknowledging that companies will likely need to self-regulate. An independent study found that age checks can be done ‘privately, efficiently and effectively,’ but noted there is no one-size-fits-all solution.

Firms failing to comply could face fines of up to AU$49.5 million (US$32.6 million). Some companies have called the law ‘vague’ and ‘rushed.’ Meanwhile, new rules will soon take effect to limit access to harmful but legal content, including online pornography and AI chatbots capable of sexually explicit dialogue. Roblox has already agreed to strengthen safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn default AI data sharing faces Dutch privacy watchdog scrutiny

The Dutch privacy watchdog, Autoriteit Persoonsgegevens (AP), is warning LinkedIn users in the Netherlands to review their settings to prevent their data from being used for AI training.

LinkedIn plans to use names, job titles, education history, locations, skills, photos, and public posts from European users to train its systems. Private messages will not be included; however, the sharing option is enabled by default.

AP Deputy Chair Monique Verdier said the move poses significant risks. She warned that once personal data is used to train a model, it cannot be removed, and its future uses are unpredictable.

LinkedIn, headquartered in Dublin, falls under the jurisdiction of the Data Protection Commission in Ireland, which will determine whether the plan can proceed. The AP said it is working with Irish and EU counterparts and has already received complaints.

Users must opt out by 3 November if they do not wish to have their data used. They can disable the setting via the AP’s link or manually in LinkedIn under ‘settings & privacy’ → ‘data privacy’ → ‘data for improving generative AI’.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN urges global rules to ensure AI benefits humanity

The UN Security Council debated AI, noting its potential to boost development but warning of risks, particularly in military use. Secretary-General António Guterres called AI a ‘double-edged sword,’ supporting development but posing threats if left unregulated.

He urged legally binding restrictions on lethal autonomous weapons and insisted nuclear decisions remain under human control.

Experts and leaders emphasised the urgent need for global regulation, equitable access, and trustworthy AI systems. Yoshua Bengio of Université de Montréal warned of risks from misaligned AI, cyberattacks, and economic concentration, calling for greater oversight.

Stanford’s Yejin Choi highlighted the concentration of AI expertise in a few countries and companies, stressing that democratising AI and reducing bias is key to ensuring global benefits.

Representatives warned that AI could deepen digital inequality in developing regions, especially Africa, due to limited access to data and infrastructure.

Delegates from Guyana, Somalia, Sierra Leone, Algeria, and Panama called for international rules to ensure transparency and fairness and to prevent dominance by a few countries or companies. Others, including the United States, cautioned that overregulation could stifle innovation and centralise power.

Delegates stressed AI’s risks to security. Yemen, Poland, and the Netherlands called for responsible use in conflict, with human oversight and ethical accountability, while leaders from Portugal and the Netherlands said AI frameworks must promote innovation and security and serve humanity and peace.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattack on Jaguar Land Rover exposes UK supply chain risks

The UK’s ministers are considering an unprecedented intervention after a cyberattack forced Jaguar Land Rover to halt production, leaving thousands of suppliers exposed to collapse.

A late August hack shut down JLR’s IT networks and forced the suspension of its UK factories. Industry experts estimate losses of more than £50m a week, with full operations unlikely to restart until October or later.

JLR, owned by India’s Tata Motors, had not finalised cyber insurance before the breach, which left it particularly vulnerable.

Officials are weighing whether to buy and stockpile car parts from smaller firms that depend on JLR, though logistical difficulties make the plan complex. Government-backed loans are also under discussion.

Cybersecurity agencies, including the National Cyber Security Centre and the National Crime Agency, are now supporting the investigation.

The attack is part of a wider pattern of major breaches targeting UK institutions and retailers, with a group calling itself Scattered Lapsus$ Hunters claiming responsibility.

The growing threat highlights how the country’s critical industries remain exposed to sophisticated cybercriminals, raising questions about resilience and the need for stronger digital defences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Stargate sites create jobs and boost AI capacity across the US

OpenAI, Oracle, and SoftBank are expanding their Stargate AI infrastructure with five new US data centre sites. The addition brings nearly 7 gigawatts of capacity and $400 billion in investment, putting the partners on track to meet the $500 billion, 10-gigawatt commitment by the end of 2025.

Three of the new sites, located in Shackelford County, Texas; Doña Ana County, New Mexico; and a forthcoming Midwest location, are expected to deliver over 5.5 gigawatts of capacity. These developments are expected to create over 25,000 onsite jobs and tens of thousands more nationwide.

A potential 600-megawatt expansion near the flagship site in Abilene, Texas, is also under consideration.

The remaining two sites, in Lordstown, Ohio, and Milam County, Texas, will scale to 1.5 gigawatts over 18 months. SoftBank and SB Energy are providing advanced design and infrastructure to enable faster, more scalable, and cost-efficient AI compute.

The new sites follow a rigorous nationwide selection process involving over 300 proposals from more than 30 states. Early workloads at the Abilene flagship site are already advancing next-generation AI research, supported by Oracle Cloud Infrastructure and NVIDIA GB200 racks.

The expansion underscores the partners’ commitment to building the physical infrastructure necessary for AI breakthroughs and long-term US leadership in AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New EU biometric checks set to reshape UK travel from 2026

UK travellers to the EU face new biometric checks from 12 October, but full enforcement is not expected until April 2026. Officials say the phased introduction will help avoid severe disruption at ports and stations.

The entry-exit system requires non-EU citizens to be fingerprinted and photographed, with the data stored in a central European database for three years. A further 90-day grace period will allow French border officials to ease checks if technical issues arise.

The Port of Dover has prepared off-site facilities to prevent traffic build-up, while border officials stressed the gradual rollout will give passengers time to adapt.

According to Border Force director general Phil Douglas, biometrics and data protection advances have made traditional paper passports increasingly redundant.

These changes come as UK holidaymakers prepare for the busiest winter travel season in years, with full compliance due in time for Easter 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!