UN urges global rules to ensure AI benefits humanity

The UN Security Council debated AI, noting its potential to boost development but warning of risks, particularly in military use. Secretary-General António Guterres called AI a ‘double-edged sword,’ supporting development but posing threats if left unregulated.

He urged legally binding restrictions on lethal autonomous weapons and insisted nuclear decisions remain under human control.

Experts and leaders emphasised the urgent need for global regulation, equitable access, and trustworthy AI systems. Yoshua Bengio of Université de Montréal warned of risks from misaligned AI, cyberattacks, and economic concentration, calling for greater oversight.

Stanford’s Yejin Choi highlighted the concentration of AI expertise in a few countries and companies, stressing that democratising AI and reducing bias are key to ensuring global benefits.

Representatives warned that AI could deepen digital inequality in developing regions, especially Africa, due to limited access to data and infrastructure.

Delegates from Guyana, Somalia, Sierra Leone, Algeria, and Panama called for international rules to ensure transparency and fairness and to prevent dominance by a few countries or companies. Others, including the United States, cautioned that overregulation could stifle innovation and centralise power.

Delegates stressed AI’s security risks: Yemen, Poland, and the Netherlands called for responsible use in conflict, with human oversight and ethical accountability. Leaders from Portugal and the Netherlands said AI frameworks must promote innovation and security and serve humanity and peace.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Karnataka High Court rules against X Corp in content case

India’s Karnataka High Court has rejected a petition by Elon Musk’s X Corp contesting the Indian government’s authority to block content and the legality of its Sahyog portal.

Justice M Nagaprasanna ruled that social media regulation is necessary to curb unlawful material, particularly content harmful to women, and that communications have historically been subject to oversight regardless of technology.

X Corp argued that takedown powers exist only under Section 69A of the IT Act and described the Sahyog portal as a tool for censorship. The government countered that Section 79(3)(b) allows safe harbour protections to be withdrawn if platforms fail to comply.

The Indian court sided with the government, affirming the portal’s validity and the broader regulatory framework. The ruling marks a setback for X Corp, which had also sought protection from possible punitive action for not joining the portal.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple escalates fight against EU digital law

US tech giant Apple has called for the repeal of the EU’s Digital Markets Act, claiming the rules undermine user privacy, disrupt services, and erode product quality.

The company urged the Commission to replace the legislation with a ‘fit for purpose’ framework, or to hand enforcement to an independent agency insulated from political influence.

Apple argued that the Act’s interoperability requirements had delayed the rollout of features in the EU, including Live Translation on AirPods and iPhone mirroring. Additionally, the firm accused the Commission of adopting extreme interpretations that created user vulnerabilities instead of protecting them.

Brussels has dismissed those claims. A Commission spokesperson stressed that DMA compliance is an obligation, not an option, and said the rules guarantee fair competition by forcing dominant platforms to open access to rivals.

The dispute intensifies long-running friction between US tech firms and EU regulators.

Apple has already appealed to the courts, with a public hearing scheduled in October, while Washington has criticised the bloc’s wider digital policy.

The clash has deepened transatlantic trade tensions, with the White House recently threatening tariffs after fresh fines against another American tech company.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gatik and Loblaw to deploy 50 self-driving trucks in Canada

Autonomous logistics firm Gatik is set to expand its partnership with Loblaw, deploying 50 new self-driving trucks across North America over the next year. The move marks the largest autonomous truck deployment in the region to date.

The slow rollout of self-driving technology has frustrated supply chain watchers, with most firms still testing limited fleets. Gatik’s large-scale deployment signals a shift toward commercial adoption, with 20 trucks to be added by the end of 2025 and an additional 30 by 2026.

The partnership was enabled by Ontario’s Autonomous Commercial Motor Vehicle Pilot Program, a ten-year initiative allowing approved operators to test automated commercial trucks on public roads. Officials hope it will boost road safety and support the trucking sector.

Industry analysts note that North America’s truck driver shortage is one of the most pressing logistics challenges facing the region. Nearly 70% of logistics firms report that driver shortages hinder their ability to meet freight demand, making automation a viable solution.

Gatik, operating in the US and Canada, says the deployment could ease labour pressure and improve efficiency, but safety remains a key concern. Experts caution that striking a balance between rapid rollout and robust oversight will be crucial for establishing trust in autonomous freight operations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New EU biometric checks set to reshape UK travel from 2026

UK travellers to the EU face new biometric checks from 12 October, but full enforcement is not expected until April 2026. Officials say the phased introduction will help avoid severe disruption at ports and stations.

The entry-exit system requires non-EU citizens to be fingerprinted and photographed, with the data stored in a central European database for three years. A further 90-day grace period will allow French border officials to ease checks if technical issues arise.

The Port of Dover has prepared off-site facilities to prevent traffic build-up, while border officials stressed the gradual rollout will give passengers time to adapt.

According to Border Force director general Phil Douglas, biometrics and data protection advances have made traditional paper passports increasingly redundant.

These changes come as UK holidaymakers prepare for the busiest winter travel season in years, with full compliance due in time for Easter 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-driven remote fetal monitoring launched by Lee Health

Lee Health has launched Florida’s first AI-powered birth care centre, introducing a remote fetal monitoring command hub to improve maternal and newborn outcomes across the Gulf Coast.

The system tracks temperature, heart rate, blood pressure, and pulse for mothers and babies, with AI alerting staff when vital signs deviate from normal ranges. Nurses remain in control but gain what Lee Health calls a ‘second set of eyes’.

‘Maybe mum’s blood pressure is high, maybe the baby’s heart rate is not looking great. We will be able to identify those things,’ said Jen Campbell, director of obstetrical services at Lee Health.

Once a mother checks in, the system begins monitoring her across Lee Health’s network and sending data to the AI hub. AI cues trigger early alerts under certified clinician oversight, in line with Lee Health’s ethical AI policies, allowing staff to intervene before complications worsen.
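Lee Health has not published the hub’s technical details, but the alerting it describes is, at its core, a rule-based check on incoming vital signs. The sketch below is purely illustrative: the vital-sign fields, normal ranges, and function names are hypothetical examples rather than the hospital’s implementation.

```python
# Illustrative sketch only: field names and thresholds are hypothetical,
# not Lee Health's actual configuration.
NORMAL_RANGES = {
    "maternal_heart_rate_bpm": (60, 110),
    "maternal_systolic_bp_mmhg": (90, 140),
    "maternal_temperature_c": (36.0, 38.0),
    "fetal_heart_rate_bpm": (110, 160),
}

def check_vitals(reading: dict) -> list[str]:
    """Return alert messages for any vital sign outside its normal range."""
    alerts = []
    for vital, (low, high) in NORMAL_RANGES.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{vital}={value} is outside the normal range {low}-{high}")
    return alerts

# Example: this reading would raise a nurse-facing alert for high blood pressure.
print(check_vitals({"maternal_systolic_bp_mmhg": 152, "fetal_heart_rate_bpm": 118}))
```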

Dr Cherrie Morris, vice president and chief physician executive for women’s services, said the hub strengthens patient safety by centralising monitoring and providing expert review from certified nurses across the network.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

DeepSeek reveals secrets of low-cost AI model

Chinese start-up DeepSeek has published the first peer-reviewed study of its R1 model, revealing how it built the powerful AI system for under US$300,000.

The model stunned markets on its release in January and has since become Hugging Face’s most downloaded open-weight system. Unlike rivals, R1 was not trained on other models’ output but instead developed reasoning abilities through reinforcement learning.

DeepSeek’s engineers rewarded the model for correct answers, enabling it to form problem-solving strategies. Efficiency gains came from allowing R1 to score its own outputs rather than relying on a separate algorithm.
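DeepSeek has not released its training pipeline alongside the paper, but the reward it describes, scoring a completion on whether its final answer is verifiably correct, can be sketched in a few lines. The answer-extraction convention and function names below are hypothetical illustrations, not R1’s actual code.

```python
import re

def extract_final_answer(completion: str) -> str | None:
    """Pull the final answer from a hypothetical \\boxed{...} convention."""
    match = re.search(r"\\boxed\{([^}]*)\}", completion)
    return match.group(1).strip() if match else None

def verifiable_reward(completion: str, reference_answer: str) -> float:
    """Binary reward: 1.0 if the extracted answer matches the reference, else 0.0."""
    answer = extract_final_answer(completion)
    return 1.0 if answer is not None and answer == reference_answer.strip() else 0.0

# Example usage
print(verifiable_reward("Reasoning steps... \\boxed{42}", "42"))  # 1.0
print(verifiable_reward("Reasoning steps... \\boxed{41}", "42"))  # 0.0
```

In a reinforcement-learning loop, a binary signal of this kind is used to update the model so that strategies yielding correct answers are reinforced, which is how R1 is reported to have developed its reasoning abilities.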

The Nature paper marks the first time a major large language model has undergone peer review. Reviewers said the process increased transparency and should be adopted by other firms as scrutiny of AI risks intensifies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CISA highlights failures after US agency cyber breach

The US Cybersecurity and Infrastructure Security Agency (CISA) has published lessons from its response to a federal agency breach.

Hackers exploited an unpatched vulnerability in GeoServer software, gaining access to multiple systems. CISA noted that the flaw had been disclosed weeks earlier and added to its Known Exploited Vulnerabilities catalogue, but the agency had not patched it in time.

Investigators also found that incident response plans were outdated and had not been tested. The lack of clear procedures delayed third-party support and restricted access to vital security tools during the investigation.

CISA added that endpoint detection alerts were not continuously reviewed and that some public-facing systems had no protection at all, leaving attackers free to install web shells and move laterally through the network.

The agency urged all organisations to prioritise patching, maintain and rehearse incident response plans, and ensure comprehensive logging to strengthen resilience against future cyberattacks.
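The Known Exploited Vulnerabilities catalogue that CISA maintains is published as a machine-readable JSON feed, so the prioritisation step can be automated. The sketch below is a minimal illustration; the feed URL and field names reflect the catalogue’s published schema as commonly documented, and the product keyword is only an example.

```python
import requests

# CISA's KEV catalogue JSON feed (URL and field names as published at the time of writing).
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def kev_entries_for_product(keyword: str) -> list[dict]:
    """Return KEV entries whose product or vendor name contains the keyword."""
    catalog = requests.get(KEV_URL, timeout=30).json()
    keyword = keyword.lower()
    return [
        vuln for vuln in catalog.get("vulnerabilities", [])
        if keyword in vuln.get("product", "").lower()
        or keyword in vuln.get("vendorProject", "").lower()
    ]

# Example: list known-exploited CVEs affecting GeoServer so they can be patched first.
for vuln in kev_entries_for_product("geoserver"):
    print(vuln.get("cveID"), vuln.get("dateAdded"), vuln.get("vulnerabilityName"))
```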

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Secrets sprawl flagged as top software supply chain risk in Australia

Avocado Consulting is urging Australian organisations to strengthen software supply chain security after a high-priority alert from the Australian Cyber Security Centre (ACSC). The alert flagged threats including social engineering, stolen tokens, and manipulated software packages.

Dennis Baltazar of Avocado Consulting said attackers combine social engineering with living-off-the-land techniques, making attacks appear routine. He warned that secrets left across systems can turn small slips into major breaches.

Baltazar advised immediate audits to find unmanaged privileged accounts and non-human identities. He urged embedding security into workflows through short-lived credentials, policy-as-code, and default secret detection, which he argued reduces incidents while speeding up development.

Avocado Consulting advises organisations to eliminate secrets from code and pipelines, rotate tokens frequently, and validate every software dependency by default using version pinning, integrity checks, and provenance verification. Monitoring CI/CD activity for anomalies can also help detect attacks early.
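As an illustration of what default secret detection can look like in practice, the sketch below shows a minimal pre-commit style scan. The regular expressions are simplified examples rather than a production rule set; real scanners ship far broader, tuned rules.

```python
import re
import sys
from pathlib import Path

# Simplified example patterns; real secret scanners use much larger rule sets.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Generic credential assignment": re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S{20,}"),
}

def scan_file(path: Path) -> list[str]:
    """Return human-readable findings for suspected secrets in one file."""
    findings = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return findings
    for line_no, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append(f"{path}:{line_no}: possible {name}")
    return findings

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    hits = [finding for p in root.rglob("*") if p.is_file() for finding in scan_file(p)]
    print("\n".join(hits) or "No suspected secrets found.")
    sys.exit(1 if hits else 0)  # a non-zero exit blocks the commit when used as a pre-commit hook
```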

Failing to act could expose cryptographic keys, facilitate privilege escalation, and result in reputational and operational damage. Avocado Consulting states that secure development practices must become the default, with automated scanning and push protection integrated into the software development lifecycle.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK government AI tool recovers £500m lost to fraud

A new AI system developed by the UK Cabinet Office has helped reclaim nearly £500m in fraudulent payments, marking the government’s most significant recovery of public funds in a single year.

The Fraud Risk Assessment Accelerator analyses data across government departments to identify weaknesses and prevent scams before they occur.

It uncovered unlawful council tax claims, social housing subletting, and pandemic-related fraud, including £186m linked to Covid support schemes. Ministers stated the savings would be redirected to fund nurses, teachers, and police officers.

Officials confirmed the tool will be licensed internationally, with the US, Canada, Australia, and New Zealand among the first partners expected to adopt it.

The UK announced the initiative at an anti-fraud summit with these countries, describing it as a step toward global cooperation in securing public finances through AI.

However, civil liberties groups have raised concerns about bias and oversight. Previous government AI systems used to detect welfare fraud were found to produce disparities based on age, disability, and nationality.

Campaigners warned that the expanded use of AI in fraud detection risks embedding unfair outcomes if left unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!