Expanded AI model support arrives in Microsoft 365 Copilot

Microsoft is expanding the AI models powering Microsoft 365 Copilot by adding Anthropic’s Claude Sonnet 4 and Claude Opus 4.1. Customers can now choose between OpenAI and Anthropic models for research, deep reasoning, and agent building across Microsoft 365 tools.

The Researcher agent can now run on Anthropic’s Claude Opus 4.1, giving users a choice of models for in-depth analysis. The Researcher draws on web sources, trusted third-party data, and internal work content—encompassing emails, chats, meetings, and files—to deliver tailored, multistep reasoning.

Claude Sonnet 4 and Opus 4.1 are also available in Copilot Studio, enabling the creation of enterprise-grade agents with flexible model selection. Users can mix Anthropic, OpenAI, and Azure Model Catalogue models to power multi-agent workflows, automate tasks, and manage agents efficiently.

Claude in Researcher is rolling out today to Microsoft 365 Copilot-licensed customers through the Frontier Program. Customers can also use Claude models in Copilot Studio to build and orchestrate agents.

Microsoft says this launch is part of its strategy to bring the best AI innovation across the industry to Copilot. More Anthropic-powered features will roll out soon, strengthening Copilot’s role as a hub for enterprise AI and workflow transformation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

More social media platforms could face under-16 ban in Australia

Australia is set to expand its under-16 social media ban, with platforms such as WhatsApp, Reddit, Twitch, Roblox, Pinterest, Steam, Kick, and Lego Play potentially joining the list. The eSafety Commissioner, Julie Inman Grant, has written to 16 companies asking them to self-assess whether they fall under the ban.

The current ban already includes Facebook, TikTok, YouTube, and Snapchat, making it a world-first policy. The focus will be on platforms with large youth user bases, where risks of harm are highest.

Despite the bold move, experts warn the legislation may be largely symbolic without concrete enforcement mechanisms. Age verification remains a significant hurdle, with Canberra acknowledging that companies will likely need to self-regulate. An independent study found that age checks can be done ‘privately, efficiently and effectively,’ but noted there is no one-size-fits-all solution.

Firms failing to comply could face fines of up to AU$49.5 million (US$32.6 million). Some companies have called the law ‘vague’ and ‘rushed.’ Meanwhile, new rules will soon take effect to limit access to harmful but legal content, including online pornography and AI chatbots capable of sexually explicit dialogue. Roblox has already agreed to strengthen safeguards.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn default AI data sharing faces Dutch privacy watchdog scrutiny

The Dutch privacy watchdog, Autoriteit Persoonsgegevens (AP), is warning LinkedIn users in the Netherlands to review their settings to prevent their data from being used for AI training.

LinkedIn plans to use names, job titles, education history, locations, skills, photos, and public posts from European users to train its systems. Private messages will not be included; however, the sharing option is enabled by default.

AP Deputy Chair Monique Verdier said the move poses significant risks. She warned that once personal data is used to train a model, it cannot be removed, and its future uses are unpredictable.

LinkedIn, headquartered in Dublin, falls under the jurisdiction of the Data Protection Commission in Ireland, which will determine whether the plan can proceed. The AP said it is working with Irish and EU counterparts and has already received complaints.

Users must opt out by 3 November if they do not wish to have their data used. They can disable the setting via the AP’s link or manually in LinkedIn under ‘settings & privacy’ → ‘data privacy’ → ‘data for improving generative AI’.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN urges global rules to ensure AI benefits humanity

The UN Security Council debated AI, noting its potential to boost development but warning of risks, particularly in military use. Secretary-General António Guterres called AI a ‘double-edged sword,’ supporting development but posing threats if left unregulated.

He urged legally binding restrictions on lethal autonomous weapons and insisted nuclear decisions remain under human control.

Experts and leaders emphasised the urgent need for global regulation, equitable access, and trustworthy AI systems. Yoshua Bengio of Université de Montréal warned of risks from misaligned AI, cyberattacks, and economic concentration, calling for greater oversight.

Stanford’s Yejin Choi highlighted the concentration of AI expertise in a few countries and companies, stressing that democratising AI and reducing bias is key to ensuring global benefits.

Representatives warned that AI could deepen digital inequality in developing regions, especially Africa, due to limited access to data and infrastructure.

Delegates from Guyana, Somalia, Sierra Leone, Algeria, and Panama called for international rules to ensure transparency and fairness and to prevent dominance by a few countries or companies. Others, including the United States, cautioned that overregulation could stifle innovation and centralise power.

Delegates stressed AI’s security risks: Yemen, Poland, and the Netherlands called for responsible use in conflict, with human oversight and ethical accountability. Leaders from Portugal and the Netherlands said AI frameworks must promote innovation and security and serve humanity and peace.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Karnataka High Court rules against X Corp in content case

The Indian Karnataka High Court has rejected a petition by Elon Musk’s X Corp that contested the Indian government’s authority to block content and the legality of its Sahyog portal.

Justice M Nagaprasanna ruled that social media regulation is necessary to curb unlawful material, particularly content harmful to women, and that communications have historically been subject to oversight regardless of technology.

X Corp argued that takedown powers exist only under Section 69A of the IT Act and described the Sahyog portal as a tool for censorship. The government countered that Section 79(3)(b) allows safe harbour protections to be withdrawn if platforms fail to comply.

The Indian court sided with the government, affirming the portal’s validity and the broader regulatory framework. The ruling marks a setback for X Corp, which had also sought protection from possible punitive action for not joining the portal.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gatik and Loblaw to deploy 50 self-driving trucks in Canada

Autonomous logistics firm Gatik is set to expand its partnership with Loblaw, deploying 50 new self-driving trucks across North America over the next year. The move marks the largest autonomous truck deployment in the region to date.

The slow rollout of self-driving technology has frustrated supply chain watchers, with most firms still testing limited fleets. Gatik’s large-scale deployment signals a shift toward commercial adoption, with 20 trucks to be added by the end of 2025 and an additional 30 by 2026.

The partnership was enabled by Ontario’s Autonomous Commercial Motor Vehicle Pilot Program, a ten-year initiative allowing approved operators to test automated commercial trucks on public roads. Officials hope it will boost road safety and support the trucking sector.

Industry analysts note that North America’s truck driver shortage is one of the most pressing logistics challenges facing the region. Nearly 70% of logistics firms report that driver shortages hinder their ability to meet freight demand, making automation an appealing remedy.

Gatik, operating in the US and Canada, says the deployment could ease labour pressure and improve efficiency, but safety remains a key concern. Experts caution that striking a balance between rapid rollout and robust oversight will be crucial for establishing trust in autonomous freight operations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-driven remote fetal monitoring launched by Lee Health

Lee Health has launched Florida’s first AI-powered birth care centre, introducing a remote fetal monitoring command hub to improve maternal and newborn outcomes across the Gulf Coast.

The system tracks temperature, heart rate, blood pressure, and pulse for mothers and babies, with AI alerting staff when vital signs deviate from normal ranges. Nurses remain in control but gain what Lee Health calls a ‘second set of eyes’.

‘Maybe mum’s blood pressure is high, maybe the baby’s heart rate is not looking great. We will be able to identify those things,’ said Jen Campbell, director of obstetrical services at Lee Health.

Once a mother checks in, the system immediately begins monitoring her across Lee Health’s network and sends data to the AI hub. AI cues trigger early alerts under certified clinician oversight and are aligned with Lee Health’s ethical AI policies, allowing staff to intervene before complications worsen.
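In broad strokes, the alerting described above is threshold-based: flag any vital sign that leaves its reference range and surface it for nurse review. The sketch below illustrates that idea only; Lee Health has not published its algorithm, and the range values and field names here are placeholder assumptions, not clinical guidance.

```python
# Illustrative sketch of range-based vital-sign alerting.
# These reference ranges are placeholders, NOT clinical thresholds.
NORMAL_RANGES = {
    "maternal_heart_rate": (60, 100),   # beats per minute
    "maternal_systolic_bp": (90, 140),  # mmHg
    "fetal_heart_rate": (110, 160),     # beats per minute
    "temperature_c": (36.1, 37.9),      # degrees Celsius
}

def flag_vitals(vitals: dict[str, float]) -> list[str]:
    """Return the names of any vitals outside their reference range."""
    alerts = []
    for name, value in vitals.items():
        low, high = NORMAL_RANGES[name]
        if not (low <= value <= high):
            alerts.append(name)
    return alerts

# A fetal heart rate of 175 bpm falls outside the placeholder range,
# so in the described workflow the hub would alert a nurse to review it.
print(flag_vitals({"maternal_heart_rate": 72, "fetal_heart_rate": 175}))
```

In a real deployment the interesting work sits on top of this trivial check: trend detection, artefact rejection, and the clinician-in-the-loop review the article emphasises.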

Dr Cherrie Morris, vice president and chief physician executive for women’s services, said the hub strengthens patient safety by centralising monitoring and providing expert review from certified nurses across the network.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Secrets sprawl flagged as top software supply chain risk in Australia

Avocado Consulting urges Australian organisations to boost software supply chain security after a high-alert warning from the Australian Cyber Security Centre (ACSC). The alert flagged threats, including social engineering, stolen tokens, and manipulated software packages.

Dennis Baltazar of Avocado Consulting said attackers combine social engineering with living-off-the-land techniques, making attacks appear routine. He warned that secrets left across systems can turn small slips into major breaches.

Baltazar advised immediate audits to find unmanaged privileged accounts and non-human identities. He urged embedding security into workflows by using short-lived credentials, policy-as-code, and default secret detection to reduce incidents and increase development speed for users in Australia.

Avocado Consulting advises organisations to eliminate secrets from code and pipelines, rotate tokens frequently, and validate every software dependency by default using version pinning, integrity checks, and provenance verification. Monitoring CI/CD activity for anomalies can also help detect attacks early.
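One piece of the advice above, default secret detection, can be sketched as a simple pattern scan over source text. The two regexes below are illustrative placeholders (one mimics the well-known AWS access-key prefix format, the other a generic hard-coded key assignment); real scanners ship far larger, maintained rule sets, and this is not the tooling Avocado Consulting or the ACSC prescribes.

```python
import re

# Placeholder detection rules for illustration only.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_token": re.compile(
        r"(?i)(?:api|secret|token)[_-]?key\s*=\s*['\"][A-Za-z0-9/+=]{20,}['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_text) pairs for anything secret-like."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

sample = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\napi_key = "abcdefghijklmnopqrstuvwx"'
for rule, hit in scan_text(sample):
    print(rule, "->", hit)
```

Run in a pre-commit hook or CI step, a check like this blocks the "small slips" Baltazar describes before a credential ever reaches a repository; rotation and short-lived credentials then limit the blast radius of anything that slips through.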

Failing to act could expose cryptographic keys, facilitate privilege escalation, and result in reputational and operational damage. Avocado Consulting states that secure development practices must become the default, with automated scanning and push protection integrated into the software development lifecycle.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK government AI tool recovers £500m lost to fraud

A new AI system developed by the UK Cabinet Office has helped reclaim nearly £500m in fraudulent payments, marking the government’s most significant recovery of public funds in a single year.

The Fraud Risk Assessment Accelerator analyses data across government departments to identify weaknesses and prevent scams before they occur.

It uncovered unlawful council tax claims, social housing subletting, and pandemic-related fraud, including £186m linked to Covid support schemes. Ministers stated the savings would be redirected to fund nurses, teachers, and police officers.

Officials confirmed the tool will be licensed internationally, with the US, Canada, Australia, and New Zealand among the first partners expected to adopt it.

The UK announced the initiative at an anti-fraud summit with these countries, describing it as a step toward global cooperation in securing public finances through AI.

However, civil liberties groups have raised concerns about bias and oversight. Previous government AI systems used to detect welfare fraud were found to produce disparities based on age, disability, and nationality.

Campaigners warned that the expanded use of AI in fraud detection risks embedding unfair outcomes if left unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN General Assembly highlights threats of unregulated technology

World leaders opened the 80th UN General Debate with a strong call to keep technology in the service of humanity, warning that without safeguards, rapid advances could widen divides and fuel insecurity. Speakers highlighted the promise of AI, digital innovation, and new technologies, but stressed that global cooperation is essential to ensure they promote development, dignity, and peace.

A recurring theme was the urgent need for universal guardrails on AI, with concerns over regulation lagging behind its fast-paced growth. Delegates from across regions supported multilateral governance, ethical standards, and closing global capacity gaps so that all countries can design, use, and benefit from AI.

While some warned of risks such as inequality, social manipulation, and autonomous weapons, others emphasised AI’s potential for prosperity, innovation, and inclusive growth.

Cybersecurity and cybercrime also drew attention, with calls for collective security measures and anticipation of a new UN convention against cybercrime. Leaders further raised alarms over disinformation, digital authoritarianism, and the race for critical minerals, urging fair access and sustainability.

Across the debate, the unifying message was clear: technology must uplift humanity, protect rights, and serve as a force for peace rather than domination.

For more information from the 80th session of the UN General Assembly, visit our dedicated page.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!