UN Secretary-General warns humanity cannot rely on algorithms

UN Secretary-General António Guterres has urged world leaders to act swiftly to ensure AI serves humanity rather than threatens it. Speaking at a UN Security Council debate, he warned that while AI can help anticipate food crises, support de-mining efforts, and prevent violence, it is equally capable of fueling conflict through cyberattacks, disinformation, and autonomous weapons.

‘Humanity’s fate cannot be left to an algorithm,’ he stressed.

Guterres outlined four urgent priorities. First, he called for strict human oversight in all military uses of AI, repeating his demand for a global ban on lethal autonomous weapons systems. He insisted that life-and-death decisions, including any involving nuclear weapons, must never be left to machines.

Second, he pressed for coherent international regulations to ensure AI complies with international law at every stage, from design to deployment. He highlighted the dangers of AI lowering barriers to acquiring prohibited weapons and urged states to build transparency, trust, and safeguards against misuse.

His third and fourth priorities were protecting information integrity and closing the global AI capacity gap. He warned that AI-driven disinformation could destabilise peace processes and elections, while unequal access risks leaving developing countries behind.

The UN has already launched initiatives, including a new international scientific panel and an annual AI governance dialogue, to foster cooperation and accountability.

‘The window is closing to shape AI, for peace, justice, and humanity,’ he concluded.

For more information from the 80th session of the UN General Assembly, visit our dedicated page.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta expands global rollout of teen accounts for Facebook and Messenger

US tech giant Meta is expanding its dedicated teen accounts to Facebook and Messenger users worldwide, extending a safety system first introduced on Instagram. The move adds parental controls and restrictions designed to protect younger users across Meta’s platforms.

The accounts, now mandatory for teens, include stricter privacy settings that limit contact with unknown adults. Parents can supervise how their children use the apps, monitor screen time, and view who their teens are messaging.

For younger users aged 13 to 15, parental permission is required before adjusting safety-related settings. Meta is also deploying AI tools to detect teens lying about their age.

Alongside the global rollout, Instagram is expanding a school partnership programme in the US, allowing middle and high schools to report bullying and problematic behaviour directly.

The company says early feedback from participating schools has been positive, and the scheme is now open to all schools nationwide.

The expansion comes as Meta faces lawsuits and investigations over its record on child safety. By strengthening parental controls and school-based reporting, the company aims to address growing criticism while tightening protections for its youngest users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI robot ‘Robin’ brings emotional support to children’s hospitals in the USA

An AI-powered robot named Robin is transforming patient care in US paediatric hospitals by offering emotional support and companionship to young patients.

Developed by Expper Technologies, Robin resembles a child in appearance and voice, engaging patients with games, music, and conversation. Its childlike demeanour helps ease anxiety, especially during stressful medical procedures.

Initially launched in Armenia, Robin now operates in 30 healthcare facilities across the USA, including in Massachusetts, California, Indiana, and New York. Designed to help offset healthcare staff shortages, the robot is about 30% autonomous, with remote human operators guiding its interactions under clinical supervision.

Robin’s emotional intelligence allows it to mirror patient expressions and respond with empathy, laughing, playing, or offering comfort when needed. Beyond paediatrics, it also assists elderly patients with dementia in nursing homes by leading breathing exercises and memory games.

With the USA facing a projected shortage of up to 86,000 physicians in the next decade, Robin’s creators aim to expand its capabilities to include monitoring vitals and assisting with basic physical care.

Despite concerns about AI replacing human roles, Expper CEO Karen Khachikyan emphasises Robin is intended to complement healthcare teams, not replace them, offering joy, relief, and a sense of companionship where it’s most needed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Uzbekistan positions itself as Central Asia’s new AI and technology hub

At its largest-ever ICT Week, Uzbekistan is showcasing its ambition to become a regional centre for AI and digital transformation.

More than 20,000 participants, 300 companies, and delegations from over 50 countries gathered in Tashkent, signalling Central Asia’s growing role in the global technology landscape.

The country is investing in AI projects across sectors including education, healthcare, banking, and industry, with more than 100 initiatives underway.

Officials emphasise that digitalisation must serve people directly, by improving services and creating jobs for Uzbekistan’s young and expanding population.

This demographic advantage is shaping a vision of AI that prioritises dignity, opportunity, and inclusive growth.

International recognition has followed. The UN’s International Telecommunication Union described Uzbekistan as ‘leading the way’ in the region, praising high connectivity, supportive policies, and progress in youth participation and gender equality.

Infrastructure is also advancing, with global investors like DataVolt building one of Central Asia’s most advanced data centres in Tashkent.

Uzbekistan’s private sector is also drawing attention. Fintech and e-commerce unicorn Uzum recently secured significant investment from Tencent and VR Capital, reaching a valuation above €1.3 billion.

Public policy and private investment are positioning the country as a credible AI hub connecting Europe, Asia, and the Middle East.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The EU unveils VLQ quantum computer in the Czech Republic

A new quantum computer has been inaugurated at the IT4Innovations National Supercomputing Centre in Ostrava, Czech Republic. The system is the second quantum computer launched under the EuroHPC Joint Undertaking and forms part of Europe’s push to build its quantum infrastructure.

Developed by IQM Quantum Computers, VLQ houses 24 superconducting qubits arranged in a star-shaped topology, designed to reduce swap operations and improve efficiency.

The €5 million project was co-funded by EuroHPC JU and the LUMI-Q consortium, which includes partners from eight European countries. Scientists expect VLQ to accelerate progress in quantum AI, drug discovery, new material design, renewable energy forecasting, and security applications.

The Czech machine will not work in isolation. It is directly connected to the Karolina supercomputer and will later link to the LUMI system in Finland, enabling hybrid classical–quantum computations. Access will be open to researchers, companies, and the public sector across Europe by the end of 2025.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK sets up expert commission to speed up NHS adoption of AI

Doctors, researchers and technology leaders will work together to accelerate the safe adoption of AI in the NHS, under a new commission launched by the Medicines and Healthcare products Regulatory Agency (MHRA).

The body will draft recommendations to modernise healthcare regulation, ensuring patients gain faster access to innovations while maintaining safety and public trust.

The MHRA stressed that clear rules are vital as AI spreads across healthcare, where it is already helping to diagnose conditions such as lung cancer and strokes in hospitals across the UK.

Backed by ministers, the initiative aims to position Britain as a global hub for health tech investment. Companies including Google and Microsoft will join clinicians, academics, and patient advocates to advise on the framework, expected to be published next year.

The commission will also review the regulatory barriers slowing adoption of tools such as AI-driven note-taking systems, which early trials suggest can significantly boost efficiency in clinical care.

Officials say the framework will provide much-needed clarity for AI in radiology, pathology, and virtual care, supporting the digital transformation of the NHS.

MHRA chief executive Lawrence Tallon called the commission a ‘cultural shift’ in regulation, while Technology Secretary Liz Kendall said it would ensure patients benefit from life-saving technologies ‘quickly and safely’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Meta feature floods users with AI slop in TikTok-style feed

Meta has launched a new short-form video feed called Vibes inside its Meta AI app and on meta.ai, offering users endless streams of AI-generated content. The format mimics TikTok and Instagram Reels but consists entirely of algorithmically generated clips.

Mark Zuckerberg unveiled the feature in an Instagram post showcasing surreal creations, from fuzzy creatures leaping across cubes to a cat kneading dough and even an AI-generated Egyptian woman taking a selfie in antiquity.

Users can generate videos from scratch or remix existing clips by adding visuals, music, or stylistic effects before posting to Vibes, sharing via direct message, or cross-posting to Instagram and Facebook Stories.

Meta partnered with Midjourney and Black Forest Labs to support the early rollout, though it plans to transition to its own AI models.

The announcement, however, was derided by users, who criticised the platform for adding yet more ‘AI slop’ to already saturated feeds. One top comment under Zuckerberg’s post bluntly read: ‘gang nobody wants this’.

The launch comes as Meta ramps up its AI investment to catch up with rivals OpenAI, Anthropic, and Google DeepMind.

Earlier this year, the company consolidated its AI teams into Meta Superintelligence Labs and reorganised them into four units focused on foundation models, research, product integration, and infrastructure.

Despite the strategic shift, many question whether Vibes adds value or deepens user fatigue with generative content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube rolls back rules on Covid-19 and 2020 election misinformation

Google’s YouTube has announced it will reinstate accounts previously banned for repeatedly posting misinformation about Covid-19 and the 2020 US presidential election. The decision marks another rollback of moderation rules that once targeted health and political falsehoods.

The platform said the move reflects a broader commitment to free expression and follows similar changes at Meta and Elon Musk’s X.

YouTube had already scrapped policies barring repeat claims about Covid-19 and election outcomes, rules that had led to action against accounts including Robert F. Kennedy Jr.’s Children’s Health Defense Fund and Senator Ron Johnson.

The announcement came in a letter to House Judiciary Committee Chair Jim Jordan, amid a Republican-led investigation into whether the Biden administration pressured tech firms to remove certain content.

YouTube claimed the White House created a political climate aimed at shaping its moderation, though it insisted its policies were enforced independently.

The company said that US conservative creators have a significant role in civic discourse and will be allowed to return under the revised rules. The move highlights Silicon Valley’s broader trend of loosening restrictions on speech, especially under pressure from right-leaning critics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN urges global rules to ensure AI benefits humanity

The UN Security Council debated AI, noting its potential to boost development but warning of risks, particularly in military use. Secretary-General António Guterres called AI a ‘double-edged sword,’ supporting development but posing threats if left unregulated.

He urged legally binding restrictions on lethal autonomous weapons and insisted nuclear decisions remain under human control.

Experts and leaders emphasised the urgent need for global regulation, equitable access, and trustworthy AI systems. Yoshua Bengio of Université de Montréal warned of risks from misaligned AI, cyberattacks, and economic concentration, calling for greater oversight.

Stanford’s Yejin Choi highlighted the concentration of AI expertise in a few countries and companies, stressing that democratising AI and reducing bias are key to ensuring global benefits.

Representatives warned that AI could deepen digital inequality in developing regions, especially Africa, due to limited access to data and infrastructure.

Delegates from Guyana, Somalia, Sierra Leone, Algeria, and Panama called for international rules to ensure transparency and fairness and to prevent dominance by a few countries or companies. Others, including the United States, cautioned that overregulation could stifle innovation and centralise power.

Delegates also stressed AI’s risks to security: Yemen, Poland, and the Netherlands called for responsible use in conflict, with human oversight and ethical accountability, while leaders from Portugal and the Netherlands said AI frameworks must promote innovation and security and serve humanity and peace.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattack on Jaguar Land Rover exposes UK supply chain risks

UK ministers are considering an unprecedented intervention after a cyberattack forced Jaguar Land Rover to halt production, leaving thousands of suppliers exposed to collapse.

A late August hack shut down JLR’s IT networks and forced the suspension of its UK factories. Industry experts estimate losses of more than £50m a week, with full operations unlikely to restart until October or later.

JLR, owned by India’s Tata Motors, had not finalised cyber insurance before the breach, which left it particularly vulnerable.

Officials are weighing whether to buy and stockpile car parts from smaller firms that depend on JLR, though logistical difficulties make the plan complex. Government-backed loans are also under discussion.

Cybersecurity agencies, including the National Cyber Security Centre and the National Crime Agency, are now supporting the investigation.

The attack is part of a wider pattern of major breaches targeting UK institutions and retailers, with a group calling itself Scattered Lapsus$ Hunters claiming responsibility.

The incident highlights how the country’s critical industries remain exposed to sophisticated cybercriminals, raising questions about resilience and the need for stronger digital defences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!