US rollout brings AI face tagging to Amazon Ring

Amazon has begun rolling out a new facial recognition feature for its Ring doorbells, allowing devices to identify frequent visitors and send personalised alerts instead of generic motion notifications.

The feature, called Familiar Faces, enables users to create a catalogue of up to 50 individuals, such as family members, friends, neighbours or delivery drivers, by labelling faces directly within the Ring app.

Amazon says the rollout is now under way in the United States, where Ring owners can opt in to the feature, which is disabled by default and designed to reduce unwanted or repetitive alerts.

The company claims facial data is encrypted, not shared externally and not used to train AI models, while unnamed faces are automatically deleted after 30 days, giving users ongoing control over stored information.

Privacy advocates and lawmakers remain concerned, however, citing Ring’s past security failures and law enforcement partnerships as evidence that convenience-driven surveillance tools can introduce long-term risks to personal privacy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Deutsche Telekom partners with OpenAI to expand advanced AI services across Europe

OpenAI has formed a new partnership with Deutsche Telekom to deliver advanced AI capabilities to millions of people across Europe. The collaboration brings together Deutsche Telekom’s customer base and OpenAI’s research to expand the availability of practical AI tools.

The companies aim to introduce simple, multilingual and privacy-focused AI services starting in 2026, helping users communicate, learn and accomplish tasks more efficiently. Widespread familiarity with platforms such as ChatGPT is expected to support rapid uptake of these new offerings.

Deutsche Telekom will introduce ChatGPT Enterprise internally, giving staff secure access to tools that improve customer support and streamline workflows. The move aligns with the firm’s goal of modernising operations through intelligent automation.

Further integration of AI into network management and employee copilots will support the transition towards more autonomous, self-optimising systems. The partnership is expected to strengthen the availability and reliability of AI services throughout Europe.

AI job interviews raise concerns among recruiters and candidates

As AI takes on a growing share of recruitment tasks, concerns are mounting that automated interviews and screening tools could be pushing hiring practices towards what some describe as a ‘race to the bottom’.

The rise of AI video interviews illustrates both the efficiency gains sought by companies and the frustrations candidates experience when algorithms, rather than people, become the first point of contact.

BBC journalist MaryLou Costa found this out first-hand after her AI interviewer froze mid-question. The platform provider, TestGorilla, said the malfunction affected only a small number of users, but the episode highlights the fragility of a process that companies increasingly rely on to sift through rising volumes of applications.

With vacancies down 12% year-on-year and applications per role up 65%, firms argue that AI is now essential for managing the workload. Recruitment groups such as Talent Solutions Group say automated tools help identify the fraction of applicants who will advance to human interviews.

Employers are also adopting voice-based AI interviewers such as Cera’s system, Ami, which conducts screening calls and has already processed hundreds of thousands of applications. Cera claims the tool has cut recruitment costs by two-thirds and saved significant staff time. Yet jobseekers describe a dehumanising experience.

Marketing professional Jim Herrington, who applied for over 900 roles after redundancy, argues that keyword-driven filters overlook the broader qualities that define a strong candidate. He believes companies risk damaging their reputation by replacing real conversation with automated screening and warns that AI-based interviews cannot replicate human judgement, respect or empathy.

Recruiters acknowledge that AI is also transforming candidate behaviour. Some applicants now use bots to submit thousands of applications at once, further inflating volumes and prompting companies to rely even more heavily on automated filtering.

Ivee co-founder Lydia Miller says this dynamic risks creating a loop in which both sides use AI to outpace each other, pushing humans further out of the process. She warns that candidates may soon tailor their responses to satisfy algorithmic expectations, rather than communicate genuine strengths. While AI interviews can reduce stress for some neurodivergent or introverted applicants, she says existing bias in training data remains a significant risk.

Experts argue that AI should augment, not replace, human expertise. Talent consultant Annemie Ress notes that experienced recruiters draw on subtle cues and intuition that AI cannot yet match. She warns that over-filtering risks excluding strong applicants before anyone has read their CV or heard their voice.

With debates over fairness, transparency and bias now intensifying, the challenge for employers is balancing efficiency with meaningful engagement and ensuring that automated tools do not undermine the human relationships on which good recruitment depends.

UK study warns of risks behind emotional attachments to AI therapists

A new University of Sussex study suggests that AI mental-health chatbots are most effective when users feel emotionally close to them, but warns this same intimacy carries significant risks.

The research, published in Social Science & Medicine, analysed feedback from 4,000 users of Wysa, an AI therapy app used within the NHS Talking Therapies programme. Many users described the AI as a ‘friend’, ‘companion’, ‘therapist’ or occasionally even a ‘partner’.

Researchers say these emotional bonds can kickstart therapeutic processes such as self-disclosure, increased confidence, and improved wellbeing. Intimacy forms through a loop: users reveal personal information, receive emotionally validating responses, feel gratitude and safety, then disclose more.

But the team warns this ‘synthetic intimacy’ may trap vulnerable users in a self-reinforcing bubble, preventing escalation to clinical care when needed. A chatbot designed to be supportive may fail to challenge harmful thinking, or even reinforce it.

The report highlights growing reliance on AI to fill gaps in overstretched mental-health services. NHS trusts use tools like Wysa and Limbic to help manage referrals and support patients on waiting lists.

Experts caution that AI therapists remain limited: unlike trained clinicians, they lack the ability to read nuance, body language, or broader context. Imperial College’s Prof Hamed Haddadi called them ‘an inexperienced therapist’, adding that systems tuned to maintain user engagement may continue encouraging disclosure even when users express harmful thoughts.

Researchers argue policymakers and app developers must treat synthetic intimacy as an inevitable feature of digital mental-health tools, and build clear escalation mechanisms for cases where users show signs of crisis or clinical disorder.

Australia enforces under-16 social media ban as new rules take effect

Australia has finally introduced the world’s first nationwide prohibition on social media use for under-16s, forcing platforms to delete millions of accounts and prevent new registrations.

Instagram, TikTok, Facebook, YouTube, Snapchat, Reddit, Twitch, Kick and Threads are removing accounts held by younger users, while Bluesky has agreed to apply the same standard despite not being compelled to do so. The only major platform yet to confirm compliance is X.

The measure follows weeks of age-assurance checks, which have not been flawless, with cases of younger teenagers passing facial-verification tests designed to keep them offline.

Families are facing sharply different realities. Some teenagers feel cut off from friends who managed to bypass age checks, while others suddenly gain a structure that helps reduce unhealthy screen habits.

A small but vocal group of parents admit they are teaching their children how to use VPNs and alternative methods instead of accepting the ban, arguing that teenagers risk social isolation when friends remain active.

Supporters of the legislation counter that Australia imposes clear age limits in other areas of public life for reasons of well-being and community standards, and the same logic should shape online environments.

Regulators are preparing to monitor the transition closely.

The eSafety Commissioner will demand detailed reports from every platform covered by the law, including the volume of accounts removed, evidence of efforts to stop circumvention and assessments of whether reporting and appeals systems are functioning as intended.

Companies that fail to take reasonable steps may face significant fines. A government-backed academic advisory group will study impacts on behaviour, well-being, learning and unintended shifts towards more dangerous corners of the internet.

Global attention is growing as several countries weigh similar approaches. Denmark, Norway and Malaysia have already indicated they may replicate Australia’s framework, and the EU has endorsed the principle in a recent resolution.

Interest from abroad signals a broader debate about how societies should balance safety and autonomy for young people in digital spaces, instead of relying solely on platforms to set their own rules.

Teen chatbot use surges across the US

Nearly a third of US teenagers engage with AI chatbots each day, according to new Pew data. Researchers say nearly 70% have tried a chatbot, reflecting growing dependence on digital tools during schoolwork and leisure time. Concerns remain over exposure to mature content and possible mental health harms.

Pew surveyed almost 1,500 US teens aged 13 to 17, finding broadly similar usage patterns across gender and income. Older teens reported higher engagement, while Black and Hispanic teens showed slightly greater adoption than White peers.

Experts warn that frequent chatbot use may hinder development or encourage cheating in academic settings. Safety groups have urged parents to limit access to companion-like AI tools, citing risks posed by romantic or intimate interactions with minors.

Companies are now rolling out safeguards in response to public scrutiny and legal pressure. OpenAI and Character.AI have tightened controls, while Meta says it has adjusted policies following reports of inappropriate exchanges.

Teens worldwide divided over Australia’s under-16 social media ban

As Australia prepares to enforce the world’s first nationwide under-16 social-media ban on 10 December 2025, young people across the globe are voicing sharply different views about the move.

Some teens view it as an opportunity for a digital ‘detox’, a chance to step back from the constant social media pressure. Others argue the law is extreme, unfair, and likely to push youth toward less regulated corners of the internet.

In Mumbai, 19-year-old Pratigya Jena said the debate isn’t simple: ‘nothing is either black or white.’ She acknowledged that social media can help young entrepreneurs, but also warned that unrestricted access exposes children to inappropriate content.

Meanwhile, in Berlin, 13-year-old Luna Drewes expressed cautious optimism; she felt the ban might help reduce the pressure to conform to beauty standards that are often amplified online. Another teen, 15-year-old Enno Caro Brandes, said he understood the motivation but admitted he couldn’t imagine giving up social media altogether.

In Doha, older teens voiced stronger opposition. Sixteen-year-old Firdha Razak called the ban ‘really stupid’, while sixteen-year-old Youssef Walid argued that it would be trivial to bypass using VPNs. Both said they feared losing vital social and communication outlets.

Some, like 15-year-old Mitchelle Okinedo from Lagos, suggested the ban ignored how deeply embedded social media is in modern life: ‘We were born with it,’ she said, hinting that simply cutting access may be unrealistic. Others noted the role of social media in self-expression, especially in areas where offline spaces are limited.

Even within Australia, opinions diverge. A 15-year-old named Layton Lewis said he doubted the ban would have significant effects. His mother, Emily, meanwhile, welcomed the change, hoping it might encourage more authentic offline friendships rather than ‘illusory’ online interactions.

The variety of reactions underscores the stark test the law now faces: while some see potential mental health or safety gains, many worry about the rights of teens, the effectiveness of enforcement, and whether simply banning access truly addresses the underlying risks.

As commentary and activism ramp up around digital-age regulation, few expect consensus, but many do expect the debate to shape future policy beyond Australia.

Google faces renewed EU scrutiny over AI competition

The European Commission has opened a formal antitrust investigation into whether AI features embedded in online search are being used to unfairly squeeze competitors in newly emerging digital markets shaped by generative AI.

The probe targets Alphabet-owned Google, focusing on allegations that the company imposes restrictive conditions on publishers and content creators while giving its own AI-driven services preferential placement over rival technologies and alternative search offerings.

Regulators are examining products such as AI Overviews and AI Mode, assessing how publisher content is reused within AI-generated summaries and whether media organisations are compensated in a clear, fair, and transparent manner.

EU competition chief Teresa Ribera said the European Commission’s action reflects a broader effort to protect online media and preserve competitive balance as artificial intelligence increasingly shapes how information is produced, discovered, and monetised.

The case adds to years of scrutiny by the European Commission over Google’s search and advertising businesses, even as the company proposes changes to its ad tech operations and continues to challenge earlier antitrust rulings.

Trump allows Nvidia to sell chips to approved Chinese customers

US President Donald Trump has allowed Nvidia to sell H200 AI chips to approved customers in China, marking a shift in export controls. The decision also covers firms such as AMD and follows continued lobbying by Nvidia chief executive Jensen Huang.

Nvidia had been barred from selling advanced chips to China, but an earlier partial reversal required the firm to pay a share of its Chinese revenues to the US government. China later ordered firms to stop buying Nvidia products, pushing them towards domestic semiconductors.

Analysts suggest the new policy may buy time for negotiations over rare earth supplies, as China dominates processing of these minerals. Access to H200 chips may aid China’s tech sector, but experts warn they could also strengthen military AI capabilities.

Nvidia welcomed the announcement, saying the decision strikes a balance that benefits American industry. Shares rose slightly after the news, although the arrangement is expected to face scrutiny from national security advocates.

Canada-EU digital partnership expands cooperation on AI and security

The European Union and Canada have strengthened their digital partnership during the first Digital Partnership Council in Montreal. Both sides outlined a joint plan to enhance competitiveness and innovation, while supporting smaller firms through targeted regulation.

Senior representatives reconfirmed that cooperation with like-minded partners will be essential for economic resilience.

A new Memorandum of Understanding on AI placed a strong emphasis on trustworthy systems, shared standards and wider adoption across strategic sectors.

The two partners will exchange best practices to support sectors such as healthcare, manufacturing, energy, culture and public services.

They also agreed to collaborate on large-scale AI infrastructures and access to computing capacity, while encouraging scientific collaboration on advanced AI models and climate-related research.

The meeting also led to an agreement on a structured dialogue on data spaces.

A second Memorandum of Understanding covered digital credentials and trust services. The plan includes joint testing of digital identity wallets, pilot projects and new use cases aimed at interoperability.

The EU and Canada also intend to work more closely on the protection of independent media, the promotion of reliable information online and the management of risks created by generative AI.

Both sides underlined their commitment to secure connectivity, with cooperation on 5G, subsea cables and potential new Arctic routes to strengthen global network resilience. Further plans aim to deepen collaboration on quantum technologies, semiconductors and high-performance computing.

The renewed partnership reflects a shared commitment to resilient supply chains and secure cloud infrastructure as both regions prepare for future technological demands.
