Three in ten US teens now use AI chatbots every day, survey finds

According to new data from the Pew Research Center, roughly 64% of US teens (aged 13–17) say they have used an AI chatbot; about three in ten (≈ 30%) report daily use. Among teens who have used a chatbot, ChatGPT leads (used by 59%), followed by Gemini (23%) and Meta AI (20%).

This widespread adoption is fuelling safety and welfare concerns. As teenagers increasingly rely on AI for information, companionship or emotional support, critics point to potential risks, including exposure to biased content, misinformation or emotionally manipulative interactions, particularly among vulnerable youth.

Legal action has already followed: the families of at least two minors are suing AI developers over allegedly harmful advice from chatbots.

Demographic patterns reveal that Black and Hispanic teens report higher daily usage rates (around 33–35%) compared to their White peers (≈ 22%). Daily use is also more common among older teens (15–17) than younger ones.

For policymakers and digital governance stakeholders, the findings add urgency to calls for AI-specific safeguarding frameworks, especially where young people are concerned. As AI tools become embedded in adolescent life, ensuring transparency, responsible design, and robust oversight will be critical to preventing unintended harms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian families receive eSafety support as the social media age limit takes effect

Australia has introduced a minimum age requirement of 16 for social media accounts this week, marking a significant shift in its online safety framework.

The eSafety Commissioner has begun monitoring compliance, offering a protective buffer for young people as they develop digital skills and resilience. Platforms now face stricter oversight, with potential penalties for systemic breaches, and age assurance requirements for both new and current users.

Authorities stress that the new age rule forms part of a broader effort aimed at promoting safer online environments, rather than relying on isolated interventions. Australia’s online safety programmes continue to combine regulation, education and industry engagement.

Families and educators are encouraged to use the resources on the eSafety website, which now features information hubs that explain the changes, how age assurance works, and what young people can expect during the transition.

Regional and rural communities in Australia are receiving targeted support, acknowledging that the change may affect them more sharply due to limited local services and higher reliance on online platforms.

Tailored guidance, conversation prompts, and step-by-step materials have been produced in partnership with national mental health organisations.

Young people are reminded that they retain access to group messaging tools, gaming services and video conferencing apps until they become eligible for full social media accounts.

eSafety officials underline that the new limit introduces a delay rather than a ban. The aim is to reduce exposure to persuasive design and potential harm while encouraging stronger digital literacy, emotional resilience and critical thinking.

Ongoing webinars and on-demand sessions provide additional support as the enforcement phase progresses.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK partners with DeepMind to boost AI innovation

The UK Department for Science, Innovation and Technology (DSIT) has entered a strategic partnership with Google DeepMind to advance AI across public services, research, and security.

The non-binding memorandum of understanding (MoU) outlines a shared commitment to responsible AI development and to strengthening national readiness for transformative technologies.

The collaboration will explore AI solutions for public services, working across education, government departments, and the Incubator for AI (i.AI). Google DeepMind may provide engineering support and develop AI tools, including a government-focused version of Gemini aligned with the national curriculum.

Researchers will gain priority access to DeepMind’s AI models, including AlphaEvolve, AlphaGenome, and WeatherNext, with joint initiatives supporting automated R&D and lab facilities in the UK. The partnership seeks to accelerate innovation in strategically important areas such as fusion energy.

AI security will be strengthened through the UK AI Security Institute, which will share model insights, address emerging risks, and enhance national cyber preparedness. The MoU is voluntary, spans 36 months, and commits both parties to comply with data privacy laws, including the UK GDPR.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Online data exposure heightens threats to healthcare workers

Healthcare workers face escalating workplace violence: more than three-quarters report verbal or physical assaults, prompting hospitals to reassess how they protect staff from both on-site and external threats.

A new study examining people search sites suggests that online exposure of personal information may worsen these risks. Researchers analysed the digital footprint of hundreds of senior medical professionals, finding widespread availability of sensitive personal data.

The study shows that many doctors appear across multiple data broker platforms, with a significant share listed on five or more sites, making it difficult to track, manage, or remove personal information once it enters the public domain.

Exposure varies by age and geography. Younger doctors tend to have smaller digital footprints, while older professionals are more exposed due to accumulated public records. State-level transparency laws also appear to influence how widely data is shared.

Researchers warn that detailed profiles, often available for a small fee, can enable harassment or stalking at a time when threats against healthcare leaders are rising. The findings renew calls for stronger privacy protections for medical staff.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Workplace study highlights Gemini’s impact on creativity

Google’s new research on the impact of Gemini AI in Workspace indicates that the technology is reshaping how teams collaborate, with surveyed workers reporting weekly time savings and growing confidence in AI-supported tasks.

The findings, based on input from more than 1,200 leaders and employees across six countries, suggest generative AI is becoming integral to routine workflows.

Many users report that Gemini helps them accomplish more in less time, generate ideas faster, and redirect their attention from repetitive tasks to higher-value work.

The report highlights wider organisational benefits. Leaders see AI as a driver of innovation, but a gap remains between executive ambitions and employee readiness. Google says structured training and phased rollouts are key to building trust and improving adoption.

New and updated Workspace features aim to address these needs. Recent Gemini releases offer improved task automation, enhanced email drafting, and advanced storytelling tools, while no-code agent builders support more complex workflow design without specialist skills.

The research points to a broader transformation in digital productivity. Companies using Gemini report fewer hours spent on administrative work, higher engagement, and stronger collaboration as AI becomes a functional layer that supports rather than replaces human judgement.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Rising UK screen time sparks concerns for wellbeing

UK internet use has risen sharply, with adults spending over four and a half hours a day online in 2025, according to Ofcom’s latest Online Nation report.

Public sentiment has cooled, as fewer people now believe the internet is good for society, despite most still judging its benefits to outweigh the risks.

Children report complex online experiences, with many enjoying their digital time while also acknowledging adverse effects such as the so-called ‘brain rot’ linked to endless scrolling.

A significant share of young people’s screen time occurs late at night on major platforms, raising concerns about wellbeing.

New rules requiring age checks for UK pornography sites prompted a surge in VPN use as people attempted to bypass restrictions, although numbers have since declined.

Young users increasingly turn to online tools such as ASMR for relaxation, yet many also encounter toxic self-improvement content and body shaming.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google faces scrutiny over AI use of online content

The European Commission has opened an antitrust probe into Google over concerns it used publisher and YouTube content to develop its AI services on unfair terms.

Regulators are assessing whether Google used its dominant position to gain unfair access to content powering features like AI Overviews and AI Mode. They are examining whether publishers were disadvantaged by being unable to refuse the use of their content without losing visibility on Google Search.

The probe also covers concerns that YouTube creators may have been required to allow the use of their videos for AI training without compensation, while rival AI developers remain barred from using YouTube content.

The investigation will determine whether these practices breached EU rules on abuse of dominance under Article 102 TFEU. Authorities intend to prioritise the case, though no deadline applies.

Google and national competition authorities have been formally notified as the inquiry proceeds.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US rollout brings AI face tagging to Amazon Ring

Amazon has begun rolling out a new facial recognition feature for its Ring doorbells, allowing devices to identify frequent visitors and send personalised alerts instead of generic motion notifications.

The feature, called Familiar Faces, enables users to create a catalogue of up to 50 individuals, such as family members, friends, neighbours or delivery drivers, by labelling faces directly within the Ring app.

Amazon says the rollout is now under way in the United States, where Ring owners can opt in to the feature, which is disabled by default and designed to reduce unwanted or repetitive alerts.

The company claims facial data is encrypted, not shared externally, and not used to train AI models, while unnamed faces are automatically deleted after 30 days, giving users ongoing control over stored information.

Privacy advocates and lawmakers remain concerned, however, citing Ring’s past security failures and law enforcement partnerships as evidence that convenience-driven surveillance tools can introduce long-term risks to personal privacy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Deutsche Telekom partners with OpenAI to expand advanced AI services across Europe

OpenAI has formed a new partnership with Deutsche Telekom to deliver advanced AI capabilities to millions of people across Europe. The collaboration brings together Deutsche Telekom’s customer base and OpenAI’s research to expand the availability of practical AI tools.

The companies aim to introduce simple, multilingual and privacy-focused AI services starting in 2026, helping users communicate, learn and accomplish tasks more efficiently. Widespread familiarity with platforms such as ChatGPT is expected to support rapid uptake of these new offerings.

Deutsche Telekom will introduce ChatGPT Enterprise internally, giving staff secure access to tools that improve customer support and streamline workflows. The move aligns with the firm’s goal of modernising operations through intelligent automation.

Further integration of AI into network management and employee copilots will support the transition towards more autonomous, self-optimising systems. The partnership is expected to strengthen the availability and reliability of AI services throughout Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI job interviews raise concerns among recruiters and candidates

As AI takes on a growing share of recruitment tasks, concerns are mounting that automated interviews and screening tools could be pushing hiring practices towards what some describe as a ‘race to the bottom’.

The rise of AI video interviews illustrates both the efficiency gains sought by companies and the frustrations candidates experience when algorithms, rather than people, become the first point of contact.

BBC journalist MaryLou Costa found this out first-hand after her AI interviewer froze mid-question. The platform provider, TestGorilla, said the malfunction affected only a small number of users, but the episode highlights the fragility of a process that companies increasingly rely on to sift through rising volumes of applications.

With vacancies down 12% year-on-year and applications per role up 65%, firms argue that AI is now essential for managing the workload. Recruitment groups such as Talent Solutions Group say automated tools help identify the fraction of applicants who will advance to human interviews.

Employers are also adopting voice-based AI interviewers such as Cera’s system, Ami, which conducts screening calls and has already processed hundreds of thousands of applications. Cera claims the tool has cut recruitment costs by two-thirds and saved significant staff time. Yet jobseekers describe a dehumanising experience.

Marketing professional Jim Herrington, who applied for over 900 roles after redundancy, argues that keyword-driven filters overlook the broader qualities that define a strong candidate. He believes companies risk damaging their reputation by replacing real conversation with automated screening and warns that AI-based interviews cannot replicate human judgement, respect or empathy.

Recruiters acknowledge that AI is also transforming candidate behaviour. Some applicants now use bots to submit thousands of applications at once, further inflating volumes and prompting companies to rely even more heavily on automated filtering.

Ivee co-founder Lydia Miller says this dynamic risks creating a loop in which both sides use AI to outpace each other, pushing humans further out of the process. She warns that candidates may soon tailor their responses to satisfy algorithmic expectations, rather than communicate genuine strengths. While AI interviews can reduce stress for some neurodivergent or introverted applicants, she says existing bias in training data remains a significant risk.

Experts argue that AI should augment, not replace, human expertise. Talent consultant Annemie Ress notes that experienced recruiters draw on subtle cues and intuition that AI cannot yet match. She warns that over-filtering risks excluding strong applicants before anyone has read their CV or heard their voice.

With debates over fairness, transparency and bias now intensifying, the challenge for employers is balancing efficiency with meaningful engagement and ensuring that automated tools do not undermine the human relationships on which good recruitment depends.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!