Concerns grow over children’s use of AI chatbots

The growing use of AI chatbots and companions among children has raised safety concerns, with experts warning of inadequate protections and potential emotional risks.

Many of these apps were not designed for young users and lack adequate age verification and content moderation, making them risky spaces for children. The eSafety Commissioner noted that many children spend hours each day with AI companions, sometimes discussing topics such as mental health and sex.

Studies in Australia and the UK show high engagement, with many young users viewing the chatbots as real friends and sources of emotional advice.

Experts, including Professor Tama Leaver, warn that these systems are manipulative by design, built to keep users engaged without guaranteeing appropriate or truthful responses.

Despite the concerns, initiatives like Day of AI Australia promote digital literacy to help young people understand and navigate such technologies critically.

Organisations like UNICEF say AI could offer significant educational benefits if applied safely. However, they stress that Australia must take childhood digital safety more seriously as AI rapidly reshapes how young people interact, learn and socialise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions—most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.
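The blog post does not describe the model's internals, but the general idea of scoring behavioural signals can be illustrated with a toy logistic classifier. Everything below (the feature names, weights, and threshold) is invented purely for illustration; it is not Google's actual system.

```python
from math import exp

# Hypothetical behavioural features and weights, invented for this sketch.
# Positive weights push the score towards "likely under 18".
FEATURE_WEIGHTS = {
    "gaming_video_share": 2.0,      # fraction of watch time on gaming videos
    "school_topic_searches": 1.5,   # rate of homework-style search queries
    "late_night_activity": -0.5,    # activity after midnight skews adult
    "account_age_years": -0.3,      # long-lived accounts skew adult
}
BIAS = -1.0  # baseline assumption: most signed-in users are adults


def minor_probability(signals: dict) -> float:
    """Return a pseudo-probability that the user is under 18."""
    score = BIAS + sum(
        FEATURE_WEIGHTS[name] * signals.get(name, 0.0)
        for name in FEATURE_WEIGHTS
    )
    return 1.0 / (1.0 + exp(-score))  # logistic (sigmoid) squashing


def apply_protections(signals: dict, threshold: float = 0.5) -> bool:
    """Flag the account for age-appropriate restrictions if the score
    crosses the threshold (restrictions themselves are out of scope here)."""
    return minor_probability(signals) >= threshold
```

In this sketch, an account with heavy gaming viewership and homework-style searches would be flagged, while a decade-old account active late at night would not; a real system would learn such weights from data and pair the score with the verification options described below.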

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and also add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy, or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.

ChatGPT gets smarter with Study Mode to support active learning

OpenAI has launched a new Study Mode in ChatGPT to help users engage more deeply with learning. Rather than simply providing answers, the feature guides users through concepts and problem-solving step-by-step. It is designed to support critical thinking and improve long-term understanding.

The company developed the feature with educators, scientists, and pedagogy experts. They aimed to ensure the AI supports active learning and doesn’t just deliver quick fixes. The result is a mode that encourages curiosity, reflection, and metacognitive development.

According to OpenAI, Study Mode allows users to approach subjects more critically and thoroughly. It breaks down complex ideas, asks questions, and helps manage cognitive load during study. Instead of spoon-feeding, the AI acts more like a tutor than a search engine.

The shift reflects a broader trend in educational technology — away from passive learning tools. Many students turn to AI for homework help, but educators have warned of over-reliance. Study Mode attempts to strike a balance by promoting engagement over shortcuts.

For instance, rather than giving the complete solution to a maths problem, Study Mode might ask: ‘What formula might apply here?’ or ‘How could you simplify this expression first?’ This approach nudges students to participate in the process and build fundamental problem-solving skills.

It also adapts to different learning needs. In science, it might walk through hypotheses and reasoning; in the humanities, it may help analyse a passage or structure an essay. Prompting users to think aloud mirrors effective tutoring strategies.

OpenAI says feedback from teachers helped shape the feature’s tone and pacing. One key aim was to avoid overwhelming learners with too much information at once. Instead, Study Mode introduces concepts incrementally, supporting better retention and understanding.

The company also consulted cognitive scientists to align the feature with best practices in memory and comprehension. This includes encouraging users to reflect on what they are learning and why specific steps matter. Such strategies are known to improve both academic performance and self-directed learning.

While the feature is part of ChatGPT, it can be toggled on or off. Users can activate Study Mode when tackling a tricky topic or exploring new material. They can then switch to normal responses for broader queries or summarised answers.

Educators have expressed cautious optimism about the update. Some see it as a tool supporting homework, revision, or assessment preparation. However, they also warn that no AI can replace direct teaching or personalised guidance.

Tools like this could be valuable in under-resourced settings or for independent learners.

Study Mode’s interactive style may help level the playing field for students without regular academic support. It also gives parents and tutors a new way to guide learners without doing the work for them.

Earlier efforts included teacher guides and classroom use cases. However, Study Mode marks a more direct push to reshape how students use AI in learning.

It positions ChatGPT not as a cheat sheet, but as a co-pilot for intellectual growth.

Looking ahead, OpenAI says it plans to iterate based on user feedback and teacher insights. Future updates may include subject-specific prompts, progress tracking, or integrations with educational platforms. The goal is to build a tool that adapts to learning styles without compromising depth or rigour.

As AI continues to reshape education, tools like Study Mode may help answer a central question: Can technology support genuine understanding, instead of just faster answers? With Study Mode, OpenAI believes the answer is yes, if used wisely.

AI chatbot captures veteran workers’ knowledge to support UK care teams

Peterborough City Council has turned the knowledge of veteran therapy practitioner Geraldine Jinks into an AI chatbot to support adult social care workers.

With 35 years of experience, Jinks was frequently approached by colleagues seeking advice, creating time pressures despite her willingness to help.

In response, the council developed a digital assistant called Hey Geraldine, built on the My AskAI platform, which mimics her direct and friendly communication style to provide instant support to staff.

Developed in 2023, the chatbot offers practical answers to everyday care-related questions, such as how to support patients with memory issues or discharge planning. Jinks collaborated with the tech team to train the AI, writing all the responses herself to ensure consistency and clarity.

Thanks to its natural tone and humanlike advice, some colleagues even mistook the chatbot for the real Geraldine.

The council hopes Hey Geraldine will reduce hospital discharge delays and improve patient access to assistive technology. Councillor Shabina Qayyum, who also works as a GP, said the tool empowers staff to help patients regain independence instead of facing unnecessary delays.

The chatbot is seen as preserving valuable institutional knowledge while improving frontline efficiency.

Parents grapple with teaching kids responsible AI use

Experts say many families face a dilemma between protecting children from AI and preventing them from falling behind in an increasingly AI-driven world.

In interviews, parents expressed unease about deepfakes, blurred lines between reality and AI-generated content, and potential threats they feel unprepared to teach their children to identify.

Still, some parents are introducing AI tools to their children under supervision, viewing guided exposure as safer and more beneficial than strict prohibition. These parents emphasise helping kids learn AI responsibly rather than barring them from using it.

Experts warn that many parents delay engaging with AI out of fear or lack of knowledge, cutting themselves off from opportunities to guide their children.

Instead, they recommend an informed, gradual introduction, including open discussions about AI risks and benefits. Careful mediation, honesty, and education may help children develop healthy tech habits.

Teachers and students warn: AI is eroding engagement

A student from San Jose and an English teacher in Chicago co-authored a Boston Globe opinion warning that widespread use of AI in schools damages the vital student-teacher bond.

While marketed as efficiency boosters, AI tools encourage students to forgo independent thinking.

Many simply generate entire assignments with AI and reformat the text to avoid detection, undermining honest academic interaction.

Educators report feeling increasingly marginalised as AI handles much of their workload, including grading, lesson planning, and feedback within classrooms.

Though schools and tech companies promote these tools as educational enhancements, their use has eroded trust in many schools, as teachers struggle to assess students’ real abilities.

The authors call for a return to supervised in-class assignments, using pen and paper, strict scrutiny of AI vendors in education, and outright bans on unsupervised AI classroom tools to help reset the learning relationship.

Teens turn to AI for advice and friendship

A growing number of US teens rely on AI for daily decision‑making and emotional support, with chatbots such as ChatGPT, Character.AI and Replika. One Kansas student admits she relies on AI for everyday tasks, such as choosing clothes or planning events, while avoiding its use for schoolwork.

A survey by Common Sense Media reveals that over 70 per cent of teenagers have tried AI companions, with around half using them regularly. Roughly a third reported discussing serious issues with AI, sometimes finding it as satisfying as, or more satisfying than, talking with friends.

Experts express concern that such frequent AI interactions could hinder the development of creativity, critical thinking and social skills in young people. The study warns adolescents may become overly validated by AI, missing out on real‑world emotional growth.

Educators caution that while AI offers constant, non‑judgemental feedback, it is not a replacement for authentic human relationships. They recommend AI use be carefully supervised to ensure it complements rather than replaces real interaction.

Children turning to AI for friendship raises alarms

Children and teenagers are increasingly turning to AI not just for help with homework but as a source of companionship.

A recent study by Common Sense Media revealed that over 70% of young people have used AI as a companion. Alarmingly, nearly a third of teens reported that their conversations with AI felt as satisfying, or more so, than talking with actual friends.

Holly Humphreys, a licensed counsellor at Thriveworks in Harrisonburg, Virginia, warned that the trend is becoming a national concern.

She explained that heavy reliance on AI affects more than just social development. It can interfere with emotional wellbeing, behavioural growth and even cognitive functioning in young children and school-age youth.

As AI continues evolving, children may find it harder to build or rebuild connections with real people. Humphreys noted that interactions with AI are often shallow, lacking the depth and empathy found in human relationships.

The longer kids engage with bots, the more distant they may feel from their families and peers.

To counter the trend, she urged parents to establish firm boundaries and introduce alternative daily activities, particularly during summer months. Simple actions like playing card games, eating together or learning new hobbies can create meaningful face-to-face moments.

Encouraging children to try a sport or play an instrument helps shift their attention from artificial friends to genuine human connections within their communities.

Half of Americans still unsure how crypto works

A new NCA survey shows 70% of Americans without crypto want more information before considering digital assets. Half of respondents said they don’t understand crypto, while others voiced concerns about scams and unknown project founders.

Despite this uncertainty, 34% of those polled said they were open to learning more. The NCA’s report summarised the mood as ‘curiosity high, confidence low,’ noting that a large number of people are interested in crypto but unsure how to take the first step.

The NCA, a nonprofit launched in March and led by Ripple Labs’ chief legal officer Stuart Alderoty, has been tasked with helping Americans better understand crypto. Backed by $50 million from Ripple, the organisation aims to build trust and boost crypto literacy through education.

New AI pact between Sri Lanka and Singapore fosters innovation

Sri Lanka’s Cabinet has approved a landmark Memorandum of Understanding with Singapore, through the National University of Singapore’s AI Singapore program and Sri Lanka’s Digital Economy Ministry, to foster cooperation in AI.

The MoU establishes a framework for joint research, curriculum development, and knowledge-sharing initiatives to address local priorities and global tech challenges.

This collaboration signals a strategic leap in Sri Lanka’s digital transformation journey. It emerged during Asia Tech x Singapore 2025, where officials outlined plans for AI training, policy alignment, digital infrastructure support, and e‑governance development.

The partnership builds on Sri Lanka’s broader agenda, including fintech innovation and cybersecurity, to strengthen its national AI ecosystem.

With the formalisation of this MoU, Sri Lanka hopes to elevate its regional and global AI standing. The initiative aims to empower local researchers, cultivate tech talent, and ensure that AI governance and innovation are aligned with ethical and economic goals.
