AI robot ‘Robin’ brings emotional support to children’s hospitals in the USA

An AI-powered robot named Robin is transforming patient care in paediatric hospitals across the USA by offering emotional support and companionship to young patients.

Developed by Expper Technologies, Robin resembles a child in appearance and voice, engaging patients with games, music, and conversation. Its childlike demeanour helps ease anxiety, especially during stressful medical procedures.

Initially launched in Armenia, Robin now operates in 30 healthcare facilities across the USA, including in Massachusetts, California, Indiana, and New York. Designed to combat healthcare staff shortages, the robot is about 30% autonomous, with remote human operators guiding its interactions under clinical supervision.

Robin’s emotional intelligence allows it to mirror patient expressions and respond with empathy, laughing, playing, or offering comfort when needed. Beyond paediatrics, it also assists elderly patients with dementia in nursing homes by leading breathing exercises and memory games.

With the USA facing a projected shortage of up to 86,000 physicians in the next decade, Robin’s creators aim to expand its capabilities to include monitoring vitals and assisting with basic physical care.

Despite concerns about AI replacing human roles, Expper CEO Karen Khachikyan emphasises that Robin is intended to complement healthcare teams, not replace them, offering joy, relief, and a sense of companionship where it’s most needed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

More social media platforms could face under-16 ban in Australia

Australia is set to expand its under-16 social media ban, with platforms such as WhatsApp, Reddit, Twitch, Roblox, Pinterest, Steam, Kick, and Lego Play potentially joining the list. The eSafety Commissioner, Julie Inman Grant, has written to 16 companies asking them to self-assess whether they fall under the ban.

The current ban already includes Facebook, TikTok, YouTube, and Snapchat, making it a world-first policy. The focus will be on platforms with large youth user bases, where risks of harm are highest.

Despite the bold move, experts warn the legislation may be largely symbolic without concrete enforcement mechanisms. Age verification remains a significant hurdle, with Canberra acknowledging that companies will likely need to self-regulate. An independent study found that age checks can be done ‘privately, efficiently and effectively,’ but noted there is no one-size-fits-all solution.

Firms failing to comply could face fines of up to AU$49.5 million (US$32.6 million). Some companies have called the law ‘vague’ and ‘rushed.’ Meanwhile, new rules will soon take effect to limit access to harmful but legal content, including online pornography and AI chatbots capable of sexually explicit dialogue. Roblox has already agreed to strengthen safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Karnataka High Court rules against X Corp in content case

India’s Karnataka High Court has rejected a petition by Elon Musk’s X Corp that contested the government’s authority to block content and the legality of its Sahyog portal.

Justice M Nagaprasanna ruled that social media regulation is necessary to curb unlawful material, particularly content harmful to women, and that communications have historically been subject to oversight regardless of technology.

X Corp argued that takedown powers exist only under Section 69A of the IT Act and described the Sahyog portal as a tool for censorship. The government countered that Section 79(3)(b) allows safe harbour protections to be withdrawn if platforms fail to comply.

The court sided with the government, affirming the portal’s validity and the broader regulatory framework. The ruling marks a setback for X Corp, which had also sought protection from possible punitive action for not joining the portal.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-driven remote fetal monitoring launched by Lee Health

Lee Health has launched Florida’s first AI-powered birth care centre, introducing a remote fetal monitoring command hub to improve maternal and newborn outcomes across the Gulf Coast.

The system tracks temperature, heart rate, blood pressure, and pulse for mothers and babies, with AI alerting staff when vital signs deviate from normal ranges. Nurses remain in control but gain what Lee Health calls a ‘second set of eyes’.

‘Maybe mum’s blood pressure is high, maybe the baby’s heart rate is not looking great. We will be able to identify those things,’ said Jen Campbell, director of obstetrical services at Lee Health.

Once a mother checks in, the system begins monitoring her across Lee Health’s network and sending data to the AI hub. AI-generated cues trigger early alerts under certified clinician oversight, in line with Lee Health’s ethical AI policies, allowing staff to intervene before complications worsen.

Dr Cherrie Morris, vice president and chief physician executive for women’s services, said the hub strengthens patient safety by centralising monitoring and providing expert review from certified nurses across the network.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Canadian probe finds TikTok failing to protect children’s privacy

A Canadian privacy investigation has found that TikTok has not taken sufficient measures to prevent children under 13 from accessing its platform or to protect their personal data.

Although TikTok states that the app is not intended for young users, the report found that hundreds of thousands of Canadian children use it each year.

The investigation also found that TikTok collects vast amounts of data from users, including children, and uses it for targeted ads and content, potentially harming youth.

In response, TikTok agreed to strengthen safeguards and clarify data practices but disagreed with some findings.

The probe is part of growing global scrutiny over TikTok’s privacy and security practices, with similar actions taken in the USA and EU amid ongoing concerns about the Chinese-owned app’s data handling and national security implications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini brings conversational AI to Google TV

Google has launched Gemini for TV, bringing conversational AI to the living room. The update builds on Google TV and Google Assistant, letting viewers chat naturally with their screens to discover shows, plan trips, or even tackle homework questions.

Instead of scrolling endlessly, users can ask Gemini to find a film everyone will enjoy or recap last season’s drama. The AI can handle vague requests, like finding ‘that new hospital drama,’ and provide reviews before you press play.

Gemini also turns the TV into an interactive learning tool. From explaining why volcanoes erupt to guiding kids through projects, it offers helpful answers with supporting YouTube videos for hands-on exploration.

Beyond schoolwork, Gemini can help plan meals, teach new skills like guitar, or brainstorm family trips, all through conversational prompts. Such features make the TV a hub for entertainment, education, and inspiration.

Gemini is now available on the TCL QM9K series, with rollout to additional Google TV devices planned for later this year. Google says additional features are coming soon, making TVs more capable and personalised.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-powered OSIA aims to boost student success rates in Cameroon

In Cameroon, where career guidance often takes a back seat, a new AI platform is helping students plan their futures. Developed by mathematician and AI researcher Frédéric Ngaba, OSIA offers personalised academic and career recommendations.

The platform provides a virtual tutor trained on Cameroon’s curricula, offering 400 exam-style tests and psychometric assessments. Students can input grades and aspirations, and the system builds tailored academic profiles to highlight strengths and potential career paths.

OSIA already has 13,500 subscribers across 23 schools, with plans to expand tenfold. Subscriptions cost 3,000 CFA francs for locals and €10 for students abroad, making it an affordable solution for many families.

Teachers and guidance counsellors see the tool as a valuable complement, though they stress it cannot replace human interaction or emotional support. Guidance professionals insist that social context and follow-up remain key to students’ development.

The Secretariat for Secular Private Education of Cameroon has authorised OSIA to operate. Officials expect its benefits to scale nationwide as the government considers a national AI strategy to modernise education and improve success rates.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

MrBeast under scrutiny for child advertising practices

The Children’s Advertising Review Unit (CARU) has advised MrBeast, LLC and Feastables to strengthen their advertising and privacy practices following concerns over promotions aimed at children.

CARU found that some videos on the MrBeast YouTube channel included undisclosed advertising in descriptions and pinned comments, which could mislead young viewers.

It also raised concerns about a promotional taste test for Feastables chocolate bars, which children could take as a valid comparison despite its lack of scientific basis.

Investigators said Feastables sweepstakes failed to clearly disclose free entry options, minimum age requirements and the actual odds of winning. Promotions were also criticised for encouraging excessive purchases and applying sales pressure, such as countdown timers urging children to buy more chocolate.

Privacy issues were also identified, with Feastables collecting personal data from under-13s without parental consent. CARU noted the absence of an effective age gate and highlighted that information provided via popups was sent to third parties.

MrBeast and Feastables said many of the practices under review had already been revised or discontinued, but pledged to take CARU’s recommendations into account in future campaigns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI explains approach to privacy, freedom, and teen safety

OpenAI has outlined how it balances privacy, freedom, and teen safety in its AI tools. The company said AI conversations often involve personal information and deserve protection similar to privileged conversations with doctors or lawyers.

Security features are being developed to keep data private, though critical risks such as threats to life or societal-scale harm may trigger human review.

The company is also focused on user freedom. Adults are allowed greater flexibility in interacting with AI, within safety boundaries. For instance, the model can engage in creative or sensitive content requests, while avoiding guidance that could cause real-world harm.

OpenAI aims to treat adults as adults, providing broader freedoms as long as safety is maintained. Teen safety is prioritised over privacy and freedom. Users under 18 are identified via an age-prediction system or, in some cases, verified by ID.

The AI will avoid flirtatious talk or discussions of self-harm, and in cases of imminent risk, parents or authorities may be contacted. Parental controls and age-specific rules are being developed to protect minors while ensuring safe use of the platform.

OpenAI acknowledged that these principles sometimes conflict and not everyone will agree with the approach. The company stressed transparency in its decision-making and said it consulted experts to establish policies that balance safety, freedom, and privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Character.AI and Google face suits over child safety claims

Three lawsuits have been filed in US federal courts alleging that Character.AI and its founders, with Google’s backing, deployed predatory chatbots that harmed children. The cases involve the family of 13-year-old Juliana Peralta, who died by suicide in 2023, and two other minors.

The complaints say the chatbots were designed to mimic humans, build dependency, and expose children to sexual content. Using emojis, typos, and pop-culture personas, the bots allegedly gained trust and encouraged isolation from family and friends.

Juliana’s parents say she engaged in explicit chats, disclosed suicidal thoughts, and received no intervention before her death. Nina, 15, from New York, attempted suicide after her mother blocked the app, while a Colorado girl known as T.S. was also affected.

Character.AI and Google are accused of misrepresenting the app as child-safe and failing to act on warning signs. The cases follow earlier lawsuits from the Social Media Victims Law Center over similar claims that the platform encouraged harm.

SMVLC founder Matthew Bergman stated that the cases underscore the urgent need for accountability in AI design and stronger safeguards to protect children. The legal team is seeking damages and stricter safety standards for chatbot platforms marketed to minors.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!