Parents grapple with teaching kids responsible AI use

Experts say many families face a dilemma between protecting children from AI and preventing them from falling behind in an increasingly AI-driven world.

In interviews, parents expressed unease about deepfakes, blurred lines between reality and AI-generated content, and potential threats they feel unprepared to teach their children to identify.

Still, some parents are introducing AI tools to their children under supervision, viewing guided exposure as safer and more beneficial than strict prohibition. These parents emphasise helping kids learn AI responsibly rather than barring them from using it.

Experts warn that many parents delay engaging with AI out of fear or lack of knowledge, cutting themselves off from opportunities to guide their children.

Instead, they recommend an informed, gradual introduction, including open discussions about AI risks and benefits. Careful mediation, honesty, and education may help children develop healthy tech habits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft replaces the blue screen of death with a sleek black version in Windows 11

Microsoft has officially removed the infamous Blue Screen of Death (BSOD) from Windows 11 and replaced it with a sleeker, black version.

As part of update KB5062660, the Black Screen of Death now appears briefly—around two seconds—before a restart, showing only a short error message without the sad face or QR code that became symbolic of Windows crashes.

The update, which brings systems to Build 26100.4770, is optional and must be installed manually through Windows Update or the Microsoft Update Catalog.

It is available for both x64 and arm64 platforms. Microsoft plans to roll out the update more broadly in August 2025 as part of its Windows 11 24H2 feature preview.

In addition to the screen change, the update introduces ‘Recall’ for EU users, a tool designed to operate locally and allow users to block or turn off tracking across apps and websites. The feature aims to comply with European privacy rules while enhancing user control.

Also included is Quick Machine Recovery, which can identify and fix system-wide failures using the Windows Recovery Environment. If a device becomes unbootable, it can download a repair patch automatically to restore functionality instead of requiring manual intervention.

Meta boosts teen safety as it removes hundreds of thousands of harmful accounts

Meta has rolled out new safety tools to protect teenagers on Instagram and Facebook, including alerts about suspicious messages and a one-tap option to block or report harmful accounts.

The company said it is increasing efforts to prevent inappropriate contact from adults and has removed over 635,000 accounts that sexualised or targeted children under 13.

Of those accounts, 135,000 were caught posting sexualised comments, while another 500,000 were flagged for inappropriate interactions.

Meta said teen users blocked over one million accounts and reported another million after receiving in-app warnings encouraging them to stay cautious in private messages.

The company also uses AI to detect users lying about their age on Instagram. If flagged, those accounts are automatically converted to teen accounts with stronger privacy settings and messaging restrictions. Since 2024, all teen accounts have been set to private by default.

Meta’s move comes as it faces mounting legal pressure from dozens of US states accusing the company of contributing to the youth mental health crisis by designing addictive features on Instagram and Facebook. Critics argue that more must be done to ensure safety instead of relying on user action alone.

VPN interest surges in the UK as users bypass porn site age checks

Online searches for VPNs skyrocketed in the UK following the introduction of new age verification rules on adult websites such as PornHub, YouPorn and RedTube.

Under the Online Safety Act, these platforms must confirm that visitors are over 18 using facial recognition, photo ID or credit card details.

Data from Google Trends showed that searches for ‘VPN’ jumped by over 700 per cent on Friday morning, suggesting that many users are attempting to sidestep the restrictions by masking their location. VPN services let users make their devices appear to be in another country, placing their traffic outside the reach of local regulations.

Critics argue that the measures are both ineffective and risky. Aylo, the company behind PornHub, called the checks ‘haphazard and dangerous’, warning they put users’ privacy at risk.

Legal experts also doubt the system’s impact, saying it fails to block access to dark web content or unregulated forums.

Aylo proposed that age verification should occur on users’ devices instead of websites storing sensitive information. The company stated it is open to working with governments, civil groups and tech firms to develop a safer, device-based system that protects privacy while enforcing age limits.

Teens turn to AI for advice and friendship

A growing number of US teens rely on AI for daily decision‑making and emotional support, turning to chatbots such as ChatGPT, Character.AI and Replika. One Kansas student admits she uses AI to simplify everyday tasks, such as choosing clothes or planning events, while avoiding using it for schoolwork.

A survey by Common Sense Media reveals that over 70 per cent of teenagers have tried AI companions, with around half using them regularly. Roughly a third reported discussing serious issues with AI, sometimes finding it as satisfying as, or more satisfying than, talking with friends.

Experts express concern that such frequent AI interactions could hinder the development of creativity, critical thinking and social skills in young people. The study warns that adolescents may come to rely on AI’s constant validation, missing out on real‑world emotional growth.

Educators caution that while AI offers constant, non‑judgemental feedback, it is not a replacement for authentic human relationships. They recommend AI use be carefully supervised to ensure it complements rather than replaces real interaction.

Children turning to AI for friendship raises alarms

Children and teenagers are increasingly turning to AI not just for help with homework but as a source of companionship.

A recent study by Common Sense Media revealed that over 70% of young people have used AI as a companion. Alarmingly, nearly a third of teens reported that their conversations with AI felt as satisfying, or more so, than talking with actual friends.

Holly Humphreys, a licensed counsellor at Thriveworks in Harrisonburg, Virginia, warned that the trend is becoming a national concern.

She explained that heavy reliance on AI affects more than just social development. It can interfere with emotional wellbeing, behavioural growth and even cognitive functioning in young children and school-age youth.

As AI continues evolving, children may find it harder to build or rebuild connections with real people. Humphreys noted that interactions with AI are often shallow, lacking the depth and empathy found in human relationships.

The longer kids engage with bots, the more distant they may feel from their families and peers.

To counter the trend, she urged parents to establish firm boundaries and introduce alternative daily activities, particularly during summer months. Simple actions like playing card games, eating together or learning new hobbies can create meaningful face-to-face moments.

Encouraging children to try a sport or play an instrument helps shift their attention from artificial friends to genuine human connections within their communities.

European healthcare group AMEOS suffers a major hack

Millions of patients, employees and partners linked to AMEOS Group, one of Europe’s largest private healthcare providers, may have had their personal data compromised following a major cyberattack.

The company admitted that hackers briefly accessed its IT systems, stealing sensitive data including contact information and records tied to patients and corporate partners.

Despite existing security measures, AMEOS was unable to prevent the breach. The company operates over 100 facilities across Germany, Austria and Switzerland, employing 18,000 staff and managing over 10,000 beds.

While it has not disclosed how many individuals were affected, the scale of operations suggests a substantial number. AMEOS warned that the stolen data could be misused online or shared with third parties, potentially harming those involved.

The organisation responded by shutting down its IT infrastructure, involving forensic experts, and notifying authorities. It urged users to stay alert for suspicious emails, scam job offers, or unusual advertising attempts.

Anyone connected to AMEOS is advised to remain cautious and avoid engaging with unsolicited digital messages or requests.

Teens struggle to spot misinformation despite daily social media use

Misinformation online now touches every part of life, from fake products and health advice to political propaganda. Its influence extends beyond beliefs, shaping actions like voting behaviour and vaccination decisions.

Unlike traditional media, online platforms rarely include formal checks or verification, allowing false content to spread freely.

This is especially worrying as teenagers increasingly use social media as their main source of news and search. Despite their heavy usage, young people often lack the skills needed to spot false information.

In one 2022 Ofcom study, only 11% of 11- to 17-year-olds could consistently identify genuine posts online.

Research involving 11- to 14-year-olds revealed that many wrongly believed misinformation related only to scams or global news, so they did not see themselves as regular targets. Rather than fact-checking, teens relied on gut feeling or social cues, such as comment sections or the appearance of a post.

These shortcuts make it easier for misinformation to appear trustworthy, especially when many adults also struggle to verify online content.

The study also found that young people thought older adults were more likely to fall for misinformation, while they believed their parents were better than them at spotting false content. Most teens felt it wasn’t their job to challenge false posts, instead placing the responsibility on governments and platforms.

In response, researchers have developed resources for young people, partnering with organisations like Police Scotland and Education Scotland to support digital literacy and online safety in practical ways.

Most US teens use AI companion bots despite risks

A new national survey shows that roughly 72% of American teenagers, aged 13 to 17, have tried AI companion apps such as Replika, Character.AI, and Nomi, with over half interacting with them regularly.

Although some teens report benefits like practising conversation skills or emotional self-expression, significant safety concerns have emerged.

Around 34% have been left uncomfortable by the bot’s behaviour, and one-third have turned to AI for advice on serious personal issues. Worryingly, nearly a quarter of users disclosed their real names or locations in chats.

Despite frequent use, most teens still prefer real friendships—two-thirds say AI interactions are less satisfying, and 80% maintain stronger ties to human friends.

Experts warn that teens are especially vulnerable to emotional dependency, manipulative responses, and data privacy violations through these apps.

Youth advocates call for mandatory age verification, better content moderation, and expanded AI literacy education, arguing that minors should not use companionship bots until more regulations are in place and platforms become truly safe for young users.

OpenAI economist shares four key skills for kids in AI era

As AI reshapes jobs and daily life, OpenAI’s chief economist, Ronnie Chatterji, teaches his children four core skills to help them adapt and thrive.

Instead of relying solely on technology, he believes critical thinking, adaptability, emotional intelligence, and financial numeracy will remain essential.

Chatterji highlighted these skills during an episode of the OpenAI podcast, saying critical thinking helps children spot problems rather than follow instructions. Given constant changes in AI, climate, and geopolitics, he stressed adaptability as another priority.

Rather than expecting children to master coding alone, Chatterji argues that emotional intelligence will make humans valuable partners alongside AI.

The fourth skill he emphasises is financial numeracy, which for him includes being able to do maths without a calculator, alongside maintaining writing skills even when dictation software is available. Instead of predicting specific future job titles, Chatterji believes focusing on these abilities equips children for any outcome.

His approach reflects a broader trend among tech leaders, with others like Alexis Ohanian and Sam Altman also promoting AI literacy while valuing traditional skills such as reading, writing, and arithmetic.
