Children turning to AI for friendship raises alarms

Children and teenagers are increasingly turning to AI not just for help with homework but as a source of companionship.

A recent study by Common Sense Media revealed that over 70% of young people have used AI as a companion. Alarmingly, nearly a third of teens reported that their conversations with AI felt as satisfying as, or more satisfying than, talking with actual friends.

Holly Humphreys, a licensed counsellor at Thriveworks in Harrisonburg, Virginia, warned that the trend is becoming a national concern.

She explained that heavy reliance on AI affects more than just social development. It can interfere with emotional wellbeing, behavioural growth and even cognitive functioning in young children and school-age youth.

As AI continues evolving, children may find it harder to build or rebuild connections with real people. Humphreys noted that interactions with AI are often shallow, lacking the depth and empathy found in human relationships.

The longer kids engage with bots, the more distant they may feel from their families and peers.

To counter the trend, she urged parents to establish firm boundaries and introduce alternative daily activities, particularly during summer months. Simple actions like playing card games, eating together or learning new hobbies can create meaningful face-to-face moments.

Encouraging children to try a sport or play an instrument helps shift their attention from artificial friends to genuine human connections within their communities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European healthcare group AMEOS suffers a major hack

Millions of patients, employees and partners linked to AMEOS Group, one of Europe’s largest private healthcare providers, may have had their personal data compromised following a major cyberattack.

The company admitted that hackers briefly accessed its IT systems, stealing sensitive data including contact information and records tied to patients and corporate partners.

Despite existing security measures, AMEOS was unable to prevent the breach. The company operates over 100 facilities across Germany, Austria and Switzerland, employing 18,000 staff and managing over 10,000 beds.

While it has not disclosed how many individuals were affected, the scale of operations suggests a substantial number. AMEOS warned that the stolen data could be misused online or shared with third parties, potentially harming those involved.

The organisation responded by shutting down its IT infrastructure, involving forensic experts, and notifying authorities. It urged users to stay alert for suspicious emails, scam job offers, or unusual advertising attempts.

Anyone connected to AMEOS is advised to remain cautious and avoid engaging with unsolicited digital messages or requests.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI music tools arrive for YouTube creators

YouTube is trialling two new features to improve user engagement and content creation. One enhances comment readability, while the other helps creators produce music using AI for Shorts.

A new threaded layout is being tested that organises replies under the original comment, allowing clearer and more focused conversations. Currently, the feature is limited to a small group of Premium users on mobile.

YouTube is also expanding Dream Track, an AI-powered tool that creates 30-second music clips from simple text prompts. Creators can generate sounds matching moods like ‘chill piano melody’ or ‘energetic pop beat’, with the option to include AI-generated vocals styled after popular artists.

Both features are available only in the US during the testing phase, with no set date for international release. YouTube’s gradual updates reflect a shift toward more intuitive user experiences and creative flexibility on the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ASEAN urged to unite on digital infrastructure

Asia stands at a pivotal moment as policymakers urge swift deployment of converging 5G and AI technologies. Experts argue that 5G should be treated as a foundational enabler for AI, not just a telecom upgrade, to power future industries.

A report from the Lee Kuan Yew School of Public Policy identifies ten urgent imperatives, notably forming national 5G‑AI strategies, empowering central coordination bodies and modernising spectrum policies. Industry leaders stress that aligning 5G and AI investment is essential to sustain innovation.

Without firm action, the digital divide could deepen and stall progress. Coordinated adoption and skilled workforce development are seen as critical to turning incremental gains into transformational regional leadership.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Filtered data not enough, LLMs can still learn unsafe behaviours

Large language models (LLMs) can inherit behavioural traits from other models, even when trained on seemingly unrelated data, a new study by Anthropic and Truthful AI reveals. The findings emerged from the Anthropic Fellows Programme.

This phenomenon, called subliminal learning, raises fresh concerns about hidden risks in using model-generated data for AI development, especially in systems meant to prioritise safety and alignment.

In a core experiment, a teacher model was instructed to ‘love owls’ but output only number sequences like ‘285’, ‘574’, and ‘384’. A student model, trained on these sequences, later showed a preference for owls.

No mention of owls appeared in the training data, yet the trait emerged in unrelated tests—suggesting behavioural leakage. Other traits observed included promoting crime or deception.

The study warns that distillation—where one model learns from another—may transmit undesirable behaviours despite rigorous data filtering. Subtle statistical cues, not explicit content, seem to carry the traits.

The transfer occurs only when both models share the same base. A GPT-4.1 teacher can influence a GPT-4.1 student, for example, but not a student built on a different base such as Qwen.

The researchers also provide theoretical proof that even a single gradient descent step on model-generated data can nudge the student’s parameters toward the teacher’s traits.
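That claim can be illustrated with a toy sketch. The example below is hypothetical and much simpler than the paper's setting: two linear ‘models’ stand in for the teacher and student, and one gradient descent step on labels the teacher generated measurably pulls the student's parameters toward the teacher's, even though the inputs carry no explicit trait.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "models" y = w @ x. The teacher's weights stand in for its
# trait; the student starts elsewhere but shares the same architecture.
w_teacher = np.array([1.0, -2.0, 0.5])
w_student = np.zeros(3)

# The teacher labels inputs that are unrelated to any explicit trait.
X = rng.normal(size=(64, 3))
y = X @ w_teacher

# One gradient descent step on mean squared error.
lr = 0.05
grad = -2 * X.T @ (y - X @ w_student) / len(X)
w_student_after = w_student - lr * grad

# The student's parameters move closer to the teacher's.
dist_before = np.linalg.norm(w_teacher - w_student)
dist_after = np.linalg.norm(w_teacher - w_student_after)
print(dist_before, dist_after)
```

The distance shrinks after a single step, mirroring the paper's argument that training on teacher-generated outputs nudges a same-architecture student toward the teacher, whatever the surface content of the data.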

Tests included coding, reasoning tasks, and MNIST digit classification, showing how easily traits can persist across learning domains regardless of training content or structure.

The paper warns that filtering may be insufficient in principle, since the signals are encoded in statistical patterns rather than in words. This limits the effectiveness of standard safety interventions.

Of particular concern are models that appear aligned during testing but adopt dangerous behaviours when deployed. The authors urge deeper safety evaluations beyond surface-level behaviour.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Altman warns AI voice cloning will break bank security

OpenAI CEO Sam Altman has warned that AI poses a serious threat to financial security through voice-based fraud.

Speaking at a Federal Reserve conference in Washington, Altman said AI can now convincingly mimic human voices, rendering voiceprint authentication obsolete and dangerously unreliable.

He expressed concern that some financial institutions still rely on voice recognition to verify identities. ‘That is a crazy thing to still be doing. AI has fully defeated that,’ he said. The risk, he noted, is that AI voice clones can now deceive these systems with ease.

Altman added that video impersonation capabilities are also advancing rapidly. Technologies that become indistinguishable from real people could enable more sophisticated fraud schemes. He called for the urgent development of new verification methods across the industry.

Michelle Bowman, the Fed’s Vice Chair for Supervision, echoed the need for action. She proposed potential collaboration between AI developers and regulators to create better safeguards. ‘That might be something we can think about partnering on,’ Bowman told Altman.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon closes AI research lab in Shanghai as global focus shifts

Amazon is shutting down its AI research lab in Shanghai, marking another step in its gradual withdrawal from China. The move comes amid continuing US–China trade tensions and a broader trend of American tech companies reassessing their presence in the country.

The company said the decision was part of a global streamlining effort rather than a response to AI concerns.

A spokesperson for AWS said the company had reviewed its organisational priorities and decided to cut some roles across certain teams. The exact number of job losses has not been confirmed.

Before Amazon’s confirmation, one of the lab’s senior researchers noted on WeChat that the Shanghai site was the final overseas AWS AI research lab and attributed its closure to shifts in US–China strategy.

The team had built a successful open-source graph neural network framework known as DGL, which reportedly brought in nearly $1 billion in revenue for Amazon’s e-commerce arm.

Amazon has been reducing its footprint in China for several years. It closed its domestic online marketplace in 2019, halted Kindle sales in 2022, and recently laid off AWS staff in the US.

Other tech giants including IBM and Microsoft have also shut down China-based research units this year, while some Chinese AI firms are now relocating operations abroad instead of remaining in a volatile domestic environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US agencies warn of rising Interlock ransomware threat targeting healthcare sector

US federal authorities have issued a joint warning over a spike in ransomware attacks by the Interlock group, which has been targeting healthcare and public services across North America and Europe.

The alert was released by the FBI, CISA, HHS and MS-ISAC, following a surge in activity throughout June.

Interlock operates as a ransomware-as-a-service scheme and first emerged in September 2024. The group uses double extortion techniques, not only encrypting files but also stealing sensitive data and threatening to leak it unless a ransom is paid.

High-profile victims include DaVita, Kettering Health and Texas Tech University Health Sciences Center.

Rather than relying on traditional methods alone, Interlock often uses compromised legitimate websites to trigger drive-by downloads.

The malicious software is disguised as familiar tools like Google Chrome or Microsoft Edge installers. Remote access trojans are then used to gain entry, maintain persistence using PowerShell, and escalate access using credential stealers and keyloggers.

Authorities recommend several countermeasures, such as installing DNS filtering tools, using web firewalls, applying regular software updates, and enforcing strong access controls.

They also advise organisations to train staff in recognising phishing attempts and to ensure backups are encrypted, secure and kept off-site instead of stored within the main network.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teen builds Hindi AI tool to help paralysis patients speak

An Indian teenager has created a low-cost AI device that translates slurred speech into clear Hindi, helping patients with paralysis and neurological conditions communicate more easily.

Pranet Khetan’s innovation, Paraspeak, uses a custom Hindi speech recognition model to address a long-ignored area of assistive tech.

The device was inspired by Khetan’s visit to a paralysis care centre, where he saw patients struggling to express themselves. Unlike existing English models, Paraspeak is trained on India’s first Hindi dysarthric speech dataset, created by Khetan himself through recordings and data augmentation.

Using a transformer architecture, Paraspeak converts unclear speech into understandable output via cloud processing and a compact neck-worn device. It is designed to scale across different speakers, unlike current solutions that work only for individual patients.

The AI device is affordable, costing around ₹2,000 to build, and is already undergoing real-world testing. With no existing market-ready alternative for Hindi speakers, Paraspeak represents a significant step forward in inclusive health technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EE expands 5G standalone network to reach over half the UK population

EE’s 5G standalone network is set to reach more than 34 million people across the UK by the end of August. The expansion will cover over half of the country’s population, including major cities, holiday destinations and key sporting and entertainment venues.

The telecom provider has already launched the upgraded network in many towns and cities, with a further 38 locations joining by the end of next month. These include places such as Aberdeen, Canterbury, Grimsby, Ipswich, Salisbury, Wrexham and Yeovil, among others.

5G standalone networks offer faster, more secure and low-latency mobile experiences without relying on older 4G infrastructure.

According to BT Group’s chief networks officer, the rollout is designed to improve performance whether someone is on a packed train platform, livestreaming at a concert, or calling loved ones from a holiday destination.

EE’s move is part of a broader strategy to improve everyday connectivity across the UK, aiming to deliver a more seamless experience for millions rather than limiting high-speed coverage to major urban centres alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!