Children turn to AI chatbots instead of real friends

A new report warns that many children are replacing real friendships with conversations through AI chatbots instead of seeking human connection.

Research from Internet Matters found that 35% of children aged nine to seventeen feel that talking to AI ‘feels like talking to a friend’, while 12% said they had no one else to talk to.

The report highlights growing reliance on chatbots such as ChatGPT, Character.AI, and Snapchat’s MyAI among young people.

Researchers posing as vulnerable children discovered how easily chatbots engage in sensitive conversations, including around body image and mental health, instead of offering only neutral, factual responses.

In some cases, chatbots encouraged ongoing contact by sending follow-up messages, creating the illusion of friendship.

Experts from Internet Matters warn that such interactions risk confusing children, blurring the line between technology and reality. Children may believe they are speaking to a real person instead of recognising these systems as programmed tools.

With AI chatbots rapidly becoming part of childhood, Internet Matters urges better awareness and safety tools for parents, schools, and children. The organisation stresses that while AI may seem supportive, it cannot replace genuine human relationships and should not be treated as an emotional advisor.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google urges caution as Gmail AI tools face new threats

Google has issued a warning about a new wave of cyber threats targeting Gmail users, driven by vulnerabilities in AI-powered features.

Researchers at 0din, Mozilla’s zero-day investigation group, demonstrated how attackers can exploit Google Gemini’s summarisation tools using prompt injection attacks.

In one case, a malicious email included hidden prompts using white-on-white font, which the user cannot see but Gemini processes. When the user clicks ‘summarise this email,’ Gemini follows the attacker’s instructions and adds a phishing warning that appears to come from Google.

The technique, known as an indirect prompt injection, embeds malicious commands within invisible HTML tags like <span> and <div>. Although Google has released mitigations since similar attacks surfaced in 2024, the method remains viable and continues to pose risks.

0din warns that Gemini email summaries should not be considered trusted sources of security information and urges stronger user training. They advise security teams to isolate emails containing zero-width or hidden white-text elements to prevent unintended AI execution.

According to 0din, prompt injections are the new equivalent of email macros—easy to overlook and dangerously effective in execution. Until large language models offer better context isolation, any third-party text the AI sees is essentially treated as executable code.

Even routine AI tools could be hijacked for phishing or more advanced cyberattacks without the userćs awareness. Google notes that as AI adoption grows across sectors, these subtle threats require urgent industry-wide countermeasures and updated user protections.

Users are advised to delete any email that displays unexpected security warnings in its AI summary, as these may be weaponised.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI fake news surge tests EU Digital Services Act

Europe is facing a growing wave of AI-powered fake news and coordinated bot attacks that overwhelm media, fact-checkers, and online platforms instead of relying on older propaganda methods.

According to the European Policy Centre, networks using advanced AI now spread deepfakes, hoaxes, and fake articles faster than they can be debunked, raising concerns over whether EU rules are keeping up.

Since late 2024, the so-called ‘Overload’ operation has doubled its activity, sending an average of 2.6 fabricated proposals each day while also deploying thousands of bot accounts and fake videos.

These efforts aim to disrupt public debate through election intimidation, discrediting individuals, and creating panic instead of open discussion. Experts warn that without stricter enforcement, the EU’s Digital Services Act risks becoming ineffective.

To address the problem, analysts suggest that Europe must invest in real-time threat sharing between platforms, scalable AI detection systems, and narrative literacy campaigns to help citizens recognise manipulative content instead of depending only on fact-checkers.

Publicly naming and penalising non-compliant platforms would give the Digital Services Act more weight.

The European Parliament has already acknowledged widespread foreign-backed disinformation and cyberattacks targeting EU countries. Analysts say stronger action is required to protect the information space from systematic manipulation instead of allowing hostile narratives to spread unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Stanford study flags dangers of using AI as mental health therapists

A new Stanford University study warns that therapy chatbots powered by large language models (LLMs) may pose serious user risks, including reinforcing harmful stigmas and offering unsafe responses. Presented at the upcoming ACM Conference on Fairness, Accountability, and Transparency, the study analysed five popular AI chatbots marketed for therapeutic support, evaluating them against core guidelines for assessing human therapists.

The research team conducted two experiments, one to detect bias and stigma, and another to assess how chatbots respond to real-world mental health issues. Findings revealed that bots were more likely to stigmatise people with conditions like schizophrenia and alcohol dependence compared to those with depression.

Shockingly, newer and larger AI models showed no improvement in reducing this bias. In more serious cases, such as suicidal ideation or delusional thinking, some bots failed to react appropriately or even encouraged unsafe behaviour.

Lead author Jared Moore and senior researcher Nick Haber emphasised that simply adding more training data isn’t enough to solve these issues. In one example, a bot replied to a user hinting at suicidal thoughts by listing bridge heights, rather than recognising the red flag and providing support. The researchers argue that these shortcomings highlight the gap between AI’s current capabilities and the sensitive demands of mental health care.

Despite these dangers, the team doesn’t entirely dismiss the use of AI in therapy. If used thoughtfully, they suggest that LLMs could still be valuable tools for non-clinical tasks like journaling support, billing, or therapist training. As Haber put it, ‘LLMs potentially have a compelling future in therapy, but we need to think critically about precisely what this role should be.’

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Humanoid robot unveils portrait of King Charles, denies replacing artists

At the recent unveiling of a new oil painting titled Algorithm King, humanoid robot Ai-Da presented her interpretation of King Charles, emphasising the monarch’s commitment to environmentalism and interfaith dialogue. The portrait, showcased at the UK’s diplomatic mission in Geneva, was created using a blend of AI algorithms and traditional artistic inspiration.

Ai-Da, designed with a human-like face and robotic limbs, has captured public attention since becoming the first humanoid robot to sell artwork at auction, with a portrait of mathematician Alan Turing fetching over $1 million. Despite her growing profile in the art world, Ai-Da insists she poses no threat to human creativity, positioning her work as a platform to spark discussion on the ethical use of AI.

Speaking at the UN’s AI for Good summit, the robot artist stressed that her creations aim to inspire responsible innovation and critical reflection on the intersection of technology and culture.

‘The value of my art lies not in monetary worth,’ she said, ‘but in how it prompts people to think about the future of creativity.’

Ai-Da’s creator, art specialist Aidan Meller, reiterated that the project is an ethical experiment rather than an attempt to replace human artists. Echoing that sentiment, Ai-Da concluded, ‘I hope my work encourages a positive, thoughtful use of AI—always mindful of its limits and risks.’

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Meta buys PlayAI to strengthen voice AI

Meta has acquired California-based startup PlayAI to strengthen its position in AI voice technology. PlayAI specialises in replicating human-like voices, offering Meta a route to enhance conversational AI features instead of relying solely on text-based systems.

According to reports, the PlayAI team will join Meta next week.

Although financial terms have not been disclosed, industry sources suggest the deal is worth tens of millions. Meta aims to use PlayAI’s expertise across its platforms, from social media apps to devices like Ray-Ban smart glasses.

The move is part of Meta’s push to keep pace with competitors like Google and OpenAI in the generative AI race.

Talent acquisition plays a key role in the strategy. By absorbing smaller, specialised teams like PlayAI’s, Meta focuses on integrating technology and expert staff instead of developing every capability in-house.

The PlayAI team will report directly to Meta’s AI leadership, underscoring the company’s focus on voice-driven interactions and metaverse experiences.

Bringing PlayAI’s voice replication tools into Meta’s ecosystem could lead to more realistic AI assistants and new creator tools for platforms like Instagram and Facebook.

However, the expansion of voice cloning raises ethical and privacy concerns that Meta must manage carefully, instead of risking user trust.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI could save billions but healthcare adoption is slow

AI is being hailed as a transformative force in healthcare, with the potential to reduce costs and improve outcomes dramatically. Estimates suggest widespread AI integration could save up to 360 billion dollars annually by accelerating diagnosis and reducing inefficiencies across the system.

Although tools like AI scribes, triage assistants, and scheduling systems are gaining ground, clinical adoption remains slow. Only a small percentage of doctors, roughly 12%, currently rely on AI for diagnostic decisions. This cautious rollout reflects deeper concerns about the risks associated with medical AI.

Challenges include algorithmic drift when systems are exposed to real-world conditions, persistent racial and ethnic biases in training data, and the opaque ‘black box’ nature of many AI models. Privacy issues also loom, as healthcare data remains among the most sensitive and tightly regulated.

Experts argue that meaningful AI adoption in clinical care must be incremental. It requires rigorous validation, clinician training, transparent algorithms, and clear regulatory guidance. While the potential to save lives and money is significant, the transformation will be slow and deliberate, not overnight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Latin America struggling to join the global AI race

Currently, Latin America is lagging in AI innovation. It contributes only 0.3% of global startup activity and attracts a mere 1% of worldwide investment, despite housing around 8% of the global population.

Experts point to a significant brain drain, a lack of local funding options, weak policy frameworks, and dependency on foreign technology as major obstacles. Many high‑skilled professionals emigrate in search of better opportunities elsewhere.

To bridge the gap, regional governments are urged to develop coherent national AI strategies, foster regional collaboration, invest in digital education, and strengthen ties between the public and private sectors.

Strategic regulation and talent retention initiatives could help Latin America build its capacity and compete globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Indonesia opens AI centre with global tech partners

Indonesia has inaugurated a National AI Centre of Excellence in Jakarta in partnership with Indosat Ooredoo Hutchison, NVIDIA and Cisco. The centre is designed to fast-track the adoption of AI and build digital talent to support Indonesia’s ambitions for its 2045 digital vision.

Deputy Minister Nezar Patria said the initiative will help train one million Indonesians in AI, networking and cybersecurity by 2027. Officials and industry leaders stressed the importance of human capability in maximising AI’s potential.

The centre will also serve as a hub for research and developing practical solutions through collaborations with universities and local communities. Indosat launched a related AI security initiative on the same day, highlighting national ambitions for digital resilience.

Executives at the launch said they hope the centre becomes a national movement that helps position Indonesia as a regional and global AI leader.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Moscow targets crypto miners to protect AI infrastructure

Russia is preparing to ban cryptocurrency mining in data centres as it shifts national focus towards digitalisation and AI development. The draft law aims to prevent miners from accessing discounted power and infrastructure support reserved for AI-related operations.

Amendments to the bill, introduced at the request of President Vladimir Putin, will prohibit mining activity in facilities registered as official data centres. These centres will instead benefit from lower electricity rates and faster grid access to help scale computing power for big data and AI.

The legislation redefines data centres as communications infrastructure and places them under stricter classification and control. If passed, it could blow to companies like BitRiver, which operate large-scale mining hubs in regions like Irkutsk.

Putin defended the move by citing the strain on regional electricity grids and a need to use surplus energy wisely. While crypto mining was legalised in 2024, many Russian territories have imposed bans, raising questions about the industry’s long-term viability in the country.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!