US seizes $15 billion in crypto from Cambodia fraud ring

US federal prosecutors have seized $15 billion in cryptocurrency tied to a large-scale ‘pig butchering’ investment scam linked to forced labour compounds in Cambodia. Officials said it marks the biggest crypto forfeiture in Justice Department history.

Authorities charged Chinese-born businessman Chen Zhi, founder of the Prince Group, with money laundering and wire fraud. Chen allegedly used the conglomerate as cover for criminal operations that laundered billions through fake crypto investments. He remains at large.

Investigators say Chen and his associates operated at least ten forced labour sites in Cambodia, where coerced workers managed thousands of fake social media accounts used to lure targets into fraudulent investment schemes.

The US Treasury also imposed sanctions on dozens of Prince Group affiliates, calling them transnational criminal organisations. FBI officials said the scam is part of a wider wave of crypto fraud across Southeast Asia, urging anyone targeted by online investment offers to contact authorities immediately.

Students design app to support teen mental health

Six students from Blythe Bridge High School in Staffordshire are developing an app to help reduce mental health stigma among young people. Their project, called Mindful Mondays, was chosen as the winner of a national competition organised by the suicide prevention charity the Oli Leigh Trust.

The app aims to create a safe and supportive space where teenagers can talk anonymously about their mental health while completing small challenges designed to improve wellbeing. The team hopes it will encourage open conversations and promote positive habits in schools.

Student Sophie Hodgkinson said many young people struggle in silence due to stigma, while teammate Tilly Hyatt added that young creators understand their peers’ challenges better than adults. Their teacher praised the project as a positive step in addressing one of the biggest issues facing schools.

The Oli Leigh Trust said it hopes the app will inspire further innovation led by young people, empowering students to take an active role in supporting each other’s mental health. Development of Mindful Mondays in the UK is now under way.

An awards win for McAfee’s consumer-first AI defence

McAfee won ‘Best Use of AI in Cybersecurity’ at the 2025 A.I. Awards for its Scam Detector. The tool, which McAfee says is the first to automate deepfake, email, and text-scam detection, reflects the company’s consumer-first approach to defence, and the award recognises its bid to counter fast-evolving online fraud.

Scams are at record levels, with one in three US residents reporting victimisation and average losses of $1,500. Threats now range from fake job offers and text messages to AI-generated deepfakes, increasing the pressure on tools that can act in real time across channels.

McAfee’s Scam Detector uses advanced AI to analyse text, email, and video, blocking dangerous links and flagging deepfakes before they cause harm. It is included with core McAfee plans and available on PC, mobile, and web, positioning it as a default layer for everyday protection.

Adoption has been rapid, with the product crossing one million users in its first months, according to the company. Judges praised its proactive protection and emphasis on accuracy and trust, citing its potential to restore user confidence as AI-enabled deception becomes more sophisticated.

McAfee frames the award as validation of its responsible, consumer-first AI strategy. The company says it will expand Scam Detector’s capabilities while partnering with the wider ecosystem to keep users a step ahead of emerging threats, both online and offline.

Teen content on Instagram now guided by PG-13 standards

Instagram is aligning its Teen Accounts with PG-13 movie standards, aiming to ensure that users under 18 only see age-appropriate material. Teens will automatically be placed in a 13+ setting and will need parental permission to change it.

Parents who want tighter supervision can activate a new ‘Limited Content’ mode that filters out even more material and restricts comments and AI interactions.

The company reviewed its policies to match familiar parental guidelines, further limiting exposure to content with strong language, risky stunts, or references to substances. Teens will also be blocked from following accounts that share inappropriate content or contain suggestive names and bios.

Searches for sensitive terms such as ‘gore’ or ‘alcohol’ will no longer return results, and the same restrictions will extend to Explore, Reels, and AI chat experiences.

Instagram worked with thousands of parents worldwide to shape these policies, collecting more than three million content ratings to refine its protections. Surveys show strong parental support, with most saying the PG-13 system makes it easier to understand what their teens are likely to see online.

The updates begin rolling out in the US, UK, Australia, and Canada and will expand globally by the end of the year.

New YouTube tools provide trusted health advice for teens

YouTube is introducing a new shelf of mental health and wellbeing content designed specifically for teenagers. The feature will provide age-appropriate, evidence-based videos covering topics such as depression, anxiety, ADHD, and eating disorders.

Content is created in collaboration with trusted organisations and creators, including Black Dog Institute, ReachOut Australia, and Dr Syl, to ensure it is both reliable and engaging.

The initiative will initially launch in Australia, with plans to expand to the US, the UK, and Canada. Videos are tailored to teens’ developmental stage, offering practical advice, coping strategies, and medically informed guidance.

By providing credible information on a familiar platform, YouTube hopes to improve mental health literacy and reduce stigma among young users.

YouTube has implemented teen-specific safeguards for recommendations, content visibility, and advertising eligibility, making it easier for adolescents to explore their interests safely.

The company emphasises its commitment to helping teens access trustworthy resources and to supporting their wellbeing in a digital environment increasingly filled with misinformation.

UK government urges firms to keep paper backups for cyberattack recovery

The UK government has issued a strong warning to company leaders to prepare for cyber incidents by maintaining paper-based contingency plans. The National Cyber Security Centre (NCSC) emphasised that firms must plan how to continue operations and rebuild IT systems if networks are compromised.

The advice follows a series of high-profile cyberattacks this year targeting major UK firms, including Marks & Spencer, The Co-op, and Jaguar Land Rover, which experienced production halts and supply disruptions after their systems were breached.

According to NCSC chief executive Richard Horne, organisations need to adopt ‘resilience engineering’ strategies: systems designed to anticipate, absorb, recover from, and adapt to cyberattacks.

The agency recommends storing response plans offline and preparing fallback arrangements, such as phone trees and manual record-keeping, in case email systems fail.

While the total number of cyber incidents investigated by the NCSC, 429 in the first nine months of 2025, remained stable, the number of ‘nationally significant’ attacks more than doubled, from 89 to 204. These include Category 1–3 incidents, ranging from ‘significant’ to ‘national cyber emergency’.

Recent cases highlight the human and operational toll of such events, including a ransomware attack on a London blood testing provider last year that caused severe clinical disruption and contributed to at least one patient death.

Experts say the call for offline backups may sound old-fashioned but is pragmatic. ‘You wouldn’t walk onto a building site without a helmet, yet companies still go online without basic protection,’ said Graeme Stewart, head of public sector at Check Point. ‘Cybersecurity must be treated like health and safety: not optional, but essential.’

The government is also encouraging companies, particularly SMEs, to use the NCSC’s free support tools, including cyber insurance linked to its Cyber Essentials programme.

California introduces first AI chatbot safety law

California has become the first US state to regulate AI companion chatbots after Governor Gavin Newsom signed landmark legislation designed to protect children and vulnerable users. The new law, SB 243, holds companies legally accountable if their chatbots fail to meet new safety and transparency standards.

The legislation follows several tragic cases, including the death of a teenager who had reportedly discussed suicide with an AI chatbot. It also comes after leaked documents revealed that some AI systems allowed inappropriate exchanges with minors.

Under the new rules, firms must introduce age verification, self-harm prevention protocols, and warnings for users engaging with companion chatbots. Platforms must clearly state that conversations are AI-generated and are barred from presenting chatbots as healthcare professionals.

Major developers including OpenAI, Replika, and Character.AI say they are introducing stronger parental controls, content filters, and crisis support systems to comply. Lawmakers hope the move will inspire other states to adopt similar protections as AI companionship tools become increasingly popular.

Vodafone restores UK network after major outage

Vodafone says its nationwide network outage that left thousands across the UK without broadband and mobile data has been fully resolved. The disruption, which began on Monday afternoon, prompted more than 130,000 reports on Downdetector as customers lost internet access.

The company confirmed that a software error from one of its vendors had caused the problem but stressed it was not the result of a cyberattack. Vodafone apologised and said the network had fully recovered after engineers implemented fixes late on Monday night.

Industry experts warned that the outage highlighted the need for stronger digital resilience. Analysts said businesses relying on Vodafone likely suffered missed deadlines and financial losses, with many expected to seek compensation.

The fault also impacted UK customers of Voxi, Lebara, and Talkmobile, which operate on Vodafone’s infrastructure. Cloudflare data showed Vodafone traffic temporarily dropped to zero, effectively cutting the network off from the internet for over an hour.

A common EU layer for age verification without a single age limit

Denmark will push for EU-wide age-verification rules to avoid a patchwork of national systems. As holder of the Council presidency, Copenhagen is prioritising child protection online while keeping flexibility on national age limits. The aim is coordination without a single ‘digital majority’ age.

Ministers plan to give the European Commission a clear mandate for interoperable, privacy-preserving verification tools. An updated blueprint is being piloted in five member states and aligns with the EU Digital Identity Wallet, due by the end of 2026. The goal is seamless, cross-border checks with minimal data exposure.

Copenhagen’s domestic agenda is moving in parallel, with a proposed ban on social media use by under-15s. The government will consult national parties and EU partners on its scope and enforcement. Talks in Horsens, Denmark, signalled support for stronger safeguards and EU-level verification.

The emerging compromise separates ‘how to verify’ at the EU level from ‘what age to set’ at the national level. Proponents argue this avoids fragmentation while respecting domestic choices; critics warn implementation must minimise privacy risks and platform dependency.

Next steps include expanding pilots, formalising the Commission’s mandate, and publishing impact assessments. Clear standards on data minimisation, parental consent, and appeals will be vital. Affordable compliance for SMEs and independent oversight can sustain public trust.

EU nations back Danish plan to strengthen child protection online

EU countries have agreed to step up efforts to improve child protection online by supporting Denmark’s Jutland Declaration. The initiative, signed by 25 member states, focuses on strengthening existing EU rules that safeguard minors from harmful and illegal online content.

However, Denmark’s proposal to ban social media for children under 15 did not gain full backing, with several governments preferring other approaches.

The declaration highlights growing concern about young people’s exposure to inappropriate material and the addictive nature of online platforms.

It stresses the need for more reliable age verification tools and points to the upcoming Digital Fairness Act as an opportunity to introduce such safeguards. Ministers argued that the same protections that apply offline should also exist online, where risks for minors remain significant.

Danish officials believe stronger measures are essential to address declining wellbeing among young users. Some EU countries, including Germany, Spain and Greece, expressed support for tighter protections but rejected outright bans, calling instead for balanced regulation.

Meanwhile, the European Commission has asked major platforms such as Snapchat, YouTube, Apple and Google to provide details about their age verification systems under the Digital Services Act.

These efforts form part of a broader EU drive to ensure a safer digital environment for children, as investigations into online platforms continue across Europe.
