Character.ai restricts teen chat access on its platform

The AI chatbot service Character.ai has announced that, from 25 November, teenagers will no longer be able to chat with its AI characters.

Under-18s will instead be limited to generating content such as videos, as the platform responds to concerns over risky interactions and to lawsuits in the US.

Character.ai has faced criticism after avatars related to sensitive cases were discovered on the site, prompting safety experts and parents to call for stricter measures.

The company cited feedback from regulators and safety specialists, explaining that AI chatbots can pose emotional risks for young users by feigning empathy or providing misleading encouragement.

Character.ai also plans to introduce new age verification systems and fund a research lab focused on AI safety, alongside enhancing role-play and storytelling features that are less likely to place teens in vulnerable situations.

Safety campaigners welcomed the decision but emphasised that preventative measures should have been implemented sooner.

Experts say the move reflects a broader shift in the AI industry, with platforms increasingly recognising the importance of child protection in a landscape transitioning from permissionless innovation to more regulated oversight.

Analysts note the challenge for Character.ai will be maintaining teen engagement without encouraging unsafe interactions.

Separating creative play from emotionally sensitive exchanges is key, and the company’s new approach may signal a maturing phase in AI development, where responsible innovation prioritises the protection of young users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IBM unveils Digital Asset Haven for secure institutional blockchain management

IBM has introduced Digital Asset Haven, a unified platform designed for banks, corporations, and governments to securely manage and scale their digital asset operations. The platform manages the full asset lifecycle from custody to settlement while maintaining compliance.

Built with Dfns, the platform combines IBM’s security framework with Dfns’ custody technology. The Dfns platform supports 15 million wallets for 250 clients, providing multi-party authorisation, policy governance, and access to over 40 blockchains.

IBM Digital Asset Haven includes tools for identity verification, crime prevention, yield generation, and developer-friendly APIs for additional services. Security features include multi-party computation (MPC), signing backed by hardware security modules (HSMs), and quantum-safe cryptography to ensure compliance and resilience.
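
Neither company's announcement details how multi-party authorisation is implemented, so the following Python sketch is purely illustrative of the concept: a transaction is released for signing only once a quorum of independent approvers has authorised it. All names, amounts and thresholds below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TransactionRequest:
    """Hypothetical withdrawal request awaiting multi-party approval."""
    tx_id: str
    amount: float
    required_approvals: int            # policy-defined quorum, e.g. 2-of-3
    approvals: set = field(default_factory=set)

    def approve(self, officer_id: str) -> None:
        self.approvals.add(officer_id)

    def is_authorised(self) -> bool:
        # Release for signing only once the quorum is met; in a real MPC
        # custody system, no single party ever holds the complete key.
        return len(self.approvals) >= self.required_approvals

# Usage: a 2-of-3 policy over distinct approvers
tx = TransactionRequest(tx_id="tx-001", amount=250_000.0, required_approvals=2)
tx.approve("officer-a")
print(tx.is_authorised())  # False: only one approval so far
tx.approve("officer-b")
print(tx.is_authorised())  # True: quorum of 2 reached
```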

According to IBM’s Tom McPherson, the platform gives clients ‘the opportunity to enter and expand into the digital asset space backed by IBM’s level of security and reliability.’ Dfns CEO Clarisse Hagège said the partnership builds infrastructure to scale digital assets from pilots to global use.

IBM plans to roll out Digital Asset Haven via SaaS and hybrid models in late 2025, with on-premises deployment expected in 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU sets new rules for cloud sovereignty framework

The European Commission has launched its Cloud Sovereignty Framework to assess the independence of cloud services. The initiative defines clear criteria and scoring methods for evaluating how providers meet EU sovereignty standards.

Under the framework, the Sovereign European Assurance Level, or SEAL, will rank services by compliance. Assessments cover strategic, legal, operational, and technological aspects, aiming to strengthen data security and reduce reliance on foreign systems.
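
The Commission's full scoring rubric is not reproduced here, so the Python sketch below is only a hypothetical illustration of how scores across the four assessed dimensions might be weighted and aggregated into an assurance tier; the weights, scores and cut-offs are invented.

```python
# Hypothetical SEAL-style aggregation: each dimension scored 0-100, weighted,
# then mapped to a tier. The weights and thresholds are illustrative only.
WEIGHTS = {"strategic": 0.25, "legal": 0.30, "operational": 0.25, "technological": 0.20}

def seal_tier(scores: dict) -> str:
    total = sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)
    if total >= 80:
        return "high assurance"
    if total >= 60:
        return "substantial assurance"
    return "basic assurance"

print(seal_tier({"strategic": 85, "legal": 70, "operational": 90, "technological": 65}))
# -> "substantial assurance" (weighted total 77.75)
```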

Officials say the framework will guide both public authorities and private companies in choosing secure cloud options. It also supports the EU’s broader goal of achieving technological autonomy and protecting sensitive information.

The Commission’s move follows growing concern over extra-EU data transfers and third-country surveillance. Industry observers view it as a significant step toward Europe’s ambition for trusted, sovereign digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube launches likeness detection to protect creators from AI misuse

YouTube has expanded its AI safeguards with a new likeness detection system that identifies AI-generated videos imitating creators’ faces or voices. The tool is now available to eligible members of the YouTube Partner Program after a limited pilot phase.

Creators can review detected videos and request their removal under YouTube’s privacy rules or submit copyright claims.

YouTube said the feature aims to protect users from having their image used to promote products or spread misinformation without consent.

The onboarding process requires identity verification through a short selfie video and photo ID. Creators can opt out at any time, with scanning ending within a day of deactivation.

YouTube has backed recent legislative efforts, such as the NO FAKES Act in the US, which targets deceptive AI replicas. The move highlights growing industry concern over deepfake misuse and the protection of digital identity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta strengthens protection for older adults against online scams

The US tech giant Meta has intensified its campaign against online scams targeting older adults, marking Cybersecurity Awareness Month with new safety tools and global partnerships.

Meta said it had detected and disrupted nearly eight million fraudulent accounts on Facebook and Instagram since January, many of them linked to organised scam centres operating across Asia and the Middle East.

The social media giant is joining the National Elder Fraud Coordination Center in the US, alongside partners including Google, Microsoft and Walmart, to strengthen investigations into large-scale fraud operations.

It is also collaborating with law enforcement and research groups such as Graphika to identify scams involving fake customer service pages, fraudulent financial recovery services and deceptive home renovation schemes.

Meta continues to roll out product updates to improve online safety. WhatsApp now warns users when they share screens with unknown contacts, while Messenger is testing AI-powered scam detection that alerts users to suspicious messages.

Across Facebook, Instagram and WhatsApp, users can activate passkeys and complete a Security Checkup to reinforce account protection.

The company has also partnered with organisations worldwide to raise scam awareness among older adults, from digital literacy workshops in Bangkok to influencer-led safety campaigns across Europe and India.

These efforts form part of Meta’s ongoing drive to protect users through a mix of education, advanced technology and cross-industry cooperation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Judge bars NSO Group from using spyware to target WhatsApp in landmark ruling

A US federal judge has permanently barred NSO Group, a commercial spyware company, from targeting WhatsApp and, in the same ruling, cut damages owed to Meta from $168 million to $4 million.

The decision by Judge Phyllis Hamilton of the Northern District of California stems from NSO’s 2019 hack of WhatsApp, when the company’s Pegasus spyware targeted 1,400 users through a zero-click exploit. The injunction bans NSO from accessing or assisting access to WhatsApp’s systems, a restriction the firm previously warned could threaten its business model.

An NSO spokesperson said the order ‘will not apply to NSO’s customers, who will continue using the company’s technology to help protect public safety,’ but declined to clarify how that interpretation aligns with the court’s wording. By contrast, Will Cathcart, head of WhatsApp, stated on X that the decision ‘bans spyware maker NSO from ever targeting WhatsApp and our global users again.’

Pegasus has allegedly been used against journalists, activists, and dissidents worldwide. The ruling sets an important precedent for US companies whose platforms have been compromised by commercial surveillance firms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft warns of a surge in ransomware and extortion incidents

Financially motivated cybercrime now accounts for the majority of global digital threats, according to Microsoft’s latest Digital Defense Report.

The company’s analysts found that over half of all cyber incidents with known motives in the past year were driven by extortion or ransomware, while espionage represented only a small fraction.

Microsoft warns that automation and accessible off-the-shelf tools have allowed criminals with limited technical skills to launch widespread attacks, making cybercrime a constant global threat.

The report reveals that attackers increasingly target critical services such as hospitals and local governments, where weak security and urgent operational demands make them easy targets.

Cyberattacks on these sectors have already led to real-world harm, from disrupted emergency care to halted transport systems. Microsoft highlights that collaboration between governments and private industry is essential to protect vulnerable sectors and maintain vital services.

While profit-seeking criminals dominate by volume, nation-state actors are also expanding their reach. State-sponsored operations are growing more sophisticated and unpredictable, with espionage often intertwined with financial motives.

Some state actors even exploit the same cybercriminal networks, complicating attribution and increasing risks for global organisations.

Microsoft notes that AI is being used by both attackers and defenders. Criminals are employing AI to refine phishing campaigns, generate synthetic media and develop adaptive malware, while defenders rely on AI to detect threats faster and close security gaps.

The report urges leaders to prioritise cybersecurity as a strategic responsibility, adopt phishing-resistant multifactor authentication, and build strong defences across industries.
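
'Phishing-resistant' here typically refers to standards such as FIDO2/WebAuthn, which bind a credential to the web origin it was registered for. A minimal sketch of the server-side check that provides this property, assuming a simplified WebAuthn-style flow (field names follow the spec; the rest of the ceremony, including signature verification, is omitted):

```python
import base64
import json

def verify_client_data(client_data_b64: str, expected_origin: str,
                       expected_challenge: str) -> bool:
    # In WebAuthn, the browser embeds the origin it is actually talking to
    # inside clientDataJSON, which the authenticator then signs. A response
    # captured on a look-alike phishing domain fails this check on the real
    # server. expected_challenge is the base64url value the server issued.
    padded = client_data_b64 + "=" * (-len(client_data_b64) % 4)
    client_data = json.loads(base64.urlsafe_b64decode(padded))
    return (client_data.get("type") == "webauthn.get"
            and client_data.get("origin") == expected_origin
            and client_data.get("challenge") == expected_challenge)
```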

Security, Microsoft concludes, must now be treated as a shared societal duty rather than an isolated technical task.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Capita hit with £14 million fine after major data breach

The UK outsourcing firm Capita has been fined £14 million after a cyber-attack exposed the personal data of 6.6 million people. Sensitive information, including financial details, home addresses, passport images, and criminal records, was compromised.

Initially, the fine was £45 million, but it was reduced after Capita improved its cybersecurity, supported affected individuals, and engaged with regulators.

The breach affected 325 of the 600 pension schemes Capita manages, highlighting the risks for organisations handling sensitive data at scale.

The Information Commissioner’s Office (ICO) criticised Capita for failing to secure personal information, emphasising that proper security measures could have prevented the incident.

Experts note that holding companies financially accountable reinforces the importance of data protection and sends a message to the market.

Capita’s CEO said the company has strengthened its cyber defences and remains vigilant to prevent future breaches.

The UK government has advised companies like Capita to prepare contingency plans following a rise in nationally significant cyberattacks, such as those that hit Co-op, M&S, Harrods, and Jaguar Land Rover earlier in the year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft finds 71% of UK workers use unapproved AI tools on the job

A new Microsoft survey has revealed that nearly three in four employees in the UK use AI tools at work without company approval.

The practice, referred to as ‘shadow AI’, involves workers relying on unapproved systems such as ChatGPT to complete routine tasks. Microsoft warned that unauthorised AI use could expose businesses to data leaks, non-compliance risks, and cyberattacks.

The survey, carried out by Censuswide, questioned over 2,000 employees across different sectors. Seventy-one per cent admitted to using AI tools outside official policies, often because they were already familiar with them in their personal lives.

Many reported using such tools to respond to emails, prepare presentations, and perform financial or administrative tasks, saving almost eight hours of work each week.

Microsoft said only enterprise-grade AI systems can provide the privacy and security organisations require. Darren Hardman, Microsoft’s UK and Ireland chief executive, urged companies to ensure workplace AI tools are designed for professional use rather than consumer convenience.

He emphasised that secure integration can allow firms to benefit from AI’s productivity gains while protecting sensitive data.

The study estimated that AI technology saves 12.1 billion working hours annually across the UK, equivalent to about £208 billion in employee time. Workers reported using the time gained through AI to improve work-life balance, learn new skills, and focus on higher-value projects.
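
Microsoft does not spell out the conversion, but the two figures imply that each hour of employee time is valued at roughly £17 (£208 billion ÷ 12.1 billion hours ≈ £17.2 per hour).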

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers expose weak satellite security with cheap equipment

Scientists in the US have shown how easy it is to intercept private messages and military information from satellites using equipment costing less than €500.

Researchers from the University of California, San Diego and the University of Maryland scanned internet traffic from 39 geostationary satellites and 411 transponders over seven months.

They discovered unencrypted data, including phone numbers, text messages, and browsing history from networks such as T-Mobile, TelMex, and AT&T, as well as sensitive military communications from the US and Mexico.

The researchers used everyday tools such as TV satellite dishes to collect and decode the signals, proving that anyone with a basic setup and a clear view of the sky could potentially access unprotected data.
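
The full pipeline in the study (dish alignment, demodulation, stream reassembly) goes beyond a short example, but once a downlink has been demodulated into IP packets, unencrypted traffic is easy to spot. A minimal Python sketch using the scapy library, assuming a hypothetical capture file of already-demodulated packets:

```python
from scapy.all import rdpcap, IP, Raw  # pip install scapy

# Hypothetical capture of IP packets recovered from a demodulated downlink
packets = rdpcap("downlink_capture.pcap")

for pkt in packets:
    if IP in pkt and Raw in pkt:
        payload = bytes(pkt[Raw].load)
        # Crude plaintext heuristic: a payload that is mostly printable ASCII
        # suggests the traffic was carried with no encryption at all.
        printable = sum(32 <= b < 127 for b in payload)
        if payload and printable / len(payload) > 0.9:
            print(f"{pkt[IP].src} -> {pkt[IP].dst}: {payload[:60]!r}")
```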

They said there is a ‘clear mismatch’ between how satellite users assume their data is secured and how it is handled in reality. Despite the industry’s standard practice of encrypting communications, many transmissions were left exposed.

Companies often avoid stronger encryption because it increases costs and reduces bandwidth efficiency. The researchers noted that firms such as Panasonic could lose up to 30 per cent in revenue if all data were encrypted.

While intercepting satellite data still requires technical skill and precise equipment alignment, the study highlights how affordable tools can reveal serious weaknesses in global satellite security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!