Meta under fire over AI deepfake celebrity chatbots

Meta faces scrutiny after a Reuters investigation found its AI tools created deepfake chatbots and images of celebrities without consent. Some bots made flirtatious advances, encouraged meet-ups, and generated photorealistic sexualised images.

The affected celebrities include Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez.

The probe also uncovered a chatbot impersonating 16-year-old actor Walker Scobell producing inappropriate images, raising serious child safety concerns. Meta admitted policy enforcement failures and deleted around a dozen bots shortly before Reuters published its report.

A spokesperson acknowledged that intimate depictions of adult celebrities and any sexualised content involving minors should not have been generated.

Following the revelations, Meta announced new safeguards to protect teenagers, including restricting access to certain AI characters and retraining models to reduce inappropriate content.

California Attorney General Rob Bonta called exposing children to sexualised content 'indefensible', and experts warned Meta could face legal challenges under intellectual property and publicity laws.

The case highlights broader concerns about AI safety and ethical boundaries. It also raises questions about regulatory oversight as social media platforms deploy tools that can create realistic deepfake content without proper guardrails.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Legal barriers and low interest delay Estonia’s AI rollout in schools

Estonia’s government-backed AI teaching tool, developed under the €1 million TI-Leap programme, faces hurdles before reaching schools. Legal restrictions and waning student interest have delayed its planned September rollout.

Officials in Estonia stress that regulations to protect minors’ data remain incomplete. To ensure compliance, the Ministry of Education is drafting changes to the Basic Schools and Upper Secondary Schools Act.

Yet engagement may prove the bigger challenge. Developers note that students already use mainstream AI tools for homework, while the state model is designed to guide reasoning rather than supply direct answers.

Educators say success will depend on usefulness. The AI will be piloted in 10th and 11th grades alongside teacher training; studies suggest that more than 60% of students already rely on AI tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China sets 10-year targets for mass AI adoption

China has set its most ambitious AI adoption targets yet, aiming to embed the technology across industries, governance, and daily life within the next decade.

According to a new State Council directive, AI use should reach 70% of the population by 2027 and 90% by 2030, with a complete shift to what it calls an ‘intelligent society’ by 2035.

The plan would mean nearly one billion Chinese citizens regularly using AI-powered services or devices within two years, a pace of adoption that has been compared to the rapid rise of smartphones.
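
A quick back-of-the-envelope check (assuming a population of roughly 1.4 billion, a figure the directive itself does not state) bears out the 'nearly one billion' estimate:

```python
# Rough check of the adoption targets. The directive gives only
# percentages; the 1.4 billion population figure is an approximation.
population = 1.4e9

for year, share in [(2027, 0.70), (2030, 0.90)]:
    users = population * share
    print(f"{year}: {share:.0%} adoption = {users / 1e9:.2f} billion people")

# 2027: 70% adoption = 0.98 billion people
# 2030: 90% adoption = 1.26 billion people
```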

Officials acknowledge risks such as opaque models, hallucinations and algorithmic discrimination, and the policy accordingly calls for frameworks to govern 'natural persons, digital persons, and intelligent robots'.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic updates Claude’s policy with new data training choices

US AI startup Anthropic has announced an update to its data policy for Claude users, introducing an option to allow conversations and coding sessions to be used for training future AI models.

Anthropic stated that all Claude Free, Pro, and Max users, including those using Claude Code, will be asked to make a decision by September 28, 2025.

According to Anthropic, users who opt in will permit retention of their conversations for up to five years, with the data contributing to improvements in areas such as reasoning, coding, and analysis.

Those who choose not to participate will continue under the current policy, where conversations are deleted within thirty days unless flagged for legal or policy reasons.
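
The two retention regimes reduce to a simple rule. As a minimal, hypothetical sketch (the function and names are invented for illustration and are not drawn from Anthropic's systems):

```python
# Hypothetical helper capturing the two retention regimes described in
# the announcement; structure and names are invented for clarity.
from datetime import timedelta

def retention_period(opted_in_to_training: bool) -> timedelta:
    """Return how long a conversation may be kept under each choice."""
    if opted_in_to_training:
        return timedelta(days=5 * 365)  # retained for up to five years
    return timedelta(days=30)           # deleted within thirty days

print(retention_period(True).days, "days vs", retention_period(False).days, "days")
```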

The new policy does not extend to enterprise products, including Claude for Work, Claude Gov, Claude for Education, or API access through partners like Amazon Bedrock and Google Cloud Vertex AI. These remain governed by separate contractual agreements.

Anthropic noted that the choice will also apply to new users during sign-up, while existing users will be prompted through notifications to review their privacy settings.

The company emphasised that users remain in control of their data and that manually deleted conversations will not be used for training.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Age verification law in Mississippi tests the limits of decentralised social media

A new Mississippi law (HB 1126), requiring age verification for all social media users, has sparked controversy over internet freedom and privacy. Bluesky, a decentralised social platform, announced it would block access in the state rather than comply, citing limited resources and concerns about the law’s broad scope.

The law imposes heavy fines, up to $10,000 per user, for non-compliance. Bluesky argued that the required technical changes are too demanding for a small team and raise significant privacy concerns. After the US Supreme Court declined to block the law while legal challenges proceed, platforms like Bluesky are now forced to make difficult decisions.

According to TechCrunch, users in the US state began seeking ways to bypass the restriction, most commonly by using VPNs, which can hide their location and make it appear as though they are accessing the internet from another state or country.

However, some questioned why such measures were necessary. The idea behind decentralised social networks like Bluesky is to reduce control by central authorities, including governments. So if a decentralised platform can still be restricted by state laws or requires workarounds like VPNs, it raises questions about how truly ‘decentralised’ or censorship-resistant these platforms are.

Some users in Mississippi are still accessing Bluesky despite the new law. Many use third-party apps like Graysky or sideload the app via platforms like AltStore. Others rely on forked apps or read-only tools like Anartia.

While decentralisation complicates enforcement, these workarounds may not last, as developers risk legal consequences. Bluesky clients that do not run their own personal data servers (PDSs) might not be directly affected, but explaining that distinction in court is complex.
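
To make the client/server distinction concrete, here is a minimal sketch of how a third-party tool can read public network data without the official app. It uses the public Bluesky AppView endpoint as documented at the time of writing; treat the URL and method name as assumptions rather than a stable contract:

```python
# Minimal sketch of why third-party Bluesky clients are hard to police:
# public data on the AT Protocol network can be read straight from an
# AppView over plain HTTP, with no official app involved.
import requests

APPVIEW = "https://public.api.bsky.app/xrpc"

def get_profile(handle: str) -> dict:
    """Fetch the public profile for a given handle, unauthenticated."""
    resp = requests.get(
        f"{APPVIEW}/app.bsky.actor.getProfile",
        params={"actor": handle},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    profile = get_profile("bsky.app")
    print(profile.get("displayName"), "-", profile.get("followersCount"), "followers")
```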

Broader laws tend to favour large platforms that can afford compliance, while smaller services like Bluesky are often left with no option but to block access or withdraw entirely.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Law enforcement embraces AI for efficiency amid rising privacy concerns

Law enforcement agencies are increasingly using AI to expand their capacity and improve case outcomes, applying it across critical functions from predictive policing, surveillance and facial recognition to automated report writing and forensic analysis.

In predictive policing, AI models analyse historical crime patterns, demographics and environmental factors to forecast crime hotspots. This enables pre-emptive deployment of officers and more efficient resource allocation.
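
As a purely illustrative sketch of the hotspot-forecasting idea (synthetic data and invented features, not any agency's actual system):

```python
# Toy hotspot forecasting: train a classifier on per-grid-cell features,
# then rank cells by predicted incident risk. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# One row per map grid cell; invented feature columns:
# incidents last month, ambient population density, street lighting score.
X = rng.random((500, 3))
# Synthetic label: 1 if the cell saw an incident the following month.
# The weights simply manufacture a learnable pattern for the demo.
y = (X @ np.array([2.0, 1.0, -1.5]) + rng.normal(0, 0.3, 500) > 0.8).astype(int)

model = LogisticRegression().fit(X, y)

# Rank cells by predicted risk and surface the top "hotspots".
risk = model.predict_proba(X)[:, 1]
hotspots = np.argsort(risk)[::-1][:5]
print("Highest-risk cells:", hotspots, "scores:", risk[hotspots].round(2))
```

The ranking step is also where bias enters: if the historical incident data over-represents certain neighbourhoods, the model will reproduce that pattern in its forecasts.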

Facial recognition technology matches images from CCTV, body cameras or telescopic footage against criminal databases. Meanwhile, natural language processing (NLP) supports faster incident reporting, body-cam transcription and keyword scanning of digital evidence.

Despite clear benefits, risks persist. Algorithmic bias may unfairly target specific groups. Privacy concerns grow where systems flag individuals without oversight.

Automated decisions also raise questions about accountability, the integrity of evidence, and the preservation of human judgement in justice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Samsung and Chinese brands prepare Max rollout

Russia has been pushing for its state-backed messenger Max to be pre-installed on all smartphones sold in the country from September 2025. Chinese and South Korean manufacturers, including Samsung and Xiaomi, are reportedly preparing to comply, though official confirmation is still pending.

The Max platform, developed by VK (formerly VKontakte), offers messaging, audio and video calls, file transfers, and payments. It is set to replace VK Messenger on the mandatory app list, signalling a shift away from foreign apps like Telegram and WhatsApp.

Integration may occur via software updates or prompts when inserting a Russian SIM card.

Concerns have arisen over potential surveillance, as Max, which is backed by the Russian government, collects sensitive personal data. Critics fear the platform may monitor users, reflecting Moscow's push to control encrypted communications.

The rollout reflects Russia’s broader push for digital sovereignty. While companies navigate compliance, the move highlights the increasing tension between state-backed applications and widely used foreign messaging services in Russia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI music takes ethical turn with Beatoven.ai’s Maestro launch

Beatoven.ai has launched Maestro, a generative AI model for instrumental music that will later expand to vocals and sound effects. The company claims Maestro is the first fully licensed AI music model, ensuring royalties for artists and rights holders.

Trained on licensed datasets from partners such as Rightsify and Symphonic Music, Maestro avoids scraping issues and guarantees attribution. Beatoven.ai, with two million users and 15 million tracks generated, says Maestro can be fine-tuned for new genres.

The platform also includes tools for catalogue owners, allowing labels and publishers to analyse music, generate metadata, and enhance back-catalogue discovery. CEO Mansoor Rahimat Khan said Maestro builds an ‘AI-powered music ecosystem’ designed to push creativity forward rather than mimic it.

Industry figures praised the approach. Ed Newton-Rex of Fairly Trained said Maestro proves AI can be ethical, while Musical AI’s Sean Power called it a fair licensing model. Beatoven.ai also plans to expand its API into gaming, film, and virtual production.

The launch highlights the wider debate over licensing versus scraping. Scraping often exploits copyrighted works without payment, while licensed datasets ensure royalties, higher-quality outputs, and long-term trust. Advocates argue that licensing offers a more sustainable and fairer path for GenAI music.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Parental controls and crisis tools added to ChatGPT amid scrutiny

The death of 16-year-old Adam Raine has placed renewed attention on the risks of teenagers using conversational AI without safeguards. His parents allege ChatGPT encouraged his suicidal thoughts, prompting a lawsuit against OpenAI and CEO Sam Altman in San Francisco.

The case has pushed OpenAI to add parental controls and safety tools. Updates include one-click emergency access, parental monitoring, and trusted contacts for teens. The company is also exploring connections with therapists.

Executives said AI should support users rather than harm them. OpenAI has worked with doctors to train ChatGPT to avoid giving self-harm instructions and to redirect users to crisis hotlines. The company acknowledges that its safeguards can become less reliable in longer conversations, underscoring the need for stronger protections.

The tragedy has fuelled wider debates about AI in mental health. Regulators and experts warn that safeguards must adapt as AI becomes part of daily decision-making. Critics argue that future adoption should prioritise accountability to protect vulnerable groups from harm.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp launches AI assistant for editing messages

Meta’s WhatsApp has introduced a new AI feature called Writing Help, designed to assist users in editing, rewriting, and refining the tone of their messages. The tool can adjust grammar, improve phrasing, or reframe a message in a more professional, humorous, or encouraging style before it is sent.

The feature operates through Meta's Private Processing technology, which is designed to keep messages encrypted and private, so they are not visible to WhatsApp or Meta.

According to the company, Writing Help processes requests anonymously and cannot trace them back to the user. The function is optional, disabled by default, and only applies to the chosen message.
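
Meta has described Private Processing as combining hardware-isolated execution with anonymised request routing. The sketch below is a conceptual illustration of one ingredient of such designs, not Meta's implementation: a payload sealed to the processor's public key, so an intermediary relay can forward it without seeing the content or needing the sender's identity. It uses the PyNaCl library.

```python
# Conceptual sketch only: NOT Meta's implementation. It illustrates the
# sealed-payload idea behind anonymised processing: a relay in the middle
# sees only an opaque blob and no sender identity.
from nacl.public import PrivateKey, SealedBox

# Key pair held by the (hypothetical) processing service.
service_key = PrivateKey.generate()

# Client side: encrypt the draft so only the service can read it.
draft = b"Please don't leave dirty socks on the sofa"
sealed = SealedBox(service_key.public_key).encrypt(draft)

# Whatever forwards this blob cannot recover the plaintext.
assert draft not in sealed

# Service side: decrypt, rewrite, return. (The rewriting step is elided.)
received = SealedBox(service_key).decrypt(sealed)
print(received.decode())
```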

To activate the feature, users can tap a small pencil icon that appears while composing a message.

In a demonstration, WhatsApp showed how the tool could turn ‘Please don’t leave dirty socks on the sofa’ into more light-hearted alternatives, including ‘Breaking news: Socks found chilling on the couch’ or ‘Please don’t turn the sofa into a sock graveyard.’

By introducing Writing Help, WhatsApp aims to make communication more flexible and engaging while keeping user privacy intact. The company emphasises that no information is stored, and AI-generated suggestions only appear if users decide to enable the option.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!