Meta to use EU user data for AI training amid scrutiny

Meta Platforms has announced it will begin using public posts, comments, and user interactions with its AI tools to train its AI models in the EU, instead of limiting training data to existing US-based inputs.

The move follows the recent European rollout of Meta AI, which had been delayed since June 2024 due to data privacy concerns raised by regulators. The company said EU users of Facebook and Instagram would receive notifications outlining how their data may be used, along with a link to opt out.

Meta clarified that while questions posed to its AI and public content from adult users may be used, private messages and data from under-18s would be excluded from training.

By announcing the plans publicly rather than expanding quietly, the company is attempting to meet the EU's transparency expectations.

The shift comes after Meta paused its original launch last year at the request of Ireland’s Data Protection Commission, which expressed concerns about using social media content for AI development. The move also drew criticism from advocacy group NOYB, which has urged regulators to intervene more decisively.

Meta joins a growing list of tech firms under scrutiny in Europe. Ireland’s privacy watchdog is already investigating Elon Musk’s X and Google for similar practices involving personal data use in AI model training.

The EU appears to be treating such probes not as isolated incidents but as the groundwork for a precedent that could reshape how global companies handle user data in AI development.

New AI tool helps spot cataracts in babies

A groundbreaking medical device designed to detect cataracts in newborns is being enhanced with the help of AI. The Neocam, a handheld digital imaging tool created by Addenbrooke’s eye surgeon, Dr Louise Allen, allows midwives to take photos of a baby’s eyes to spot congenital cataracts — the leading cause of preventable childhood blindness.

A new AI feature under development will instantly assess whether a photo is clear enough for diagnosis, streamlining the process and reducing the need for retakes. The improvements are being developed by Cambridgeshire-based consultancy 42 Technology (42T), whose software engineers are training a machine-learning model on a dataset of 46,000 anonymised images.
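
Neither the Neocam model nor 42T's training pipeline has been published, so the following is only a minimal sketch, in PyTorch, of the kind of binary 'clear enough for diagnosis' check described above. The ClarityGate class, its layer sizes, and the 0.5 decision threshold are illustrative assumptions, not the real system.

import torch
import torch.nn as nn

class ClarityGate(nn.Module):
    # Toy binary classifier: is an eye photo usable for diagnosis?
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse to one 32-dim descriptor
        )
        self.classifier = nn.Linear(32, 1)  # single logit: photo is usable

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Hypothetical usage: score one photo and decide whether to prompt a retake.
model = ClarityGate()
photo = torch.rand(1, 3, 224, 224)          # placeholder RGB image tensor
usable = torch.sigmoid(model(photo)) > 0.5  # False -> ask the midwife to retake

In practice such a gate would be trained on images labelled usable versus retake, for example from the 46,000-image dataset mentioned above; those labels and the real architecture are not public.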

The AI project is backed by an innovation grant from Addenbrooke’s Charitable Trust (ACT) to make Neocam more accurate and accessible, especially in areas with limited specialist care. Neocam is currently being trialled in maternity units across the UK as part of a large-scale study called DIvO, in which over 140,000 babies will have their eyes screened using both traditional methods and the new device.

Although the final results are not expected until 2027, early findings suggest Neocam has already identified rare visual conditions that would have otherwise gone undetected. Dr Allen emphasised the importance of collaboration and public support for the project, saying that the AI-enhanced Neocam could make early detection of eye conditions more reliable worldwide.

Why does it matter?

With growing support from institutions like the National Institute for Health and Care Research and ACT, this innovation could significantly improve childhood eye care across both urban and remote settings.

Meta to block livestreaming for under-16s without parental permission

Meta will soon prevent children under 16 from livestreaming on Instagram unless their parents explicitly approve.

The new safety rule is part of broader efforts to protect young users online and will first be introduced in the UK, US, Canada and Australia, before being extended to the rest of Europe and beyond in the coming months.

The company explained that teenagers under 16 will also need parental permission to disable a feature that automatically blurs images suspected of containing nudity in direct messages.

These updates build on Meta’s teen supervision programme introduced last September, which gives parents more control over how their children use Instagram.

Meta is now extending similar protections beyond Instagram to Facebook and Messenger.

Teen accounts on those platforms will be set to private by default, and will automatically block messages from strangers, reduce exposure to violent or sensitive content, and include reminders to take breaks after an hour of use. Notifications will also pause during usual bedtime hours.

Meta said these safety tools are already in use across at least 54 million teen accounts. The company claims the new measures will better support teenagers and parents alike in making social media use safer and more intentional.

Number of victims of AI-driven sex crimes in Korea continues to grow

South Korea is facing a sharp rise in AI-related digital sex crimes, with deepfake pornography and online abuse increasingly affecting young women and children.

According to figures released by the Ministry of Gender Equality and Family and the Women’s Human Rights Institute, over 10,000 people sought help last year, marking a 14.7 percent increase from 2023.

Women made up more than 70 percent of those who contacted the Advocacy Center for Online Sexual Abuse Victims.

The majority were in their teens or twenties, with abuse often occurring via social media, messaging apps, and anonymous platforms. A growing number of victims, including children under 10, were targeted, owing to the easy accessibility of AI tools.

The most frequently reported issue was ‘distribution anxiety,’ where victims feared the release of sensitive or manipulated videos, followed by blackmail and illegal filming.

Deepfake cases more than tripled in one year, with synthetic content often involving the use of female students’ images. In one notable incident, a university student and his peers used deepfake techniques to create explicit fake images of classmates and shared them on Telegram.

With over 300,000 pieces of illicit content removed in 2024, authorities warn that the majority of illegal websites are hosted overseas, complicating efforts to take down harmful material.

The South Korean government plans to strengthen its response by expanding educational outreach, supporting victims further, and implementing new laws to prevent secondary harm by allowing the removal of personal information alongside explicit images.

Meta rolls out restricted teen accounts across platforms

Meta is expanding its ‘Teen Accounts’ feature to Facebook and Messenger following its initial launch on Instagram last September.

The rollout begins in the US, UK, Australia, and Canada, with plans to reach more countries soon. 

These accounts are designed to give younger users an app experience with stronger safety measures, automatically activating restrictions to limit exposure to harmful content and interactions.

Teen users will be automatically placed in a more controlled environment that restricts who can message, comment, or tag them. 

Only friends and previously contacted users can reach them via Messenger or see their stories, and tagging and mentions are also limited.

These settings require parental approval for any changes, and teens under 16 must have consent to alter key safety features.

On Instagram, Meta is introducing stricter safeguards. Users under 16 now need parental permission to go live or to turn off the tool that blurs images containing suspected nudity in direct messages. 

Meta is also adding screen-time reminders that prompt teens to log off after one hour, along with an overnight ‘Quiet mode’ to reduce late-night use.

The initiative follows increasing pressure on social media platforms to address concerns around teen mental health. 

In recent years, US lawmakers and the Surgeon General have highlighted the risks associated with young users’ exposure to unregulated digital environments. 

Some states have even mandated parental consent for teen access to social platforms.

Meta reports that over 54 million Instagram accounts have migrated to Teen Accounts. 

According to the company, 97% of users aged 13 to 15 keep the default protections in place. 

A study commissioned by Meta and Ipsos found that 94% of surveyed parents support Teen Accounts, with 85% saying the controls help ensure more positive online experiences for their children.

As digital safety becomes an ever greater priority, Meta’s expansion of Teen Accounts signals its willingness to build more accountable, youth-friendly online spaces across its platforms.

New Jersey criminalises AI-generated nude deepfakes of minors

New Jersey has become the first US state to criminalise the creation and sharing of AI-generated nude images of minors, following a high-profile campaign led by 14-year-old Francesca Mani. The legislation, signed into law on 2 April by Governor Phil Murphy, allows victims to sue perpetrators for up to $1,000 per image and includes criminal penalties of up to five years in prison and fines of up to $30,000.

Mani launched her campaign after discovering that boys at her school had used an AI ‘nudify’ website to target her and other girls. Refusing to accept the school’s minimal disciplinary response, she called for lawmakers to take decisive action against such deepfake abuses. Her efforts gained national attention, including a feature on 60 Minutes, and helped drive the new legal protections.

The law defines deepfakes as media that convincingly depicts someone doing something they never actually did. It also prohibits the use of such technology for election interference or defamation. Although the law’s focus is on malicious misuse, questions remain about whether exemptions will be made for legitimate uses in film, tech, or education sectors.

National Crime Agency responds to AI crime warning

The National Crime Agency (NCA) has pledged to ‘closely examine’ recommendations from the Alan Turing Institute after a recent report highlighted the UK’s insufficient preparedness for AI-enabled crime.

The report, from the Centre for Emerging Technology and Security (CETaS), urges the NCA to set up a task force to address AI-enabled crime within the next five years.

Although AI-enabled crime is still in its early stages, the report warns that criminals are rapidly advancing their use of AI, outpacing law enforcement’s ability to respond.

CETaS claims that UK police forces have been slow to adopt AI themselves, leaving them ill-equipped to counter increasingly sophisticated offences such as child sexual abuse, cybercrime, and fraud.

The Alan Turing Institute emphasises that although AI-specific legislation may be needed eventually, the immediate priority is for law enforcement to integrate AI into their crime-fighting efforts.

Such an initiative would see law enforcement deploy AI tools of its own to counter AI-enabled crime, as fraudsters and other criminals increasingly exploit the technology’s capacity to deceive.

While AI crime remains a relatively new phenomenon, recent examples such as the $25 million deepfake CFO fraud show the growing threat.

The report also highlights the role of AI in phishing scams, romance fraud, and other deceptive practices, warning that future AI-driven crimes may become harder to detect as technology evolves.

New Jersey criminalises the harmful use of AI deepfakes

New Jersey has become one of several US states to criminalise the creation and distribution of deceptive AI-generated media, commonly known as deepfakes. Governor Phil Murphy signed the legislation on Wednesday, introducing civil and criminal penalties for those who produce or share such media.

If deepfakes are used to commit further crimes like harassment, they may now be treated as a third-degree offence, punishable by fines up to $30,000 or up to five years in prison.

The bill was inspired by a disturbing incident at a New Jersey school where students shared explicit AI-generated images of a classmate.

Governor Murphy had initially vetoed the legislation in March, calling for changes to reduce the risk of constitutional challenges. Lawmakers later amended the bill, which passed with overwhelming support in both chambers.

The law aims to deter the misuse of deepfakes while preserving legitimate applications of AI.

‘This legislation takes a proactive approach,’ said Representative Lou Greenwald, one of the bill’s sponsors. ‘We are safeguarding New Jersey residents and offering justice to victims of digital abuse.’

A growing number of US states are taking similar action, particularly around election integrity and online harassment. While 27 states now target AI-generated sexual content, others have introduced measures to limit political deepfakes.

States like Texas and Minnesota have banned deceptive political media outright, while Florida and Wisconsin require clear disclosures. New Jersey’s move reflects a broader push to keep pace with rapidly evolving technology and its impact on public trust and safety.

Parents gain more oversight in latest Roblox update

Roblox is expanding its parental controls, offering more ways for parents to manage their children’s interactions and gaming experiences.

The update builds on safety measures introduced last year following concerns about child protection on the platform.

Parents who link their accounts with their child’s can now block or report specific people from the child’s friends list.

Children under 13 cannot unblock restricted users without parental approval. The update also allows parents to block access to specific games rather than just setting general content maturity limits.

A new feature provides parents with insights into their child’s gaming habits by showing the 20 experiences they have spent the most time on in the past week. Roblox continues to refine its safety tools to create a more secure environment for young players.

Meta’s Hypernova smart glasses promise cutting-edge features and advanced display technology

Meta is preparing to launch an advanced pair of smart glasses under the codename Hypernova, featuring a built-in display and gesture control capabilities.

The new device, developed in partnership with Ray-Ban, aims to enhance user convenience by offering features such as media viewing, map navigation, and app notifications.

Unlike previous models, the Hypernova glasses will have a display located in the lower right corner of the right lens, allowing users to maintain a clear view through the left lens.

The glasses will be powered by Qualcomm silicon and run on a customised version of Android. Meta is also developing a wristband, codenamed Ceres, which will provide gesture-based controls, including pinch-to-zoom and wrist rotation.

The wristband is expected to be bundled with the glasses, offering users a more seamless and intuitive experience.

Retail pricing for the Hypernova smart glasses is expected to range between $1,000 and $1,400, significantly higher than current VR-ready smart glasses like the Viture Pro and Xreal One.

However, Meta aims to differentiate its product through enhanced functionality and fashionable design, making it an appealing option for consumers looking for both style and utility.

The Hypernova glasses are projected to hit the market by the end of 2025. Meta is also developing additional augmented reality products, including the Orion holographic glasses and research-focused Aria Gen 2 AR glasses.

Competitors like Samsung are expected to launch similar Android-based smart glasses around the same time, setting the stage for an exciting year in the wearable tech market.
