Number of victims of AI-driven sex crimes in Korea continues to grow

South Korea is facing a sharp rise in AI-related digital sex crimes, with deepfake pornography and online abuse increasingly affecting young women and children.

According to figures released by the Ministry of Gender Equality and Family and the Women’s Human Rights Institute, over 10,000 people sought help last year, marking a 14.7 percent increase from 2023.

Women made up more than 70 percent of those who contacted the Advocacy Center for Online Sexual Abuse Victims.

The majority were in their teens or twenties, with abuse often occurring via social media, messaging apps, and anonymous platforms. A growing share of victims, including children under 10, were targeted, a trend attributed to the easy accessibility of AI tools.

The most frequently reported issue was ‘distribution anxiety,’ where victims feared the release of sensitive or manipulated videos, followed by blackmail and illegal filming.

Deepfake cases more than tripled in one year, with synthetic content often involving the use of female students’ images. In one notable incident, a university student and his peers used deepfake techniques to create explicit fake images of classmates and shared them on Telegram.

Even with over 300,000 pieces of illicit content removed in 2024, authorities warn that the majority of illegal websites are hosted overseas, complicating efforts to take down harmful material.

The South Korean government plans to strengthen its response by expanding educational outreach, supporting victims further, and implementing new laws to prevent secondary harm by allowing the removal of personal information alongside explicit images.

For more information on these topics, visit diplomacy.edu.

Meta rolls out restricted teen accounts across platforms

Meta is expanding its ‘Teen Accounts’ feature to Facebook and Messenger following its initial launch on Instagram last September.

The rollout begins in the US, UK, Australia, and Canada, with plans to reach more countries soon. 

These accounts are designed to give younger users an app experience with stronger safety measures, automatically activating restrictions to limit exposure to harmful content and interactions.

Teen users will be automatically placed in a more controlled environment that restricts who can message, comment, or tag them. 

Only friends and previously contacted users can reach out via Messenger or see their stories, while tagging and mentions are also limited.

Changes to these settings require parental approval: teens under 16 cannot alter key safety features without a parent’s consent.

On Instagram, Meta is introducing stricter safeguards. Users under 16 now need parental permission to go live or to turn off the tool that blurs images containing suspected nudity in direct messages. 

Meta is also introducing screen-time reminders that prompt teens to log off after one hour, along with an overnight ‘Quiet mode’ to reduce late-night use.

The initiative follows increasing pressure on social media platforms to address concerns around teen mental health. 

In recent years, US lawmakers and the Surgeon General have highlighted the risks associated with young users’ exposure to unregulated digital environments. 

Some states have even mandated parental consent for teen access to social platforms.

Meta reports that over 54 million Instagram accounts have migrated to Teen Accounts. 

According to the company, 97% of users aged 13 to 15 keep the default protections in place. 

A study commissioned by Meta and Ipsos found that 94% of surveyed parents support Teen Accounts, with 85% saying the controls help ensure more positive online experiences for their children.

As digital safety continues to grow as a priority, Meta’s expansion of Teen Accounts signals its willingness to build more accountable, youth-friendly online spaces across its platforms.

For more information on these topics, visit diplomacy.edu.

New Jersey criminalises AI-generated nude deepfakes of minors

New Jersey has become the first US state to criminalise the creation and sharing of AI-generated nude images of minors, following a high-profile campaign led by 14-year-old Francesca Mani. The legislation, signed into law on 2 April by Governor Phil Murphy, allows victims to sue perpetrators for up to $1,000 per image and includes criminal penalties of up to five years in prison and fines of up to $30,000.

Mani launched her campaign after discovering that boys at her school had used an AI ‘nudify’ website to target her and other girls. Refusing to accept the school’s minimal disciplinary response, she called for lawmakers to take decisive action against such deepfake abuses. Her efforts gained national attention, including a feature on 60 Minutes, and helped drive the new legal protections.

The law defines deepfakes as media that convincingly depicts someone doing something they never actually did. It also prohibits the use of such technology for election interference or defamation. Although the law’s focus is on malicious misuse, questions remain about whether exemptions will be made for legitimate uses in film, tech, or education sectors.

For more information on these topics, visit diplomacy.edu.

National Crime Agency responds to AI crime warning

The National Crime Agency (NCA) has pledged to ‘closely examine’ recommendations from the Alan Turing Institute after a recent report highlighted the UK’s insufficient preparedness for AI-enabled crime.

The report, from the Centre for Emerging Technology and Security (CETaS), urges the NCA to create a task force within the next five years to address AI-enabled crime.

Despite AI-enabled crime being in its early stages, the report warns that criminals are rapidly advancing their use of AI, outpacing law enforcement’s ability to respond.

CETaS claims that UK police forces have been slow to adopt AI themselves, which could leave them vulnerable to increasingly sophisticated AI-enabled crimes, such as child sexual abuse, cybercrime, and fraud.

The Alan Turing Institute emphasises that although AI-specific legislation may be needed eventually, the immediate priority is for law enforcement to integrate AI into their crime-fighting efforts.

Such an initiative would involve using AI tools to combat AI-enabled crime effectively, as fraudsters and other criminals already exploit AI’s potential to deceive.

While AI crime remains a relatively new phenomenon, recent examples such as the $25 million deepfake CFO fraud in Hong Kong show the growing threat.

The report also highlights the role of AI in phishing scams, romance fraud, and other deceptive practices, warning that future AI-driven crimes may become harder to detect as technology evolves.

For more information on these topics, visit diplomacy.edu.

New Jersey criminalises the harmful use of AI deepfakes

New Jersey has become one of several US states to criminalise the creation and distribution of deceptive AI-generated media, commonly known as deepfakes. Governor Phil Murphy signed the legislation on Wednesday, introducing civil and criminal penalties for those who produce or share such media.

If deepfakes are used to commit further crimes such as harassment, they may now be treated as a third-degree offence, punishable by fines of up to $30,000 or up to five years in prison.

The bill was inspired by a disturbing incident at a New Jersey school where students shared explicit AI-generated images of a classmate.

Governor Murphy had initially vetoed the legislation in March, calling for changes to reduce the risk of constitutional challenges. Lawmakers later amended the bill, which passed with overwhelming support in both chambers.

The law aims to deter the misuse of deepfakes while preserving legitimate applications of AI.

‘This legislation takes a proactive approach,’ said Assemblyman Lou Greenwald, one of the bill’s sponsors. ‘We are safeguarding New Jersey residents and offering justice to victims of digital abuse.’

A growing number of US states are taking similar action, particularly around election integrity and online harassment. While 27 states now target AI-generated sexual content, others have introduced measures to limit political deepfakes.

States like Texas and Minnesota have banned deceptive political media outright, while Florida and Wisconsin require clear disclosures. New Jersey’s move reflects a broader push to keep pace with rapidly evolving technology and its impact on public trust and safety.

For more information on these topics, visit diplomacy.edu.

Parents gain more oversight in latest Roblox update

Roblox is expanding its parental controls, offering more ways for parents to manage their children’s interactions and gaming experiences.

The update builds on safety measures introduced last year following concerns about child protection on the platform.

Parents who link their accounts with their child’s can now block or report specific people on their child’s friends list.

Children under 13 cannot unblock restricted users without parental approval. The update also allows parents to block access to specific games rather than just setting general content maturity limits.

A new feature provides parents with insights into their child’s gaming habits by showing the 20 experiences they have spent the most time on in the past week. Roblox continues to refine its safety tools to create a more secure environment for young players.

For more information on these topics, visit diplomacy.edu.

Meta’s Hypernova smart glasses promise cutting-edge features and advanced display technology

Meta is preparing to launch an advanced pair of smart glasses under the codename Hypernova, featuring a built-in display and gesture control capabilities.

The new device, developed in partnership with Ray-Ban, aims to enhance user convenience by offering features such as media viewing, map navigation, and app notifications.

Unlike previous models, the Hypernova glasses will have a display located in the lower right corner of the right lens, allowing users to maintain a clear view through the left lens.

The glasses will be powered by Qualcomm silicon and run on a customised version of Android. Meta is also developing a wristband, codenamed Ceres, which will provide gesture-based controls, including pinch-to-zoom and wrist rotation.

The wristband is expected to be bundled with the glasses, offering users a more seamless and intuitive experience.

Retail pricing for the Hypernova smart glasses is expected to range between $1,000 and $1,400, significantly higher than current display-equipped smart glasses like the Viture Pro and Xreal One.

However, Meta aims to differentiate its product through enhanced functionality and fashionable design, making it an appealing option for consumers looking for both style and utility.

The Hypernova glasses are projected to hit the market by the end of 2025. Meta is also developing additional augmented reality products, including the Orion holographic glasses and research-focused Aria Gen 2 AR glasses.

Competitors like Samsung are expected to launch similar Android-based smart glasses around the same time, setting the stage for an exciting year in the wearable tech market.

For more information on these topics, visit diplomacy.edu.

Gemini AI for kids: A new era of safe, smart learning

Google appears to be working on a child-friendly version of its Gemini AI, offering young users a safer and more controlled experience. A recent teardown of the Google app (version 16.12.39) uncovered strings referencing ‘kid users,’ hinting at an upcoming feature tailored specifically for children.

While Gemini already assists users with creating stories, answering questions, and helping with homework, this kid-friendly version is expected to include stricter content policies and additional safeguards.

Google’s existing safety measures for teens suggest that Gemini for Kids may offer even tighter restrictions and enhanced content moderation.

It remains unclear how Google plans to implement this feature, but it is likely that Gemini for Kids will be automatically enabled for Google accounts registered under a child’s name.

Given global regulations on data collection for minors, Google will reportedly process children’s data in accordance with its privacy policies and the Gemini Apps Privacy Notice.

As AI increasingly integrates into education and daily life, a safer, child-focused version of Gemini could provide a more secure way for kids to engage with technology while ensuring parental peace of mind.

For more information on these topics, visit diplomacy.edu.

Apple expands AI features with new update

Apple Intelligence is expanding with new features, including Priority Notifications, which highlight time-sensitive alerts for users. This update is part of iOS 18.4, iPadOS 18.4, and macOS Sequoia 15.4, rolling out globally.

The AI suite is now available in more languages and has launched in the EU for iPhone and iPad users.

Additional improvements include a new Sketch style in Image Playground and the ability to generate ‘memory movies’ on Mac using simple text descriptions. Vision Pro users in the US can now access Apple Intelligence features like Writing Tools and Genmoji.

Apple’s AI rollout has been gradual since its introduction at WWDC last year, with features arriving in stages.

The update also brings fresh emojis, child safety enhancements, and the debut of Apple News+ Food, further expanding Apple’s digital ecosystem.

For more information on these topics, visit diplomacy.edu.

OnlyFans faces penalty in UK for age check inaccuracy

OnlyFans’ parent company, Fenix, has been fined £1.05 million by UK regulator Ofcom for providing inaccurate information about how it verifies users’ ages. The platform, known for hosting adult content, had claimed its age-checking technology flagged anyone under 23 for additional ID checks.

However, it was later revealed that the system had been set to flag only those under 20, prompting Ofcom to take enforcement action. Ofcom said Fenix failed in its legal obligation to provide accurate details, undermining the regulator’s ability to assess platform safety.

While Fenix accepted the penalty — leading to a 30% reduction in the fine — Ofcom stressed the importance of holding platforms to high standards, especially when protecting minors online. The investigation began in 2022 under UK regulations that predate the Online Safety Act, which is due to take full effect this year.

Why does it matter?

The act will require stronger age verification measures from platforms like OnlyFans, with a July 2025 deadline for full compliance. OnlyFans responded by affirming its commitment to transparency and welcomed the resolution of the case. While best known for adult subscriptions, the platform hosts mainstream content and launched a non-pornographic streaming service in 2023.

For more information on these topics, visit diplomacy.edu.