New Jersey criminalises the harmful use of AI deepfakes

New Jersey has become one of several US states to criminalise the creation and distribution of deceptive AI-generated media, commonly known as deepfakes. Governor Phil Murphy signed the legislation on Wednesday, introducing civil and criminal penalties for those who produce or share such media.

If deepfakes are used to commit further crimes, such as harassment, the act is now treated as a third-degree offence, punishable by a fine of up to $30,000 or up to five years in prison.

The bill was inspired by a disturbing incident at a New Jersey school where students shared explicit AI-generated images of a classmate.

Governor Murphy had initially vetoed the legislation in March, calling for changes to reduce the risk of constitutional challenges. Lawmakers later amended the bill, which passed with overwhelming support in both chambers.

Rather than banning the technology outright, the law aims to deter the misuse of deepfakes while preserving legitimate applications of AI.

‘This legislation takes a proactive approach,’ said Representative Lou Greenwald, one of the bill’s sponsors. ‘We are safeguarding New Jersey residents and offering justice to victims of digital abuse.’

A growing number of US states are taking similar action, particularly around election integrity and online harassment. While 27 states now target AI-generated sexual content, others have introduced measures to limit political deepfakes.

States like Texas and Minnesota have banned deceptive political media outright, while Florida and Wisconsin require clear disclosures. New Jersey’s move reflects a broader push to keep pace with rapidly evolving technology and its impact on public trust and safety.

For more information on these topics, visit diplomacy.edu.

Parents gain more oversight in latest Roblox update

Roblox is expanding its parental controls, offering more ways for parents to manage their children’s interactions and gaming experiences.

The update builds on safety measures introduced last year following concerns about child protection on the platform.

Parents who link their accounts with their child’s can now block or report specific people from the child’s friends list.

Children under 13 cannot unblock restricted users without parental approval. The update also allows parents to block access to specific games rather than just setting general content maturity limits.

A new feature provides parents with insights into their child’s gaming habits by showing the 20 experiences they have spent the most time on in the past week. Roblox continues to refine its safety tools to create a more secure environment for young players.

Meta’s Hypernova smart glasses promise cutting-edge features and advanced display technology

Meta is preparing to launch an advanced pair of smart glasses under the codename Hypernova, featuring a built-in display and gesture control capabilities.

The new device, developed in partnership with Ray-Ban, aims to enhance user convenience by offering features such as media viewing, map navigation, and app notifications.

Unlike previous models, the Hypernova glasses will have a display located in the lower right corner of the right lens, allowing users to maintain a clear view through the left lens.

The glasses will be powered by Qualcomm silicon and run on a customised version of Android. Meta is also developing a wristband, codenamed Ceres, which will provide gesture-based controls, including pinch-to-zoom and wrist rotation.

The wristband is expected to be bundled with the glasses, offering users a more seamless and intuitive experience.

Retail pricing for the Hypernova smart glasses is expected to range between $1,000 and $1,400, significantly higher than current VR-ready smart glasses like the Viture Pro and Xreal One.

However, Meta aims to differentiate its product through enhanced functionality and fashionable design, making it an appealing option for consumers looking for both style and utility.

The Hypernova glasses are projected to hit the market by the end of 2025. Meta is also developing additional augmented reality products, including the Orion holographic glasses and research-focused Aria Gen 2 AR glasses.

Competitors like Samsung are expected to launch similar Android-based smart glasses around the same time, setting the stage for an exciting year in the wearable tech market.

Gemini AI for kids: A new era of safe, smart learning

Google appears to be working on a child-friendly version of its Gemini AI, offering young users a safer and more controlled experience. A recent teardown of the Google app (version 16.12.39) uncovered strings referencing ‘kid users,’ hinting at an upcoming feature tailored specifically for children.

While Gemini already assists users with creating stories, answering questions, and helping with homework, this kid-friendly version is expected to include stricter content policies and additional safeguards.

Google’s existing safety measures for teens suggest that Gemini for Kids may offer even tighter restrictions and enhanced content moderation.

It remains unclear how Google plans to implement this feature, but it is likely that Gemini for Kids will be automatically enabled for Google accounts registered under a child’s name.

Given global regulations on data collection for minors, Google will reportedly process children’s data in accordance with its privacy policies and the Gemini Apps Privacy Notice.

As AI increasingly integrates into education and daily life, a safer, child-focused version of Gemini could provide a more secure way for kids to engage with technology while ensuring parental peace of mind.

Apple expands AI features with new update

Apple Intelligence is expanding with new features, including Priority Notifications, which highlight time-sensitive alerts for users. This update is part of iOS 18.4, iPadOS 18.4, and macOS Sequoia 15.4, rolling out globally.

The AI suite is now available in more languages and has launched in the EU for iPhone and iPad users.

Additional improvements include a new Sketch style in Image Playground and the ability to generate ‘memory movies’ on Mac using simple text descriptions. Vision Pro users in the US can now access Apple Intelligence features like Writing Tools and Genmoji.

Apple’s AI rollout has been gradual since its introduction at WWDC last year, with features arriving in stages.

The update also brings fresh emojis, child safety enhancements, and the debut of Apple News+ Food, further expanding Apple’s digital ecosystem.

OnlyFans faces penalty in UK for age check inaccuracy

OnlyFans’ parent company, Fenix, has been fined £1.05 million by UK regulator Ofcom for providing inaccurate information about how it verifies users’ ages. The platform, known for hosting adult content, had claimed its age-checking technology flagged anyone under 23 for additional ID checks.

However, it was later revealed the system was set to flag those under 20, prompting Ofcom to take enforcement action. Ofcom said Fenix failed in its legal obligation to provide accurate details, undermining the regulator’s ability to assess platform safety.

While Fenix accepted the penalty — leading to a 30% reduction in the fine — Ofcom stressed the importance of holding platforms to high standards, especially when protecting minors online. The investigation began in 2022 under UK regulations that predate the Online Safety Act, which is due to take full effect this year.

Why does it matter?

The act will require stronger age verification measures from platforms like OnlyFans, with a July 2025 deadline for full compliance. OnlyFans responded by affirming its commitment to transparency and welcomed the resolution of the case. While best known for adult subscriptions, the platform hosts mainstream content and launched a non-pornographic streaming service in 2023.

Meta cracks down on misinformation in Australia

Meta Platforms has announced new measures to combat misinformation and deepfakes in Australia ahead of the country’s upcoming national election.

The company’s independent fact-checking program, supported by Agence France-Presse and the Australian Associated Press, will detect and limit misleading content, while also removing any material that could incite violence or interfere with voting.

Deepfakes, AI-generated media designed to appear real, will also face stricter scrutiny. Meta stated that any content violating its policies would be removed or labelled as ‘altered’ to reduce its visibility.

Users sharing AI-generated content will be encouraged to disclose its origin, aiming to improve transparency.

Meta’s Australian policy follows similar strategies used in elections across India, the UK and the US.

The company is also navigating regulatory challenges in the country, including a proposed levy on big tech firms profiting from local news content and new requirements to enforce a ban on users under 16 by the end of the year.

Security Checkup arrives on TikTok to boost user account safety

TikTok has launched a new Security Checkup tool, offering users a simplified way to manage their account safety.

The dashboard provides an easy-to-navigate hub where users can review and update security settings such as login methods, two-step verification, and device access.

Designed to be user-friendly, it aims to encourage proactive security habits without overwhelming people with technical details.

The security portal functions similarly to tools offered by major tech companies like Google and Meta, reinforcing the importance of digital safety.

Features include passkey authentication for password-free logins, alerts for suspicious activity, and the ability to check which devices are logged into an account.

TikTok hopes the tool will make it easier for users to secure their profiles and prevent unauthorised access.

While the Security Checkup is a practical addition, it also arrives amid TikTok’s ongoing struggles in the US, where concerns over data privacy persist.

The company’s head of global security, Kim Albarella, described the feature as a ‘powerful new tool’ that allows users to ‘take control’ of their account safety with confidence.

Accessing the tool is straightforward: users can find it within the app’s ‘Settings and privacy’ menu under ‘Security & permissions.’

California’s attempt to regulate online platforms faces legal setback

A federal judge in California has blocked a state law requiring online platforms to take extra measures to protect children, ruling it imposes unconstitutional burdens on tech companies.

The law, signed by Governor Gavin Newsom in 2022, aimed to prevent harm to young users by requiring businesses to assess risks, adjust privacy settings, and estimate users’ ages. Companies faced fines of up to $7,500 per child for intentional violations.

Judge Beth Freeman ruled that the law was too broad and infringed on free speech, siding with NetChoice, a group representing major tech firms, including Amazon, Google, Meta, and Netflix.

NetChoice argued the legislation effectively forced companies to act as government censors under the pretext of protecting privacy.

The ruling marks a victory for the tech industry, which has repeatedly challenged state-level regulations on content moderation and user protections.

California Attorney General Rob Bonta expressed disappointment in the decision and pledged to continue defending the law. The legal battle is expected to continue, as a federal appeals court had previously ordered a reassessment of the injunction.

The case highlights the ongoing conflict between government efforts to regulate online spaces and tech companies’ claims of constitutional overreach.

UK watchdog launches enforcement on file-sharing services

The UK’s internet watchdog, Ofcom, has launched a new enforcement programme under the Online Safety Act (OSA), targeting storage and file-sharing services due to concerns over the sharing of child sexual abuse material (CSAM).

The regulator has identified these services as particularly vulnerable to misuse for distributing CSAM and will assess the safety measures in place to prevent such activities.

As part of the enforcement programme, Ofcom has contacted a number of file-storage and sharing services, warning them that formal information requests will be issued soon.

These requests will require the services to submit details on the measures they have implemented or plan to introduce to combat CSAM, along with risk assessments related to illegal content.

Failure to comply with the requirements of the OSA could result in substantial penalties for these companies, with fines reaching up to 10% of their global annual turnover.

Ofcom’s crackdown highlights the growing responsibility for online services to prevent illegal content from being shared on their platforms.
