Parents gain more oversight in latest Roblox update

Roblox is expanding its parental controls, offering more ways for parents to manage their children’s interactions and gaming experiences.

The update builds on safety measures introduced last year following concerns about child protection on the platform.

Parents who link their accounts with their child’s can now block or report specific people on the child’s friends list.

Children under 13 cannot unblock restricted users without parental approval. The update also allows parents to block access to specific games rather than just setting general content maturity limits.

A new feature provides parents with insights into their child’s gaming habits by showing the 20 experiences they have spent the most time on in the past week. Roblox continues to refine its safety tools to create a more secure environment for young players.

For more information on these topics, visit diplomacy.edu.

Meta’s Hypernova smart glasses promise cutting-edge features and advanced display technology

Meta is preparing to launch an advanced pair of smart glasses under the codename Hypernova, featuring a built-in display and gesture control capabilities.

The new device, developed in partnership with Ray-Ban, aims to enhance user convenience by offering features such as media viewing, map navigation, and app notifications.

Unlike previous models, the Hypernova glasses will have a display located in the lower right corner of the right lens, allowing users to maintain a clear view through the left lens.

The glasses will be powered by Qualcomm silicon and run on a customised version of Android. Meta is also developing a wristband, codenamed Ceres, which will provide gesture-based controls, including pinch-to-zoom and wrist rotation.

The wristband is expected to be bundled with the glasses, offering users a more seamless and intuitive experience.

Retail pricing for the Hypernova smart glasses is expected to range between $1,000 and $1,400, significantly higher than current VR-ready smart glasses like the Viture Pro and Xreal One.

However, Meta aims to differentiate its product through enhanced functionality and fashionable design, making it an appealing option for consumers looking for both style and utility.

The Hypernova glasses are projected to hit the market by the end of 2025. Meta is also developing additional augmented reality products, including the Orion holographic glasses and research-focused Aria Gen 2 AR glasses.

Competitors like Samsung are expected to launch similar Android-based smart glasses around the same time, setting the stage for an exciting year in the wearable tech market.

Gemini AI for kids: A new era of safe, smart learning

Google appears to be working on a child-friendly version of its Gemini AI, offering young users a safer and more controlled experience. A recent teardown of the Google app (version 16.12.39) uncovered strings referencing ‘kid users,’ hinting at an upcoming feature tailored specifically for children.
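Teardowns of this kind generally work by decoding the app package and searching its string resources for telltale phrases. The sketch below shows the basic idea on a made-up resource dump; the resource names and values are illustrative only, not the actual strings found in the Google app.

```python
import re

def find_strings(xml_text, keywords):
    """Return (name, value) pairs from an Android strings.xml dump
    whose value mentions any of the given keywords."""
    pattern = re.compile(r'<string name="([^"]+)">([^<]*)</string>')
    hits = []
    for name, value in pattern.findall(xml_text):
        if any(k.lower() in value.lower() for k in keywords):
            hits.append((name, value))
    return hits

# Illustrative resource dump -- not the real strings from the teardown.
sample = '''
<string name="assistant_welcome">Welcome to Gemini</string>
<string name="kid_mode_notice">Gemini for kid users has stricter policies</string>
'''
print(find_strings(sample, ["kid users"]))
```

In practice, researchers decode the APK with a tool such as apktool before running this kind of search over the extracted resources.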

While Gemini already assists users with creating stories, answering questions, and helping with homework, this kid-friendly version is expected to include stricter content policies and additional safeguards.

Google’s existing safety measures for teens suggest that Gemini for Kids may offer even tighter restrictions and enhanced content moderation.

It remains unclear how Google plans to implement this feature, but it is likely that Gemini for Kids will be automatically enabled for Google accounts registered under a child’s name.

Given global regulations on data collection for minors, Google will reportedly process children’s data in accordance with its privacy policies and the Gemini Apps Privacy Notice.

As AI increasingly integrates into education and daily life, a safer, child-focused version of Gemini could provide a more secure way for kids to engage with technology while ensuring parental peace of mind.

OpenAI’s Ghibli-style tool raises privacy and data issues

OpenAI’s Ghibli-style AI image generator has taken social media by storm, with users eagerly transforming their photos into artwork reminiscent of Hayao Miyazaki’s signature style.

However, digital privacy activists are raising concerns that OpenAI might use this viral trend to collect thousands of personal images for AI training, potentially bypassing legal restrictions on web-scraped data.

Critics warn that while users enjoy the feature, they could unknowingly be handing over fresh facial data instead of protecting their privacy, raising ethical questions about AI and data collection.

Beyond privacy concerns, the trend has also reignited debates about AI’s impact on creative industries. Miyazaki, known for his hand-drawn approach, has previously expressed scepticism about artificial intelligence in animation.

Additionally, under the GDPR, OpenAI must justify its data collection under a lawful basis such as ‘legitimate interest’; experts argue that when users voluntarily upload their images, the company may gain greater freedom to use them without needing further legal justification.

OpenAI has yet to issue an official statement regarding data safety, but ChatGPT itself warns users against uploading personal photos to any AI tool unless they are certain about its privacy policies.

Cybersecurity experts advise people to think twice before sharing high-resolution images online, use passwords instead of facial recognition for device security, and limit app access to their cameras.

As AI-generated image trends continue to gain popularity, the debate over privacy and data ownership is unlikely to fade anytime soon.

OnlyFans faces penalty in UK for age check inaccuracy

OnlyFans’ parent company, Fenix, has been fined £1.05 million by UK regulator Ofcom for providing inaccurate information about how it verifies users’ ages. The platform, known for hosting adult content, had claimed its age-checking technology flagged anyone under 23 for additional ID checks.

However, it was later revealed the system was set to flag those under 20, prompting Ofcom to take enforcement action. Ofcom said Fenix failed in its legal obligation to provide accurate details, undermining the regulator’s ability to assess platform safety.

While Fenix accepted the penalty — leading to a 30% reduction in the fine — Ofcom stressed the importance of holding platforms to high standards, especially when protecting minors online. The investigation began in 2022 under UK regulations that predate the Online Safety Act, which is due to take full effect this year.
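As a back-of-the-envelope check, assuming the 30% settlement discount was applied to the full pre-settlement figure, the implied original fine can be recovered from the amount actually imposed:

```python
final_fine = 1.05  # GBP million, the fine Ofcom imposed on Fenix
discount = 0.30    # settlement reduction for accepting the penalty

# final = original * (1 - discount), so invert to find the original
original_fine = final_fine / (1 - discount)
print(round(original_fine, 2))  # 1.5, i.e. GBP 1.5 million before the reduction
```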

Why does it matter?

The act will require stronger age verification measures from platforms like OnlyFans, with a July 2025 deadline for full compliance. OnlyFans responded by affirming its commitment to transparency and welcomed the resolution of the case. While best known for adult subscriptions, the platform hosts mainstream content and launched a non-pornographic streaming service in 2023.

Meta cracks down on misinformation in Australia

Meta Platforms has announced new measures to combat misinformation and deepfakes in Australia ahead of the country’s upcoming national election.

The company’s independent fact-checking program, supported by Agence France-Presse and the Australian Associated Press, will detect and limit misleading content, while also removing any material that could incite violence or interfere with voting.

Deepfakes, AI-generated media designed to appear real, will also face stricter scrutiny. Meta stated that any content violating its policies would be removed or labelled as ‘altered’ to reduce its visibility.

Users sharing AI-generated content will be encouraged to disclose its origin, aiming to improve transparency.

Meta’s Australian policy follows similar strategies used in elections across India, the UK and the US.

The company is also navigating regulatory challenges in the country, including a proposed levy on big tech firms profiting from local news content and new requirements to enforce a ban on users under 16 by the end of the year.

Security Checkup arrives on TikTok to boost user account safety

TikTok has launched a new Security Checkup tool, offering users a simplified way to manage their account safety.

The dashboard provides an easy-to-navigate hub where users can review and update security settings such as login methods, two-step verification, and device access.

Designed to be user-friendly, it aims to encourage proactive security habits without overwhelming people with technical details.

The security portal functions similarly to tools offered by major tech companies like Google and Meta, reinforcing the importance of digital safety.

Features include passkey authentication for password-free logins, alerts for suspicious activity, and the ability to check which devices are logged into an account.

TikTok hopes the tool will make it easier for users to secure their profiles and prevent unauthorised access.

While the Security Checkup is a practical addition, it also arrives amid TikTok’s ongoing struggles in the US, where concerns over data privacy persist.

The company’s head of global security, Kim Albarella, describes the feature as a ‘powerful new tool’ that allows users to ‘take control’ of their account safety with confidence.

Accessing the tool is straightforward: users can find it within the app’s ‘Settings and privacy’ menu under ‘Security & permissions’.

California’s attempt to regulate online platforms faces legal setback

A federal judge in California has blocked a state law requiring online platforms to take extra measures to protect children, ruling it imposes unconstitutional burdens on tech companies.

The law, signed by Governor Gavin Newsom in 2022, aimed to prevent harm to young users by requiring businesses to assess risks, adjust privacy settings, and estimate users’ ages. Companies faced fines of up to $7,500 per child for intentional violations.

Judge Beth Freeman ruled that the law was too broad and infringed on free speech, siding with NetChoice, a group representing major tech firms, including Amazon, Google, Meta, and Netflix.

NetChoice argued the legislation effectively forced companies to act as government censors under the pretext of protecting privacy.

The ruling marks a victory for the tech industry, which has repeatedly challenged state-level regulations on content moderation and user protections.

California Attorney General Rob Bonta expressed disappointment in the decision and pledged to continue defending the law. The legal battle is expected to continue, as a federal appeals court had previously ordered a reassessment of the injunction.

The case highlights the ongoing conflict between government efforts to regulate online spaces and tech companies’ claims of constitutional overreach.

UK watchdog launches enforcement on file-sharing services

The UK’s internet watchdog, Ofcom, has launched a new enforcement programme under the Online Safety Act (OSA), targeting storage and file-sharing services due to concerns over the sharing of child sexual abuse material (CSAM).

The regulator has identified these services as particularly vulnerable to misuse for distributing CSAM and will assess the safety measures in place to prevent such activities.

As part of the enforcement programme, Ofcom has contacted a number of file-storage and sharing services, warning them that formal information requests will be issued soon.

These requests will require the services to submit details on the measures they have implemented or plan to introduce to combat CSAM, along with risk assessments related to illegal content.

Failure to comply with the requirements of the OSA could result in substantial penalties for these companies, with fines reaching up to 10% of their global annual turnover.

Ofcom’s crackdown highlights the growing responsibility for online services to prevent illegal content from being shared on their platforms.

FTC confirms no delay in Amazon trial

The US Federal Trade Commission (FTC) announced on Wednesday that it does not need to delay its September trial against Amazon, contradicting an earlier claim by one of its attorneys about resource shortages.

Jonathan Cohen, an FTC lawyer, retracted his statement that cost-cutting measures had strained the agency’s ability to proceed, assuring the court that the FTC is fully prepared to litigate the case.

FTC Chairman Andrew Ferguson reaffirmed the agency’s commitment, dismissing concerns over budget constraints and stating that the FTC will not back down from taking on Big Tech.

Earlier in the day, Cohen had described a ‘dire resource situation,’ citing employee resignations, a hiring freeze, and restrictions on legal expenses. However, he later clarified that these challenges would not impact the case.

The lawsuit, filed in 2023, accuses Amazon of using ‘dark patterns’ to mislead consumers into enrolling in automatically renewing Prime subscriptions, a program with over 200 million users.

With claims exceeding $1 billion, the trial is expected to be a high-profile battle between regulators and one of the world’s largest tech companies. Amazon has denied any wrongdoing, and three of its senior executives are also named in the case.
