Gemini AI for kids: A new era of safe, smart learning

Google appears to be working on a child-friendly version of its Gemini AI, one that would offer young users a safer and more controlled experience. A recent teardown of the Google app (version 16.12.39) uncovered strings referencing ‘kid users’, hinting at an upcoming feature tailored specifically for children.

While Gemini already assists users with creating stories, answering questions, and helping with homework, this kid-friendly version is expected to include stricter content policies and additional safeguards.

Google’s existing safety measures for teens suggest that Gemini for Kids may offer even tighter restrictions and enhanced content moderation.

It remains unclear how Google plans to implement this feature, but it is likely that Gemini for Kids will be automatically enabled for Google accounts registered under a child’s name.

Given global regulations on data collection for minors, Google will reportedly process children’s data in accordance with its privacy policies and the Gemini Apps Privacy Notice.

As AI becomes more integrated into education and daily life, a child-focused version of Gemini could give kids a more secure way to engage with the technology while offering parents peace of mind.

For more information on these topics, visit diplomacy.edu.

Apple expands AI features with new update

Apple Intelligence is expanding with new features, including Priority Notifications, which highlight time-sensitive alerts for users. This update is part of iOS 18.4, iPadOS 18.4, and macOS Sequoia 15.4, rolling out globally.

The AI suite is now available in more languages and has launched in the EU for iPhone and iPad users.

Additional improvements include a new Sketch style in Image Playground and the ability to generate ‘memory movies’ on Mac using simple text descriptions. Vision Pro users in the US can now access Apple Intelligence features like Writing Tools and Genmoji.

Apple’s AI rollout has been gradual since its introduction at WWDC last year, with features arriving in stages.

The update also brings fresh emojis, child safety enhancements, and the debut of Apple News+ Food, further expanding Apple’s digital ecosystem.

For more information on these topics, visit diplomacy.edu.

OnlyFans faces penalty in UK for age check inaccuracy

OnlyFans’ parent company, Fenix, has been fined £1.05 million by UK regulator Ofcom for providing inaccurate information about how it verifies users’ ages. The platform, known for hosting adult content, had claimed its age-checking technology flagged anyone under 23 for additional ID checks.

However, it was later revealed that the system was set to flag only those under 20, prompting Ofcom to take enforcement action. Ofcom said Fenix had failed in its legal obligation to provide accurate details, undermining the regulator’s ability to assess platform safety.

While Fenix accepted the penalty — leading to a 30% reduction in the fine — Ofcom stressed the importance of holding platforms to high standards, especially when protecting minors online. The investigation began in 2022 under UK regulations that predate the Online Safety Act, which is due to take full effect this year.

Why does it matter?

The act will require stronger age verification measures from platforms like OnlyFans, with a July 2025 deadline for full compliance. OnlyFans responded by affirming its commitment to transparency and welcomed the resolution of the case. While best known for adult subscriptions, the platform hosts mainstream content and launched a non-pornographic streaming service in 2023.

For more information on these topics, visit diplomacy.edu.

Meta cracks down on misinformation in Australia

Meta Platforms has announced new measures to combat misinformation and deepfakes in Australia ahead of the country’s upcoming national election.

The company’s independent fact-checking program, supported by Agence France-Presse and the Australian Associated Press, will detect and limit misleading content, while also removing any material that could incite violence or interfere with voting.

Deepfakes, AI-generated media designed to appear real, will also face stricter scrutiny. Meta stated that any content violating its policies would be removed or labelled as ‘altered’ to reduce its visibility.

Users sharing AI-generated content will be encouraged to disclose its origin, aiming to improve transparency.

Meta’s Australian policy follows similar strategies used in elections across India, the UK and the US.

The company is also navigating regulatory challenges in the country, including a proposed levy on big tech firms profiting from local news content and new requirements to enforce a ban on users under 16 by the end of the year.

For more information on these topics, visit diplomacy.edu.

Security Checkup arrives on TikTok to boost user account safety

TikTok has launched a new Security Checkup tool, offering users a simplified way to manage their account safety.

The dashboard provides an easy-to-navigate hub where users can review and update security settings such as login methods, two-step verification, and device access.

Designed to be user-friendly, it aims to encourage proactive security habits without overwhelming people with technical details.

The security portal functions similarly to tools offered by major tech companies like Google and Meta, reinforcing the importance of digital safety.

Features include passkey authentication for password-free logins, alerts for suspicious activity, and the ability to check which devices are logged into an account.

TikTok hopes the tool will make it easier for users to secure their profiles and prevent unauthorised access.

While the Security Checkup is a practical addition, it also arrives amid TikTok’s ongoing struggles in the US, where concerns over data privacy persist.

The company’s head of global security, Kim Albarella, described the feature as a ‘powerful new tool’ that allows users to ‘take control’ of their account safety with confidence.

Accessing the tool is straightforward: users can find it within the app’s ‘Settings and privacy’ menu under ‘Security & permissions’.

For more information on these topics, visit diplomacy.edu.

California’s attempt to regulate online platforms faces legal setback

A federal judge in California has blocked a state law requiring online platforms to take extra measures to protect children, ruling it imposes unconstitutional burdens on tech companies.

The law, signed by Governor Gavin Newsom in 2022, aimed to prevent harm to young users by requiring businesses to assess risks, adjust privacy settings, and estimate users’ ages. Companies faced fines of up to $7,500 per child for intentional violations.

Judge Beth Freeman ruled that the law was too broad and infringed on free speech, siding with NetChoice, a group representing major tech firms, including Amazon, Google, Meta, and Netflix.

NetChoice argued the legislation effectively forced companies to act as government censors under the pretext of protecting privacy.

The ruling marks a victory for the tech industry, which has repeatedly challenged state-level regulations on content moderation and user protections.

California Attorney General Rob Bonta expressed disappointment in the decision and pledged to keep defending the law. The legal battle is expected to continue, as a federal appeals court had previously ordered a reassessment of the injunction.

The case highlights the ongoing conflict between government efforts to regulate online spaces and tech companies’ claims of constitutional overreach.

For more information on these topics, visit diplomacy.edu.

UK watchdog launches enforcement on file-sharing services

The UK’s internet watchdog, Ofcom, has launched a new enforcement programme under the Online Safety Act (OSA), targeting storage and file-sharing services due to concerns over the sharing of child sexual abuse material (CSAM).

The regulator has identified these services as particularly vulnerable to misuse for distributing CSAM and will assess the safety measures in place to prevent such activities.

As part of the enforcement programme, Ofcom has contacted a number of file-storage and sharing services, warning them that formal information requests will be issued soon.

These requests will require the services to submit details on the measures they have implemented or plan to introduce to combat CSAM, along with risk assessments related to illegal content.

Failure to comply with the requirements of the OSA could result in substantial penalties for these companies, with fines reaching up to 10% of their global annual turnover.

Ofcom’s crackdown highlights the growing responsibility of online services to prevent illegal content from being shared on their platforms.

For more information on these topics, visit diplomacy.edu.

UK teachers embrace AI for future education

Teachers in Stoke-on-Trent gathered for a full-day event to discuss the role of AI in education. Organised by the Good Future Foundation, the session saw more than 40 educators, including Stoke-on-Trent South MP Allison Gardner, explore how AI can enhance teaching and learning. Gardner emphasised the government’s belief that AI represents a ‘generational opportunity’ for education in the UK.

The event highlighted both the promise and the challenges of integrating AI into UK schools. Attendees shared ideas on using AI to improve communication, particularly with families who speak English as an additional language, and to streamline access to school resources through automated chatbots. While the potential benefits are clear, many teachers expressed concerns about the risks associated with new technology.

Daniel Emmerson, executive director of the Good Future Foundation, stressed the importance of supporting educators in understanding and implementing AI. He explained that AI can help prepare students for a future dominated by this technology. Meanwhile, schools like Belgrave St Bartholomew’s Academy are already leading the way in using AI to improve lessons and prepare students for the opportunities AI will bring.

For more information on these topics, visit diplomacy.edu.

Google acknowledges AI being used for harmful content

Google has reported receiving over 250 complaints globally about its AI software being used to create deepfake terrorist content, according to Australia’s eSafety Commission.

The tech giant also acknowledged dozens of user reports alleging that its AI program, Gemini, was being exploited to generate child abuse material. Under Australian law, companies must provide regular updates on their efforts to minimise harm or risk hefty fines.

The eSafety Commission described Google’s disclosure as a ‘world-first insight’ into how AI tools may be misused to produce harmful and illegal content.

Between April 2023 and February 2024, Google received 258 reports of suspected AI-generated extremist material and 86 related to child exploitation. However, the company did not specify how many of these reports were verified.

A Google spokesperson stated that the company strictly prohibits AI-generated content related to terrorism, child abuse, and other illegal activities.

While it uses automated detection to remove AI-generated child exploitation material, the same system is not applied to extremist content.

The regulator has previously fined platforms like X (formerly Twitter) and Telegram for failing to meet reporting requirements, with both companies planning to appeal.

For more information on these topics, visit diplomacy.edu.

Tech giants challenge Australia’s exemption for YouTube

Major social media companies, including Meta, Snapchat, and TikTok, have urged Australia to reconsider its decision to exempt YouTube from a new law banning under-16s from social media platforms.

The legislation, passed in November, imposes strict age restrictions and threatens heavy fines for non-compliance. YouTube, however, is set to be excluded due to its educational value and parental supervision features.

Industry leaders argue that YouTube shares key features with other platforms, such as algorithmic content recommendations and social interaction tools, making its exemption inconsistent with the law’s intent.

Meta called for equal enforcement, while TikTok warned that excluding YouTube would create an ‘illogical, anticompetitive, and short-sighted’ regulation. Snapchat echoed these concerns, insisting that all platforms should be treated fairly.

Experts have pointed out that YouTube, like other platforms, can expose children to addictive and harmful content. The company has responded by strengthening content moderation and expanding its automated detection systems.

The debate highlights broader concerns over online safety and fair competition as Australia moves to enforce some of the world’s strictest social media regulations.

For more information on these topics, visit diplomacy.edu.