Instagram calls for EU-wide teen protection rules

Instagram is calling on the European Union to introduce new regulations requiring app stores to implement age verification and parental approval systems.

The platform argues that such protections, applied consistently across all apps, are essential to safeguarding teenagers from harmful content online.

‘The EU needs consistent standards for all apps, to help keep teens safe, empower parents and preserve privacy,’ Instagram said in a blog post.

The company believes the most effective way to achieve this is by introducing protections at the source—before teenagers download apps from the Apple App Store or Google Play Store.

Instagram is proposing that app stores verify users’ ages and require parental approval for teen app downloads. The social media platform cites new research from Morning Consult showing that three in four parents support such legislation.

Most parents also view app stores, rather than individual apps, as the safer and more manageable point for controlling what their teens can access.

To reinforce its position, Instagram points to its own safety efforts, such as the introduction of Teen Accounts. These private-by-default profiles limit teen exposure to messages and content from unknown users, and apply stricter filters to reduce exposure to sensitive material.

Instagram says it is working with civil society groups, industry partners, and European policymakers to push for rules that protect young users across platforms. With teen safety a growing concern, the company insists that industry-wide, enforceable solutions are urgently needed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google to roll out Gemini AI for kids under 13

Google has announced plans to introduce its Gemini AI platform to children under 13, a move that has sparked mixed reactions.

Parents recently received notifications about the rollout, with Google stating that children will be able to use Gemini for tasks such as homework help, answering general questions, and even bedtime stories.

The announcement has triggered concern among some organisations due to the risks associated with young users interacting with AI.

Critics point out that AI models have previously struggled to maintain child-appropriate safeguards and worry that children may not fully grasp the implications of engaging with such technology. Despite these issues, others have applauded Google’s decision to keep parents closely involved.

Taylor Barkley, Director of Public Policy at the Abundance Institute, praised Google for prioritising parental involvement. He noted that while risks exist, the best approach is not to impose strict bans but to work collaboratively with parents and caregivers to manage children’s AI usage.

‘Google should be applauded for proactively notifying parents,’ Barkley said in a statement. ‘When it comes to new technologies, parents come first.’

To ensure parental oversight, Google will require children’s access to Gemini to be managed through Family Link, its parental control platform. Family Link allows parents to monitor device usage, manage privacy settings, share location, and establish healthy digital habits for their families.

As AI continues to permeate everyday life, Google’s decision highlights the delicate balance between offering educational opportunities and ensuring the safe and responsible use of technology among younger users.

Gemini AI now accessible to kids via Family Link

Google has announced that children under the age of 13 will soon be able to access its Gemini AI chatbot through Family Link accounts. The service will allow parents to monitor their child’s use, set screen time limits, and disable access if desired.

Gemini, designed to assist with tasks like homework and storytelling, includes safeguards to prevent inappropriate content and protect child users. Google acknowledged the possibility of errors in the AI’s responses and urged parental oversight.

Google emphasised that data collected from child users will not be used to train AI models. Parents will be notified when their child first uses Gemini and are advised to encourage critical thinking and remind children not to share personal information with the chatbot.

Despite these precautions, child safety advocates have voiced concerns. Organisations such as Fairplay argue that allowing young children to interact with AI chatbots could expose them to risks, citing previous incidents involving other AI platforms.

International bodies, including UNICEF, have also highlighted the need for stringent regulations to safeguard children’s rights in an increasingly digital world.

New AI app offers early support for parents of neurodivergent children

A new app called Hazel, developed by Bristol-based company Spicy Minds, offers parents a powerful tool to better understand and support their neurodivergent children while waiting for formal diagnoses. Using AI, the app runs a series of tests and then provides personalised strategies tailored to everyday challenges like school routines or holidays.

While it doesn’t replace a medical diagnosis, Hazel aims to fill a critical gap for families stuck on long waiting lists. Spicy Minds CEO Ben Cosh emphasised the need for quicker support, noting that many families wait years before receiving an autism diagnosis through the UK’s NHS.

‘Parents shouldn’t have to wait years to understand their child’s needs and get practical support,’ he said.

In Bristol alone, around 7,000 children are currently on waiting lists for an autism assessment, a number that continues to rise. Parents like Nicola Bennett, who waited five years for her son’s diagnosis, believe the app could be life-changing.

She praised Hazel for offering real-time guidance for managing sensory needs and daily planning—tools she wished she’d had much earlier. She also suggested integrating links to local support groups and services to make the app even more impactful.

By helping reduce stress and giving families a head start on understanding neurodiversity, Hazel represents a meaningful step toward more accessible, tech-driven support for parents navigating a complex and often delayed healthcare system.

US lawmakers push for app store age checks

A new bill introduced by US lawmakers could force app stores like Apple’s App Store and Google Play to verify the age of all users, in a move aimed at increasing online safety for minors.

Known as the App Store Accountability Act, the legislation would require age categorisation and parental consent before minors can download apps or make in-app purchases. If passed, the law would apply to platforms with at least five million users and would come into effect one year after approval.

The bill proposes dividing users into age brackets — from ‘young child’ to ‘adult’ — and holding app stores accountable for enforcing access restrictions.

Lawmakers behind the bill, Republican Senator Mike Lee and Representative John James, argue that Big Tech companies must take responsibility for limiting children’s exposure to harmful content. They believe app stores are the right gatekeepers for verifying age and protecting minors online.

Privacy advocates and tech companies have voiced concern about the bill’s implications. Legal experts warn that verifying users’ ages may require sensitive personal data, such as ID documents or facial recognition scans, raising the risk of data misuse.

Apple said such verification would apply to all users, not just children, and criticised the idea as counterproductive to privacy.

The proposal has widened a rift between app store operators and social media platforms. While Meta, X, and Snap back centralised age checks at the app store level, Apple and Google accuse them of shifting the burden of responsibility.

Both tech giants emphasise the importance of shared responsibility and continue to engage with lawmakers on crafting practical and privacy-conscious solutions.

UK refuses to include Online Safety Act in US trade talks

The UK government has ruled out watering down the Online Safety Act as part of any trade negotiations with the US, despite pressure from American tech giants.

Speaking to MPs on the Science, Innovation and Technology Committee, Baroness Jones of Whitchurch, the parliamentary under-secretary for online safety, stated unequivocally that the legislation was ‘not up for negotiation’.

‘There have been clear instructions from the Prime Minister,’ she said. ‘The Online Safety Act is not part of the trade deal discussions. It’s a piece of legislation — it can’t just be negotiated away.’

Reports had suggested that President Donald Trump’s administration might seek to make loosening the UK’s online safety rules a condition of a post-Brexit trade agreement, following lobbying from large US-based technology firms.

However, Baroness Jones said the legislation was well into its implementation phase and that ministers were ‘happy to reassure everybody’ that the government is sticking to it.

The Online Safety Act will require tech platforms that host user-generated content, such as social media firms, to take active steps to protect users — especially children — from harmful and illegal content.

Non-compliant companies may face fines of up to £18 million or 10% of global turnover, whichever is greater. In extreme cases, platforms could be blocked from operating in the UK.

Mark Bunting, a representative of Ofcom, which is overseeing enforcement of the new rules, said the regulator would have taken action had the legislation been in force during the riots that followed last summer’s Southport attack, which were exacerbated by online misinformation.

His comments contrasted with tech firms including Meta, TikTok and X, which claimed in earlier hearings that little would have changed under the new rules.

Deepfake victims gain new rights with House-approved bill

The US House of Representatives has passed the ‘Take It Down’ Act with overwhelming bipartisan support, aiming to protect Americans from the spread of deepfake and revenge pornography.

The bill, approved by a 409-2 vote, criminalises the distribution of non-consensual intimate imagery—including AI-generated content—and now heads to President Donald Trump for his signature.

First Lady Melania Trump, who returned to public advocacy earlier this year, played a key role in supporting the legislation. She lobbied lawmakers last month and celebrated the bill’s passage, saying she was honoured to help guide it through Congress.

The White House confirmed she will attend the signing ceremony.

The law requires social media platforms and similar websites to remove such harmful content upon request from victims, instead of allowing it to remain unchecked.

Victims of deepfake pornography have included both public figures such as Taylor Swift and Alexandria Ocasio-Cortez, and private individuals like high school students.

Introduced by Republican Senator Ted Cruz and backed by Democratic lawmakers including Amy Klobuchar and Madeleine Dean, the bill reflects growing concern across party lines about online abuse.

Melania Trump, echoing her earlier ‘Be Best’ initiative, stressed the need to ensure young people—especially girls—can navigate the internet safely instead of being left vulnerable to digital exploitation.

UK government urged to outlaw apps creating deepfake abuse images

The Children’s Commissioner has urged the UK Government to ban AI apps that create sexually explicit images through ‘nudification’ technology. AI tools capable of manipulating real photos to make people appear naked are being used to target children.

Concerns in the UK are growing as these apps are now widely accessible online, often through social media and search platforms. In a newly published report, Dame Rachel de Souza warned that children, particularly girls, are altering their online behaviour out of fear of becoming victims of such technologies.

She stressed that while AI holds great potential, it also poses serious risks to children’s safety. The report also recommends stronger legal duties for AI developers and improved systems to remove explicit deepfake content from the internet.

UK introduces landmark online safety rules to protect children

The UK’s regulator, Ofcom, has unveiled new online safety rules to provide stronger protections for children, requiring platforms to adjust algorithms, implement stricter age checks, and swiftly tackle harmful content by 25 July or face hefty fines. These measures target sites hosting pornography or content promoting self-harm, suicide, and eating disorders, demanding more robust efforts to shield young users.

Ofcom chief Dame Melanie Dawes called the regulations a ‘gamechanger,’ emphasising that platforms must adapt if they wish to serve under-18s in the UK. While supporters like former Facebook safety officer Prof Victoria Baines see this as a positive step, critics argue the rules don’t go far enough, with campaigners expressing disappointment over perceived gaps, particularly in addressing encrypted private messaging.

The rules, introduced under the Online Safety Act and pending parliamentary approval, include over 40 obligations such as clearer terms of service for children, annual risk reviews, and dedicated accountability for child safety. The NSPCC welcomed the move but urged Ofcom to tighten oversight, especially where hidden online risks remain unchecked.

Meta uses AI to spot teens lying about age

Meta has announced it is ramping up efforts to protect teenagers on Instagram by deploying AI to detect users who may have lied about their age. The technology will automatically place suspected underage users into Teen Accounts, even if their profiles state they are adults.

These special accounts come with stricter safety settings designed for users under 16. Those who believe they’ve been misclassified will have the option to adjust their settings manually.

Instead of relying solely on self-reported birthdates, Meta is using its AI to analyse behaviour and signals that suggest a user might be younger than claimed.

While the company has used this technology to estimate age ranges before, it is now applying it more aggressively to catch teens who attempt to bypass the platform’s safeguards. The tech giant insists it’s working to ensure the accuracy of these classifications to prevent mistakes.

Alongside this new AI tool, Meta will also begin sending notifications to parents about their children’s Instagram settings.

These alerts, which are sent only to parents who have Instagram accounts of their own, aim to encourage open conversations at home about the importance of honest age representation online.

Teen Accounts were first introduced last year and are designed to limit access to harmful content, reduce contact from strangers, and promote healthier screen time habits.

Instead of granting unrestricted access, these accounts are private by default, block unsolicited messages, and remind teens to take breaks after prolonged scrolling.

Meta says the goal is to adapt to the digital age and partner with parents to make Instagram a safer space for young users.
