Nearly half of young people would prefer life without the internet

Nearly half of UK youths aged 16 to 21 say they would prefer to grow up without the internet, a new survey reveals. The British Standards Institution found that 68% feel worse after using social media and half would support a digital curfew past 10 p.m.

These findings come as the government considers app usage limits for platforms like TikTok and Instagram. The study also showed that many UK young people feel compelled to hide their online behaviour: 42% admitted lying to parents, and a similar number have fake or burner accounts.

More worryingly, 27% said they have shared their location with strangers, while others admitted pretending to be someone else entirely. Experts argue that digital curfews alone won’t reduce exposure to online harms without broader safeguards in place.

Campaigners and charities are calling for urgent legislation that puts children’s safety before tech profits. The Molly Rose Foundation stressed the danger of algorithms pushing harmful content, while the NSPCC urged a shift towards less addictive and safer online spaces.

The majority of young people surveyed want more protection online and clearer action from tech firms and policymakers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US bans nonconsensual explicit deepfakes nationwide

The US is introducing a landmark federal law aimed at curbing the spread of non-consensual explicit deepfake images, following mounting public outrage.

President Donald Trump is expected to sign the Take It Down Act, which will criminalise the sharing of explicit images, whether real or AI-generated, without consent. The law will also require tech platforms to remove such content within 48 hours of notification, instead of leaving the matter to patchy state laws.

The legislation is one of the first at the federal level to directly tackle the misuse of AI-generated content. It builds on earlier laws that protected children but had left adults vulnerable due to inconsistent state regulations.

The bill received rare bipartisan support in Congress and was backed by over 100 organisations, including tech giants like Meta, TikTok and Google. First Lady Melania Trump also supported the act, hosting a teenage victim of deepfake harassment during the president’s address to Congress.

The act was prompted in part by incidents like that of Elliston Berry, a Texas high school student targeted by a classmate who used AI to alter her social media image into a nude photo. Similar cases involving teenage girls across the country underscored the urgency of action.

Tech companies had already started offering tools to remove explicit images, but the lack of consistent enforcement allowed harmful content to persist on less cooperative platforms.

Supporters of the law argue it sends a strong societal signal that such exploitation will no longer go unchallenged.

Advocates like Imran Ahmed and Ilana Beller emphasised that while no law is a perfect solution, this one forces platforms to take real responsibility and offers victims some much-needed protection and peace of mind.

Kick faces investigation after ignoring Ofcom risk assessment request

Ofcom has launched two investigations into Kick Online Entertainment, the provider of a pornography website, over potential breaches of the Online Safety Act.

The regulator said the company failed to respond to a statutory request for a risk assessment related to illegal content appearing on the platform.

As a result, Ofcom is investigating whether Kick has failed to meet its legal duties to complete, and keep a record of, such a risk assessment, and whether it failed to respond to the regulator's information request.

Ofcom confirmed it had received complaints about potentially illegal material on the site, including child sexual abuse content and extreme pornography.

It is also considering a third investigation into whether the platform has implemented adequate safety measures to protect users from such material—another requirement under the Act.

Under the Online Safety Act, firms found in breach can face fines of up to £18 million or 10% of their global revenue, whichever is higher. In the most severe cases, Ofcom can pursue court orders to block UK access to the website or compel payment providers and advertisers to cut ties with the platform.

Instagram calls for EU-wide teen protection rules

Instagram is calling on the European Union to introduce new regulations requiring app stores to implement age verification and parental approval systems.

The platform argues that such protections, applied consistently across all apps, are essential to safeguarding teenagers from harmful content online.

‘The EU needs consistent standards for all apps, to help keep teens safe, empower parents and preserve privacy,’ Instagram said in a blog post.

The company believes the most effective way to achieve this is by introducing protections at the source—before teenagers download apps from the Apple App Store or Google Play Store.

Instagram is proposing that app stores verify users’ ages and require parental approval for teen app downloads. The social media platform cites new research from Morning Consult showing that three in four parents support such legislation.

Most parents also view app stores, rather than individual apps, as the safer and more manageable point for controlling what their teens can access.

To reinforce its position, Instagram points to its own safety efforts, such as the introduction of Teen Accounts. These private-by-default profiles restrict messages and content from unknown users and apply stricter filters to limit sensitive material.

Instagram says it is working with civil society groups, industry partners, and European policymakers to push for rules that protect young users across platforms. With teen safety a growing concern, the company insists that industry-wide, enforceable solutions are urgently needed.

Google to roll out Gemini AI for kids under 13

Google has announced plans to introduce its Gemini AI platform to children under 13, a move that has sparked mixed reactions.

Parents recently received notifications about the rollout, with Google stating that children will be able to use Gemini for tasks such as homework help, answering general questions, and even bedtime stories.

The announcement has triggered concern among some organisations due to the risks associated with young users interacting with AI.

Critics point out that AI models have previously struggled to maintain child-appropriate safeguards and worry that children may not fully grasp the implications of engaging with such technology. Despite these issues, others have applauded Google’s decision to keep parents closely involved.

Taylor Barkley, Director of Public Policy at the Abundance Institute, praised Google for prioritising parental involvement. He noted that while risks exist, the best approach is not to impose strict bans but to work collaboratively with parents and caregivers to manage children’s AI usage.

‘Google should be applauded for proactively notifying parents,’ Barkley said in a statement. ‘When it comes to new technologies, parents come first.’

To ensure parental oversight, Google will require children’s access to Gemini to be managed through Family Link, its parental control platform. Family Link allows parents to monitor device usage, manage privacy settings, share location, and establish healthy digital habits for their families.

As AI continues to permeate everyday life, Google’s decision highlights the delicate balance between offering educational opportunities and ensuring the safe and responsible use of technology among younger users.

Gemini AI now accessible to kids via Family Link

Google has announced that children under the age of 13 will soon be able to access its Gemini AI chatbot through Family Link accounts. The service will allow parents to monitor their child’s use, set screen time limits, and disable access if desired.

Gemini, designed to assist with tasks like homework and storytelling, includes safeguards to prevent inappropriate content and protect child users. Google acknowledged the possibility of errors in the AI’s responses and urged parental oversight.

Google emphasised that data collected from child users will not be used to train AI models. Parents will be notified when their child first uses Gemini and are advised to encourage critical thinking and remind children not to share personal information with the chatbot.

Despite these precautions, child safety advocates have voiced concerns. Organisations such as Fairplay argue that allowing young children to interact with AI chatbots could expose them to risks, citing previous incidents involving other AI platforms.

International bodies, including UNICEF, have also highlighted the need for stringent regulations to safeguard children’s rights in an increasingly digital world.

New AI app offers early support for parents of neurodivergent children

A new app called Hazel, developed by Bristol-based company Spicy Minds, offers parents a powerful tool to better understand and support their neurodivergent children while they wait for a formal diagnosis. Using AI, the app runs a series of tests and then provides personalised strategies tailored to everyday challenges such as school routines or holidays.

While it doesn’t replace a medical diagnosis, Hazel aims to fill a critical gap for families stuck on long waiting lists. Spicy Minds CEO Ben Cosh emphasised the need for quicker support, noting that many families wait years before receiving an autism diagnosis through the UK’s NHS.

‘Parents shouldn’t have to wait years to understand their child’s needs and get practical support,’ he said.

In Bristol alone, around 7,000 children are currently on waiting lists for an autism assessment, a number that continues to rise. Parents like Nicola Bennett, who waited five years for her son’s diagnosis, believe the app could be life-changing.

She praised Hazel for offering real-time guidance for managing sensory needs and daily planning—tools she wished she’d had much earlier. She also suggested integrating links to local support groups and services to make the app even more impactful.

By helping reduce stress and giving families a head start on understanding neurodiversity, Hazel represents a meaningful step toward more accessible, tech-driven support for parents navigating a complex and often delayed healthcare system.

US lawmakers push for app store age checks

A new bill introduced by US lawmakers could force app stores like Apple’s App Store and Google Play to verify the age of all users, in a move aimed at increasing online safety for minors.

Known as the App Store Accountability Act, the legislation would require age categorisation and parental consent before minors can download apps or make in-app purchases. If passed, the law would apply to platforms with at least five million users and would come into effect one year after approval.

The bill proposes dividing users into age brackets — from ‘young child’ to ‘adult’ — and holding app stores accountable for enforcing access restrictions.

Lawmakers behind the bill, Republican Senator Mike Lee and Representative John James, argue that Big Tech companies must take responsibility for limiting children’s exposure to harmful content. They believe app stores are the right gatekeepers for verifying age and protecting minors online.

Privacy advocates and tech companies have voiced concern about the bill’s implications. Legal experts warn that verifying users’ ages may require sensitive personal data, such as ID documents or facial recognition scans, raising the risk of data misuse.

Apple said such verification would have to apply to all users, not just children, and argued that the requirement would undermine user privacy.

The proposal has widened a rift between app store operators and social media platforms. While Meta, X, and Snap back centralised age checks at the app store level, Apple and Google accuse them of shifting the burden of responsibility.

Both tech giants emphasise the importance of shared responsibility and continue to engage with lawmakers on crafting practical and privacy-conscious solutions.

UK refuses to include Online Safety Act in US trade talks

The UK government has ruled out watering down the Online Safety Act as part of any trade negotiations with the US, despite pressure from American tech giants.

Speaking to MPs on the Science, Innovation and Technology Committee, Baroness Jones of Whitchurch, the parliamentary under-secretary for online safety, stated unequivocally that the legislation was ‘not up for negotiation’.

‘There have been clear instructions from the Prime Minister,’ she said. ‘The Online Safety Act is not part of the trade deal discussions. It’s a piece of legislation — it can’t just be negotiated away.’

Reports had suggested that President Donald Trump’s administration might seek to make loosening the UK’s online safety rules a condition of a post-Brexit trade agreement, following lobbying from large US-based technology firms.

However, Baroness Jones said the legislation was well into its implementation phase and that ministers were ‘happy to reassure everybody’ that the government is sticking to it.

The Online Safety Act will require tech platforms that host user-generated content, such as social media firms, to take active steps to protect users — especially children — from harmful and illegal content.

Non-compliant companies may face fines of up to £18 million or 10% of global turnover, whichever is greater. In extreme cases, platforms could be blocked from operating in the UK.

Mark Bunting, a representative of Ofcom, which is overseeing enforcement of the new rules, said the regulator would have taken action had the legislation been in force during the riots that followed last summer’s Southport attack, which were fuelled by online misinformation.

His comments contrasted with tech firms including Meta, TikTok and X, which claimed in earlier hearings that little would have changed under the new rules.

Deepfake victims gain new rights with House-approved bill

The US House of Representatives has passed the Take It Down Act with overwhelming bipartisan support, aiming to protect Americans from the spread of deepfake and revenge pornography.

The bill, approved by a 409-2 vote, criminalises the distribution of non-consensual intimate imagery—including AI-generated content—and now heads to President Donald Trump for his signature.

First Lady Melania Trump, who returned to public advocacy earlier this year, played a key role in supporting the legislation. She lobbied lawmakers last month and celebrated the bill’s passage, saying she was honoured to help guide it through Congress.

The White House confirmed she will attend the signing ceremony.

The law requires social media platforms and similar websites to remove such harmful content at victims’ request, rather than allowing it to remain online unchecked.

Victims of deepfake pornography have included both public figures such as Taylor Swift and Alexandria Ocasio-Cortez, and private individuals like high school students.

Introduced by Republican Senator Ted Cruz and backed by Democratic lawmakers including Amy Klobuchar and Madeleine Dean, the bill reflects growing concern across party lines about online abuse.

Melania Trump, echoing her earlier ‘Be Best’ initiative, stressed the need to ensure young people—especially girls—can navigate the internet safely instead of being left vulnerable to digital exploitation.
