UK Online Safety Act under fire amid free speech and privacy concerns

The UK’s Online Safety Act, aimed at protecting children and removing illegal content online, has sparked fierce debate over its stringent requirements for social media platforms and websites hosting adult content.

Critics argue that the act’s broad application could unintentionally suppress free speech, as highlighted by social media platform X.

X claims the act results in the censorship of lawful content, reflecting concerns shared by politicians, free-speech campaigners, and content creators.

Moreover, public unease is evident: more than 468,000 people have signed a petition calling for the act’s repeal, citing privacy concerns over mandatory age checks that require users to hand over personal data on adult content sites.

Despite mounting criticism, the UK government is resolute in its commitment to the legislation. Technology Secretary Peter Kyle equates opposition to siding with online predators, emphasising child protection.

The government asserts that the act also mandates platforms to uphold freedom of expression alongside child safety obligations.

X criticises both the act’s broad scope and its tight compliance timelines, warning that they pressure platforms towards over-censorship, and calls for significant statutory revisions to protect personal freedoms while safeguarding children.

The government rebuffs claims that the Online Safety Act compromises free speech, with assurances that the law equally protects freedom of expression.

Meanwhile, Ofcom, the UK’s communications regulator, has opened investigations into several companies operating pornography sites, underlining how rigorously the act is being enforced.

Source: Reuters

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

VPN use surges in UK as age checks go live

The way UK internet users access adult content has undergone a significant change, with new age-verification rules now in force. Under Ofcom’s directive, anyone attempting to visit adult websites must now prove they are over 18, typically by providing credit card or personal ID details.

The move aims to prevent children from encountering harmful content online, but it has raised serious privacy and cybersecurity concerns.

Experts have warned that entering personal and financial information could expose users to cyber threats. Jake Moore from cybersecurity firm ESET pointed out that the lack of clear implementation standards leaves users vulnerable to data misuse and fraud.

There’s growing unease that ID verification systems might inadvertently offer a goldmine to scammers.

In response, many have started using VPNs to bypass the restrictions, with providers reporting a surge in UK downloads.

VPNs mask user locations, allowing access to blocked content, but free versions often lack the security features of paid services. As demand rises, cybersecurity specialists are urging users to be cautious.

Free VPNs can compromise user data through weak encryption or selling browsing histories to advertisers. Mozilla and EC-Council have stressed the importance of avoiding no-cost VPNs unless users know the risks.

Concerns grow over children’s use of AI chatbots

The growing use of AI chatbots and companions among children has raised safety concerns, with experts warning of inadequate protections and potential emotional risks.

Often not designed for young users, these apps lack adequate age verification and moderation features, leaving children exposed. The eSafety Commissioner noted that many children spend hours a day with AI companions, sometimes discussing topics such as mental health and sex.

Studies in Australia and the UK show high engagement, with many young users viewing the chatbots as real friends and sources of emotional advice.

Experts, including Professor Tama Leaver, warn that these systems are manipulative by design, built to keep users engaged without guaranteeing appropriate or truthful responses.

Despite the concerns, initiatives like Day of AI Australia promote digital literacy to help young people understand and navigate such technologies critically.

Organisations like UNICEF say AI could offer significant educational benefits if applied safely. However, they stress that Australia must take childhood digital safety more seriously as AI rapidly reshapes how young people interact, learn and socialise.

EU to launch digital age verification system by 2026

The European Union will roll out digital age verification across all member states by 2026. Under the Digital Services Act, this mandate requires platforms to verify user age using the new EU Digital Identity Wallet (EUDIW). Non-compliance could lead to fines of up to €18 million or 10% of global turnover.

Initially, five countries will pilot the system, which is designed to protect minors and promote online safety. The EUDIW uses privacy-preserving cryptographic proofs, allowing users to prove they are over 18 without uploading personal IDs.
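
The wallet’s actual protocol relies on asymmetric credentials and selective disclosure; the sketch below is a deliberately simplified illustration of the underlying idea only: a trusted issuer signs a single boolean claim, and a platform verifies just that claim, never seeing a name or date of birth. The shared HMAC key here stands in for real public-key signatures, and all names are hypothetical.

```python
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # in reality, an asymmetric issuer key

def issue_age_credential(over_18: bool) -> dict:
    """Issuer checks the holder's ID once, then signs a minimal claim.
    The credential carries no name, no date of birth, no document number."""
    claim = {"over_18": over_18, "nonce": secrets.token_hex(8)}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_age_credential(cred: dict) -> bool:
    """A platform checks the issuer's signature and the single boolean claim,
    learning nothing else about the user."""
    payload = json.dumps(cred["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["tag"]) and cred["claim"]["over_18"]

cred = issue_age_credential(over_18=True)
print(verify_age_credential(cred))  # True: age proven, identity never shared
```

Data minimisation falls out of the design: the verifier can confirm the claim is genuine and untampered, but there is nothing else in the credential to leak.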

Unlike the UK’s ID-upload approach, which triggered a rise in VPN usage, the EU model prioritises user anonymity and data minimisation. The system is being developed by Scytales and T-Systems.

Despite its benefits, privacy advocates have flagged concerns: although the proofs are anonymised, telecom providers could potentially analyse network-level signals to infer user behaviour.

Beyond age checks, the EUDIW will store and verify other credentials, including diplomas, licences, and health records. The initiative aims to create a trusted, cross-border digital identity ecosystem across Europe.

As a result, platforms and marketers must adapt. Behavioural tracking and personalised ads may become harder to implement. Smaller businesses might struggle with technical integration and rising compliance costs.

However, centralised control also raises risks. These include potential phishing attacks, service disruptions, and increased government visibility over online activity.

If successful, the EU’s digital identity model could inspire global adoption. It offers a privacy-first alternative to commercial or surveillance-heavy systems and marks a major leap forward in digital trust and safety.

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions—most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.
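
Google has not published the model’s internals, so the following is only a hedged sketch of how a behavioural classifier of this kind could work in principle: hypothetical features, hand-set weights, and a logistic score. None of it reflects Google’s actual signals or thresholds.

```python
import math

# Hypothetical behavioural features with hand-set weights; positive weights
# push the score towards "minor", negative ones towards "adult".
WEIGHTS = {
    "school_topic_searches": 1.4,    # homework, exams, revision
    "gaming_video_share": 0.9,       # fraction of watch time on gaming
    "late_night_activity": -0.6,     # activity after midnight
    "finance_topic_searches": -1.2,  # mortgages, pensions, invoices
}
BIAS = -0.5

def minor_probability(signals: dict) -> float:
    """Logistic score: estimated probability the account holder is under 18."""
    z = BIAS + sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def classify(signals: dict, threshold: float = 0.5) -> str:
    return "likely_minor" if minor_probability(signals) >= threshold else "likely_adult"

teen = {"school_topic_searches": 1.0, "gaming_video_share": 0.8, "late_night_activity": 0.2}
adult = {"finance_topic_searches": 1.0, "late_night_activity": 0.6, "gaming_video_share": 0.1}
print(classify(teen), classify(adult))  # likely_minor likely_adult
```

The threshold is the policy lever: a lower one catches more minors at the cost of misclassifying more adults, which is exactly why the article notes Google pairs the model with an appeal route (ID upload or selfie) for users flagged in error.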

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and also add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy — or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.

Children’s screen time debate heats up as experts question evidence

A growing number of scientists are questioning whether fears over children’s screen time are truly backed by evidence. While many parents worry about smartphones, social media, and gaming, experts say the science behind these concerns is often flawed or inconsistent.

Professor Pete Etchells of Bath Spa University and other researchers argue that common claims about screen time harming adolescent brains or causing depression lack strong evidence.

Much of the existing research relies on self-reported data and fails to account for critical factors like loneliness or the type of screen engagement.

One major study found no link between screen use and poor mental wellbeing, while others stress the importance of distinguishing between harmful content and positive online interaction.

Still, many campaigners and psychologists maintain that screen restrictions are vital. Groups such as Smartphone Free Childhood are pushing to delay access to smartphones and social media.

Others, like Professor Jean Twenge, say the risks of screen overuse—less sleep, reduced social time, and more time alone—create a ‘terrible formula for mental health.’

With unclear guidance and evolving science, parents face tough choices in a rapidly changing tech world. As screens become more common via AI, smart glasses, and virtual communities, the focus shifts to how children can use technology wisely and safely.

VPN dangers highlighted as UK’s Online Safety Act comes into force

Britons are being urged to proceed with caution before turning to virtual private networks (VPNs) in response to the new age verification requirements set by the Online Safety Act.

The law, now in effect, aims to protect young users by restricting access to adult and sensitive content unless users verify their age.

Instead of offering anonymous access, some platforms now demand personal details such as full names, email addresses, and even bank information to confirm a user’s age.

Although the legislation targets adult websites, many people have reported being blocked from accessing less controversial content, including alcohol-related forums and parts of Wikipedia.

As a result, more users are considering VPNs to bypass these checks. However, cybersecurity experts warn that many VPNs can pose serious risks by exposing users to scams, data theft, and malware. Without proper research, users might install software that compromises their privacy rather than protecting it.

With Ofcom reporting that eight per cent of children aged 8 to 14 in the UK have accessed adult content online, the new rules are viewed as a necessary safeguard. Still, concerns remain about the balance between online safety and digital privacy for adult users.

ChatGPT gets smarter with Study Mode to support active learning

OpenAI has launched a new Study Mode in ChatGPT to help users engage more deeply with learning. Rather than simply providing answers, the feature guides users through concepts and problem-solving step-by-step. It is designed to support critical thinking and improve long-term understanding.

The company developed the feature with educators, scientists, and pedagogy experts. They aimed to ensure the AI supports active learning and doesn’t just deliver quick fixes. The result is a mode that encourages curiosity, reflection, and metacognitive development.

According to OpenAI, Study Mode allows users to approach subjects more critically and thoroughly. It breaks down complex ideas, asks questions, and helps manage cognitive load during study. Instead of spoon-feeding, the AI acts more like a tutor than a search engine.

The shift reflects a broader trend in educational technology — away from passive learning tools. Many students turn to AI for homework help, but educators have warned of over-reliance. Study Mode attempts to strike a balance by promoting engagement over shortcuts.

For instance, rather than giving the complete solution to a maths problem, Study Mode might ask: ‘What formula might apply here?’ or ‘How could you simplify this expression first?’ This approach nudges students to participate in the process and build fundamental problem-solving skills.
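
OpenAI has not published Study Mode’s internals, but the scaffolding pattern described above, revealing guiding questions one step at a time before the solution, can be sketched as follows (the problem, hints, and class names are all illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class ScaffoldedProblem:
    """One problem with a ladder of hints, revealed one step at a time,
    so the learner works through each guiding question before the answer."""
    question: str
    hints: list = field(default_factory=list)
    answer: str = ""
    _step: int = 0

    def next_prompt(self) -> str:
        # Offer each guiding question in turn; only show the solution
        # once every hint has been exhausted.
        if self._step < len(self.hints):
            hint = self.hints[self._step]
            self._step += 1
            return hint
        return f"Answer: {self.answer}"

problem = ScaffoldedProblem(
    question="Solve 2x + 6 = 10",
    hints=[
        "What could you subtract from both sides first?",
        "Now that you have 2x = 4, how do you isolate x?",
    ],
    answer="x = 2",
)

print(problem.next_prompt())  # first guiding question
print(problem.next_prompt())  # second guiding question
print(problem.next_prompt())  # only now the answer
```

The design choice is the ordering: the answer is unreachable until the learner has been walked through each intermediate step, which is the opposite of a search engine returning the solution first.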

It also adapts to different learning needs. In science, it might walk through hypotheses and reasoning; in the humanities, it may help analyse a passage or structure an essay. Prompting users to think aloud mirrors effective tutoring strategies.

OpenAI says feedback from teachers helped shape the feature’s tone and pacing. One key aim was to avoid overwhelming learners with too much information at once. Instead, Study Mode introduces concepts incrementally, supporting better retention and understanding.

The company also consulted cognitive scientists to align with best practices in memory and comprehension. These include encouraging users to reflect on what they have learned and why specific steps matter. Such strategies are known to improve both academic performance and self-directed learning.

Study Mode is built into ChatGPT and can be toggled on or off. Users can activate it when tackling a tricky topic or exploring new material, then switch back to normal responses for broader queries or summarised answers.

Educators have expressed cautious optimism about the update. Some see it as a tool supporting homework, revision, or assessment preparation. However, they also warn that no AI can replace direct teaching or personalised guidance.

Tools like this could be valuable in under-resourced settings or for independent learners.

Study Mode’s interactive style may help level the playing field for students without regular academic support. It also gives parents and tutors a new way to guide learners without doing the work for them.

Earlier efforts included teacher guides and classroom use cases, but Study Mode marks a more direct push to reshape how students use AI in learning.

It positions ChatGPT not as a cheat sheet, but as a co-pilot for intellectual growth.

Looking ahead, OpenAI says it plans to iterate based on user feedback and teacher insights. Future updates may include subject-specific prompts, progress tracking, or integrations with educational platforms. The goal is to build a tool that adapts to learning styles without compromising depth or rigour.

As AI continues to reshape education, tools like Study Mode may help answer a central question: Can technology support genuine understanding, instead of just faster answers? With Study Mode, OpenAI believes the answer is yes, if used wisely.

Free VPN use surges in UK after online safety law

The UK’s new Online Safety Act has increased VPN use, as websites introduce stricter age restrictions to comply with the law. Popular platforms such as Reddit and Pornhub are either blocking minors or adding age verification, pushing many young users to turn to free VPNs to bypass the rules.

In the days following the Act’s enforcement on 25 July, five of the ten most-downloaded free apps in the UK were VPNs.

However, cybersecurity experts warn that unvetted free VPNs can pose serious risks, with some selling user data or containing malware.

Using a VPN routes all of your internet traffic through an external server, effectively handing the VPN operator access to your browsing data.
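
A toy model makes the trade-off concrete: routing through a VPN does not remove the observer, it replaces the ISP with the VPN provider. The hostnames below are placeholders.

```python
# Who can observe which destination hostnames? Without a VPN, the ISP sees
# them all; with one, the ISP sees only the tunnel endpoint while the VPN
# provider sees every real destination. Trust shifts, it does not vanish.

def visible_destinations(traffic: list, via_vpn: bool) -> dict:
    """Return which intermediary observes which destination hostnames."""
    if via_vpn:
        return {"isp": ["vpn.example.net"], "vpn_provider": list(traffic)}
    return {"isp": list(traffic), "vpn_provider": []}

sites = ["news.example.org", "bank.example.com"]
print(visible_destinations(sites, via_vpn=False))
print(visible_destinations(sites, via_vpn=True))
```

This is why the provider’s privacy policy matters so much: a free VPN that logs or sells this destination list is strictly worse for privacy than using no VPN at all.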

While reputable providers like Proton VPN offer safe free tiers supported by paid plans, lesser-known services often lack transparency and may exploit users for profit.

Consumers are urged to check for clear privacy policies, audited security practices and credible business information before using a VPN. Trusted options for safer browsing include Proton VPN, TunnelBear, Windscribe, and hide.me.

Australia reverses its stance and restricts YouTube for children under 16

Australia has announced that YouTube will be banned for children under 16 starting in December, reversing its earlier exemption from strict new social media age rules. The decision follows growing concerns about online harm to young users.

Platforms like Facebook, Instagram, Snapchat, TikTok, and X are already subject to the upcoming restrictions, and YouTube will now join the list of ‘age-restricted social media platforms’.

From 10 December, all such platforms must ensure their users are aged 16 or over, or face fines of up to AU$50 million (£26 million) for failing to take adequate steps to verify age. Although those steps remain undefined, users will not need to upload official documents such as passports or licences.

The government has said platforms must find alternatives instead of relying on intrusive ID checks.

Communications Minister Anika Wells defended the policy, stating that four in ten Australian children reported recent harm on YouTube. She insisted the government would not back down under legal pressure from Alphabet Inc., YouTube’s US-based parent company.

Children can still view videos, but won’t be allowed to hold personal YouTube accounts.

YouTube criticised the move, claiming the platform is not social media but a video library often accessed through TVs. Prime Minister Anthony Albanese said Australia would campaign at a UN forum in September to promote global backing for social media age restrictions.

Exemptions will apply to apps used mainly for education, health, messaging, or gaming, which are considered less harmful.
