Google cautions Australia on youth social media ban proposal

US tech giant Google, which owns YouTube, has reiterated its commitment to children’s online safety while cautioning against Australia’s proposed ban on social media use for those under 16.

Speaking before the Senate Environment and Communications References Committee, Google’s Public Policy Senior Manager Rachel Lord said the legislation, though well-intentioned, may be difficult to enforce and could have unintended effects.

Lord highlighted Google’s 23-year presence in Australia, noting that the company contributed over $53 billion to the economy in 2024, while YouTube’s creative ecosystem added $970 million to GDP and supported more than 16,000 jobs.

She said the company’s investments, including the $1 billion Digital Future Initiative, reflect its long-term commitment to Australia’s digital development and infrastructure.

According to Lord, YouTube already provides age-appropriate products and parental controls designed to help families manage their children’s experiences online.

Forcing children to use YouTube without accounts, she argued, would remove these protections and risk undermining safe access to the educational and creative content widely used in classrooms, music, and sport.

She emphasised that YouTube functions primarily as a video streaming platform rather than a social media network, serving as a learning resource for millions of Australian children.

Lord called for legislation that strengthens safety mechanisms instead of restricting access, saying the focus should be on effective safeguards and parental empowerment rather than outright bans.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Purple Fest highlights AI for disabilities

Entrepreneurs at the International Purple Fest in Goa, India, held from 9 to 12 October 2025, showcased how AI is transforming assistive technologies. Innovations such as conversational screen readers, adaptive dashboards, and real-time captioning are empowering millions of people with disabilities worldwide.

Designed with input from those with lived experience, these tools turn barriers into opportunities for learning, working, and leading independently.

Surashree Rahane, born with club foot and polymelia, founded Yearbook Canvas and champions inclusive AI. Collaborating with Newton School of Technology near New Delhi, she develops adaptive learning platforms tailored to diverse learners.

‘AI can democratise education,’ she stated, ‘but only if trained to avoid perpetuating biases.’ Her work addresses structural barriers like inaccessible systems and biased funding networks.

Prateek Madhav, CEO of AssisTech Foundation, described AI as ‘the great equaliser,’ creating jobs through innovations like voice-to-speech tools and gesture-controlled wheelchairs.

Ketan Kothari, a consultant at Xavier’s Resource Centre in Mumbai, relies on AI for independent work, using live captions and visual description apps. Such advancements highlight AI’s role in fostering agency and inclusion across diverse needs.

Hosted by Goa’s Department for Empowerment of Persons with Disabilities, UN India, and the Ministry of Social Justice, Purple Fest promotes universal design.

Tshering Dema from the UN Development Coordination Office noted that inclusion requires a global mindset shift. ‘The future of work must be co-designed with people,’ she said, reflecting a worldwide transition towards accessibility.


Age verification and online safety dominate EU ministers’ Horsens meeting

EU digital ministers are meeting in Horsens on 9–10 October to improve the protection of minors online. Age verification, child protection, and digital sovereignty are at the top of the agenda under the Danish EU Presidency.

The Informal Council Meeting on Telecommunications is hosted by the Ministry of Digital Affairs of Denmark and chaired by Caroline Stage. European Commission Executive Vice-President Henna Virkkunen is also attending to support discussions on shared priorities.

Ministers are considering measures to prevent children from accessing age-inappropriate platforms and reduce exposure to harmful features like addictive designs and adult content. Stronger safeguards across digital services are being discussed.

The talks also focus on Europe’s technological independence. Ministers aim to enhance the EU’s digital competitiveness and sovereignty while setting a clear direction ahead of the Commission’s upcoming Digital Fairness Act proposal.

A joint declaration, ‘The Jutland Declaration’, is expected as an outcome. It will highlight the need for stronger EU-level measures and effective age verification to create a safer online environment for children.


European Commission launches Apply AI and AI in Science strategies

Countries are racing to harness AI, and the European Commission has unveiled two strategies to maintain Europe’s competitiveness. Apply AI targets faster adoption across industries and the public sector, while AI in Science focuses on boosting Europe’s research leadership.

Commission President Ursula von der Leyen stated that Europe must shape AI’s future by balancing innovation and safety. The European Commission is mobilising €1 billion to boost adoption in healthcare, manufacturing, energy, defence, and culture, while supporting SMEs.

Measures include creating AI-powered screening centres for healthcare, backing frontier models, and upgrading testing infrastructure. An Apply AI Alliance will unite industry, academia, civil society, and public bodies to coordinate action, while an AI Observatory will monitor sector trends and impacts.

The AI in Science Strategy centres on RAISE, a new virtual institute to pool and coordinate resources for applying AI in research. Investments include €600 million in compute power through Horizon Europe and €58 million for talent networks, alongside plans to double annual AI research funding to over €3 billion.

The EU aims to position itself as a global hub for trustworthy and innovative AI by linking infrastructure, data, skills, and investment. Upcoming events, such as the AI in Science Summit in Copenhagen, will showcase new initiatives as Europe pushes to translate its AI ambitions into tangible outcomes.


The global struggle to regulate children’s social media use

Finding equilibrium in children’s use of social media

Social media has become a defining part of modern childhood. Platforms like Instagram, TikTok, Snapchat and YouTube offer connection, entertainment and information at an unprecedented scale.

Yet concerns have grown about their impact on children’s mental health, education, privacy and safety. Governments, parents and civil society increasingly debate whether children should access these spaces freely, with restrictions, or not at all.

The discussion is no longer abstract. Across the world, policymakers are moving beyond voluntary codes to legal requirements, some proposing age thresholds or even outright bans for minors.

Supporters argue that children face psychological harm and exploitation online, while critics caution that heavy restrictions can undermine rights, fail to solve root problems and create new risks.

The global conversation is now at a turning point, where choices about social media regulation will shape the next generation’s digital environment.

Why social media is both a lifeline and a threat for youth

The influence of social media on children is double-edged. On the one side, these platforms enable creativity, allow marginalised voices to be heard, and provide educational content. During the pandemic, digital networks offered a lifeline of social interaction when schools were closed.


Children and teens can build communities around shared interests, learn new skills, and sometimes even gain economic opportunities through digital platforms.

On the other side, research has linked heavy use of social media with increased rates of anxiety, depression, disrupted sleep and body image issues among young users. Recommendation algorithms often push sensational or harmful content, reinforcing vulnerabilities rather than mitigating them.

Cyberbullying, exposure to adult material, and risks of predatory contact are persistent challenges. Instead of strengthening resilience, platforms often prioritise engagement metrics that exploit children’s attention and emotional responses.

The scale of the issue is enormous. Hundreds of millions of children around the world own smartphones before the age of 12. With digital life inseparable from daily routines, even well-meaning parents struggle to set boundaries.

Governments face pressure to intervene, but approaches vary widely, reflecting different cultural norms, levels of trust in technology firms, and political attitudes toward child protection.

The Australian approach

Australia is at the forefront of regulation. In recent years, the country has passed strong online safety laws, led by its eSafety Commissioner. These rules include mandatory age verification for certain online services and obligations for platforms to design products with child safety in mind.

Most notably, Australia has signalled its willingness to explore outright bans on general social media access for children under 16. The government has pointed to mounting evidence of harm, from cyberbullying to mental health concerns, and has emphasised the need for early intervention.


Instead of leaving responsibility entirely to parents, the state argues that platforms themselves must redesign the way they serve children.

Critics highlight several problems. Age verification requires identity checks, which can endanger privacy and create surveillance risks. Bans may also drive children to use less-regulated spaces or fake their ages, undermining the intended protections.

Others argue that focusing only on prohibition overlooks the need for broader digital literacy education. Yet Australia’s regulatory leadership has sparked a wider debate, prompting other countries to reconsider their own approaches.

Greece’s strong position

Last week, Greece reignited the global debate with its own strong position on restricting youth access to social media.

Speaking at the United Nations General Assembly during an event hosted by Australia on digital child safety, PM Kyriakos Mitsotakis said his government was prepared to consider banning social media for children under 16.


Mitsotakis warned that societies are conducting the ‘largest uncontrolled experiment on children’s minds’ by allowing unrestricted access to social media platforms. He cautioned that while the long-term effects of the experiment remain uncertain, they are unlikely to be positive.

Additionally, the prime minister pointed to domestic initiatives already underway, such as the ban on mobile phones in schools, which he claimed has already transformed the educational experience.

Mitsotakis acknowledged the difficulties of enforcing such regulations but insisted that complexity cannot be an excuse for inaction.

Similar conversations are gaining traction across the world. Let’s review some of them.

National initiatives across the globe

UK

The UK introduced its Online Safety Act in 2023, one of the most comprehensive frameworks for regulating online platforms. Under the law, companies must assess risks to children and demonstrate how they mitigate harms.

Age assurance is required for certain services, including those hosting pornography or content promoting suicide or self-harm. While not an outright ban, the framework places a heavy responsibility on platforms to restrict harmful material and tailor their products to younger users.

EU

The EU has not introduced a specific social media ban, but its Digital Services Act requires major platforms to conduct systemic risk assessments, including risks to minors.

However, the European Commission has signalled that it may support stricter measures on youth access to social media, keeping the option of a bloc-wide ban under review.

Commission President Ursula von der Leyen has recently endorsed the idea of a ‘digital majority age’ and pledged to gather experts by year’s end to consider possible actions.

The Commission has pointed to the Digital Services Act as a strong baseline but argued that evolving risks demand continued vigilance.


Companies must show regulators how algorithms affect young people and must offer transparency about their moderation practices.

In parallel, several EU states are piloting age verification measures for access to certain platforms. France, for example, has debated requiring parental consent for children under 15 to use social media.

USA

The USA lacks a single nationwide law, but several states are acting independently, although some of these laws face First Amendment challenges that have reached the Supreme Court.

Florida, Texas, Utah, and Arkansas have passed laws requiring parental consent for minors to access social media, while others are considering restrictions.

The federal government has debated child online safety legislation, although political divides have slowed progress. Instead of a ban, American initiatives often blend parental rights, consumer protection, and platform accountability.

Canada

The Canadian government has introduced Bill C-63, the Online Harms Act, aiming to strengthen online child protection and limit the spread of harmful content.

Justice Minister Arif Virani said the legislation would ensure platforms take greater responsibility for reducing risks and preventing the amplification of content that incites hate, violence, or self-harm.

The framework would apply to platforms, including livestreaming and adult content services.


They would be obliged to remove material that sexually exploits children or shares intimate content without consent, while also adopting safety measures to limit exposure to harmful content such as bullying, terrorism, and extremist propaganda.

The legislation, however, stops short of imposing a complete social media ban for minors.

China

China’s cyberspace regulator has proposed restrictions on children’s smartphone use. The draft rules limit use to a maximum of two hours daily for those under 18, with stricter limits for younger age groups.

The Cyberspace Administration of China (CAC) said devices should include ‘minor mode’ programmes, blocking internet access for children between 10 p.m. and 6 a.m.

Teenagers aged 16 to 18 would be allowed two hours a day, those between eight and 16 one hour, and those under eight 40 minutes.

Parents could, however, opt out of the restrictions if they wish.

India

In January, India proposed new rules to tighten controls on children’s access to social media, sparking a debate over parental empowerment and privacy risks.

The draft rules would require parental consent before minors can create accounts on social media, e-commerce, or gaming platforms.

Verification would rely on identity documents or age data already held by providers.

Supporters argue the measures will give parents greater oversight and protect children from risks such as cyberbullying, harmful content, and online exploitation.

Singapore

PM Lawrence Wong has warned of the risks of excessive screen time while stressing that children must also be empowered to use technology responsibly. The ultimate goal is the right balance between safety and digital literacy.

In addition, researchers suggest schools should not ban devices out of fear but teach children how to manage them, likening digital literacy to learning how to swim safely. Such a strategy highlights that no single solution fits all societies.

Balancing rights and risks

Bans and restrictions raise fundamental rights issues. Children have the right to access information, to express themselves, and to participate in culture and society.

Overly strict bans can exclude them from opportunities that their peers elsewhere enjoy. Critics argue that bans may create inequalities between children whose families find workarounds and those who comply.


At the same time, the rights to health, safety and privacy must also be protected. The difficulty lies in striking a balance. Advocates of stronger regulation argue that platforms have failed to self-regulate effectively, and that states must step in.

Opponents argue that bans may create unintended harms and encourage authoritarian tendencies, with governments using child safety as a pretext for broader control of online spaces.

Instead of choosing one path, some propose hybrid approaches: stronger rules for design and data collection, combined with investment in education and digital resilience. Such approaches aim to prepare children to navigate online risks while making platforms less exploitative.

The future of social media and child protection

Looking forward, the global landscape is unlikely to converge on a single model. Some countries will favour bans and strict controls, others will emphasise parental empowerment, and still others will prioritise platform accountability.

What is clear is that the status quo is no longer acceptable to policymakers or to many parents.

Technological solutions will also evolve. Advances in privacy-preserving age verification may ease some concerns, although sceptics warn that surveillance risks will remain. At the same time, platforms may voluntarily redesign products for younger audiences, either to comply with regulations or to preserve trust.

Ultimately, the challenge is not whether to regulate, but how. Instead of focusing solely on prohibition, governments and societies may need to build layered protections: legal safeguards, technological checks, educational investments and cultural change.

If these can align, children may inherit a safer digital world that still allows them to learn, connect and create. If they cannot, the risks of exclusion or exploitation will remain unresolved.


In conclusion, the debate over banning or restricting social media for children reflects broader tensions between freedom, safety, privacy, and responsibility. Around the globe, governments are experimenting with different balances of control and empowerment.

Australia, as we have already shown, represents one of the boldest approaches, while others, from the UK and Greece to China and Singapore, are testing different variations.

What unites them is the recognition that children cannot simply be left alone in a digital ecosystem designed for profit rather than protection.

The next decade will determine whether societies can craft a sustainable balance, where technology serves the needs of the young instead of exploiting them.

In the end, protecting children online is our shared duty as human beings and responsible citizens.


OpenAI backs policy push for Europe’s AI uptake

OpenAI and Allied for Startups have released Hacktivate AI, a set of 20 ideas to speed up AI adoption across Europe ahead of the Commission’s Apply AI Strategy.

The report emerged from a Brussels policy hackathon with 65 participants from EU bodies, governments, enterprises and startups. Proposed measures include an Individual AI Learning Account, an AI Champions Network for SMEs, a European GovAI Hub, and ‘relentless harmonisation’.

OpenAI highlights strong European demand and uneven workplace uptake, citing sector gaps and the need for targeted support, while pointing to initiatives like OpenAI Academy to widen skills.

Broader policy momentum is building, with the EU preparing an Apply AI Strategy to boost homegrown tools and cut dependencies, reinforcing the push for practical deployment across public services and industry.


EU kicks off cybersecurity awareness campaign against phishing threats

European Cybersecurity Month (ECSM) 2025 has kicked off, with this year’s campaign centring on the growing threat of phishing attacks.

The initiative, driven by the EU Agency for Cybersecurity (ENISA) and the European Commission, seeks to raise awareness and provide practical guidance to European citizens and organisations.

Phishing is still the primary vector through which threat actors launch social engineering attacks. However, this year’s ECSM materials expand the scope to include variants like SMS phishing (smishing), QR code phishing (quishing), voice phishing (vishing), and business email compromise (BEC).

ENISA warns that, as of early 2025, over 80 percent of observed social engineering activity involves AI, with language models enabling more convincing and scalable scams.

To support the campaign, actors at every level, from individual citizens to large organisations, are encouraged to engage in training, simulations, awareness sessions, and public outreach under the banner #ThinkB4UClick.

A cross-institutional kick-off event is also scheduled, bringing together the EU institutions, member states and civil society to align messaging and launch coordinated activities.


Mexico drafts law to regulate AI in dubbing and animation

The Mexican government is preparing a law to regulate the use of AI in dubbing, animation, and voiceovers to prevent unauthorised voice cloning and safeguard creative rights.

Working with the National Copyright Institute and more than 128 associations, it aims to reform copyright legislation before the end of the year.

The plan would strengthen protections for actors, voiceover artists, and creative workers, while addressing contract conditions and establishing a ‘Made in Mexico’ seal for cultural industries.

The bill is expected to prohibit synthetic dubbing without consent, impose penalties for misuse, and recognise voice and image as biometric data.

Industry voices warn that AI has already disrupted work opportunities. Several dubbing firms in Los Angeles have closed, with their projects taken over by companies specialising in AI-driven dubbing.

Startups such as Deepdub and TrueSync have advanced the technology, dubbing films and television content across languages at scale.

Unions and creative groups argue that regulation is vital to protect both jobs and culture. While AI offers efficiency in translation and production, it cannot yet replicate the emotional depth of human performance.

The law is seen as Mexico’s first attempt to balance technological innovation with the rights of workers and creators.


AWS expands tech skills programme to Tennessee

Amazon Web Services (AWS) is expanding its Skills to Jobs Tech Alliance to Tennessee, making it the sixth US state to join the initiative. The partnership with the Nashville Innovation Alliance targets middle Tennessee’s rising demand for AI and cloud computing talent.

Between 2020 and 2023, tech job postings in the region increased by 35 percent, with around 8,000 roles currently open.

The programme will link students from local universities with employers and practical learning opportunities. Courses will be modernised to meet industry demand, ensuring students gain relevant AI and cloud expertise.

Local leaders emphasised the initiative’s potential to strengthen Nashville’s workforce. Mayor Freddie O’Connell stressed preparing residents for tech careers, while AWS and the Alliance aim to create sustainable pathways to high-paying roles.

The Tech Alliance has already reached 62,000 learners globally and engaged over 780 employers. Tennessee’s expansion aims to reach over 1,000 residents by 2027, with further statewide growth planned to boost Nashville’s role as a southeastern tech hub.


OpenAI’s Sora app raises tension between mission and profit

US AI company OpenAI has entered the social media arena with Sora, a new app offering AI-generated videos in a TikTok-style feed.

The launch has stirred debate among current and former researchers, some praising its technical achievement while others worry it diverges from OpenAI’s nonprofit mission to develop AI for the benefit of humanity.

Researchers have expressed concerns about deepfakes, addictive loops and the ethical risks of AI-driven feeds. OpenAI insists Sora is designed for creativity rather than engagement, highlighting safeguards such as reminders for excessive scrolling and prioritisation of content from known contacts.

The company argues that revenue from consumer apps helps fund advanced AI research, including its pursuit of artificial general intelligence.

The debate reflects broader tensions within OpenAI: balancing commercial growth with its founding mission. Critics fear the consumer push could dilute its focus, while executives maintain that products like ChatGPT and Sora expand public access and provide essential funding.

Regulators are watching closely, questioning whether the company’s for-profit shift undermines its stated commitment to safety and ethical development.

Sora’s future remains uncertain, but its debut marks a significant expansion of AI-powered social platforms. Whether OpenAI can avoid the pitfalls that defined earlier social media models will be a key test of both its mission and its technology.
