Twitch classified as age-restricted by Australian regulator

Australia’s online safety regulator has moved to classify Twitch as an age-restricted social media platform after ruling that the service is centred on user interaction through livestreamed content.

The decision means that, from 10 December, Twitch must take reasonable steps to prevent children under 16 from creating accounts, rather than relying on its own internal checks.

Pinterest has been treated differently after eSafety found that its main purpose is image collection and idea curation instead of social interaction.

As a result, the platform will not be required to follow the age-restriction rules. The regulator stressed that the courts hold the final say on whether a service is age-restricted, but said the assessments were carried out to support families and industry ahead of the December deadline.

The ruling places Twitch alongside earlier named platforms such as Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, X and YouTube.

eSafety expects all companies operating in Australia to examine their legal responsibilities and has provided a self-assessment tool to guide platforms that may fall under the social media minimum age requirements.

eSafety confirmed that the assessments were completed in stages to offer timely advice while reviews were still underway. The regulator added that no further assessments will be released before 10 December as preparations for compliance continue across the sector.

Under-16s face new online restrictions as Malaysia tightens oversight

Malaysia plans to introduce a ban on social media accounts for people under 16 starting in 2026, becoming the latest country to push stricter digital age limits for children. Communications Minister Fahmi Fadzil said the government aims to better protect minors from cyberbullying, online scams and sexual exploitation.

Authorities are reviewing verification methods used abroad, including electronic age checks through national ID cards or passports, though an exact enforcement date has not yet been set.

The move follows new rules introduced earlier this year, which require major digital platforms in Malaysia to obtain a licence if they have more than eight million users. Licensed services must adopt age-verification tools, content-safety measures and clearer transparency standards, part of a wider effort to create a safer online environment for young people and families.

Australia, which passed the world’s first nationwide ban on social media accounts for children under 16, is serving as a key reference point for Malaysia’s plans. The Australian law takes effect on 10 December and imposes heavy fines on platforms like Facebook, TikTok, Instagram, X and YouTube if they fail to prevent underage users from signing up.

The move has drawn global attention as governments grapple with the impact of social media on young audiences, and similar proposals are emerging in Europe.

Denmark has recently announced its intention to block social media access for children under 15, while Norway is advancing legislation that would introduce a minimum age of 15 for opening social media accounts. Countries adopting such measures say stricter age limits are increasingly necessary to address growing concerns about online safety and the well-being of children.

AI shows promise in supporting emergency medical decisions

Drexel University researchers studied how AI can support emergency decision-making in paediatric trauma at Children’s National Medical Center. Clinicians used an AI-driven display called DecAide to view key patient data alone, AI-synthesised information, or AI-synthesised information paired with treatment recommendations.

The study tested 35 emergency care providers across 12 scripted scenarios, comparing their decisions to established ground truth outcomes.

The results showed participants achieved the highest accuracy, 64.4%, when both AI information and recommendations were provided, compared to 56.3% with information alone and 55.8% with no AI support.

Decision times were consistent across all conditions, suggesting AI did not slow clinicians, though providers varied in how they used the recommendations. Some consulted the guidance after deciding, while others ignored it due to trust or transparency concerns.

Researchers highlight AI’s potential to augment emergency care without replacing human judgement, particularly in time-critical settings, and stress the need for larger studies and clear policies to ensure clinicians can trust and use AI tools effectively.

Meta to block under-16 Australians from Facebook and Instagram early

Meta is beginning to block Australian users it believes are under 16 from Instagram, Facebook and Threads, starting 4 December, a week ahead of the government-mandated social media ban.

Last week, Meta sent in-app messages, emails and texts warning affected users to download their data because their accounts would soon be removed. From 4 December, the company will deactivate existing accounts and block new sign-ups for users under 16.

To appeal the deactivation, targeted users can undergo age verification by providing a ‘video selfie’ to prove they are 16 or older, or by presenting a government-issued ID. Meta says it will ‘review and improve’ its systems, deploying AI-based age-assurance methods to reduce errors.

Observers highlight the risks of false positives in Meta’s age checks. Facial age estimation, conducted through partner company Yoti, has known margins of error.
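
Neither Meta nor Yoti has published the exact decision logic, but age-estimation systems of this kind typically pair a point estimate with an error buffer, escalating borderline cases to a stronger check such as an ID upload. Below is a minimal, purely illustrative sketch; the thresholds and function names are assumptions, not Meta’s or Yoti’s actual system.

```python
# Purely illustrative sketch of buffer-based age assurance.
# Thresholds and names are assumptions, not Meta's or Yoti's actual logic.

CUTOFF_AGE = 16.0      # legal minimum under the Australian law
ERROR_BUFFER = 2.0     # assumed margin of error for facial age estimation

def assess_user(estimated_age: float) -> str:
    """Map a facial age estimate to an outcome."""
    if estimated_age < CUTOFF_AGE:
        return "deactivate"        # treated as under 16 (appealable via ID)
    if estimated_age < CUTOFF_AGE + ERROR_BUFFER:
        return "escalate_to_id"    # too close to the cut-off to trust the estimate
    return "allow"                 # comfortably above the threshold

print(assess_user(14.2))   # deactivate
print(assess_user(16.9))   # escalate_to_id
print(assess_user(21.5))   # allow
```

The buffer illustrates why false positives are hard to eliminate: narrowing it lets more borderline under-16s through, while widening it forces more legitimate adults into ID checks.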

The enforcement comes amid Australia’s world-first law barring under-16s from several major social media platforms, including Instagram, Snapchat, TikTok, YouTube and X.

Pennsylvania Senate passes bill to tackle AI-generated CSAM

The Pennsylvania Senate has passed Senate Bill 1050, requiring all individuals classified as mandated reporters to notify authorities of any instance of child sexual abuse material (CSAM) they become aware of, including material produced by a minor or generated using artificial intelligence.

The bill, sponsored by Senators Tracy Pennycuick, Scott Martin and Lisa Baker, addresses the recent rise in AI-generated CSAM and builds upon earlier legislation (Act 125 of 2024 and Act 35 of 2025) that targeted deepfakes and sexual deepfake content.

Supporters argue the bill strengthens child protection by closing a legal gap: while existing laws focused on CSAM involving real minors, the new measure explicitly covers AI-generated material. Senator Martin said the threat from AI-generated images is ‘very real’.

From a tech policy perspective, the bill highlights how rapidly evolving AI capabilities, especially in image synthesis and manipulation, are pushing lawmakers to update obligations for reporting, investigation and accountability.

It raises questions about how institutions, schools and healthcare providers will adapt to these new responsibilities and what enforcement mechanisms will look like.

AI in healthcare gains regulatory compass from UK experts

Professor Alastair Denniston has outlined the core principles for regulating AI in healthcare, describing AI as the ‘X-ray moment’ of our time.

Like earlier innovations such as MRI scanners and antibiotics, AI has the potential to dramatically improve diagnosis, treatment and personalised care, but it also requires careful oversight to ensure patient safety.

The MHRA’s National Commission on the Regulation of AI in Healthcare is developing a framework based on three key principles. The framework must be safe, ensuring proportionate regulation that protects patients without stifling innovation.

It must be fast, reducing delays in bringing beneficial technologies to patients and supporting small innovators who cannot endure long regulatory timelines. Finally, it must be trusted, with transparent processes that foster confidence in AI technologies now and in the future.

Professor Denniston emphasises that AI is not a single technology but a rapidly evolving ecosystem. The regulatory system must keep pace with advances while allowing the NHS to harness AI safely and efficiently.

Just as with earlier medical breakthroughs, failure to innovate can carry risks equal to the dangers of new technologies themselves.

The National Commission will soon invite the public to contribute their views through a call for evidence.

Patients, healthcare professionals, and members of the public are encouraged to share what matters to them, helping to shape a framework that balances safety, speed, and trust while unlocking the full potential of AI in the NHS.

DPDP law takes effect as India tightens AI-era data protections

India has activated new Digital Personal Data Protection rules that sharply restrict how technology firms collect and use personal information. The framework limits data gathering to what is necessary for a declared purpose and requires clear explanations, opt-outs, and breach notifications for Indian users.

The rules apply across digital platforms, from social media and e-commerce to banks and public services. Companies must obtain parental consent for individuals under 18 and are prohibited from using children’s data for targeted advertising. Firms have 18 months to comply with the new safeguards.

Users can request access to their data, ask why it was collected, and demand corrections or updates. They may withdraw consent at any time and, in some cases, request deletion. Companies must respond within 90 days, and individuals can appoint someone to exercise these rights.

Civil society groups welcomed stronger user rights but warned that the rules may also expand state access to personal data. The Internet Freedom Foundation criticised limited oversight and said the provisions risk entrenching government control, reducing transparency for citizens.

India is preparing further digital regulations, including new requirements for AI and social media firms. With nearly a billion online users, the government has urged platforms to label AI-generated content amid rising concerns about deepfakes, online misinformation, and election integrity.

EU introduces plan to strengthen consumer protection

The European Commission has unveiled the 2030 Consumer Agenda, a strategic plan to reinforce protection, trust, and competitiveness across the EU.

With 450 million consumers contributing over half of the Union’s GDP, the agenda aims to simplify administrative processes for businesses, rather than adding new burdens, while ensuring fair treatment for shoppers.

The agenda sets four priorities to respond to rising living costs, evolving online markets and the surge in e-commerce. The first, completing the Single Market, aims to remove cross-border barriers, enhance travel and financial services, and evaluate the effectiveness of the Geo-Blocking Regulation.

A planned Digital Fairness Act will address harmful online practices, focusing on protecting children and strengthening consumer rights.

Sustainable consumption is another central focus, with efforts to combat greenwashing, expand access to sustainable goods, and support circular initiatives such as second-hand markets and repairable products.

The Commission will also enhance enforcement to tackle unsafe or non-compliant products, particularly from third countries, ensuring that compliant businesses are shielded from unfair competition.

Implementation will be overseen through the Annual Consumer Summit and regular Ministerial Forums, which will provide political guidance and monitor progress.

The 2030 Consumer Agenda builds on prior achievements and EU consultations, aiming to modernise consumer protection instead of leaving gaps in a rapidly changing market.

Roblox brings in global age checks for chat

Children will no longer be able to chat with adult strangers on Roblox after new global age checks are introduced. The platform will begin mandatory facial age estimation in selected countries in December before expanding worldwide in January.

Roblox players will be placed into strict age groups and prevented from messaging older users unless they are verified as trusted contacts. Under-13s will remain barred from private messages unless parents actively approve access within account controls.
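
Roblox has not published the underlying rule set, but the behaviour described above amounts to a bucket-based permission check: each user falls into an age band, and cross-band chat is blocked unless the pair are verified trusted contacts. The sketch below is purely illustrative; the bands, names and rules are assumptions rather than Roblox’s actual implementation.

```python
# Purely illustrative sketch of age-banded chat permissions.
# Bands, names and rules are assumptions, not Roblox's actual system.

AGE_BANDS = [(0, 12), (13, 15), (16, 17), (18, 120)]  # assumed groupings

def band(age: int) -> int:
    """Return the index of the age band a user falls into."""
    for i, (lo, hi) in enumerate(AGE_BANDS):
        if lo <= age <= hi:
            return i
    raise ValueError("age out of range")

def can_chat(age_a: int, age_b: int, trusted: bool, parent_approved: bool) -> bool:
    """Block under-13 private messages without parental approval,
    and cross-band chat unless the users are verified trusted contacts."""
    if (age_a < 13 or age_b < 13) and not parent_approved:
        return False
    if band(age_a) != band(age_b):
        return trusted
    return True

print(can_chat(14, 35, trusted=False, parent_approved=False))  # False: adult stranger
print(can_chat(14, 15, trusted=False, parent_approved=False))  # True: same band
print(can_chat(12, 12, trusted=False, parent_approved=True))   # True: parent approved
```

In this framing, parental approval and trusted-contact status act as explicit overrides, which mirrors the account-control behaviour described above.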

The company faces rising scrutiny following lawsuits in several US states, where officials argue Roblox failed to protect young users from harmful contact. Safety groups welcome the tighter rules but warn that monitoring must match the platform’s rapid growth.

Roblox says the technology is accurate and helps deliver safer digital spaces for younger players. Campaigners continue to call for broader protections as millions of children interact across games, chats and AI-enhanced features each day.

Fight over state AI authority heats up in US Congress

US House Republicans are mounting a new effort to block individual states from regulating AI, reviving a proposal that the Senate overwhelmingly rejected just four months ago. Their push aligns with President Donald Trump’s call for a single federal AI standard, which he argues is necessary to avoid a ‘patchwork’ of state-level rules that he claims hinder economic growth and fuel what he described as ‘woke AI.’

House Majority Leader Steve Scalise is now attempting to insert the measure into the National Defense Authorization Act, a must-pass annual defence policy bill expected to be finalised in the coming weeks. If successful, the move would place a moratorium on state-level AI regulation, effectively ending the states’ current role as the primary rule-setters on issues ranging from child safety and algorithmic fairness to workforce impacts.

The proposal faces significant resistance, including from within the Republican Party. Lawmakers who blocked the earlier attempt in July warned that stripping states of their authority could weaken protections in areas such as copyright, child safety, and political speech.

Critics, such as Senator Marsha Blackburn and Florida Governor Ron DeSantis, argue that the measure would amount to a handout to Big Tech and leave states unable to guard against the use of predatory or intrusive AI.

Congressional leaders hope to reach a deal before the Thanksgiving recess, but the ultimate fate of the measure remains uncertain. Any version of the moratorium would still need bipartisan support in the Senate, where most legislation requires 60 votes to advance.
