Australia begins a landmark study on social media minimum age

Australia's eSafety Commissioner has launched a major evaluation of the country's Social Media Minimum Age requirement to understand how platforms are applying it and what effects it is having on children, young people and families.

The study aims to deliver robust evidence about both intended and unintended impacts as the national debate on youth, wellbeing and digital environments intensifies.

Over more than two years, the research will follow over four thousand children and families across Australia, combining surveys, interviews, group discussions and privacy-protected smartphone tracking.

Administrative data from national literacy assessments and health systems will be linked to deepen understanding of online behaviour, wellbeing and exposure to risk. All research materials are publicly available through the Open Science Framework to maintain transparency.

The project is led by eSafety’s Research and Evaluation team in partnership with the Stanford University Social Media Lab and an Academic Advisory Group of specialists in mental health, youth development and digital technologies.

Young people themselves are shaping the study through the eSafety Youth Council, ensuring that the interpretation reflects lived experience rather than external assumptions. Full ethics approval underpins the methodology, which meets strict standards of integrity and privacy.

Findings will be released from late 2026 onward, with early reports analysing the experiences of children under sixteen.

The results will inform a legislative review conducted by Australia’s Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts.

eSafety expects the evaluation to become a major evidence source for policymakers, researchers and communities as the global conversation on minors and social media regulation continues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI use among students surges as chatbots reshape schoolwork

More than half of US teenagers use AI tools to help with schoolwork, according to a new Pew Research Center study. The survey found that 54% of students aged 13 to 17 have used chatbots such as OpenAI’s ChatGPT or Microsoft’s Copilot to research assignments or solve maths problems.

Usage has risen in recent years. In 2024, 26% of US teens reported using ChatGPT for schoolwork, up from 13% in 2023. The latest survey of 1,458 teens and parents found 44% use AI for some schoolwork, while 10% rely on chatbots for most tasks.

Researchers say AI assistance is becoming routine in classrooms. Colleen McClain, a senior researcher at Pew and co-author of the report, said chatbot use for schoolwork is now a common practice among teens.

Findings come amid an intensifying debate over generative AI in education. Supporters argue that schools should teach students to use and evaluate AI tools, while critics warn of misinformation, reduced critical thinking, and increased cheating.

Recent research has raised questions about learning outcomes. One study by Cambridge University Press & Assessment and Microsoft Research found that students who took notes without chatbot support showed stronger reading comprehension than those using AI assistance.


ChatGPT Health under fire after study finds major failures in emergency detection

A new evaluation of ChatGPT Health has raised major safety concerns after researchers found it frequently failed to recognise urgent medical emergencies.

The independent study, published in Nature Medicine, reported that the system under-triaged more than half of the clinical scenarios tested, giving advice that could have delayed life-saving treatment.

The research team, led by Ashwin Ramaswamy, created sixty patient simulations ranging from minor illnesses to life-threatening conditions.

Three doctors agreed on the appropriate urgency for each case before comparing their judgement with the model’s responses. The AI performed adequately in straightforward emergencies such as strokes, yet frequently minimised danger in more complex presentations, including severe asthma and diabetic crises.

Experts also warned that ChatGPT Health struggled to detect suicidal ideation reliably. Minor changes to scenario details, such as adding normal lab results, caused safeguards to disappear entirely.

Critics, including health-misinformation researcher Alex Ruani, described the behaviour as dangerously inconsistent and capable of creating a false sense of security.

OpenAI said the study did not reflect typical real-world use but acknowledged the need for continued research and improvement.

Policy specialists argue that the findings underline the need for clear safety standards, external audits and stronger transparency requirements for AI systems operating in sensitive medical contexts.


Uni.lu expert urges schools to embrace AI

AI should be integrated into classrooms rather than avoided, according to Gilbert Busana of the University of Luxembourg. Speaking to RTL Today, he said ignoring AI would be a disservice to pupils and teachers alike.

Busana argued that AI should be taught both as a standalone subject and across disciplines in Luxembourg schools. Clear guidelines are needed to define when and how pupils may use AI, alongside transparency about its role in assignments.

He stressed that developing AI literacy is essential to protect critical thinking. Assessment methods may shift away from focusing solely on final outputs towards evaluating the learning process itself.

Teachers are increasingly becoming coaches rather than simple transmitters of knowledge. Busana said continuous professional training and collaboration within schools will be vital as AI reshapes education.


Microsoft backs Australia’s next phase of digital government with new AI and cloud agreement

Australia’s rise to second place in the OECD Digital Government Index signals renewed momentum for national digital transformation.

The shift comes as Microsoft signs a new five-year Volume Sourcing Arrangement with the Federal Government, designed to underpin modernisation across public services and create a secure, future-ready foundation for responsible AI adoption.

The agreement, led by the Digital Transformation Agency, gives agencies access to Microsoft Copilot, Azure, Microsoft 365, Dynamics 365 and a strengthened security and compliance framework, replacing continued reliance on ageing systems.

The arrangement sets clearer strategic pathways for innovation, procurement and skills development through an enhanced governance structure.

It recommits both sides to national security requirements, including the Security of Critical Infrastructure legislation, the Cloud Hosting Certification Framework and the Information Security Registered Assessors Program (IRAP).

These measures allow agencies to expand AI use while retaining control of data and meeting the expectations placed on government institutions.

A successful Copilot trial in 2024 already demonstrated personal productivity gains of around one hour per day for participating staff.

Microsoft is also establishing a $1.55 million training fund for the Australian Public Service to support capability building in ethical AI use and modern cloud operations.

The company emphasises that Australia’s partner ecosystem will gain new opportunities because the agreement simplifies how local firms engage with government agencies. Such an approach forms an important part of the wider public sector reform agenda announced last year.

The new deal aligns with national priorities set out in the Whole-of-Government Cloud Computing Policy and the National AI Plan.

Australia now enters a pivotal period in which digital transformation is guided not only by technological capacity but by the frameworks of trust, resilience and public benefit that shape how government services evolve.


Meta AI flood of unusable abuse tips overwhelms US investigators

Investigators in the US say that AI used by Meta is flooding child protection units with large volumes of unhelpful reports, thereby draining resources rather than assisting ongoing cases.

Officers in the Internet Crimes Against Children network told a New Mexico court that most alerts generated by the company’s platforms lack essential evidence or contain material that is not criminal, leaving teams unable to progress investigations.

Meta rejects the claim that it prioritises profit, stressing its cooperation with law enforcement and highlighting rapid response times to emergency requests.

Its position is challenged by officers who say the volume of AI-generated alerts has doubled since 2024, particularly after the REPORT Act broadened reporting obligations.

They argue that adolescent conversations and incomplete data now form a sizeable portion of the alerts, while genuine cases of child sexual abuse material are becoming harder to detect.

Internal company documents disclosed at trial show Meta executives raising concerns as early as 2019 about the impact of end-to-end encryption on the firm’s ability to identify child exploitation and support investigators.

Child safety groups have long warned that encryption could limit early detection, even though Meta says it has introduced new tools designed to operate safely within encrypted environments.

The growing influx of unusable tips is taking a heavy toll on investigative teams. Officers in the US say each report must still be reviewed manually, despite the low likelihood of actionable evidence, and this backlog is diminishing morale at a time when they say resources have not kept pace with demand.

They warn that meaningful cases risk being delayed as units struggle with a workload swollen by AI systems tuned to avoid regulatory penalties rather than investigative value.


Reddit hit with a major ICO penalty over children’s privacy failures

The UK’s Information Commissioner’s Office has fined Reddit £14.47 million after finding that the platform unlawfully used children’s personal information and failed to put in place adequate age checks.

The regulator concluded that Reddit allowed children under 13 to access the platform without robust age-verification measures, leaving them exposed to content they were not able to understand or control.

Although Reddit updated its processes in July 2025, self-declaration remained easy to bypass, offering only a veneer of protection. Investigators also found that the company had not completed a data protection impact assessment until 2025, despite a large number of teenagers using the service.

Concerns were heightened by the volume of children affected and the risks created by relying on inadequate age checks.

The regulator noted that unlawful data processing occurred over a prolonged period, and that children were at risk of viewing harmful material while their information was processed without a lawful basis.

UK Information Commissioner John Edwards said companies must prioritise meaningful age assurance and understand the responsibilities set out in the Children’s Code.

The ICO said it will continue monitoring Reddit’s current controls and expects online platforms to align with robust age-assurance standards rather than rely on weak verification.

It will coordinate its oversight with Ofcom as part of broader efforts to strengthen online safety and ensure under-18s benefit from high privacy protections by default.


Microsoft expands Sovereign Cloud with secure offline support for large AI models

Digital sovereignty is gaining urgency as organisations seek infrastructure that remains secure and reliable under strict regulatory conditions.

Microsoft is expanding its Sovereign Cloud to help public bodies, regulated industries and enterprises maintain control of data and operations even when environments must operate without external connectivity.

The updated portfolio allows customers to choose how each workload is governed, rather than relying on a single deployment model.

Azure Local now supports disconnected operations, keeping mission-critical systems running with full Azure governance within sovereign boundaries. Management, policies and workloads stay entirely on site, so services continue during periods of isolation.

Microsoft 365 Local extends that resilience to the productivity layer by enabling Exchange Server, SharePoint Server and Skype for Business Server to run locally, giving teams secure collaboration within the same protected boundary as their infrastructure.

Support for large multimodal AI models is delivered through Foundry Local, which enables advanced inference on customer-controlled hardware using technology from partners such as NVIDIA.

Such an approach helps organisations bring modern AI capabilities into highly restricted environments while preserving control over data, identities and operational procedures.

Microsoft positions it as a unified stack that works across connected, hybrid and fully disconnected modes without increasing operational complexity.

These additions create a framework designed for governments and regulated industries that regard sovereignty as a strategic priority.

With global availability for qualified customers, the Sovereign Cloud aims to preserve continuity, reinforce governance and expand AI capability while keeping every layer of the environment within local control.


New Relic advances AI agents for enterprise observability

New Relic is expanding into enterprise AI with a no-code platform that allows companies to build and supervise their own observability agents.

The system assembles AI-driven monitors designed to detect bugs and performance problems before they affect users, rather than leaving teams to rely on manual tracking.

It also supports the Model Context Protocol so organisations can link external data sources to the agents and integrate them with existing New Relic tools.

The company stresses that the platform is intended to complement other agent systems rather than replace them.

As AI agent software spreads across the market, enterprises are searching for ways to manage risk when giving automated tools access to internal systems.

Industry players such as Salesforce and OpenAI have already introduced their own agent platforms, and assessments from Gartner describe these frameworks as essential infrastructure for wider AI adoption.

New Relic also introduced new tools for the OpenTelemetry framework to remove friction around observability standards.

Its application performance monitoring agents now support OTel data, allowing enterprises to manage these streams in one place instead of operating separate collectors.

The update aims to reduce fragmentation that has slowed OTel deployment across large organisations and to simplify how engineering teams handle diverse observability pipelines.


AI preparing kids for careers that don’t exist yet, say education leaders

Education leaders and industry stakeholders in South Africa say the rise of AI is transforming labour-market expectations to the point that tomorrow’s careers may not yet exist.

They argue that traditional curricula, centred on static knowledge and routine tasks, must evolve to prioritise adaptability, problem solving, creativity, ethical reasoning and digital fluency: competencies that complement AI rather than compete with it.

Speakers at recent education forums emphasised that AI will continue to automate routine cognitive and technical work, pushing demand toward roles that require higher-order thinking and human-centred skills.

They described a growing need to integrate AI literacy and data skills into schooling from an early age to reduce future workforce displacement and prepare students to harness AI as a productive partner.

Experts also highlighted equity concerns: without intentional policy and investment to support under-resourced schools and communities, the ‘AI skills gap’ could exacerbate inequality. Some educators recommended stronger partnerships between government, tech industry and educational institutions to co-develop curricula, teacher training and accessible AI tools.

They underscored that competencies such as empathetic communication, cultural awareness and ethical judgement (areas where AI lacks robust capabilities) will remain crucial.
