AI tools deployed to set tailored attendance goals for English schools

England will introduce AI-generated attendance targets for each school, setting tailored improvement baselines based on the context and needs of each school. Schools with higher absence rates will be paired with strong performers for support. Thirty-six new Attendance and Behaviour Hubs will help drive the rollout.

Education Secretary Bridget Phillipson said raising attendance is essential for opportunity. She highlighted the progress made since the pandemic, but noted that variation remains too high. The AI targets aim to disseminate effective practices across all schools.

A new toolkit will guide schools through key transition points, such as the move from Year 7 to Year 8. CHS South in Manchester is highlighted for using summer family activities to ease anxiety. Officials say early engagement can stabilise attendance.

CHS South Deputy Head Sue Burke said the goal is to ensure no pupil feels left out. She credited the attendance team for combining support with firm expectations. The model is presented as a template for broader adoption.

The policy blends AI analysis with pastoral strategies to address entrenched absence. Ministers argue that consistent attendance drives long-term outcomes. The UK government expects personalised targets and shared practice to embed lasting improvement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU regulators, UK and eSafety lead the global push to protect children in the digital world

Children today spend a significant amount of their time online, from learning and playing to communicating.

To protect them in an increasingly digital world, Australia’s eSafety Commissioner, the European Commission’s DG CNECT, and the UK’s Ofcom have joined forces to strengthen global cooperation on child online safety.

The partnership aims to ensure that online platforms take greater responsibility for protecting and empowering children, recognising their rights under the UN Convention on the Rights of the Child.

The three regulators will continue to enforce their online safety laws to ensure platforms properly assess and mitigate risks to children. They will promote privacy-preserving age verification technologies and collaborate with civil society and academics to ensure that regulations reflect real-world challenges.

By supporting digital literacy and critical thinking, they aim to provide children and families with safer and more confident online experiences.

To advance the work, a new trilateral technical group will be established to deepen collaboration on age assurance. It will study the interoperability and reliability of such systems, explore the latest technologies, and strengthen the evidence base for regulatory action.

Through closer cooperation, the regulators hope to create a more secure and empowering digital environment for young people worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta, TikTok and Snapchat prepare to block under-16s as Australia enforces social media ban

Social media platforms, including Meta, TikTok and Snapchat, will begin sending notices to more than a million Australian teens, telling them to download their data, freeze their profiles or lose access when the national ban for under-16s comes into force on 10 December.

According to people familiar with the plans, platforms will deactivate accounts believed to belong to users under the age of 16. The roughly 20 million older Australian users will not be affected. Compliance marks a shift from the year-long opposition by tech firms, which had warned the rules would be intrusive or unworkable.

Companies plan to rely on their existing age-estimation software, which predicts age from behavioural signals such as likes and engagement patterns. Only users who challenge a block will be directed to age assurance apps. These tools estimate age from a selfie and, if disputed, allow users to upload ID. Trials show they work, but accuracy drops for 16- and 17-year-olds.

Yoti’s Chief Policy Officer, Julie Dawson, said disruption should be brief, with users adapting within a few weeks. Meta, Snapchat, TikTok and Google declined to comment. In earlier hearings, most respondents stated that they would comply.

The law blocks teenagers from using mainstream platforms without any parental override. It follows renewed concern over youth safety after internal Meta documents in 2021 revealed harm linked to heavy social media use.

A smooth rollout is expected to influence other countries as they explore similar measures. Jurisdictions including France, Denmark, Florida and the UK have pursued age checks with mixed results, amid concerns over privacy and practicality.

Consultants say governments are watching to see whether Australia’s requirement for platforms to take ‘reasonable steps’ to block minors, including trying to detect VPN use, works in practice without causing significant disruption for other users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK moves to curb AI-generated child abuse imagery with pre-release testing

The UK government plans to let approved organisations test AI models before release to ensure they cannot generate child sexual abuse material. The amendment to the Crime and Policing Bill aims to build safeguards into AI tools at the design stage rather than after deployment.

The Internet Watch Foundation reported 426 AI-related abuse cases this year, up from 199 in 2024. Chief Executive Kerry Smith said the move could make AI products safer before they are launched. The proposal also extends to detecting extreme pornography and non-consensual intimate images.

The NSPCC’s Rani Govender welcomed the reform but said testing should be mandatory to make child safety part of product design. Earlier this year, the Home Office introduced new offences for creating or distributing AI tools used to produce abusive imagery, punishable by up to five years in prison.

Technology Secretary Liz Kendall said the law would ensure that trusted groups can verify the safety of AI systems. Safeguarding Minister Jess Phillips added that it would help prevent predators from exploiting legitimate tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Northern Ireland teachers reclaim hours with AI

A six-month pilot across Northern Ireland put Gemini and Workspace into classrooms. One hundred teachers participated under the Education Authority’s C2k programme. Reported benefits centred on time savings and practical support for everyday teaching.

Participants said they saved around ten hours per week on routine tasks, with the freed time redirected to pupil engagement and professional development. More than six hundred use cases from the one hundred participants were documented during the trial period.

Teachers cited varied applications, from drafting parent letters to generating risk assessments quickly. NotebookLM helped transform curriculum materials into podcasts and interactive mind maps. Inclusive lessons were tailored, including Irish language activities and support for neurodivergent learners.

C2k plans wider training so more Northern Ireland educators can adopt the tools responsibly. Leadership framed AI as collaborative, not a replacement for teachers. Further partnerships are expected to align products with established pedagogical principles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK strengthens AI safeguards to protect children online

The UK government is introducing landmark legislation to prevent AI from being exploited to generate child sexual abuse material. The new law empowers authorised bodies, such as the Internet Watch Foundation, to test AI models and ensure safeguards prevent misuse.

Reports of AI-generated child abuse imagery have surged, with the IWF recording 426 cases in 2025, more than double the 199 cases reported in 2024. The data also reveals a sharp rise in images depicting infants, increasing from five in 2024 to 92 in 2025.

Officials say the measures will enable experts to identify vulnerabilities within AI systems, making it more difficult for offenders to exploit the technology.

The legislation will also require AI developers to build protections against non-consensual intimate images and extreme content. A group of experts in AI and child safety will be established to oversee secure testing and ensure the well-being of researchers.

Ministers emphasised that child safety must be built into AI systems from the start, not added as an afterthought.

By collaborating with the AI sector and child protection groups, the government aims to make the UK the safest place for children to be online. The approach strikes a balance between innovation and strong protections, thereby reinforcing public trust in AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Denmark’s new chat control plan raises fresh privacy concerns

Denmark has proposed an updated version of the EU’s controversial ‘chat control’ regulation, shifting from mandatory to voluntary scanning of private messages. Former MEP Patrick Breyer has warned, however, that the revision still threatens Europeans’ right to private communication.

Under the new plan, messaging providers could choose to scan chats for illegal material, but without a clear requirement for court orders. Breyer argued that this sidesteps the European Parliament’s position, which insists on judicial authorisation before any access to communications.

He also criticised the proposal for banning under-16s from using messaging apps like WhatsApp and Telegram, claiming such restrictions would prove ineffective and easily bypassed. In addition, the plan would effectively outlaw anonymous communication, requiring users to verify their identities through IDs.

Privacy advocates say the Danish proposal could set a dangerous precedent by eroding fundamental digital rights. Civil society groups have urged EU lawmakers to reject measures that compromise secure, anonymous communication essential for journalists and whistleblowers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI unveils Teen Safety Blueprint for responsible AI

OpenAI has launched the Teen Safety Blueprint to guide responsible AI use for young people. The roadmap advises policymakers and developers on age-appropriate design, safeguards, and research to protect teen well-being and expand opportunity.

The company is implementing these principles across its products without waiting for formal regulation. Recent measures include stronger safeguards, parental controls, and an age-prediction system to customise AI experiences for under-18 users.

OpenAI emphasises that protecting teens is an ongoing effort. Collaboration with parents, experts, and young people will help improve AI safety continuously while shaping how technology can support teens responsibly over the long term.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How Google uses AI to support teachers and inspire students

Google is redefining education with AI designed to enhance learning, rather than replace teachers. The company has unveiled new tools grounded in learning science to support both educators and students, aiming to make learning more effective, efficient and engaging.

Through its Gemini platform, users can follow guided learning paths that encourage discovery rather than passive answers.

YouTube and Search now include conversational features that allow students to ask questions as they learn, while NotebookLM can transform personal materials into quizzes or immersive study aids.

Instructors can also utilise Google Classroom’s free AI tools for lesson planning and administrative support, thereby freeing up time for direct student engagement.

Google emphasises that its goal is to preserve the human essence of education while using AI to expand understanding. The company also addresses challenges linked to AI in learning, such as cheating, fairness, accuracy and critical thinking.

It is exploring assessment models that cannot be easily replicated by AI, including debates, projects, and oral examinations.

The firm pledges to develop its tools responsibly by collaborating with educators, parents and policymakers. By combining the art of teaching with the science of AI-driven learning, Google seeks to make education more personal, equitable and inspiring for all.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO adopts first global ethical framework for neurotechnology

UNESCO has approved the world’s first global framework on the ethics of neurotechnology, setting new standards to ensure that advances in brain science respect human rights and dignity. The Recommendation, adopted by member states and entering into force on 12 November, establishes safeguards to ensure neurotechnological innovation benefits those in need without compromising mental privacy.

Launched in 2019 under Director-General Audrey Azoulay, the initiative builds on UNESCO’s earlier work on AI ethics. Azoulay described neurotechnology as a ‘new frontier of human progress’ that demands strict ethical boundaries to protect the inviolability of the human mind. The framework reflects UNESCO’s belief that technology should serve humanity responsibly and inclusively.

Neurotechnology, which enables direct interaction with the nervous system, is rapidly expanding, with investment in the sector rising by 700% between 2014 and 2021. While medical uses, such as deep brain stimulation and brain–computer interfaces, offer hope for people with Parkinson’s disease or disabilities, consumer devices that read neural data pose serious privacy concerns. Many users unknowingly share sensitive information about their emotions or mental states through everyday gadgets.

The Recommendation calls on governments to regulate these technologies, ensure they remain accessible, and protect vulnerable groups, especially children and workers. It urges bans on non-therapeutic use in young people and warns against monitoring employees’ mental activity or productivity without explicit consent.

UNESCO also stresses the need for transparency and better regulation of products that may alter behaviour or foster addiction.

Developed after consultations with over 8,000 contributors from academia, industry, and civil society, the framework was drafted by an international group of experts led by scientists Hervé Chneiweiss and Nita Farahany. UNESCO will now help countries translate the principles into national laws, as it has done with its 2021 AI ethics framework.

The Recommendation’s adoption, finalised at the General Conference in Samarkand, marks a new milestone in the global governance of emerging technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!