India’s data protection rules finally take effect

India has brought the Digital Personal Data Protection Act 2023 into force after extended delays. Final rules notified in November operationalise the long-awaited national privacy framework, giving the Act, passed in August 2023, a complete compliance structure.

Implementation of the rules is staggered so organisations can adjust governance, systems and contracts. Some provisions, including the creation of a Data Protection Board, take effect immediately. Obligations on consent notices, breach reporting and children’s data begin after 12 or 18 months.

The rules introduce regulated consent managers, which act as a single interface between users and data fiduciaries. Managers must register with the Board and follow strict operational standards. Parents will use digital locker-based verification when authorising the processing of children’s information online.

Global technology, finance and health providers now face major upgrades to internal privacy programmes. Lawyers expect major work mapping data flows, refining consent journeys and tightening security practices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teenagers still face harmful content despite new protections

In the UK and other countries, teenagers continue to encounter harmful social media content, including posts about bullying, suicide and weapons, despite the Online Safety Act coming into effect in July.

A BBC investigation using test profiles revealed that some platforms continue to expose young users to concerning material, particularly on TikTok and YouTube.

The experiment, conducted with six fictional accounts aged 13 to 15, revealed differences in exposure between boys and girls.

While Instagram showed marked improvement, with no harmful content displayed during the latest test, TikTok users were repeatedly served posts about self-harm and abuse, and one YouTube profile encountered videos featuring weapons and animal harm.

Experts warned that change will take time and urged parents to actively monitor their children’s online activity. They also recommended open conversations about content, the use of parental controls, and vigilance rather than relying solely on the new regulatory codes.

New funding round by Meta strengthens local STEAM education

Meta is inviting applications for its 2026 Data Centre Community Action Grants, which support schools, nonprofits and local groups in regions that host the company’s data centres.

The programme has been a core part of Meta’s community investment strategy since 2011, and the latest round expands support to seven additional areas linked to new facilities. The company views the grants as a means of strengthening long-term community vitality, rather than focusing solely on infrastructure growth.

Funding is aimed at projects that use technology for public benefit and improve opportunities in science, technology, engineering, arts and mathematics. More than $74 million has been awarded to communities worldwide, with $24 million distributed through the grant programme alone.

Recipients can reapply each year, which enables organisations to sustain programmes and increase their impact over time.

Several regions have already demonstrated how the funding can reshape local learning opportunities. Northern Illinois University used grants to expand engineering camps for younger students and to open a STEAM studio that supports after-school programmes and workforce development.

In New Mexico, a middle school used funding to build a STEM centre with advanced tools such as drones, coding kits and 3D printing equipment. In Texas, an enrichment organisation created a digital media and STEM camp for at-risk youth, offering skills that can encourage empowerment instead of disengagement.

Meta presents the programme as part of a broader pledge to deepen education and community involvement around emerging technologies.

The company argues that long-term support for digital learning will strengthen local resilience and create opportunities for young people who want to pursue future careers in technology.

New AI platforms approved for Surrey Schools classrooms

Surrey Schools has approved MagicSchool, SchoolAI, and TeachAid for classroom use, giving teachers access through the ONE portal with parental consent. The district says the tools are intended to support instruction while maintaining strong privacy and safety safeguards.

Officials say each platform passes rigorous reviews covering educational value, data protection, and technical security before approval. Teachers receive structured guidance on appropriate use, supported by professional development aligned with wider standards for responsible AI in education.

A two-year digital literacy programme helps staff explore online identity, digital habits, and safe technology use as AI becomes more common in lessons. Students use AI to generate ideas, check code, and analyse scientific or mathematical problems, reinforcing critical reasoning.

Educators stress that pupils are taught to question AI outputs rather than accept them at face value. Leaders argue this approach builds judgment and confidence, preparing young people to navigate automated systems with greater agency beyond school settings.

Families and teachers can access AI safety resources through the ONE platform, including videos, podcasts and the ‘Navigating an AI Future’ series. Materials include recordings from earlier workshops and parent sessions, supporting shared understanding of AI’s benefits and risks across the community.

Coding meets creativity in Minecraft Education’s AI tutorial

Minecraft Education is introducing an AI-powered twist on the classic first night challenge with a new Hour of AI world. Players explore a puzzle-driven environment that turns early survival stress into a guided coding and learning experience.

The activity drops players into a familiar biome and tasks them with building shelter before sunset. Instead of panicking at distant rustles or looming shadows, learners work with an AI agent designed to support planning and problem-solving.

Using MakeCode programming, players teach their agent to recognise patterns, classify resources, and coordinate helper bots. The agent mimics real AI behaviour by learning from examples and occasionally making mistakes that require human correction to improve its decisions.
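
The learn-from-examples loop described above can be sketched as a toy classifier. This is purely illustrative, not MakeCode's actual API: the feature pairs and labels are invented, and a simple nearest-example rule stands in for whatever the activity really uses. The agent labels a new resource from the examples it has been shown, errs, and improves when a player adds a correction.

```python
# Toy sketch of "teach by example" (illustrative only; not MakeCode's real API).
# Resources are (hardness, flammability) feature pairs; the agent labels a new
# resource by its nearest teaching example and improves when a player corrects it.

def classify(examples, features):
    """Return the label of the nearest teaching example (1-nearest-neighbour)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: dist(ex[0], features))[1]

# Initial teaching examples the player has shown the agent
examples = [((0.9, 0.0), "stone"), ((0.3, 1.0), "wood")]

guess = classify(examples, (0.8, 0.9))   # hard AND flammable: the agent errs
# The agent answers "wood"; the player corrects it by adding a labelled
# example, and the agent's next decision improves.
examples.append(((0.8, 0.9), "charcoal"))
improved = classify(examples, (0.85, 0.95))
```

The human-correction step is the point: the agent only gets better because a person spots the mistake and supplies a better example, mirroring the lesson the activity teaches about how AI systems learn.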

As the agent becomes more capable, it shifts from a simple tool to a partner that automates key tasks and reduces first-night pressure. The aim is to let players develop creative strategies rather than resort to frantic survival instincts.

Designed for ages seven and up, the experience is free to access through Minecraft Education. It introduces core AI literacy concepts, blending gameplay with lessons on how AI systems learn, adapt, and occasionally fail, all wrapped in a familiar, family-friendly setting.

AI tools deployed to set tailored attendance goals for English schools

England will introduce AI-generated attendance targets for each school, with improvement baselines tailored to its context and needs. Schools with higher absence rates will be paired with strong performers for support. Thirty-six new Attendance and Behaviour Hubs will help drive the rollout.
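
One plausible, entirely hypothetical reading of a "tailored baseline" is benchmarking each school against statistically similar peers. The features, similarity measure and figures below are invented for illustration; the actual model behind the targets has not been published.

```python
# Hedged sketch: set a school's attendance target by finding the k most
# similar schools and taking the best absence rate among them.
# Context features (fsm_pct = % pupils on free school meals, size) and all
# numbers are invented; the real DfE model is not public.

def tailored_target(school, peers, k=3):
    """Target = lowest absence rate among the k most similar peer schools."""
    def similarity_gap(a, b):
        return abs(a["fsm_pct"] - b["fsm_pct"]) + abs(a["size"] - b["size"]) / 1000
    nearest = sorted(peers, key=lambda p: similarity_gap(school, p))[:k]
    return min(p["absence_rate"] for p in nearest)

school = {"fsm_pct": 30, "size": 900, "absence_rate": 8.5}
peers = [
    {"fsm_pct": 28, "size": 950, "absence_rate": 6.2},
    {"fsm_pct": 31, "size": 870, "absence_rate": 7.1},
    {"fsm_pct": 10, "size": 400, "absence_rate": 4.0},  # too different to count
    {"fsm_pct": 29, "size": 910, "absence_rate": 6.8},
]
target = tailored_target(school, peers)  # benchmark drawn from similar schools
```

The design choice worth noting is that the small, dissimilar school is excluded from the benchmark, so the target reflects what comparable schools actually achieve rather than a national average.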

Education Secretary Bridget Phillipson said raising attendance is essential for opportunity. She highlighted the progress made since the pandemic, but noted that variation remains too high. The AI targets aim to disseminate effective practices across all schools.

A new toolkit will guide schools through key transition points, such as the move from Year 7 to Year 8. CHS South in Manchester is highlighted for using summer family activities to ease anxiety. Officials say early engagement can stabilise attendance.

CHS South Deputy Head Sue Burke said the goal is to ensure no pupil feels left out. She credited the attendance team for combining support with firm expectations. The model is presented as a template for broader adoption.

The policy blends AI analysis with pastoral strategies to address entrenched absence. Ministers argue that consistent attendance drives long-term outcomes. The UK government expects personalised targets and shared practice to embed lasting improvement.

EU regulators, UK and eSafety lead the global push to protect children in the digital world

Children today spend a significant amount of their time online, from learning and playing to communicating.

To protect them in an increasingly digital world, Australia’s eSafety Commissioner, the European Commission’s DG CNECT, and the UK’s Ofcom have joined forces to strengthen global cooperation on child online safety.

The partnership aims to ensure that online platforms take greater responsibility for protecting and empowering children, recognising their rights under the UN Convention on the Rights of the Child.

The three regulators will continue to enforce their online safety laws to ensure platforms properly assess and mitigate risks to children. They will promote privacy-preserving age verification technologies and collaborate with civil society and academics to ensure that regulations reflect real-world challenges.

By supporting digital literacy and critical thinking, they aim to provide children and families with safer and more confident online experiences.

To advance the work, a new trilateral technical group will be established to deepen collaboration on age assurance. It will study the interoperability and reliability of such systems, explore the latest technologies, and strengthen the evidence base for regulatory action.

Through closer cooperation, the regulators hope to create a more secure and empowering digital environment for young people worldwide.

Meta, TikTok and Snapchat prepare to block under-16s as Australia enforces social media ban

Social media platforms, including Meta, TikTok and Snapchat, will begin sending notices to more than a million Australian teens, telling them to download their data, freeze their profiles or lose access when the national ban for under-16s comes into force on 10 December.

According to people familiar with the plans, platforms will deactivate accounts believed to belong to users under 16; the roughly 20 million older Australians will be unaffected. The move marks a shift after a year of opposition from tech firms, which had warned the rules would be intrusive or unworkable.

Companies plan to rely on their existing age-estimation software, which predicts age from behaviour signals such as likes and engagement patterns. Only users who challenge a block will be pushed to the age assurance apps. These tools estimate age from a selfie and, if disputed, allow users to upload ID. Trials show they work, but accuracy drops for 16- and 17-year-olds.
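
The behavioural age-estimation step described above can be sketched as a simple scoring rule. Everything here is an assumption made for illustration: the feature names, weights and threshold are invented, and real platform models are proprietary and far more complex.

```python
# Hypothetical sketch of behaviour-based age estimation. Feature names,
# weights and the 0.7 threshold are all invented for illustration.
import math

def under_16_score(signals):
    """Logistic score over engagement signals: higher = more likely under 16."""
    weights = {
        "follows_teen_creators": 1.8,   # assumed youth-correlated signal
        "school_hours_activity": 1.2,   # assumed youth-correlated signal
        "account_age_years": -0.6,      # older accounts lower the score
    }
    z = sum(w * signals.get(k, 0.0) for k, w in weights.items()) - 1.0  # bias
    return 1 / (1 + math.exp(-z))

signals = {"follows_teen_creators": 1.0,
           "school_hours_activity": 1.0,
           "account_age_years": 1.0}
score = under_16_score(signals)
# High-confidence scores trigger deactivation; disputed calls are routed to
# age-assurance apps (selfie estimate, then ID upload if still contested).
action = "deactivate" if score > 0.7 else "no_action"
```

A logistic score of this kind aggregates many weak signals, such as likes and engagement patterns, into a single probability-like number, which matches the article's description of how the existing software predicts age.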

Yoti’s Chief Policy Officer, Julie Dawson, said disruption should be brief, with users adapting within a few weeks. Meta, Snapchat, TikTok and Google declined to comment. In earlier hearings, most respondents stated that they would comply.

The law blocks teenagers from using mainstream platforms without any parental override. It follows renewed concern over youth safety after internal Meta documents in 2021 revealed harm linked to heavy social media use.

A smooth rollout is expected to influence other countries as they explore similar measures. France, Denmark, Florida and the UK have pursued age checks, with mixed results amid concerns over privacy and practicality.

Consultants say governments are watching to see whether Australia’s requirement for platforms to take ‘reasonable steps’ to block minors, including trying to detect VPN use, works in practice without causing significant disruption for other users.

UK moves to curb AI-generated child abuse imagery with pre-release testing

The UK government plans to let approved organisations test AI models before release to ensure they cannot generate child sexual abuse material. The amendment to the Crime and Policing Bill aims to build safeguards into AI tools at the design stage rather than after deployment.

The Internet Watch Foundation reported 426 AI-related abuse cases this year, up from 199 in 2024. Chief Executive Kerry Smith said the move could make AI products safer before they are launched. The proposal also extends to detecting extreme pornography and non-consensual intimate images.

The NSPCC’s Rani Govender welcomed the reform but said testing should be mandatory to make child safety part of product design. Earlier this year, the Home Office introduced new offences for creating or distributing AI tools used to produce abusive imagery, punishable by up to five years in prison.

Technology Secretary Liz Kendall said the law would ensure that trusted groups can verify the safety of AI systems, while Safeguarding Minister Jess Phillips said it would help prevent predators from exploiting legitimate tools.

Northern Ireland teachers reclaim hours with AI

A six-month pilot across Northern Ireland put Gemini and Workspace into classrooms. One hundred teachers participated under the Education Authority’s C2k programme. Reported benefits centred on time savings and practical support for everyday teaching.

Participants said they saved around ten hours per week on routine tasks, with the freed time redirected to pupil engagement and professional development. More than six hundred use cases from the one hundred participants were documented during the trial period.

Teachers cited varied applications, from drafting parent letters to quickly generating risk assessments. NotebookLM helped transform curriculum materials into podcasts and interactive mind maps. Teachers also tailored inclusive lessons, including Irish-language activities and support for neurodivergent learners.

C2k plans wider training so more Northern Ireland educators can adopt the tools responsibly. Leadership framed AI as collaborative, not a replacement for teachers. Further partnerships are expected to align products with established pedagogical principles.
