ICO launches online privacy campaign for parents

New research published by the Information Commissioner’s Office (ICO) found that 24% of primary school-aged children have shared their real name or address online, while 21% of parents and carers have never spoken to them about online privacy. It also found that 22% of children have shared personal information, such as health details, with AI tools.

Research published by the ICO also found that 71% of parents worry that information their child shares today could affect their future. Findings also show that 46% do not feel confident protecting their children’s privacy online, 44% say they try but are not sure they are doing enough, and 42% say they probably do not spend enough time checking privacy settings.

Online privacy is one of the least-discussed online safety topics among parents, according to the ICO. Its research found that 38% discuss it less than once a month, while 90% have discussed screen time in the past month.

Emily Keaney, Deputy Commissioner at the ICO, said: ‘The internet offers amazing opportunities for children – but every click can leave a hidden data trail and these digital footprints can last forever.’ She added: ‘We wouldn’t expect our children to share their birthdays or address with a stranger in a shop, because we’d explain stranger danger to them from a very young age, but kids these days are growing up online.’

Keaney said: ‘We know that where children’s details – like their name, interests and pictures – aren’t protected, the potential risks are serious: unwanted contact from strangers, grooming and radicalisation.’ She said children’s online privacy ‘requires a whole society approach’ and added: ‘We have taken and will continue to take action to hold tech companies accountable for their role.’

Keaney also said: ‘There’s a role for parents too but the problem is that many families have never been shown how to talk to their children about online privacy.’ She added: ‘This is where the ICO comes in. We want parents to feel empowered and children to feel digitally confident, because only then will they be able to start to trust in how their data is used and be part of the whole society solution that is needed for online safety.’

The ICO campaign website outlines three steps for parents: talk regularly with children about online privacy, carefully choose what personal information to share, and check privacy settings on new devices and apps.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Transparency push for automated recruitment in the UK

The UK’s Information Commissioner’s Office has issued new guidance on the growing use of AI in recruitment, warning that jobseekers may be unaware of how automated systems influence hiring decisions. The regulator says greater transparency is needed as adoption accelerates.

Automated decision-making tools are increasingly used to screen applications, analyse CVs and rank candidates. While this can improve efficiency, some applicants may be rejected before any human review takes place.

The regulator highlights risks including bias, lack of clarity and potential unfair treatment if safeguards around the use of AI are not properly applied. Employers are expected to monitor systems for discrimination and clearly explain how decisions are made.

Jobseekers are entitled to know when automation is used, to challenge outcomes, and to request human review. The guidance aims to ensure fair and lawful hiring practices as AI becomes increasingly embedded in UK recruitment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China sets standards for AI ethics review and algorithm accountability

The introduction of new AI ethics guidelines by China signals a structured attempt to formalise governance frameworks for rapidly expanding AI systems.

Coordinated by the Ministry of Industry and Information Technology of the People’s Republic of China and multiple state bodies, the policy integrates ethical oversight directly into technological development processes.

A central feature of the framework is the emphasis on operationalising ethical principles such as fairness, accountability, and human well-being through technical review mechanisms.

By focusing on data selection, algorithmic design, and system architecture, the guidelines move towards embedding ethical safeguards at the development stage and protecting intellectual property rights in AI ethics review technologies.

Such an approach reflects a broader shift towards anticipatory governance, where risks such as bias, discrimination, and algorithmic manipulation are addressed before deployment.

The policy also highlights the role of infrastructure in ethical governance, including the development of auditing tools, risk assessment systems, and curated datasets.

Scenario-based evaluation mechanisms indicate an effort to tailor oversight to specific use cases, recognising that AI risks vary significantly across sectors. Instead of relying solely on static compliance rules, the framework promotes adaptive governance aligned with technological complexity.

Ultimately, the outcome is a governance model that seeks to maintain technological competitiveness while addressing societal risks, contributing to wider global debates on how states can regulate AI systems without constraining their development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IAPP Global Summit session examines AI, privacy, and the courts with US federal judges

US District Court for the District of Columbia Chief Judge James Boasberg and US District Court for the District of Massachusetts Judge Allison Burroughs discussed AI, privacy, and the courts during the IAPP Global Summit 2026 in Washington, D.C.

The IAPP report said Burroughs pointed to the gap between older legal protections and newer technologies, including debates over how surveillance rules apply to cell-tower data. Burroughs said existing laws and constitutional protections are ‘not keeping up, never have kept up and never will keep up’ with the speed of innovation.

Burroughs commented: ‘The gap is getting bigger for two reasons. One is that there’s so much more data stored electronically that if you even search for someone’s laptop, you’re going to get more data now than you used to get, and the other one is that there is so much more technology, there are just so many ways of gaining access to data.’

Another part of the IAPP report stated that Boasberg referred to a case in which lawyers submitted filings containing hallucinated material generated by AI. According to the report, he required that side to pay the other side’s attorney’s fees as a sanction after discovering that AI had been used in the briefs.

Boasberg noted at the IAPP session: ‘I’m sure lawyers using AI is happening a lot more on the state level, and some judges are referring lawyers to state bars (for possible discipline), but there have been federal judges whose opinions included hallucinatory (citations) and that was obviously embarrassing for them.’ He added: ‘The question is how can it help without compromising privacy issues, sealed cases; there’s just a whole lot that we have to figure out, but I think judges are trying to learn how we can use this constructively.’

Burroughs also remarked at the IAPP event that judges want disclosure when lawyers use AI in filings. She said: ‘We want lawyers to tell us when they’ve used AI. They can use it, but they have to disclose it.’ She added: ‘They can use AI, they can’t use AI, they must disclose when they’re using it, they have to certify that they do citation checks to make sure they don’t have hallucinatory citations — it’s hard to think of what these rules would be going forward today.’

IAPP reported the remarks from the summit discussion, which focused on how AI is affecting legal filings, surveillance questions, and court practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Student AI rights framework unveiled

A newly released ‘Student AI Bill of Rights’ in the US outlines a proposed framework to protect learners as AI tools become increasingly widespread in education. The initiative aims to establish clear standards for fairness, transparency and accountability.

The document highlights the need for students to be informed when AI systems are used in teaching, assessment or administration. It also stresses that students should retain control over their personal data and academic work.

Another central principle is accountability, with students given the right to question and appeal decisions made or influenced by AI systems. The framework also calls for safeguards to prevent bias and ensure equal access to educational opportunities.

While not legally binding, the proposal is designed to guide higher education institutions in developing responsible AI policies. It reflects growing efforts to define ethical standards for AI use in education in the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK Research and Innovation review calls for reform at The Alan Turing Institute

An independent review by UK Research and Innovation has assessed the performance of The Alan Turing Institute. The evaluation examined whether the institute meets expectations as a national centre for AI and data science.

Findings recognise scientific excellence, strong partnerships and valuable contributions within the UK research system. However, the review identifies the need for a clearer strategic purpose and stronger delivery.

The panel concludes that alignment with national priorities and value for money is not yet satisfactory. Recommendations include improved governance, clearer prioritisation and renewed external scientific scrutiny.

Additional proposals call for stronger stakeholder engagement and a defined mission focused on resilience, security and defence. A framework for value for money is also expected to be agreed with the Engineering and Physical Sciences Research Council.

UK Research and Innovation will work with the institute’s leadership and partners to implement the changes. A development plan is expected by September 2026, with further assessment to follow.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CNN develops agent infrastructure for AI media trading

CNN is developing an internal agent infrastructure as part of a plan to begin AI-driven media trading by early 2027. The company aims to complete protocol scoping by the end of the second quarter before moving into testing phases later in the year.

Testing will focus on how properties are interpreted by large language models and how buyers allocate budgets to agent-based systems. Executives say the timeline may change as the technology and market conditions continue to evolve.

The initiative combines in-house development with external technology partners, while aligning with industry frameworks to ensure compatibility. CNN is also working with standards bodies to ensure agent communication produces accurate outcomes for buyers.

Agentic protocols enable systems to exchange information, negotiate pricing, and manage tasks autonomously between buyers and sellers. The company is prioritising consistent communication to support efficient and reliable transactions.
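To make the idea of autonomous negotiation concrete, here is a minimal sketch of a buyer/seller agent exchange. The message shapes (`Bid`, `Counter`), field names, and floor-price logic are entirely hypothetical illustrations of how such a protocol could be structured; CNN's actual protocol is still being scoped and is not public.

```python
from dataclasses import dataclass

# Hypothetical message shapes for a buyer/seller agent exchange.
# Purely illustrative -- not CNN's actual protocol.

@dataclass
class Bid:
    campaign_id: str
    cpm_offer: float      # price the buyer agent offers per thousand impressions
    impressions: int

@dataclass
class Counter:
    campaign_id: str
    cpm_ask: float        # seller agent's price in response
    accepted: bool

def seller_agent(bid: Bid, floor_cpm: float) -> Counter:
    """Accept any bid at or above the floor price; otherwise counter at the floor."""
    if bid.cpm_offer >= floor_cpm:
        return Counter(bid.campaign_id, bid.cpm_offer, accepted=True)
    return Counter(bid.campaign_id, floor_cpm, accepted=False)

# A buyer agent offers $12 CPM; the seller's floor is $10, so the bid clears.
response = seller_agent(Bid("c-001", cpm_offer=12.0, impressions=500_000), floor_cpm=10.0)
print(response)
```

In a real deployment, the consistent message schema is what matters most: both sides must interpret fields like price and volume identically, which is why the article notes CNN's work with standards bodies on agent communication.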

Early efforts are centred on learning and experimentation, even without immediate revenue generation. Initial use cases are expected to focus on performance-driven campaigns before expanding into broader advertising activities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GEANT Security Days 2026 to address AI, internet resilience, and cyber resilience

GEANT Security Days 2026 will take place in Utrecht, the Netherlands, bringing together security professionals, network experts, incident responders, and chief information security officers from the research and education community.

The opening plenary includes a keynote by Frank Rieger, Chief Technical Officer of a supplier of secure communication systems, on ‘The bumpy road ahead – IT security challenges of the next years.’ According to the programme, his talk will address agentic LLMs, attention economics, and how automation, control of networked systems, endpoints, and software are becoming increasingly important as change accelerates.

A second keynote in the opening plenary of GEANT Security Days is scheduled with Valerie Aurora of the Amsterdam Internet Resiliency Club. The programme says her session, ‘Start your own Internet Resiliency Club,’ will look at how communities can prepare for temporary loss of internet connectivity caused by accidents, natural disasters, or armed conflict, and how to build local internet resiliency clubs using LoRa radios, mesh networking, and community management.

Another keynote is listed from Nancy Beers of Sanne Cyber and Happy Game Changers. Her session, ‘Play More Today. Secure Tomorrow,’ is described as a discussion of play and playfulness as tools for learning, innovation, and security practice, drawing on interactive games and team-based approaches.

Topics listed in the GEANT Security Days programme include security operations centres, AI in incident response, AI more broadly, cloud security, community engagement, cyber resilience, the human factor, an unconference or storytelling session, squeezed budgets and stretched teams, and practical security.

The event page says these sessions will address issues such as anomaly detection and prediction, malicious uses of generative AI, trust in third-party services, compliance in multi-cloud and hybrid environments, continuity planning, phishing and credential reuse, and operational pressures on security teams.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

MIT study finds steady AI growth reshapes work

A new study from the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory finds that AI is reshaping work through steady, broad-based improvements rather than sudden technological jumps.

Researchers describe this pattern as a ‘rising tide,’ in which capability gains emerge across many tasks simultaneously.

The analysis draws on more than 17,000 worker evaluations covering over 3,000 text-based tasks from US labour classifications. Findings show limited evidence of abrupt ‘crashing wave’ breakthroughs in which AI suddenly masters specific job areas.

Instead, performance improves consistently across tasks of varying complexity and duration. Researchers report that current AI systems can already complete roughly half to three-quarters of text-related tasks at a minimally sufficient standard without human intervention.

Projections suggest that, if current trends continue, success rates could reach around 80 to 95 percent by 2029, although higher-quality performance may take longer to achieve.

Workplace change is unfolding gradually, with employees shifting towards oversight roles focused on directing, reviewing, and validating AI outputs.

Despite a slower structural transition than abrupt disruption scenarios, researchers warn that cumulative improvements could still drive significant labour market effects as adoption expands.

AI-driven change is likely to unfold across a wide range of tasks, allowing adaptation by workers and organisations while still signalling longer-term shifts in skills, workflows, and labour markets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI helps map legal systems for more coherent regulation

A study from Sultan Qaboos University shows how AI can be used to map hidden structural relationships within legal systems, offering new ways to understand how laws interact and evolve.

Published in The Journal of Engineering Research, the research applies natural language processing and network analysis to Oman’s 2023 Labour Law.

The analysis reveals that legal provisions operate as an interconnected system rather than isolated rules. Certain articles emerge as highly influential ‘hubs’, with Article 147 identified as a central node whose modification could generate cascading effects across multiple parts of the legislation.

These interdependencies are visualised through network mapping techniques that highlight structural relationships not easily detected through traditional review.
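The hub-detection idea can be sketched in a few lines. The cross-reference edge list below is hypothetical (only Article 147's central role is taken from the study), and simple degree counting stands in for the researchers' actual NLP and network-analysis pipeline:

```python
from collections import defaultdict

# Hypothetical cross-reference edges: (a, b) means "article a references article b".
# Only Article 147's hub role comes from the study; the other links are invented.
cross_references = [
    (10, 147), (25, 147), (58, 147), (147, 3),
    (3, 25), (58, 10), (90, 147), (90, 58),
]

# Count how many links touch each article (undirected degree).
degree = defaultdict(int)
for a, b in cross_references:
    degree[a] += 1
    degree[b] += 1

# The article with the highest degree is a candidate "hub": amending it
# could cascade through every provision that references it.
hub = max(degree, key=degree.get)
print(f"Most connected article: {hub} (degree {degree[hub]})")
```

Real analyses would weigh richer signals, such as shared terminology and betweenness centrality, but even this toy degree count shows why a single heavily referenced article can dominate a legal network.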

To construct this model, researchers developed a four-stage methodology combining Arabic-language NLP tools with industrial engineering approaches. Legal texts were mapped using terminology and cross-referencing patterns, with outputs validated by Omani legislative experts to ensure accuracy and relevance.

The study highlights links between labour law and broader regulatory domains, including commercial regulation, social protection, occupational health, and immigration policy.

The findings underline AI’s potential in the regulatory sector to improve coherence, reveal interdependencies, and support scalable, more consistent legal frameworks across jurisdictions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!