UN experts warn of growing risks from digital surveillance and AI misuse

UN human rights experts have raised concerns about the global expansion of digital surveillance technologies and their impact on fundamental freedoms, warning that current practices risk undermining democratic participation and civic space.

In a joint statement, the experts said that surveillance tools are increasingly used in ways that may be incompatible with international human rights standards. They noted that such technologies are often deployed against civil society, journalists, political opposition, and minority groups, contributing to what they described as a ‘chilling effect’ on freedom of expression and dissent.

The experts highlighted the growing use of advanced technologies, including AI, in areas such as law enforcement, counter-terrorism, and border management. They said that, without adequate legal safeguards, these tools can enable large-scale monitoring, predictive profiling, and the amplification of bias, potentially leading to disproportionate targeting of individuals and groups.

According to the statement, digital surveillance systems are part of broader ecosystems that involve collaboration among governments, private companies, and data intermediaries. These interconnected systems can expand state surveillance capabilities and increase the complexity of assessing their impact on human rights.

The experts also pointed to the role of legal frameworks, noting that broadly defined laws on national security, extremism, and cybercrime may contribute to the misuse of surveillance technologies. Such measures, they said, can affect the work of civil society organisations and other actors operating in the public sphere.

To address these challenges, the experts called for stronger safeguards, including clearer limits on surveillance practices, risk-based regulation of AI systems, and improved oversight mechanisms. They emphasised the importance of human rights impact assessments throughout the lifecycle of digital technologies, as well as the need for accountability and access to remedies in cases of harm.

Why does it matter?

The statement also highlighted the importance of data protection, system testing, and validation to reduce risks associated with digital surveillance tools. It called on governments to align national legislation with international human rights standards and ensure independent oversight of surveillance activities.

The experts further suggested that international cooperation may be needed to address cross-border implications, including the potential development of a binding international framework governing digital surveillance technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO and Oxford University launch global AI course for courts

A free online course aimed at preparing judicial systems for the growing role of AI in legal decision-making has been launched by UNESCO in partnership with the University of Oxford.

AI is already shaping court processes, influencing evidence assessment, and affecting access to justice. Yet many legal professionals lack structured guidance for evaluating such systems within a rule-of-law framework.

The UNESCO programme introduces a practical, human rights-based approach to AI, combining legal, ethical, and operational perspectives.

Developed with institutions including Oxford’s Saïd Business School and Blavatnik School of Government, the course equips participants with tools to assess algorithmic outputs, manage risks of bias, and maintain judicial independence in increasingly digital court environments.

Central to UNESCO’s initiative is a newly developed AI and Rule of Law Checklist, designed to help courts scrutinise AI systems and their outputs, including use as evidence.

The course also addresses broader concerns, including fairness, transparency, accountability, and the protection of vulnerable groups, reflecting rising global reliance on AI across justice systems.

Supported by the EU, the course is available globally, free of charge, with certification from the University of Oxford. As AI becomes embedded in judicial processes, capacity-building efforts aim to ensure technological adoption strengthens rather than undermines the rule of law.

UN prepares first Global Dialogue on AI governance ahead of Geneva meeting

The United Nations is advancing preparations for the first Global Dialogue on Artificial Intelligence Governance, set to take place in Geneva on 6–7 July 2026 alongside the AI for Good Summit.

Speaking at a UN Geneva press briefing, Egriselda López, Permanent Representative of El Salvador and co-chair of the Dialogue, said the initiative was established by UN member states as a universal forum to discuss AI governance. The process is intended to bring together governments and stakeholders with the aim of producing tangible outcomes.

López said the initial meeting will be structured around thematic clusters, including one focusing on AI opportunities and implications and another addressing the digital divide. She added that consultations with member states and stakeholders are ongoing to ensure an inclusive format for the discussions.

Rein Tammsaar, Permanent Representative of Estonia and co-chair of the Dialogue, said the forum aims to connect existing AI initiatives and best practices from around the world. He stressed the importance of interoperability and coordination, noting that the Dialogue seeks to create synergies rather than duplicate existing efforts.

According to Tammsaar, additional thematic areas will include interoperability, safety, and human rights. While human rights are expected to be a cross-cutting issue, stakeholders have also called for them to be addressed as a standalone theme.

Amandeep Gill, UN Secretary-General’s Envoy on Technology, described the initiative as part of a broader approach to ensuring that AI benefits humanity as a whole. He said the Dialogue is designed as a ‘dialogue of dialogues’, enabling governments, experts and other stakeholders to exchange knowledge in a rapidly evolving technological environment.

Gill also highlighted the role of the Independent International Scientific Panel on AI, which is expected to present its findings at the Geneva meeting. He noted that global capacity to both use and govern AI remains uneven, underlining the need to address disparities between countries.

Officials emphasised that the Dialogue is intended to complement existing initiatives rather than centralise governance efforts. It will focus on issues such as safety and human rights, while discussions on military uses of AI fall outside its mandate.

A second Global Dialogue on AI Governance meeting is planned for May 2027 in New York, as part of ongoing efforts to develop a more coordinated and inclusive global approach to AI governance.

AI for Peace Summit highlights push for African-led innovation

A growing push for African-led AI development is shaping discussions on peace, governance, and security across the continent. At the AI for Peace Summit hosted at the Humanitarian Peace Support School in Nairobi, stakeholders called for AI systems better tailored to African governance, security, and resilience challenges.

Brigadier General John Nkoimo, General Officer Commanding Central Command of the Kenya Defence Forces, speaking on behalf of the Chief of the Defence Forces, highlighted AI’s potential to improve situational awareness and strengthen inter-agency coordination in complex security environments.

Participants called for stronger investment in local innovation ecosystems to ensure AI tools reflect regional realities, particularly in fragile and conflict-affected settings. Discussions also focused on governance gaps, with participants warning that regulatory frameworks must evolve quickly enough to keep pace with rapid technological deployment.

Security applications such as early warning systems, election monitoring, and other operational uses featured prominently, alongside concerns over human rights protection and institutional accountability. The summit’s broader message was that Africa’s AI future should be shaped locally through stronger governance and sustained investment in homegrown solutions.

Why does it matter?

AI is moving away from a one-size-fits-all model towards systems better adapted to African governance and security realities. Context-specific tools are more likely to be effective in fragile and conflict-affected environments because they can better reflect local risks, institutions, and operational conditions.

This shift also supports longer-term resilience by prioritising local innovation, reducing dependence on imported technology frameworks, and helping ensure that AI deployment aligns with regional policy goals, ethical standards, and institutional needs.

ILO sets first global framework for AI use in manufacturing sector

The International Labour Organization (ILO) has adopted its first-ever tripartite conclusions on AI in manufacturing, marking a significant policy step in addressing the sector’s digital transformation.

Agreed following a five-day technical meeting in Geneva, the framework brings together governments, employers and workers to shape how AI is integrated into one of the world’s largest employment sectors.

These ILO conclusions respond to the growing impact of AI on manufacturing, which employs nearly 500 million people globally.

Rather than focusing solely on productivity gains, the framework emphasises the need to align technological adoption with labour standards, ensuring that innovation supports decent work, strengthens enterprises and contributes to inclusive economic growth.

Key provisions address skills development, lifelong learning and occupational safety, alongside the protection of fundamental rights at work.

The framework also highlights the importance of social dialogue, recognising that collaboration between stakeholders is essential to managing AI-driven change and mitigating potential disruptions to employment and working conditions.

The agreement reflects a broader effort to balance efficiency with worker protection, rejecting the notion that productivity and labour rights are competing priorities.

Instead, it positions AI as a tool that, if properly governed, can enhance both economic performance and job quality within the manufacturing sector.

The conclusions will be submitted to the ILO Governing Body in November 2026 for formal approval, with the intention of guiding national policies and international approaches to AI deployment in industry.

ILO report warns of rising workplace risks amid digital transformation

More than 840,000 deaths each year are linked to psychosocial risks at work, according to a new report by the International Labour Organization. Factors such as long working hours, job insecurity, and workplace harassment are identified as key contributors to serious health conditions.

These risks are linked to cardiovascular and mental health disorders, causing around 45 million lost years of healthy life each year. Economic impacts are significant, with losses estimated at 1.37% of global GDP due to reduced productivity and health-related costs.

The report highlights that risks stem from how work is designed, organised, and managed. High demands, low control, unclear roles, and poor workplace policies can create harmful environments if not addressed through structured safety and health systems.

Ongoing shifts in the labour market, including digitalisation, AI, and remote work, are reshaping these risks. While such changes may increase pressure on workers, they also present opportunities to improve working conditions if managed with clear policies and preventive measures.

The findings reinforce that workplace design is a major public health and economic issue, not just an organisational concern. Without proactive management, psychosocial risks may grow with digital transformation, affecting productivity, labour stability, and economic resilience.

Employee monitoring grows at Meta as AI overhaul accelerates

Meta has introduced a new internal tool to track employee activity, including keystrokes and mouse movements, as part of efforts to train its AI systems. The company says the data will help improve AI models designed to perform everyday digital tasks.

According to company statements, the tracking is limited to Meta-owned devices and applications, with safeguards in place to protect sensitive information. The initiative reflects a broader strategy to gather real-world usage data to enhance the performance and accuracy of AI tools.

The move has raised concerns among employees, some of whom view the monitoring as intrusive, particularly amid ongoing job cuts and reduced hiring. Reports indicate that Meta has significantly scaled back recruitment while increasing investment in AI development.

The company has committed substantial resources to AI, with plans to expand spending and accelerate model development. Internal tracking is positioned as part of a broader shift toward automation, as firms seek to reshape workflows and productivity through AI.

The development highlights growing tensions between AI innovation and workplace privacy. Increased reliance on employee data to train AI systems may reshape labour practices, raising questions about surveillance, consent, and the balance between technological advancement and workers’ rights.

UK children’s bill advances with new online safety powers

The UK’s Children’s Wellbeing and Schools Bill has moved forward with a substantial set of online safety amendments, showing how child protection policy is increasingly being folded into wider legislation beyond the Online Safety Act itself. The current printed version of the bill, published as it continues through consideration of amendments between the Commons and Lords, includes new powers that could allow ministers to require providers of specified internet services to prevent or restrict children’s access to certain services, features, or functionalities where there is a risk of harm.

At the centre of the package is a proposed new section 214A to be inserted into the Online Safety Act 2023. Under that provision, the Secretary of State would be able to make regulations requiring providers of specified internet services to block or limit access for children of a specified age. The text makes clear that those powers could apply not only to entire services but also to specific features or functions within them.

That matters because the bill goes well beyond a general statement of principle. The amendments envisage regulations that could address issues such as the amount of time children spend on services, the times of day they can access them, contact from strangers, live audio or video communications, and the ability of unknown users to identify a child’s actual or approximate location. In other words, the government is seeking flexible powers to target specific design features and risks rather than relying only on broad platform-wide restrictions.

The bill would also bring Ofcom into the process. As drafted, the regulator is expected to carry out research or provide advice at the Secretary of State’s request to support the making of regulations under the new power, and to publish that advice afterwards. A separate clause would require the Secretary of State, within six months of the Act being passed, to lay before Parliament a progress statement on the first regulations and a timetable for bringing them forward, unless those regulations have already been made.

Another part of the amendment package would give ministers the power to alter the age at which a child can consent to the processing of personal data in relation to information society services, within a range of 13 to 16. The text also allows for regulations on age verification for that consent, including provisions on compliance, monitoring, and enforcement. That means the bill is not only about access and harmful features, but also about the data governance rules that shape children’s use of digital services.

The bill also shows that Parliament has not fully settled the question of how far to go. The latest printed text includes Lords’ amendments to Commons Amendment 38J, which would require the Secretary of State to make regulations imposing highly effective age-assurance and anti-circumvention measures for under-16s on specified regulated user-to-user services. Those Lords’ changes sit within the continuing exchange between the two Houses, rather than representing a final agreed position. The bill remains in the ‘consideration of amendments’ stage and has not yet received Royal Assent.

Why does it matter?

The broader significance of the bill is that the UK is moving towards a more interventionist model of child online safety, one that reaches beyond content moderation into product design, age assurance, feature controls, and the governance of children’s data. But the legislative picture is still in flux. What is emerging is not yet a final settlement, but a live parliamentary struggle over how prescriptive ministers should be, how much discretion they should have, and how strongly the law should push platforms to redesign services for children.

UK’s ICO outlines personal data use in elections

The UK Information Commissioner’s Office has issued guidance on the use of personal data during the upcoming local elections. The publication aims to inform voters about their rights and expectations.

According to the Office, personal data plays a central role in political campaigning, helping parties communicate with voters and understand public concerns. The regulator emphasises that trust depends on lawful and transparent data use.

The guidance states that voters should expect clear explanations of how their data is used, including when profiling or targeted advertising is involved. Political organisations must provide accessible privacy information and follow data protection rules.

The Information Commissioner’s Office also highlights that individuals have the right to question or object to data use, reinforcing accountability during election campaigns in the UK.

Paraguay introduces AI rules for courts with UNESCO support and human oversight focus

UNESCO has supported Paraguay in developing a regulatory framework governing the use of AI within its judicial system.

The policy, adopted by the Supreme Court of Justice of Paraguay, establishes clear limits on AI use, ensuring that such systems function strictly as support tools rather than replacing human decision-making.

The regulation outlines principles for the application of AI in data processing, information management and assisted decision-making. It emphasises transparency, accountability and respect for fundamental rights, requiring disclosure when AI tools influence judicial processes.

The framework aligns with UNESCO’s global guidelines on AI in courts, which promote human oversight, auditability and the protection of rights throughout the lifecycle of AI systems.

Implementation has been supported through technical cooperation, including training programmes to strengthen institutional capacity.

Such an approach in Paraguay reflects a broader trend towards embedding ethical safeguards in AI governance within public institutions. It highlights the role of international cooperation in shaping regulatory models that balance innovation with legal certainty and public trust.
