TikTok sues US government over law mandating ban or divestment

TikTok has filed a lawsuit against the US government, challenging a new law that requires the app to sever ties with its Chinese parent company, ByteDance, or face a ban in the US. The company argues that the law is unconstitutional and that divesting the app from ByteDance is impossible, so the law would instead force a shutdown by 19 January 2025.

The law, signed by President Joe Biden last month, grants ByteDance nine months to divest TikTok or cease its operations in the US, citing national security concerns. TikTok’s complaint argues that the government has not presented sufficient evidence of the Chinese government misusing the app: concerns expressed by individual members of Congress and in a congressional committee report speculate about potential future misuse without citing specific instances of misconduct. TikTok also notes that it has operated prominently in the US since its launch in 2017.

TikTok contends that divestiture would be unfeasible, given the complex task of transferring millions of lines of software code from ByteDance to a new owner, and because restrictions imposed by the Chinese government would prevent the sale of TikTok along with its algorithm. The company argues that a ban would effectively isolate American users and undermine its business, and points to its previous efforts to address US government concerns.

During the Trump administration, discussions were held regarding partnerships with American companies such as Walmart, Microsoft, and Oracle to separate TikTok’s US operations, but these potential deals never materialised. TikTok also attempted to appease the government by storing US user data on Oracle’s servers, although a recent report suggests that this measure was primarily cosmetic.

In response to the new law, TikTok seeks a court judgement declaring the Biden administration’s legislation unconstitutional, as well as an order preventing the attorney general from enforcing the law.

ByteDance submits overdue risk assessment for TikTok Lite amid regulatory pressure

ByteDance, the company behind TikTok, has submitted a long-awaited risk assessment for its TikTok Lite service, recently launched in France and Spain, following regulatory threats of fines and potential bans from the European Commission. Regulators are concerned about the addictive nature of TikTok Lite, particularly its rewards system for users, and claim ByteDance didn’t complete a full risk assessment on time.

ByteDance now has until 24 April to defend itself against regulatory action, including the possible suspension of the rewards programme. Failure to comply could result in fines of up to 1% of its total annual income or periodic penalties of up to 5% of its average daily income under the Digital Services Act (DSA).

The DSA imposes strict rules on online platforms with over 45 million users in the EU, including other major tech companies like Google, Facebook, Instagram, and LinkedIn.

Why does it matter?

Meanwhile, in the US, legislation requiring ByteDance, the Chinese company that owns TikTok, to divest its ownership within a year or face a US ban has moved swiftly through Congress. The Senate passed the measure as part of a foreign aid package, and President Joe Biden has signed it into law. ByteDance will have nine months initially, with a possible three-month extension, to complete the sale, though legal challenges could cause delays.

EU political parties sign election integrity code of conduct

The EU political parties are set to sign a new code of conduct on Tuesday, 9 April 2024, to safeguard the upcoming EU elections from foreign interference and disinformation. The initiative, brokered by the European Commission, is part of a broader effort to protect the integrity of the electoral process.

The code of conduct, overseen by Vice-President Věra Jourová, focuses on preventing the amplification of narratives led by non-EU entities that seek to undermine European values. Parties across the political spectrum, including left, socialists, centre-right, liberals, conservatives, greens, and far-right groups, are committing to proactive measures against spreading misinformation. They pledge to ensure transparency by labelling AI-generated content and not disseminating unfounded accusations or deceptive materials targeting other parties. Although this adds an extra layer of protection to the electoral campaign, the responsibility for implementation and monitoring falls on the European parties rather than national parties conducting the campaign on the ground.

Despite these commitments, the code of conduct lacks independent oversight and enforcement mechanisms, relying instead on the parties to promote compliance among their members and to conduct post-election reviews. Commission Vice-President Jourová emphasised the symbolic importance of this collective commitment by European political parties to uphold the integrity of elections, urging them to adhere to ethical and fair campaigning practices in the coming months.

Why does it matter?

The agreement follows recent scandals involving European Parliament members, like Qatargate and Russiagate, and underlines the importance of defending democracy against foreign interference. While the code of conduct does not extend to national parties, it represents a significant step forward in addressing digital risks and maintaining transparency in electoral communications.

Czech government sanctions website for alleged pro-Russian influence ahead of EU election

The Czech government has sanctioned individuals and entities, including Viktor Medvedchuk and the website voiceofeurope.com, over their alleged involvement in a pro-Russian influence operation spreading disinformation in Europe. According to the Czech Foreign Ministry, the campaign aimed to undermine Ukraine’s territorial integrity, sovereignty, and liberty.

Czech Prime Minister Petr Fiala has underscored that the activities of the sanctioned individuals were aimed at bolstering Russian influence in the EU countries and the European Parliament, based on findings from the Czech secret service agency BIS.

Medvedchuk, a former Ukrainian politician now residing in Russia, stands accused of covertly financing Voice of Europe to sway the European Parliamentary election. Financial accounts associated with the implicated individuals and entities will be frozen as part of the sanctions.

Why does it matter?

The development comes shortly after the European Parliament and experts warned of expected attempts to undermine the upcoming EU elections in June and deter voter turnout through disinformation campaigns. Despite efforts to combat disinformation through tools like the Digital Services Act, challenges remain in effectively countering misleading narratives, especially in the limited timeframe leading up to the elections.

Content Policy

ICT4Peace is an independent think tank, established in Geneva in 2003. It fosters political discussion and common action to support international and human security in cyberspace. All its activities focus on the use of information and communications technology (ICT) to fulfil its key goals: save lives, protect human dignity, and promote peace and security in cyberspace. ICT4Peace acts as an early mover in identifying important challenges, bringing visibility and high-level attention to critical new issues. It carries out policy research examining how to use technologies to support state and human security, and develops capacity building through the ICT4Peace Academy to support the full participation of all stakeholders in ICT discussions, negotiations, and solutions. The description of the concrete areas of its work can be found in this document.

Inter alia, the areas presently covered are: deepening the understanding of ICT-related activities and services provided by private (cyber)security companies and their impacts on human rights, international law, and security; norms of responsible state behaviour in cyberspace, including neutrality during cyberwarfare; mis- and disinformation and hate speech; gender and ICT; and artificial intelligence (AI), peace, and ethics.

Digital policy issues

Network security, cyberconflict, and warfare

An open, secure, stable, accessible, and peaceful ICT environment is essential for all and requires effective cooperation among states, civil society, and the private sector to reduce risks to international peace and security, and secure economic and social development. There are, however, very disturbing trends in the global ICT environment, including a dramatic increase in incidents involving the malicious use of ICTs by state and non-state actors, such as criminals and terrorists. These trends create enormous risks to peace and security in cyberspace for states, but equally to human security and dignity.

In 2011, ICT4Peace called for a code of conduct, norms of responsible state behaviour, and confidence-building measures for an open, secure, and peaceful cyberspace, and encouraged all stakeholders to work together to identify new cyber threats and develop solutions and agreements at national and global levels. In particular, it advocated against the increasing militarisation of cyberspace. ICT4Peace has supported international negotiations at the UN Group of Governmental Experts (UN GGE) and the Open-Ended Working Groups (OEWG I and II) in New York, as well as at the Organization for Security and Co-operation in Europe (OSCE), the Association of Southeast Asian Nations (ASEAN), the Organization of American States (OAS), and the African Union (AU), with policy recommendations and multiple publications and workshops. In 2014, ICT4Peace launched its capacity-building programmes, and in 2020 it created the ICT4Peace Academy, aimed in particular at policymakers and diplomats from developing and emerging economies, to enable them to develop and implement national cybersecurity strategies, build computer emergency response teams (CERTs), and engage meaningfully in the UN GGE, OEWG I (2019–2021), and OEWG II (2021–2025), as well as in bilateral and regional negotiations.

In 2019, at OEWG I in New York, ICT4Peace issued a call to governments to publicly commit not to attack civilian critical infrastructure and proposed a state cyber peer review mechanism for state-conducted foreign cyber operations. See also all ICT4Peace inputs to and comments on OEWG I and the ICT4Peace Submission to OEWG II 2021–2025.

ICT4Peace has highlighted emerging concerns and suggested governance solutions in the fields of artificial intelligence (AI), lethal autonomous weapons systems (LAWS), and peacetime threats.

Capacity development

The ICT4Peace Academy offers custom-tailored courses to meet organisations’ needs in learning more about today’s ICT challenges, including cyber diplomacy, cyber peacebuilding, and cyber (human) security. Drawing on an extensive network of expert practitioners, including diplomats, technologists, and civil society experts, each customised course offers up-to-date information tailored to an organisation’s particular context, presented in a live and interactive format. ICT4Peace offers advisory services to governments, multilateral initiatives, and the international community to support a peaceful cyberspace, and provides a global hub and policy space bringing together actors from the technology community, governments, and civil society.

Regrettably, institution and capacity building in the area of ICTs for peaceful purposes and of peace and security in cyberspace are not sufficiently recognised as a development issue or treated as a development priority by the development community or development partners, nor reflected in the Millennium Development Goals (MDGs) or Sustainable Development Goals (SDGs).

It is hoped that by bringing the discussion around the need for increased cybersecurity institution and capacity building (as expressed inter alia by the UN GGE and OSCE) also into the policy orbit of the OECD Development Assistance Committee (DAC), cybersecurity capacity building will be recognised as a development priority by policymakers and more official development assistance (ODA) will flow into this sector in a consistent and coherent fashion. In cooperation with the Estonian and Swiss governments, ICT4Peace has held discussions with the DAC about making cybersecurity capacity-building ODA-eligible.

ICT4Peace also published a thought piece on Digitization: Curse or Blessing for the Bottom Billion, in which the case for more cybersecurity capacity building in the context of development cooperation is made.

Content policy

In the area of online content policy, ICT4Peace is engaged in activities related to the use of the internet for misinformation, disinformation, defamation, and hate speech. In today’s information society, the dissemination of false information can have devastating consequences, ranging from violent terrorist attacks to interference in elections and unnecessary illness, as seen during the COVID-19 pandemic. ICT4Peace’s research and publications on misinformation and hate speech look at the role of social media and other online platforms and apps in spreading mis- and disinformation online.

Regarding the prevention of the use of ICTs for terrorist purposes, ICT4Peace co-launched the Tech Against Terrorism platform with the UN Counter-Terrorism Committee Executive Directorate (UN CTED). ICT4Peace organised workshops and produced a number of publications in the aftermath of the Christchurch attack and the Sri Lanka bombings, with the main aim of raising awareness and supporting the Christchurch Call summit process. Since the emergence of COVID-19, ICT4Peace has launched a review of the risks and opportunities of ICTs and social media during a pandemic.

Human rights principles

ICT4Peace has been active in the area of ICTs and human rights, publishing papers, delivering workshops, and supporting other actors to address the human rights implications of digital technologies. It coined the term ‘digital human security’.

Many innovations are designed with the embedded gender and other biases of their creators, and even the most helpful technologies remain inaccessible to those who would benefit the most from them, including women, girls, and socioeconomically marginalised populations. ICT4Peace is working with gender-focused non-governmental organisations (NGOs) to address gender biases in ICTs.

AI promises to change the very nature of our society, transforming conflict zones and ushering in a new socio-economic era. While the potential benefits are tremendous, so are the potential risks. This requires careful analysis to inform policy decisions at national and international levels. Since 2017, ICT4Peace has carried out research, published policy papers, and contributed to international discussions on AI and on the ethical and political dimensions of emerging digital technologies.

Social media channels

Facebook @ICT4Peace

LinkedIn @ICT4peace

X @ict4peace

YouTube @ICT4Peace Foundation

World Health Organization

WHO is a specialised agency of the UN whose role is to direct and coordinate international health work.

Founded in 1948, WHO works with countries and partners to promote health, keep the world safe, and serve the vulnerable – so that everyone, everywhere can attain the highest level of health.

WHO assists countries in coordinating multi-sectoral efforts of governments and partners to attain their health objectives and support their national health policies and strategies.

Digital activities

WHO is harnessing the power of digital technologies and health innovation to accelerate global attainment of health and well-being. It uses digital technology intensively across its activities, ranging from building public health infrastructure in developing countries and supporting immunisation to dealing with disease outbreaks.

WHO has strengthened its approach to data, treating it as a strategic asset, through two structures. (1) The Division of Data, Analytics, and Delivery for Impact has helped strengthen data governance by promoting sound data principles and accountability mechanisms, and by ensuring that the necessary policies and tools are in place for use at all three levels of the organisation and for adoption by member states. (2) The Department of Digital Health and Innovation, established in 2019 within the Science Division, pays particular attention to promoting global collaboration and advancing the transfer of knowledge on digital health; advancing the implementation of national digital health strategies; strengthening governance for digital health at the global, regional, and national levels; and advocating for people-centred health systems enabled by digital health. Digital health and innovation are high on WHO’s agenda: the organisation is recognised for its role in strengthening health systems through the application of digital health technologies for consumers and healthcare providers, as part of achieving its vision of health for all.

The Division of Data, Analytics, and Delivery for Impact and the Department of Digital Health and Innovation work closely together to strengthen links between data and digital issues, as well as data governance efforts. Digital health technologies, standards, and protocols enable the exchange of health data across the health system. Coupled with data governance, ethics, and public health data standards, digital health and innovation enable the generation of new evidence and knowledge through research and innovation, and inform health policy through public health analysis.

Since 2020, the COVID-19 pandemic has accelerated WHO’s digital response, collaboration, and innovation in emergencies. Some examples include collaborating to use AI and data science in analysing and delivering information in response to the COVID-19 ‘infodemic’ (i.e. overflow of information, including misinformation, in an acute health event, which prevents people from accessing reliable information about how to protect themselves); promoting cybersecurity in the health system, including hospitals and health facilities; learning from using AI, data science, digital health, and innovation in social science research, disease modelling, and simulations, as well as supporting the epidemiological response to the pandemic; and producing vaccines and preparing for the equitable allocation and distribution of vaccines.
To further its digital transformation, WHO established the WHO Academy, offering professional training modules (including AI ethics and cybersecurity), and the WHO Foundation, an independent grant-making organisation that supports innovative health initiatives worldwide.

Digital policy issues

WHO is a leader among Geneva-based international organisations in the use of social media, notably through its awareness-raising on health-related issues. It has more than 74 million followers across its social media platforms and has received recognition from the Geneva Engage Awards.

The WHO/International Telecommunication Union (ITU) Focus Group on Artificial Intelligence for Health (WHO/ITU FG-AI4H) works to establish a standardised assessment framework for the evaluation of AI-based methods for health, such as diagnosis, triage, or treatment decisions.
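At its core, such benchmarking compares a model’s outputs against an independent reference test set. As a minimal sketch, with invented data (the function and labels below are illustrative, not part of the FG-AI4H benchmarking suite), sensitivity and specificity for a binary triage model can be computed like this:

```python
def sensitivity_specificity(truth, predicted):
    """Compare binary predictions against reference labels.

    truth, predicted: sequences of 0/1 labels (1 = condition present).
    Returns (sensitivity, specificity): the fractions of positive and
    negative reference cases the model identified correctly.
    """
    tp = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Invented example: a triage model screening 8 cases
truth     = [1, 1, 1, 1, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(truth, predicted)
# sens = 0.75 (3 of 4 positives found), spec = 0.75 (3 of 4 negatives cleared)
```

A standardised framework matters precisely because such metrics are only comparable across vendors when every model is scored against the same undisclosed reference data.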

Data and artificial intelligence

The response to COVID-19 reinforced the centrality of data and AI for the health sector and WHO’s activities. WHO’s work on data and AI includes the following initiatives:

WHO has established the Global Digital Health Certification Network (GDHCN), which allows countries to verify the authenticity of health information using the International Patient Summary (IPS) ISO standard. The GDHCN uses public key infrastructure (PKI) encryption to keep health credentials verifiable and secure across borders. This initiative enables people to carry internationally recognised health credentials for improved travel and healthcare access.
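The verification principle behind such a PKI can be sketched in a few lines. The toy RSA parameters and credential string below are purely illustrative and are not the GDHCN or IPS specification; real deployments use full-size keys and certificate chains issued through national health authorities:

```python
import hashlib

# Toy RSA keypair (textbook numbers, chosen so e*d ≡ 1 mod φ(n);
# far too small for real use -- this only shows the principle).
p, q = 61, 53
n = p * q          # public modulus (3233)
e = 17             # public exponent
d = 2753           # private exponent, held only by the issuer

def digest(credential: str) -> int:
    """Hash the credential and reduce it into the toy key's range."""
    return int.from_bytes(hashlib.sha256(credential.encode()).digest(), "big") % n

def sign(credential: str) -> int:
    """Issuer (e.g. a national health authority) signs with the private key."""
    return pow(digest(credential), d, n)

def verify(credential: str, signature: int) -> bool:
    """Anyone holding the public key (n, e) can check authenticity."""
    return pow(signature, e, n) == digest(credential)

# Hypothetical credential payload, for illustration only
card = "name=A. Traveller;vaccine=YF;date=2024-05-01"
sig = sign(card)
assert verify(card, sig)                    # genuine credential accepted
assert not verify(card + ";tampered", sig)  # alterations are detected
```

The key property is asymmetry: only the issuing authority holds the private exponent, while any border agency or clinic holding the public key can confirm that a credential was issued by that authority and has not been altered in transit.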

In 2024, WHO partnered with Saudi Arabia to implement a digital health card for Hajj pilgrims, built on the WHO Global Digital Health Certification Network’s infrastructure. Over 250,000 pilgrims from Indonesia, Malaysia, and Oman received Hajj health cards as part of a pilot program. The network now includes over 80 member states that can verify the authenticity of health information between countries.

WHO hosts the Global Initiative on Digital Health, a new global platform for international dialogue on digital health, and the Global Initiative on AI for Health, a tripartite platform with ITU and WIPO. These initiatives convene member states, industry, academia, and civil society to shape policies, share best practices, and foster responsible AI and digital tech adoption in healthcare.

Digital standards

Online gaming

Since 2018, gaming disorder has been included in WHO’s International Classification of Diseases (ICD). While the negative impacts of online gaming on health are increasingly addressed by national health policies, some authorities, such as the US Food and Drug Administration (FDA), have recognised that certain game-based devices could have a therapeutic effect. Given the fast growth of online gaming and its benefits and disadvantages, its implications for health are expected to become more relevant.

The health top-level domain name

Health-related generic top-level domain (gTLD) names, in all languages, including ‘.health’, ‘.doctor’, and ‘.surgery’, should be operated in a way that protects public health, including by preventing the further development of illicit markets for medicines, medical devices, and unauthorised health products and services (Resolution WHA66.24: eHealth Standardization and Interoperability, 2013).

Net neutrality

The issue of net neutrality (the equal treatment of internet traffic) could affect bandwidth and the stability of digital connections, especially for high-risk activities such as online surgical interventions. Health organisations may therefore be granted exceptional provisions, as in the EU, where health and other specialised services enjoy exceptions to the principle of net neutrality (Resolution WHA66.24: eHealth Standardization and Interoperability, 2013).

Cybersecurity

WHO has dedicated cybersecurity focal points, who work with legal and licensing colleagues to provide frameworks for the organisation to not only protect WHO data from various cyber risks, but also provide technical advice to WHO and member states on the secure collection, storage, and dissemination of data. Health facilities and health data have long been targets of cybercriminals; the COVID-19 crisis, however, brought the cybersecurity aspects of digital health into sharp focus.

Ransomware attacks threaten the proper functioning of hospitals and other healthcare providers. The global WannaCry ransomware attack in May 2017 was the first major attack on hospitals and disrupted a significant part of the UK’s National Health Service (NHS). Ransomware attacks on hospitals and health research facilities accelerated during the COVID-19 crisis.

A 2021 global survey found that over one-third of healthcare respondents reported at least one ransomware attack in the preceding year, with one-third of those paying a ransom. Even after payment, 31% did not regain access to their encrypted data. Security researchers identified vulnerabilities in at least 17 biomedical companies involved in COVID-19 vaccine manufacturing and therapeutics development, with additional attacks targeting clinical trial software vendors, laboratories, and pharmaceutical companies.

In December 2023, WHO convened experts in Geneva to develop strategies for addressing cybersecurity threats in resource-constrained settings. In January 2024, WHO published two reports in collaboration with INTERPOL, UNODC, and other partners on strengthening cybersecurity and countering disinformation. WHO is developing guidance on implementing and investing in cybersecurity and privacy protection for digital health interventions.

Considering that data is often the main target of cyberattacks, it should come as no surprise that most cybersecurity concerns in healthcare centre on the protection of data. Encryption is thus crucial for the safety of health data: it both protects data from prying eyes and helps assuage the fears patients and consumers may have about sharing or storing sensitive information over the internet.

Data governance

The 2021 Health Data Governance Summit brought together experts to review best practices in data governance, sharing, and use. The result was a call to action to tackle the legal and ethical challenges of sharing data, ensure data is shared during both emergency and non-emergency situations, and encourage data and research stewardship that promotes tangible impact. Key WHO resources include WHO’s Data Sharing Policies, the UN Joint Statement on Data Protection and Privacy in the COVID-19 Response, and GATHER (Guidelines for Accurate and Transparent Health Estimates Reporting).

WHO’s SCORE technical package (Survey, Count, Optimize, Review, and Enable) identifies data gaps and provides countries with tools to precisely address them. SCORE has been developed in partnership with the Bloomberg Data for Health Initiative. As part of SCORE, WHO completed the first-ever global assessment of health information systems capacity in 133 countries, covering 87% of the world’s population.

The project Strengthening National Nutrition Information Systems operated in five countries in Africa and Asia – Côte d’Ivoire, Ethiopia, Laos, Uganda, and Zambia – for a period of four years (2020–2024). Demographic and Health Surveys (DHS), Multiple Indicator Cluster Surveys (MICS), and national nutrition surveys are the major sources of nutrition data for many countries, but they are complex and expensive undertakings that cannot be implemented with the required frequency. It is therefore critical to strengthen or establish integrated nutrition information systems (NIS) in countries to enhance the availability and use of routine nutrition data, to better support policy development, programme design, and monitoring.

Data-driven delivery approach

A data-driven delivery approach sharpens WHO’s focus on addressing gaps, closing inequalities, and accelerating progress towards national and regional priorities across WHO regions.

  • The WHO Regional Office for the Americas is working to create open data platforms for evidence-based decisions and policymaking. The Core Indicators Portal provides a dataset of around 200 health indicators for 49 countries across the region from 1995 to 2021.
  • The WHO Regional Office for the Eastern Mediterranean is conducting harmonised health facility assessments and tracking 75 indicators through the Regional Health Observatory (RHO).
  • The WHO Regional Office for Africa has prioritised investments in civil registration and vital statistics (CRVS) and digital health. Its integrated African Health Observatory (iAHO) offers high-quality national and regional health data on a single platform, and District Health Information Software (DHIS2) is now implemented in all but four African countries.
  • The WHO Regional Office for South-East Asia is focused on promoting health equity through workshops that introduce member states to WHO’s Health Equity Assessment Toolkit (HEAT). High-quality data on health indicators is available on the Health Information Platform (HIP).
  • The WHO Regional Office for Europe is prioritising support for countries’ national health information systems (HIS) through more robust data governance frameworks. Member states also have access to the European Health Information Gateway, a one-stop shop for health information and data visualisation.
  • The WHO Regional Office for the Western Pacific has released a progress report on each member state’s journey towards universal health coverage (UHC). Additionally, the Western Pacific Health Data Platform provides a single destination where countries can easily monitor and compare their progress towards national and global health objectives.

Sustainable development

E-waste

WHO recognises e-waste as a growing global health threat, especially for children and pregnant women exposed to toxic substances in informal recycling. In 2021, WHO released its first global report on e-waste and child health, identifying serious risks from lead, mercury, and other pollutants. WHO’s ongoing E-waste and Child Health Initiative – active in Latin America and Africa – develops frameworks for safer recycling, regulatory compliance, and advocacy to protect vulnerable populations.

Strengthening health information systems for refugee- and migrant-sensitive healthcare

Health information and research findings can provide a platform for understanding and responding to the health needs of refugees and migrants and for aligning the efforts of other sectors and sources of international assistance. However, systematic national data and evidence on the health of refugees and migrants that are comparable across countries and over time remain inadequate for policy- and decision-making. The WHO Health and Migration Programme (PHM) supports the strengthening of member states’ information systems, providing specialised technical assistance, response, and capacity building.

Human rights principles

Improving access to assistive technology

Assistive technology enables and promotes inclusion and participation, especially for persons with disabilities, ageing populations, and people with non-communicable diseases. The primary purpose of assistive products is to maintain or improve an individual’s functioning and independence, thereby promoting their well-being. Despite a growing number of people in need of assistive products in every country, only 5–15% of those in need, roughly one in ten people, have access to them. WHO coordinates the Global Cooperation on Assistive Technology (GATE) as a step towards realising the SDGs and the Convention on the Rights of Persons with Disabilities (UNCRPD), and implementing resolution WHA71.8 on assistive technology. The GATE initiative aims to support countries in addressing challenges and improving access to assistive products within their context. To achieve this, it focuses on five interlinked areas (5Ps): people, policy, products, provision, and personnel.

Content policy

Infodemics

An infodemic is an overflow of information, including misinformation, that prevents people from accessing reliable information and hampers their ability to know how to protect themselves in a health context. Infodemics cannot be eliminated, but they can be managed: by producing engaging, reliable content and using digital and traditional media as well as offline tools to disseminate it; engaging key stakeholder groups in cooperative content creation and dissemination; empowering communities to protect themselves; and promoting community and individual resilience against misinformation. Digital health technologies and data science can support these activities by analysing the information landscape and social dynamics in digital and analogue environments; delivering messages; supporting fact-checking and countering misinformation; promoting digital, media, and health literacy; and optimising the effectiveness of messages and their delivery through real-time monitoring and evaluation (M&E), among others.

At the Munich Security Conference 2020, WHO Director-General Tedros Adhanom Ghebreyesus stated: ‘We’re not just fighting an epidemic; we’re fighting an infodemic.’ This translated into many WHO initiatives to counter the infodemic, such as working with the public and the scientific community to develop a framework for managing infodemics; bringing the scientific community together for the 1st WHO Infodemiology Conference; developing a draft research agenda on managing infodemics in cooperation with UN agencies and the AI community; promoting reliable WHO information through a coordinated approach with Google, Facebook, Twitter, and other major tech platforms and services; and campaigning to counter misinformation.

Over 1,300 WHO-trained infodemic managers from 142 countries are already making strides in member states and together form a global community of practice. In Serbia, the Laboratory for Infodemiology and Infodemic Management has been established at the Faculty of Medicine, University of Belgrade. With the support of the WHO Country Office in Serbia, two infodemic managers working at the Institute of Social Medicine have gathered a multidisciplinary team that will conduct research and support infodemic management in the country and the region.

Digital tools and initiatives

Interdisciplinary

Public health challenges are complex and cannot be effectively addressed by one sector alone. A holistic, multisectoral, multidisciplinary approach is needed for addressing gaps and advancing coordination for health emergency preparedness and health security and is essential for the implementation of the International Health Regulations (IHR) 2005.

  • WHO Classifications and Terminologies: operates a one-stop shop for WHO classifications and terminologies and delivers and scales their use.

WHO also maintains a portfolio of digital tools and methods for emergency preparedness and response, for example:

  • Go.Data is an outbreak investigation tool for field data collection during public health emergencies. The tool includes functionality for case investigation, contact follow-up, and visualisation of chains of transmission, with secure data exchange, and is designed for flexibility in the field, adapting to a wide range of outbreak scenarios. It is targeted at any outbreak responder.
  • Epidemic Intelligence from Open Sources (EIOS) is a unique collaboration between various public health stakeholders around the globe. It brings together new and existing initiatives, networks, and systems to create a unified all-hazards, One Health approach to early detection, verification, assessment, and communication of public health threats using publicly available information. Creating a community of practice for public health intelligence (PHI) that includes member states, international organisations, research institutes, and other partners and collaborators is at the heart of the initiative; saving lives through early detection of threats and subsequent intervention is its ultimate goal. Since January 2022, the EIOS initiative has been led from within the new WHO Hub for Pandemic and Epidemic Intelligence. As one of the Hub’s flagship initiatives, EIOS is one of the main vehicles for building a strong PHI community of practice, as well as a multidisciplinary network supporting it.
  • Digital proximity tracking technologies have been identified as a potential tool to support contact tracing in outbreaks and epidemics. However, these technologies raise ethical and privacy concerns. The document Ethical Considerations to Guide the Use of Digital Proximity Tracking Technologies for COVID-19 Contact Tracing provides policymakers and other stakeholders with guidance on the ethical and appropriate use of digital proximity tracking technologies for COVID-19.
  • WHO Digital and Innovation for Health Online Community to Fight COVID-19 is a platform for discussion and sharing experiences and innovative responses related to the COVID-19 pandemic.
  • The new Survey Count Optimize Review Enable (SCORE) for Health Data Technical Package was published during one of the most data-strained public health crisis responses ever – that of the COVID-19 pandemic. SCORE can guide countries to take action by providing a one-stop shop for best technical practices that strengthen health information systems, using universally accepted standards and tools.
  • WHO Hub for Pandemic and Epidemic Intelligence supports countries, and regional and global actors in addressing future pandemic and epidemic risks with better access to data, better analytical capacities, and better tools and insights for decision-making.
  • Digitalised health workforce education: an elicitation of research gaps and selection of case studies. The report outlines research gaps in utilising digital technology for healthcare worker education, employing a conceptual framework. It presents 63 research questions across six domains for guiding future studies and identifies evidence gaps in the literature for further research.

Health data

  • WHO Health Data Hub (WHDH) is a single repository of health data in WHO and establishes a data governance mechanism for member states.
  • Civil Registration and Vital Statistics (CRVS) registers all births and deaths, issues birth and death certificates, and compiles and disseminates vital statistics, including cause of death information. It may also record marriages and divorces.
  • The open-access WHO Snakebite Envenoming Information and Data Platform is already working to shorten the time between a snakebite and receiving antivenom. It does this by mapping the distribution of venomous snakes, known antivenoms, and the proximity to health facilities that stock them.

Public health strategy, planning and monitoring

  • Global Benchmarking Tool is designed to benchmark the regulatory programmes of a variety of product types, including medicines, vaccines, blood products (including whole blood, blood components, and plasma-derived products), and medical devices (including in vitro diagnostics). It is supported by a computerised platform to facilitate the benchmarking, including the calculation of maturity levels. The computerised GBT (cGBT) is available, upon request, to member states and organisations working with WHO under the Coalition of Interested Partners (CIP).
  • The organisation also integrates digital health interventions in its strategies for certain diseases. WHO’s Global Observatory for e-Health (GOe) aims to assist member states with information and guidance on practices and standards in the field of e-health.
  • The newly established Geographic Information Systems (GIS) Centre for Health enables spatial representation of data to support better public health planning and decision making.
  • The Health Equity Monitor is a platform for health inequality monitoring, which includes databases of disaggregated data, a handbook on health inequality monitoring, and step-by-step manuals for national health inequality monitoring (generally and specifically for immunisation inequality monitoring).
  • The Health Assessment Toolkit is a software application that facilitates the assessment of health inequalities in countries. Inequality data can be visualised through a variety of interactive graphs, maps, and tables. Results can be exported and used for priority-setting and policymaking.

Health facilities data

Digital health solutions

  • The Digital Health Atlas is a global registry of implemented digital health solutions. It is open and available to anyone to register and contribute information about digital implementations. The registry provides a consistent way to document digital solutions, and offers functionalities in a web platform to assist technologists, implementers, governments, and donors for inventory, planning, coordinating, and using digital systems for health. The Digital Health Atlas includes a special focus on listing digital technologies related to the COVID-19 pandemic. The repository of information is open to all users to register projects, download project information, and connect with digital health practitioners globally.
  • Be He@lthy, Be Mobile (BHBM) is helping millions of people quit tobacco, and control diabetes and cervical cancer. It helps people at risk of asthma and chronic obstructive pulmonary disease (COPD), and those who care for older people.
  • WHO has launched a women’s health chatbot with messaging on breast cancer. The new chatbot uses the Viber platform to deliver health information directly to subscribers’ mobile phones. People subscribing to the new chatbot will find information on how to reduce the risk of breast cancer, symptoms, and treatment options.
  • WHO’s prototype of a digital health promoter, S.A.R.A.H., started off as a chatbot to fight misinformation around COVID-19 and offered information on living healthily during the pandemic. The platform has since been expanded to provide messages for individuals at risk of hypertension and diabetes, offering accessible health information in multiple languages via messaging apps like WhatsApp.

Health-related research

  • The WHO BioHub System offers a reliable, safe, and transparent mechanism for WHO member states to voluntarily share novel biological materials, without replacing or competing with existing systems. Sharing of biological materials with epidemic or pandemic potential will be done through one (or more) of the laboratories designated as a WHO BioHub Facility. This will allow WHO member states and partners to work in a better and faster way, to advance research, and to be more prepared for health emergencies as well as ensure fairness in access to benefits arising from this sharing.

Resources

Resolutions and deliberations on eHealth

  • The Global Strategy on Digital Health (2020–2025) aims to support national and regional digital health initiatives with a robust strategy that integrates financial, organisational, human, and technological resources.
  • Resolution WHA71.7 (2018): The resolution urges member states to prioritise the development and greater use of digital technologies in health as a means of promoting Universal Health Coverage and advancing the SDGs.
  • Report EB142/20 (2018): The Executive Board in January 2018 considered the updated report ‘mHealth: Use of appropriate digital technologies for public health’. This updated version of the report also includes the use of other digital technologies for public health.
  • Report EB139/8 (2016): The Executive Board considered ‘mHealth: Use of mobile wireless technologies for public health’, reflecting the increasing importance of this resource for health services delivery and public health, given their ease of use, broad reach and wide acceptance.
  • Resolution WHA66.24 (2013): The World Health Assembly recognised the need for health data standardisation to be part of eHealth systems and services, and the importance of proper governance and operation of health-related global top-level Internet domain names, including ‘.health’.
  • Resolution WHA58.28 (2005): The World Health Assembly in 2005 recognised the potential of eHealth to strengthen health systems and improve quality, safety, and access to care, and encouraged member states to take action to incorporate eHealth into health systems and services.
  • Resolution EB101.R3 (1998): WHO recognised the increasing importance of the internet and its potential to impact health through the advertising and promotion of medical products, in its resolution on ‘Cross-border Advertising, Promotion, and Sale of Medical Products through the Internet’.

Relevant policy documents to data and digital health in the WHO European Region

Digital health
Data

Social media channels

Facebook @WHO

Instagram @who

LinkedIn @world-health-organization

Snapchat @who

TikTok @who

X @WHO

YouTube @WHO

Kofi Annan Foundation

The Kofi Annan Foundation is an independent not-for-profit organisation, established in Switzerland in 2007 by the late former UN Secretary-General and Nobel Peace Prize laureate, Kofi Annan. Its board is composed of prominent personalities from the public and private sectors, and it has a small team based in Geneva, Switzerland.

The Kofi Annan Foundation wants a fairer and more peaceful world, where no one is left behind, where democratic principles and the rule of law are upheld, and divides are bridged through dialogue and international cooperation.

The Foundation works closely with partners from international and regional organisations, foundations, universities, and civil society. It channels expertise, convenes all stakeholders around the table, and forges coalitions of trusted influence that can make change happen.

The Kofi Annan Foundation has three strategic objectives:

  • Strengthening democracy and elections, because popular legitimacy provides the basis for democratic governance, accountability, and respect for human rights and the rule of law.
  • Empowering youth to build a peaceful, sustainable future because they are active agents of change and must be given the opportunity to shape the world they will inherit.

  • Advocating for a more effective, inclusive, and equitable multilateral system, and promoting Kofi Annan’s core belief that structured international cooperation is key to solving challenges in today’s interconnected world.

Digital activities

The Kofi Annan Foundation addresses digitalisation within the scope of youth and peace, as well as elections and democracy, in the follow-up to the Kofi Annan Commission on Elections and Democracy in the Digital Age (KACEDDA).

The Commission has proposed a series of actions to mitigate the negative impact of social media on elections and democracy, several of which the Foundation is directly implementing. These include new models to counter political disinformation, pre-electoral pledges regarding digital behaviour and activities, and the gauging of digital vulnerabilities of elections. The Foundation is also mobilising digital tools and platforms to increase the representativeness and inclusivity of elections and democratic decision-making, particularly for young people.

The Kofi Annan Foundation leverages digital platforms and technology to advance its mission of promoting peace, sustainable development, and human rights. Through our online presence, the Foundation disseminates information, engages with global audiences, and implements initiatives that harness digital tools for greater impact. 

Some of our digital activities include:

Extremely Together: This global youth-led initiative empowers young people to prevent violent extremism. Utilising digital storytelling, social media campaigns, and online resources, Extremely Together engages youth in promoting peace and countering radicalisation narratives.

Kofi Annan Changemakers: Facilitating intergenerational dialogue, this programme connects young leaders with experienced mentors through virtual platforms. The digital format allows for widespread participation, fostering a global exchange of ideas and strategies for positive change.

Kofi Annan Commission on Elections and Democracy in the Digital Age (KACEDDA): The Commission’s recommendations on mitigating the negative impact of social media on elections and democracy guide the Foundation’s work on countering political disinformation, securing pre-electoral pledges on digital behaviour, and gauging the digital vulnerabilities of elections.

Electoral Vulnerability Index: The Kofi Annan Foundation has developed the Electoral Vulnerability Index (EVI), a predictive tool designed to assess the risk of election-related violence. This initiative aims to identify elections that may be particularly susceptible to violence, thereby enabling international and domestic actors to prioritise resources and interventions effectively.

Digital policy issues

Capacity development

While the Foundation does not use this terminology to describe its work, we have adopted it in line with our internal taxonomy. Its use in this publication is for consistency and clarity within that framework.

Fostering youth leadership

Sharing the leadership values, wisdom, and lessons of Kofi Annan with the next generation is an important element of the legacy work of the Foundation. Digital tools allow us to reach young people in every corner of the globe who would otherwise not be able to benefit from his advice and that of the people who worked closely with him. Two cohorts of Kofi Annan Changemakers – young leaders from different fields and backgrounds – have now harnessed digital communications tools and platforms to improve their leadership skills and build critical capacities.

The Foundation has expanded its youth and peace projects to regions including Colombia, Pakistan, and Southeast Asia. New initiatives include:

Bridges to Peace – Preventing and countering violent extremism and terrorism in Uganda

Environment of Peace – Youth-led research on climate, environment, and peace

Voices of Reconciliation – Using music to strengthen youth’s resilience to violence in Colombia

Champions for Peace – Empowering young people in Southeast Asia

The Foundation has launched the Leadership Excellence in Politics (LEiP) initiative focused on rebuilding trust in political leadership for the 21st century. Additionally, the Kofi Annan NextGen Democracy Prize has been established to recognise exceptional leadership in defending, renewing, and promoting democracy.

Ethical and democratic leadership

The WYDE Civic Engagement – Accountability Hubs programme brings together a group of exceptional young leaders from sub-Saharan Africa in a comprehensive digital and in-person training and networking programme to enhance their skills in ethical and democratic leadership, project management, advocacy, and communications, and to develop pilot actions that put their project ideas into practice with dedicated seed grants.

Digital tools and AI

The Foundation works with civil society, electoral management bodies, and the private sector to develop capacity and tools to counter electoral-related disinformation. It has developed a tool, the Electoral Vulnerability Index, to identify elections at risk from digital threats and predict election-related violence.
The Foundation has published a report titled ‘GenAI at the Ballot Box: A Review of Generative AI Use in the 2024 European Parliament Elections’ and convened discussions with policymakers and civil society in Brussels to identify strategies to protect future elections from the malicious use of AI.

Human rights issues

Digital rights and youth participation

The Digitalise Youth Project, part of the Digital Democracy Initiative, aims to address the shrinking civic space and rampant disinformation in the Sahel and neighbouring regions by empowering local youth activists and civil society organisations. Launched in January 2025, it focuses on enhancing digital skills, promoting civic tech solutions, and raising awareness about online political engagement. By connecting human rights defenders and the tech community, the project equips young activists and media organisations with the knowledge and tools to navigate the digital ecosystem, protect themselves from surveillance, and fight against disinformation. In addition to its capacity-building work, Digitalise Youth’s advocacy efforts seek to promote digital rights at local, regional, and international levels.

Ensuring the protection of human rights in the digital era

The Foundation works with electoral stakeholders to mitigate the impact of online disinformation and hate speech and to ensure that threats from the digital space do not undermine citizens’ rights to political participation and that digital tools increase voters’ ability to make informed and educated electoral decisions.

The Foundation has also established a gender, equality, and inclusion initiative to ensure these principles are integrated across all its work.

Violent extremism

The Extremely Together programme consists of young people from around the world working to counter the impact of extremism in their communities. The initial cohort of ten impressive leaders has grown to include national hubs throughout South and Southeast Asia, East Africa, and the Sahel. Digital tools allow these young people to draw on the network and support of the Kofi Annan Foundation and share experiences to improve the impact of their work.

Interdisciplinary approaches

While the Foundation does not use this terminology to describe its work, we have adopted it in line with our internal taxonomy. Its use in this publication is for consistency and clarity within that framework.

Supporting elections with integrity

Regarding its activities on elections and democracy, the Foundation’s digital work is based on KACEDDA’s findings. The Commission was first established in 2018 and was composed of members from civil society and government, the technology sector, academia, and the media. The objectives of the Commission were: to identify and frame the challenges to electoral integrity arising from the global spread of digital technologies and social media platforms; to develop policy measures to tackle these challenges and highlight the opportunities that technological change offers for strengthening electoral integrity and political participation; and to define and articulate an advocacy programme to ensure that the key messages emerging from the Commission were widely diffused and debated around the world.

The Kofi Annan Foundation has joined the Global Network for Securing Electoral Integrity (GNSEI), which convenes election stakeholders to advance electoral integrity in the face of critical threats to democracy.

The Foundation’s Elections and Democracy projects are active in countries including Ghana, Kenya, the DRC, Malaysia, Nigeria, and more broadly in Sub-Saharan Africa. In 2022, the Foundation urged Kenyan electoral candidates to pledge to appropriate and peaceful online behaviour, including avoiding all forms of gender-based violence, as part of its project ‘Securing the Digital Environment for the 2022 Election in Kenya’, funded by UNDEF.

In addition to articles addressing issues such as the interplay between democracy and the internet, the impact of digital on elections and democracy in West Africa, and digital dangers to democracy, the Commission published an extensive report titled Protecting Electoral Integrity in the Digital Age. It addresses, among other things, hate speech, disinformation, online political advertising, and foreign interference in elections. The report proposes a set of 13 recommendations that address capacity building, norm building, and actions to be taken by public authorities and social media platforms. The Foundation is now working to implement certain recommendations, in cooperation with key stakeholders from civil society, academia, the private sector, and government.

The Foundation has published additional reports, including ‘Eliminating Violence Against Women in Politics’, and has hosted workshops addressing online gender-based political violence, such as a two-day multistakeholder workshop with the Centre for Multiparty Democracy (CMD-Kenya) in Nairobi in November 2021.

In 2022, the Foundation joined the European Partnership for Democracy (EPD) network to reinforce European actions promoting democracy worldwide. It regularly convenes discussions in Brussels to strengthen and inform EU mechanisms that play a role in protecting digital rights, such as the Media Freedom Act and the Artificial Intelligence Act.

Advancing multilateralism

The Foundation has expanded its focus to include a dedicated area of multilateralism that advocates for a fairer, more inclusive multilateral system. Key initiatives include:

Multilateralism and Democracy – Understanding how democratic leadership is essential to solving global challenges

Kofi Annan Commission on Food Security – Calling for urgent action to reform global food governance

The Foundation is collaborating with the Albert Hirschman Centre on Democracy on a project titled ‘Understanding the Links Between Multilateralism and Democracy to Tackle Global Challenges More Effectively’, which includes thematic roundtables on topics such as ‘Artificial Intelligence and Democracy’.

Digital tools

Conferencing technologies

Raising awareness of Kofi Annan’s legacy

The Kofi Annan Foundation uses digital tools to raise awareness of Kofi Annan’s legacy, by providing electronic access to selected speeches and quotations, as well as to a collection of his papers compiled by the City University of New York on our website and to some of his recorded statements and discussions via our official YouTube channel.

Through our podcast, Kofi Time, we promote Kofi Annan’s values and their relevance today to a global audience. In Kofi Time, Ahmad Fawzi, one of Kofi Annan’s former spokespersons and communications advisors, examines how Kofi Annan tackled a specific crisis and its relevance to today’s world and challenges. Kofi Annan’s call to bring all stakeholders to the table – including the private sector, local authorities, civil society organisations, academia, and scientists – resonates now more than ever with many who understand that governments alone cannot shape our future.

In the first 10-part series, Fawzi interviews some of Kofi Annan’s closest advisors and colleagues, including Dr Peter Piot, Christiane Amanpour, Mark Malloch-Brown, Michael Møller, and others. Kofi Time is available to stream via the Kofi Annan Foundation website, SoundCloud, Spotify, and Apple Podcasts.

Social media channels

Facebook @KofiAnnanFoundation

Instagram @KofiAnnanFoundation

LinkedIn @Kofi Annan Foundation

YouTube @Kofi Annan Foundation

Office of the United Nations High Commissioner for Human Rights

The Office of the United Nations High Commissioner for Human Rights and other related UN human rights entities – namely the United Nations Human Rights Council, the Special Procedures, and the Treaty Bodies – are considered together on this page.

The UN Human Rights Office is headed by the High Commissioner for Human Rights and is the principal UN entity on human rights. Also known as UN Human Rights, it is part of the UN Secretariat. UN Human Rights has been mandated by the UNGA to promote and protect all human rights. As such, it plays a crucial role in supporting the three fundamental pillars of the UN: peace and security, human rights, and development. UN Human Rights provides technical expertise and capacity development for the implementation of human rights, and in this capacity assists governments in fulfilling their obligations.

UN Human Rights is associated with a number of other UN human rights entities. To illustrate, it serves as the secretariat for the UN Human Rights Council (UNHRC) and the Treaty Bodies. The UNHRC is a body of the UN that aims to promote the respect of human rights worldwide. It discusses thematic issues, and in addition to its ordinary session, it has the ability to hold special sessions on serious human rights violations and emergencies. The ten Treaty Bodies are committees of independent experts that monitor the implementation of the core international human rights treaties.

The UNHRC established the Special Procedures, which are made up of UN Special Rapporteurs (i.e. independent experts or working groups) working on a variety of human rights thematic issues and country situations to assist the efforts of the UNHRC through regular reporting and advice. The Universal Periodic Review (UPR), under the auspices of the UNHRC, is a unique process that involves a review of the human rights records of all UN member states, providing the opportunity for each state to declare what actions they have taken to improve the human rights situation in their countries. UN Human Rights also serves as the secretariat to the UPR process.

Certain NGOs and national human rights institutions participate as observers in UNHRC sessions after receiving the necessary accreditation.

Digital activities

Digital issues are increasingly gaining prominence in the work of UN Human Rights, the UNHRC, the Special Procedures, the UPR, and the Treaty Bodies. The Global Digital Compact (GDC), adopted in September 2024, recognised the central role of human rights in all digitalisation efforts, identifying the respect, protection, and promotion of human rights as one of its main objectives, and designating UN Human Rights as one of the main implementing UN entities.

A landmark document that provides a blueprint for digital human rights is the UNHRC resolution (A/HRC/20/8) on the promotion, protection, and enjoyment of human rights on the internet, first adopted in 2012, starting a string of regular resolutions with the same name addressing a growing number of issues. All resolutions affirm that the same rights that people have offline must also be protected online. Numerous other resolutions and reports from UN human rights entities and experts considered in this overview tackle an ever-growing range of other digital issues including the right to privacy in the digital age; freedom of expression and opinion; freedom of association and peaceful assembly; the rights of older persons; racial discrimination; the rights of women and girls; human rights in the context of violent extremism online; economic, social, and cultural rights; human rights and technical standard setting; business and human rights; and the safety of journalists. In 2024, UN Human Rights published an overview report (A/HRC/56/45) mapping the work and recommendations of the UNHRC, UN Human Rights, Human Rights Treaty Bodies, and Special Procedures in the domain of human rights and new and emerging digital technologies, including AI.

Digital policy issues

Artificial intelligence

UN Human Rights works extensively in the AI field. For example, a 2021 report to the UNHRC (A/HRC/48/31) analysed how AI impacts the enjoyment of the right to privacy and other human rights in areas such as policing, delivery of public services, employment and online information management. It clarified measures that states and businesses should take to ensure that AI is developed and used in ways that benefit human rights and prevent and mitigate harm.

The UN Human Rights B-Tech Project is running a Generative AI project that demonstrates how the UN Guiding Principles on Business and Human Rights should guide more effective understanding, mitigation, and governance of the risks associated with generative AI. The B-Tech Project also contributes to the implementation of the GDC, in particular with regard to the implementation of the UN Guiding Principles on Business and Human Rights regarding AI products and services.

UN Human Rights also weighs in on specific policy and regulatory debates, such as an open letter concerning the negotiations of the EU AI Act. A brief titled Key Asks for State Regulation of AI, released in 2025, offers recommendations on AI regulation for states.

In 2018, the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression presented a report to the UNGA on Artificial Intelligence (AI) Technologies and Implications for the Information Environment. Among other things, the document addresses the role of AI in the enjoyment of freedom of opinion and expression, including ‘access to the rules of the game when it comes to AI-driven platforms and websites’ and therefore calls for a human rights-based approach to AI.

For her 2020 thematic report to the Human Rights Council, the UN Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia, and related intolerance analysed different forms of racial discrimination in the design and use of emerging technologies, including the structural and institutional dimensions of this discrimination. She followed up with reports examining how digital technologies, including AI-driven predictive models, deployed in the context of border enforcement and administration, reproduce, reinforce, and compound racial discrimination. In 2024, the Special Rapporteur published a new thematic report highlighting how the belief that technology is neutral allows AI to perpetuate racial discrimination. Through examples, she examines issues such as data problems, algorithm design, intentional misuse, and accountability, analysing existing efforts and providing recommendations for regulation to prevent racial discrimination.

In 2023, the Special Rapporteur on the right to privacy published a report on the principles of transparency and explainability in the processing of personal data in AI (A/78/310), stressing the importance of taking measures to ensure that AI is ethical, responsible, and human rights-compliant.

Several other special procedures mandate holders have discussed AI and human rights, including in reports on the implications of AI for the right to freedom of thought, the right to education, the right to health, the rights of older persons, and the rights of persons with disabilities. Important insights concerning AI have also been presented in areas such as counter-terrorism, and extreme poverty. 

In its 2021 report on new and emerging digital technologies, the Human Rights Council Advisory Committee discussed issues associated with AI.         

In 2020, the Committee on the Elimination of Racial Discrimination published its General Recommendation No. 36 on preventing and combating racial profiling by law enforcement officials (CERD/C/GC/36), which focuses on algorithmic decision-making and AI in relation to racial profiling by law enforcement officials.

Child safety online

Within the work of UN Human Rights, ‘child safety online’ is referred to as ‘rights of the child’ and dealt with as a human rights issue.

The issue of child safety online has garnered the attention of UN human rights entities for some time. The 2016 resolution on Rights of the Child: Information and Communications Technologies and Child Sexual Exploitation adopted by the UNHRC calls on states to ensure ‘full, equal, inclusive, and safe access […] to information and communications technologies by all children and safeguard the protection of children online and offline’, as well as the legal protection of children from sexual abuse and exploitation online. The Special Rapporteur on the sale and sexual exploitation of children, including child prostitution, child pornography, and other child sexual abuse material, mandated by the UNHRC to analyse the root causes of the sale and sexual exploitation of children and to promote measures to prevent them, has also addressed the sexual exploitation of children online, notably in a 2020 report (A/HRC/43/40) and in earlier reports.

The Committee on the Rights of the Child published its General Comment No. 25 on Children’s Rights in Relation to the Digital Environment (CRC/C/GC/25), which lays out how states parties should implement the convention in relation to the digital environment. It provides guidance on relevant legislative, policy, and other measures to ensure full compliance with their obligations under the convention and its optional protocols in the light of the opportunities, risks, and challenges involved in promoting, respecting, protecting, and fulfilling all children’s rights in the digital environment.

In 2024, the resolution A/HRC/RES/56/6 on the Safety of the Child in the Digital Environment was adopted by the UNHRC. This resolution requests the Office of the United Nations High Commissioner for Human Rights to hold five regional workshops to assess child safety in the digital environment, involving various stakeholders. UN Human Rights is also asked to prepare a report summarising these consultations with recommendations for a global framework on child safety, to be presented at the Human Rights Council’s sixty-second session, in June 2026.

Data governance

UN Human Rights maintains an online platform consisting of a number of databases on anti-discrimination and jurisprudence, as well as the Universal Human Rights Index (UHRI), which provides access to recommendations issued to countries by treaty bodies, special procedures, and the UPR of the UNHRC.

UN Human Rights’ report A Human Rights-Based Approach to Data – Leaving no one Behind in the 2030 Agenda for Sustainable Development specifically focuses on issues of data collection and disaggregation in the context of sustainable development.

UN Human Rights has worked closely with partners across the UN system in contributing to the Secretary-General’s 2020 Data Strategy. Together with the Office of Legal Affairs and UN Global Pulse, it co-led the drafting of the Data protection and privacy policy for the Secretariat of the United Nations (ST/SGB/2024/3).

UN Human Rights is an observer in the Working Group on Data Governance at all levels under the auspices of the Commission on Science and Technology for Development, established by the GDC. 

Capacity development

UN Human Rights launched the Guiding Principles in Technology Project (B-Tech Project) to provide guidance and resources to companies operating in the technology space with regard to the implementation of the UN Guiding Principles on Business and Human Rights (UNGPs on BHR). It also advises states on their duty to protect human rights from adverse impacts stemming from business activities, and on the mix of regulatory and policy options for doing so. Following the publication of a B-Tech scoping paper in 2019, several foundational papers have delved into a broad range of business-related human rights issues, from business-model-related human rights risks to access to remedies. At the heart of the B-Tech Project lies multistakeholder engagement, which informs all of its outputs. The B-Tech Project is enhancing its engagement in Africa and Asia, working with technology companies, governments, investors, and other key digital economy stakeholders, including civil society, in a set of African economies and their tech hubs to raise awareness of implementing the UNGPs on BHR. Another thematic priority is B-Tech’s work on women’s and girls’ rights.

Following a multistakeholder consultation held on 7–8 March 2022, the High Commissioner presented a report on UN Guiding Principles on Business and Human Rights and Technology Companies (A/HRC/50/56), which demonstrated the value and practical application of the UNGPs in preventing and addressing adverse human rights impacts of technology companies.

Extreme poverty

Within the work of the OHCHR, ‘extreme poverty’ is dealt with as a human rights issue.

The Special Rapporteur on extreme poverty and human rights has, in recent years, increased his analysis of human rights issues arising in the context of increased digitisation and automation. His 2017 report to the General Assembly tackled the socio-economic challenges in an emerging world where automation and AI threaten traditional sources of income and analysed the promises and possible pitfalls of introducing a universal basic income. His General Assembly report in 2019 addressed worrying trends in connection with the digitisation of the welfare state. Moreover, in his 2022 report to the UNHRC on non-take-up of rights in the context of social protection, the Special Rapporteur highlighted, among other things, the benefits and considerable risks associated with the automation of social protection processes.

Content policy

Geneva-based human rights organisations and mechanisms have consistently addressed content policy questions, in particular in the documents referred to under the freedom of expression and the freedom of peaceful assembly and of association. Other contexts where content policy plays an important role include rights of the child, gender rights online, and rights of persons with disabilities. Moreover, the use of digital technologies in the context of terrorism and violent extremism is closely associated with content policy considerations.

In 2016, UN Human Rights, at the request of the UNHRC, prepared a compilation report exploring, among other issues, aspects related to preventing and countering violent extremism online, and underscoring that responses to violent extremism that are robustly built on human rights are more effective and sustainable.

Additional efforts were made in 2019 when the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism published a report examining the multifaceted impacts of counter-terrorism measures on civic space and the rights of civil society actors and human rights defenders, including measures taken to address vaguely defined terrorist and violent extremist content. In July 2020, she published a report discussing the human rights implications of the use of biometric data to identify terrorists and recommended safeguards that should be taken.

In August 2022, responding to a request from the General Assembly in resolution A/RES/76/227, the Secretary-General released his report Countering Disinformation for the Promotion and Protection of Human Rights and Fundamental Freedoms (A/77/287), outlining the challenges of disinformation, the international legal framework, and the information and best practices shared by states, UN entities, and others on countering disinformation. A dedicated public webpage highlights the topic of disinformation.

In 2023, the Secretary-General published the Report on Terrorism and Human Rights (A/78/269), further analysing the impact of counter-terrorism measures on civic space, with special consideration of the use of new technologies in counter-terrorism efforts.

Interdisciplinary approaches

Collaboration within the UN system

UN Human Rights has led a UN system-wide process to develop a human rights due diligence (HRDD) guidance for digital technology, as requested by the Secretary-General’s Roadmap for Digital Cooperation and his Call to Action for Human Rights. The HRDD guidance pertains to the application of human rights due diligence and human rights impact assessment related to the UN’s design, development, procurement, and use of digital technologies, and was completed in 2022. The guidance was adopted by the Executive Committee in 2024 and is being rolled out. The HRDD Guidance has sparked interest from other organisations and states.

As part of the implementation of the Secretary-General’s Call to Action for Human Rights, UN Human Rights launched the UN Hub for Human Rights and Digital Technology, which provides a central repository of authoritative guidance from various UN human rights mechanisms on the application of human rights norms to the use and governance of digital technologies.

In addition, UN Human Rights is a member of the Legal Identity Agenda Task Force, which promotes solutions for the implementation of SDG target 16.9 (i.e. by 2030, provide legal identity for all, including free birth registration). Within the task force, UN Human Rights leads the work on exclusion and discrimination in the context of digitised identity systems.

The Secretary-General addressed, in his report on human rights in the administration of justice (A/79/296) published in 2024, human rights challenges and good practices of the application of digital technologies and artificial intelligence in the administration of justice. The report provides a summary of UN activities to support states and civil society in their efforts to develop and implement digital and AI systems in the administration of justice, with a focus on human rights.

Technical standard settings and human rights

In June 2023, UN Human Rights presented the first UN report systematically analysing the intersection of technical standards-setting and human rights. It sheds light on how technical standards shape how human rights can be enjoyed in a digital environment. It identifies multiple challenges and provides extensive recommendations for the effective integration of human rights considerations into standards-setting processes. UN Human Rights has rolled out a project for the coming years to support the implementation of those recommendations. As part of this project, it works closely with standard-setting organisations, such as the International Telecommunication Union (ITU), and many stakeholders, including states, civil society, the technical community, academic institutions, and businesses. 

United Nations Convention against Cybercrime 

UN Human Rights participated in the process of the negotiation of the new United Nations Convention against Cybercrime, adopted by the General Assembly in December 2024. The Office supported member states with in-depth analysis and recommendations for aligning the treaty with human rights law, standards and principles, and will continue providing advice in this area. 

Neurotechnology

Rapid advancements in neurotechnology and neuroscience, while holding promises of medical benefits and scientific breakthroughs, pose a number of human rights and ethical challenges. Against this backdrop, UN Human Rights has been contributing significantly to an inter-agency process led by the Executive Office of the Secretary-General to develop a global roadmap for the effective and inclusive governance of neurotechnology.

In 2024, at the request of the UNHRC in its resolution 51/3, the Advisory Committee published a study report on the impact, opportunities, and challenges of neurotechnology with regard to the promotion and protection of all human rights (A/HRC/57/61). This report, available in an easy-to-read format, highlights the specific human rights at risk, identifies vulnerable groups, examines settings where individuals are exposed to coercive uses of neurotechnologies, and explores aspects of human augmentation. It also provides insights into solutions to maximise opportunities and outlines a protective framework to mitigate risks.

Two resolutions on neurotechnology and human rights (A/HRC/RES/51/3 and A/HRC/RES/58/6) were adopted in 2022 and 2025, respectively, emphasising the importance of promoting and protecting human rights in the context of neurotechnology and digital advancements. The resolutions highlight the need for ethical, legal, and societal considerations to ensure human dignity, autonomy, and non-discrimination. The most recent resolution also asked the Advisory Committee to draft a set of recommended guidelines for applying the existing human rights framework to the conception, design, development, testing, use, and deployment of neurotechnologies.

In 2025, the Special Rapporteur on the right to privacy published a report titled Foundations and Principles for the Regulation of Neurotechnologies and the Processing of Neurodata from the Perspective of the Right to Privacy (A/HRC/58/58).

Global Digital Compact

Objective 3 of the GDC highlights the importance of fostering an inclusive, open, safe, and secure digital space that respects, protects, and promotes human rights. UN Human Rights co-leads the implementation of this objective with UNESCO and worked to safeguard the integration of human rights aspects throughout the text. In the framework of this objective, the GDC acknowledged the Office’s human rights advisory service for digital technologies, which aims to bridge the gap in expertise at the intersection of digital technologies and human rights by offering tailored advice, building capacity, and informing states and stakeholders.

Smart cities

‘Making Cities Right for Young People’ is a participatory research project, supported by Fondation Botnar, which examines the impact of the digitalisation of cities on the enjoyment of human rights. It also examines strategies to ensure that ‘smartness’ is measured not solely by technological advancement but by the realisation and promotion of inhabitants’ human rights and well-being, and explores ways to promote digital technologies for civic engagement, participation, and the public good, with a focus on meaningful youth participation in decision-making processes. Launched in 2023, the project surveyed the current landscape and detailed key human rights issues in urban digitalisation. Based on participatory research carried out in three geographically, socially, culturally, and politically diverse cities, it produced a report with initial findings and developed a roadmap for future human-rights-based work on smart cities. Building on this first phase, the project will expand its geographical scope and support future youth engagement in urban digitalisation processes.

Migration

In 2020, the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance published a report titled Racial Discrimination and Emerging Digital Technologies: A Human Rights Analysis (A/HRC/44/57), outlining the human rights obligations of states and corporations to address racial discrimination in the design and use of emerging digital technologies.

In 2021, the Special Rapporteur published a complementary report (A/HRC/48/76), addressing the issue of the development and use of emerging digital technologies in ways that are uniquely experimental, dangerous, and discriminatory in the border and immigration enforcement context. The report highlights that technologies are being used to promote xenophobic and racially discriminatory ideologies, often due to perceptions of refugees and migrants as security threats, and the pursuit of efficiency without human rights safeguards, with significant economic profits from border securitisation and digitisation exacerbating the issue.

In September 2023, UN Human Rights published a study, conducted with the University of Essex, that analyses the far-reaching human rights implications of specific border technologies. It provides recommendations to states and stakeholders on how to take a human-rights-based approach in ensuring the use of digital technologies at borders aligns with international human rights law and standards. The study draws from a collective body of expertise, research, and evidence, as well as extensive interviews and collaborative meetings with experts.

Privacy and data protection

Challenges to the right to privacy in the digital age, such as surveillance, communications interception, and the increased use of data-intensive technologies, are among the issues covered by the activities of UN Human Rights. At the request of the UNGA and the UNHRC, the High Commissioner prepared four reports on the right to privacy in the digital age. The first report, presented in 2014, addressed the threat to human rights caused by government surveillance, in particular mass surveillance. The ensuing report, published in September 2018, identified key principles, standards, and best practices regarding the promotion and protection of the right to privacy, and outlined minimum standards for data privacy legal frameworks. In September 2021, the High Commissioner presented a ground-breaking report on AI and the right to privacy (A/HRC/48/31), in which she called for a ban on AI applications that are incompatible with international human rights law and stressed the urgent need for a moratorium on the sale and use of AI systems that pose serious human rights risks until adequate safeguards are put in place. In September 2022, the High Commissioner presented a report focusing on the abuse of spyware by public authorities, the key role of encryption in ensuring the enjoyment of human rights in the digital age, and the widespread monitoring of public spaces. A new report, requested in resolution 54/21, is expected to be published in September 2025. In 2023 and 2024, UN Human Rights published two briefs titled What is Encryption? and Hacking & Spyware. While the first explains the encryption process, restrictions on encryption, and its relationship with privacy, the second emphasises the impact of spyware on human rights and key measures to end abuses.

The UNHRC also tackles online privacy and data protection. Resolutions on the promotion and protection of human rights on the internet have underlined the need to address security concerns on the internet in accordance with international human rights obligations to ensure the protection of all human rights online, including the right to privacy. The UNHRC has also adopted specific resolutions on the right to privacy in the digital age, addressing issues such as mass surveillance, AI, the responsibility of business enterprises, and the key role of the right to privacy as an enabler of other human rights. Resolutions on the safety of journalists have emphasised the importance of encryption and anonymity tools for journalists to freely exercise their work. Two resolutions on new and emerging technologies (2019 and 2021) have further broadened the lens, for example, by asking for a report on the human rights implications of technical standard-setting processes.

The UNHRC has also mandated the Special Rapporteur on the right to privacy to address the issue of online privacy in its 2015 Resolution on the Right to Privacy in the Digital Age (A/HRC/RES/28/16). To illustrate, the Special Rapporteur has addressed the question of privacy from the standpoint of surveillance in the digital age (A/HRC/34/60), which becomes particularly challenging in the context of cross-border data flows. Specific attention has also been given to the privacy of health data, which is increasingly being produced in the age of digitalisation and requires the highest legal and ethical standards (A/HRC/40/63). In this vein, in 2020, the Special Rapporteur examined data protection and surveillance in relation to COVID-19 and contact tracing in a preliminary report (A/75/147), followed by a more definitive analysis of how pandemics can be managed with respect to the right to privacy (A/76/220). In another 2020 report (A/HRC/43/52), the Special Rapporteur provided a set of recommendations on privacy in the online space, calling for, among other things, ‘comprehensive protection for secure digital communications, including by promoting strong encryption and anonymity-enhancing tools, products, and services, and resisting requests for “backdoors” to digital communications’ and recommending that ‘government digital identity programmes are not used to monitor and enforce societal gender norms, or for purposes that are not lawful, necessary, and proportionate in a democratic society.’

The Special Rapporteur also addressed the challenges of AI and privacy, as well as children’s privacy, particularly the role of privacy in supporting autonomy and positive participation of children in society, in his 2021 report (A/HRC/46/37).

In 2022, the Special Rapporteur examined developments in privacy and data protection in Ibero-America in her report titled Privacy and Personal Data Protection in Ibero-America: A Step Towards Globalization? (A/HRC/49/55), and published a report on the principles underpinning privacy and the protection of personal data (A/77/196).

More recently, in 2023, at the request of the UNHRC, the Special Rapporteur addressed the issue of the implementation of the principles of purpose limitation, deletion of data and demonstrated or proactive accountability in the processing of personal data collected by public entities in the context of the COVID-19 pandemic (A/HRC/52/37).

In her 2024 report titled Legal Safeguards for Personal Data Protection and Privacy in the Digital Age (A/HRC/55/46), the Special Rapporteur provided a comparative study of personal data protection and privacy laws across five continents. The report examines mechanisms for data subjects to control their personal data and legal avenues for protecting their rights and addressing misuse. During the same year, the Special Rapporteur proposed the updating of General Assembly resolution 45/95 Guidelines for the regulation of computerized personal data files (A/79/173), to bring it into line with the socio-technological reality of the twenty-first century. 

Freedom of expression

The High Commissioner and his office advocate for the promotion and protection of freedom of expression, including in the online space. Key topics in this advocacy are the protection of the civic space and the safety of journalists online; various forms of information control, including internet shutdowns and censorship; addressing incitement to violence, discrimination, or hostility; disinformation; and the role of social media platforms in the space of online expression.

Freedom of expression in the digital space also features highly on the agenda of the UNHRC. It has often been underlined that states have a responsibility to ensure adequate protection of freedom of expression online, including adopting and implementing measures aimed at dealing with issues such as cybersecurity, incitement to violence, and the promotion and distribution of extremist content online. The UNHRC has also been firm in condemning measures to intentionally prevent or disrupt access to or the dissemination of information online and has called on states to refrain from and cease such measures.

In 2021, at the request of the UNHRC resolution 47/16, the High Commissioner prepared a report on internet shutdowns (A/HRC/50/55), which looks at trends in internet shutdowns, analysing their causes, legal implications, and impact on a range of human rights, including economic, social, and cultural rights. She called on states to refrain from the full range of internet shutdowns and for companies to uphold their responsibilities to respect human rights. She stressed the need for development agencies and regional and international organisations to bridge their digital connectivity efforts with efforts related to internet shutdowns. The UNHRC resolution 57/29 mandated UN Human Rights to prepare a report on a human rights approach to meaningful connectivity and overcoming digital divides, including addressing threats to individuals’ access to the internet. The report will be presented in June 2026.

UN Human Rights also weighs in on a range of law-making processes that are relevant to the exercise of the right to freedom of expression. For example, it has engaged with the development of the EU Digital Services Act and commented extensively on global trends in regulating social media.

Special Rapporteurs on the promotion and protection of the right to freedom of opinion and expression have been analysing issues relating to free expression in the digital space for more than a decade. Reports in the first half of the 2010s already addressed the importance of universal access to the internet for the enjoyment of human rights, free expression in the context of elections, and the adverse impacts of government surveillance on free expression. In 2018, the Special Rapporteur published a report on online content regulation. It tackles governments’ regulation of user-generated online content, analyses the role of companies, and recommends that states ensure an enabling environment for online freedom of expression and that businesses rely on human rights law when designing their products and services. UN Human Rights published a brief on the thematic report titled A Human Rights Approach to Online Content Regulation. The same year, he also presented to the UNGA a report addressing freedom of expression issues linked to the use of AI by companies and states. A year later, the Special Rapporteur presented a report to the UNGA on online hate speech, which discusses the regulation of hate speech in international human rights law and how it provides a basis for government actors considering regulatory options and for companies determining how to respect human rights online.

In 2020, the Special Rapporteur issued Disease Pandemics and the Freedom of Opinion and Expression, a report that specifically tackles issues such as access to the internet, which it highlights as ‘a critical element of healthcare policy and practice, public information, and even the right to life’. Other reports addressed the vital importance of encryption and anonymity for the exercise of freedom of opinion and the threats to freedom of expression emanating from widespread digital surveillance.

The Special Rapporteur, while acknowledging the complexities and challenges posed by disinformation in the digital age, noted that responses by states and companies to counter disinformation had been inadequate and detrimental to human rights. In her 2021 report Disinformation and Freedom of Opinion and Expression (A/HRC/47/25), she examined the threats posed by disinformation to human rights, democratic institutions, and development processes. She called for multidimensional and multistakeholder responses that are well grounded in the international human rights framework, urged companies to review their business models, and urged states to recalibrate their responses to disinformation.

More recently, in 2022, the Special Rapporteur issued Reinforcing Media Freedom and the Safety of Journalists in the Digital Age (A/HRC/50/29), a report in which she calls on states and the international community to strengthen multistakeholder cooperation to protect and promote media freedom and the safety of journalists in the digital age, and ensure independence, pluralism, and viability of the media. She also calls on digital services companies and social media platforms to respect the UNGPs on BHR.

Online hate speech and discrimination have also been addressed by the Special Rapporteur on freedom of religion and belief. For instance, a report published in 2019 underscored the online manifestation of antisemitism (including antisemitic hate speech) and shared best practices from the Netherlands and Poland. The report highlights that governments have an affirmative responsibility to address online antisemitism, as the digital sphere is now ‘the primary public forum and marketplace for ideas’. In another document published that same year, the Special Rapporteur assessed the impact of online platforms on discrimination and on the perpetuation of hostile and violent acts in the name of religion, as well as how restrictive measures such as the blocking and filtering of websites negatively impact freedom of expression.

The issue of online blasphemy and undue limitations on expressing critical views of religions and beliefs imposed by governments has also been addressed on a number of occasions, including in a 2018 report.

In 2024, the High Commissioner for Human Rights, at the request of the UNHRC, prepared a thematic report identifying challenges and best practices in assessing civic space trends, along with recommendations to enhance information-gathering. Based on inputs from states and civil society, the report highlights the roles of various actors, common elements of civic space, gaps, and challenges, and calls for increased data access, safe working conditions for contributors, and improved assessment of online civic space trends. UN Human Rights published a brief titled Tracking civic space trends, related to this report.

In 2023 and 2025, UN Human Rights published two briefs on internet shutdowns and on social media platforms in the Middle East, North and East Africa. While the first document explains the impact of internet shutdowns, the human rights violations they entail, and how to prevent and respond to them, the second addresses issues faced by human rights defenders, including online attacks, platform policies, and access, and highlights key recommendations.

During its 58th session, the UNHRC adopted a resolution titled Human Rights Defenders and New and Emerging Technologies: Protecting Human Rights Defenders, Including Women Human Rights Defenders, in the Digital Age (A/HRC/58/23), which asked UN Human Rights to convene regional workshops and prepare a report on the risks digital technologies create for human rights defenders and best practices for responding to these concerns.

Gender rights online

Within the work of the OHCHR, ‘gender rights online’ is referred to as ‘women’s rights and gender equality online’ and dealt with as a human rights issue.

On several occasions, UN Human Rights and the UNHRC have reiterated the need for countries to bridge the gender digital divide and enhance the use of ICTs, including the internet, to promote the empowerment of all women and girls. The UNHRC has also condemned gender-based violence committed on the internet. Implementing a 2016 UNHRC resolution on the Promotion, Protection, and Enjoyment of Human Rights on the Internet, the High Commissioner for Human Rights prepared a 2017 report on ways to bridge the gender digital divide from a human rights perspective.

Rights of persons with disabilities

The promotion and protection of the rights of persons with disabilities in the online space have been repeatedly addressed by the UN Special Rapporteur on the rights of persons with disabilities. A 2016 report underscored that ICTs, including the internet, can increase the participation of persons with disabilities in public decision-making processes and that states should work towards reducing the access gap between those who can use ICTs and those who cannot.

Nevertheless, a 2019 report stressed that the shift to e-governance and digital service delivery can hamper access for older persons with disabilities who may lack the necessary skills or equipment.

The Special Rapporteur also examined the opportunities and risks posed by AI, including discriminatory impacts in relation to AI in decision-making systems. In his 2021 report (A/HRC/49/52), the Special Rapporteur emphasises the importance of disability-inclusive AI and the inclusion of persons with disabilities in conversations about AI.

More recently, in 2024, pursuant to UNHRC resolution 51/10, the High Commissioner prepared a report on cyberbullying against persons with disabilities. The report examines the experiences of persons with disabilities facing cyberbullying, the relevant human rights frameworks, prevailing trends and challenges, and promising counter-cyberbullying practices, and provides recommendations for rights-respecting responses and inclusion in the digital environment.

Rights of older persons

The mandate of the Independent Expert on the enjoyment of all human rights by older persons has repeatedly addressed complex issues relating to digital technologies, for example, in the report Robots and Rights: The Impact of Automation on the Human Rights of Older Persons (A/HRC/36/48) and on data gaps concerning older persons (A/HRC/45/14). In 2026, UN Human Rights will publish a report on countering cyberbullying against older persons, as requested by the UNHRC (resolution 57/6).  

Freedom of peaceful assembly and association

The exercise of the rights to freedom of peaceful assembly and association in the digital environment has attracted increased attention in recent years. For example, the High Commissioner presented a report on new technologies such as ICTs and their impact on the promotion and protection of human rights in the context of assemblies, including peaceful protests, to the 44th session of the UNHRC. The report highlighted many of the opportunities for the exercise of human rights that digital technologies offer, analysed key issues linked to online content takedowns, and called on states to stop the practice of network disruptions in the context of protests. It also developed guidance concerning the use of surveillance tools, in particular facial recognition technology.

In July 2020, the Human Rights Committee published its General Comment No. 37 on Article 21 of the International Covenant on Civil and Political Rights (ICCPR) (the right of peaceful assembly), which addresses manifold aspects arising in the digital context.

In 2019, the Special Rapporteur on the rights to freedom of peaceful assembly and of association published a report for the UNHRC focusing on the opportunities and challenges facing the rights to freedom of peaceful assembly and association in the digital age. In the report, he condemned the widespread practice of internet shutdowns and raised concerns about technologically mediated restrictions on free association and assembly in the context of crises.

Economic, social, and cultural rights

In March 2020, the UN Secretary-General presented a report on the role of new technologies for the realisation of economic, social, and cultural rights to the UNHRC. He identified the opportunities and challenges held by new technologies for the realisation of economic, social, and cultural rights and other related human rights, and for the human-rights-based implementation of the 2030 Agenda for Sustainable Development. The report concludes with recommendations for related action by member states, private companies, and other stakeholders.

More recently, in 2022, the Special Rapporteur on the right to education presented a report on the impact of the digitalisation of education on the right to education (A/HRC/50/32) to the UNHRC, calling for the integration of the human rights legal framework into digital education plans.

The Special Rapporteur in the field of cultural rights has published reports on technology-related topics, including the right to science (A/HRC/55/44 and A/HRC/55/44/Corr.1) and the relationship between human rights and intellectual property rights (A/70/279 and A/70/279/Corr.1, and A/HRC/28/57). The Independent Expert on the effects of foreign debt and other related international financial obligations of states on the full enjoyment of all human rights, particularly economic, social and cultural rights, has presented a report on international financial obligations, digital systems, and human rights (A/HRC/52/34).

UN Human Rights works extensively on the human rights dimensions of development finance, including technology-related aspects, in, for example, a benchmarking study on development finance institutions’ safeguard policies, a study on remedy in development finance and submissions to development finance institutions addressing technology-related policies and practices.

Social media channels

Facebook @UnitedNationsHumanRights

Instagram @unitednationshumanrights

X @UNHumanRights

YouTube @UNOHCHR


ICT for Peace Foundation

ICT4Peace has operated as an independent think tank based in Geneva since 2003. It fosters political discussion and common action to support international and human security in cyberspace. All its activities are focused on the use of ICT to fulfil its key goals: saving lives, protecting human dignity, and promoting peace and security in cyberspace. ICT4Peace acts as an early mover in identifying important challenges, bringing visibility and high-level attention to critical new issues. It carries out policy research examining how to use technologies to support state and human security, and develops capacity building through the ICT4Peace Academy to support the full participation of all stakeholders in ICT discussions, negotiations, and solutions. A description of the concrete areas of its work can be found in this document.

The areas presently covered include: the ICT-related activities and services provided by private (cyber)security companies and their impacts on human rights, international law, and international security; norms of responsible state behaviour in cyberspace, including neutrality during cyberwarfare; mis- and disinformation and hate speech; gender and ICT; and AI, peace, and ethics.

Digital policy issues

Network security, cyberconflict, and warfare

An open, secure, stable, accessible, and peaceful ICT environment is essential for all and requires effective cooperation among states, civil society, and the private sector to reduce risks to international peace and security and ensure economic and social development. There are, however, very disturbing trends in the global ICT environment, including a dramatic increase in incidents involving the malicious use of ICTs by state and non-state actors, such as criminals and terrorists. These trends create enormous risks to peace and security in cyberspace for states, but equally to human security and dignity.

In 2011, ICT4Peace called for a code of conduct and norms of responsible state behaviour and confidence-building measures for open, secure, and peaceful cyberspace, and encouraged all stakeholders to work together to identify new cyber threats and develop solutions and agreements at national and global levels. In particular, it advocated against the increasing militarisation of cyberspace. ICT4Peace supported international negotiations at the UN Governmental Group of Experts (UN GGE) and the Open-Ended Working Groups (OEWG I and II) in New York, as well as at the Organization for Security and Co-operation in Europe (OSCE), the Association of Southeast Asian Nations (ASEAN), the Organization of American States (OAS), and the African Union (AU), with policy recommendations and multiple publications and workshops. In 2014, ICT4Peace launched its capacity-building programmes, and in 2020 it created the ICT4Peace Academy, in particular for policymakers and diplomats from developing and emerging economies, to enable them to develop and implement their national cybersecurity strategies, build computer emergency response teams (CERTs), and meaningfully engage in the UN GGE, OEWG I (2019–2021), and OEWG II (2021–2025), as well as in bilateral and regional negotiations.

In 2019, at OEWG I in New York, ICT4Peace issued a call to governments to publicly commit not to attack civilian critical infrastructure and proposed a state cyber peer review mechanism for state-conducted foreign cyber operations. See also all ICT4Peace inputs to and comments on OEWG I and the ICT4Peace submission to OEWG II (2021–2025).

ICT4Peace has highlighted emerging concerns and suggested governance solutions in the fields of AI, lethal autonomous weapons systems (LAWS), and peacetime threats.

Since 2019, ICT4Peace has advocated for a peer-review mechanism on accountability for state-conducted cyber operations, inspired by the Human Rights Council's Universal Periodic Review process. The organisation is currently engaged in discussions about establishing a permanent mechanism for addressing global cybersecurity challenges.

In 2024, ICT4Peace launched a groundbreaking toolkit titled ‘From Boots on the Ground to Bytes in Cyberspace’, providing comprehensive guidance on the use of technologies by Private Security Companies (PSCs). The toolkit addresses human rights challenges posed by emerging technologies in the private security sector, covering topics including responsible data practices, surveillance ethics, algorithmic bias, and emerging technologies.

ICT4Peace has also expanded its work to address quantum computing as an emerging threat to cybersecurity. In 2024, it published ‘Navigating the Quantum Wave: A Policy Maker’s Guide for the Responsible Governance of Quantum Technologies’, advocating for quantum-resistant cryptography and global ICT infrastructure upgrades.

Capacity development

The ICT4Peace Academy offers custom-tailored courses to meet organisations' needs in learning more about today's ICT challenges, including cyber diplomacy, cyber peacebuilding, and cyber (human) security. Drawing on an extensive network of practitioners, including diplomats, technologists, and civil society experts, each customised course offers up-to-date information tailored to an organisation's particular context and presented in a live and interactive format. ICT4Peace offers advisory services to governments, multilateral initiatives, and the international community to support a peaceful cyberspace, and provides a global hub and policy space bringing together actors from the technology community, governments, and civil society.

Regrettably, institution and capacity building in the ICT area for peaceful purposes and for peace and security in cyberspace has not been sufficiently recognised as a development issue or treated as a development priority by the development community, development partners, or the MDGs and SDGs.

It is hoped that by bringing the discussion around the need for increased cybersecurity institution and capacity building (as expressed inter alia by the UN GGE and OSCE) also into the policy orbit of the OECD Development Assistance Committee (DAC), cybersecurity capacity building will be recognised as a development priority by policymakers and more official development assistance (ODA) will flow into this sector in a consistent and coherent fashion. In cooperation with the Estonian and Swiss governments, ICT4Peace has held discussions with the DAC about making cybersecurity capacity building ODA-eligible.

ICT4Peace has also published a thought piece on Digitisation: Curse or Blessing for the Bottom Billion, which makes the case for more cybersecurity capacity building in the context of development cooperation.

The ICT4Peace Academy has expanded its offerings to include specialised courses on understanding the threats of mis- and disinformation and hate speech (MDH) and emerging governance frameworks. The Academy provides workshops on national cybersecurity strategy building, developing and implementing national legislation, and establishing CERTs and CERT-to-CERT cooperation, as well as specialised workshops for parliamentarians, the judiciary, and regulatory authorities.

Content policy

In the area of online content policy, ICT4Peace is engaged in activities related to the use of the internet for misinformation, disinformation, defamation, and hate speech. In today’s information society, the dissemination of false information can have devastating consequences, ranging from violent terrorist attacks to interference in elections to major health crises, as was the case with the COVID-19 pandemic. ICT4Peace’s research and publications on misinformation and hate speech look at the role of social media and other online platforms/apps in spreading mis/disinformation online.

Regarding the prevention of the use of ICTs for terrorist purposes, ICT4Peace co-launched the Tech Against Terrorism platform with the United Nations Counter-Terrorism Committee Executive Directorate (UN CTED). ICT4Peace organised workshops and produced a number of publications in the aftermath of the Christchurch attack and the Sri Lanka bombings, with the main aim of raising awareness and supporting the Christchurch Call summit process. At the emergence of COVID-19, ICT4Peace launched a review of the risks and opportunities of ICTs and social media during a pandemic.

In 2024, ICT4Peace launched a podcast series called ‘Digital Distortions’ that examines disinformation and truth decay in contemporary democracies. The podcast is available on SoundCloud, Spotify, and other major podcast platforms.

Human rights principles

ICT4Peace has been active in the area of ICTs and human rights, publishing papers, delivering workshops, and supporting other actors in addressing the human rights implications of digital technologies. It coined the term ‘digital human security’.

Many innovations are designed with the embedded gender and other biases of their creators, and even the most helpful technologies remain inaccessible to those who would benefit the most from them, including women, girls, and socioeconomically marginalised populations. ICT4Peace is working with gender-focused NGOs to address gender biases in ICTs.

AI promises to change the very nature of our society, transforming our conflict zones and ushering in a new socio-economic era. While the potential benefits are tremendous, so are the potential risks. This requires careful analysis to inform policy decisions at the international and national levels. Since 2017, ICT4Peace has carried out research, published policy papers, and contributed to international discussions on AI and on ethical and political perspectives on emerging digital technologies.

Social media channels

Facebook @ICT4Peace

LinkedIn @ICT4peace

X @ict4peace

YouTube @ICT4Peace Foundation