Iran allocates funds to expand state-controlled internet infrastructure

The Raisi administration in Iran has allocated millions of dollars towards bolstering the country’s internet infrastructure, focusing on tightening control over information flow and reducing the influence of external media.

This decision, part of a broader financial strategy for the Ministry of Communications and Information Technology, reflects a 25% increase from the previous year’s budget, totalling over IRR 195,830 billion (approximately $300 million). Additionally, over IRR 150,000 billion (over $220 million) in miscellaneous credits have been earmarked to expand the national information network.

The Ministry of Communications and Information Technology’s efforts aim to reduce dependency on the global internet, leading to a more isolated and state-controlled national information network.

Why does it matter?

Popular social media platforms like Instagram and Facebook are blocked in Iran, and the government appears to be tightening internet control. Cloudflare has observed a significant decrease in internet traffic from Iran over the past two years, suggesting a trend of increased control and isolation. However, widespread internet disruptions have sparked discontent, leading the Tehran Chamber of Commerce to call for policy reassessment, citing economic concerns.

Pakistan blocks social media platform X for national security reasons

Pakistan’s interior ministry confirmed that it had blocked access to the social media platform X (formerly Twitter) around February’s national election due to national security concerns. Despite reports from users experiencing difficulties accessing X since mid-February, the government has not officially acknowledged the shutdown. The interior ministry made this revelation in a written submission to the Islamabad High Court, responding to a petitioner’s plea challenging the ban.

The ministry cited X’s alleged failure to comply with lawful directives and address concerns regarding platform misuse as reasons for imposing the ban. According to the ministry, X was reluctant to resolve these issues, prompting the government’s decision to uphold national security, maintain public order, and preserve the country’s integrity.

Why does it matter?

The temporary ban on X coincided with the 8 February national election, which the party of jailed former prime minister Imran Khan contested amid allegations of rigging. Khan’s party relies heavily on social media platforms for communication, especially after facing censorship by traditional media ahead of the polls. Khan, with over 20 million followers on X, remains prominent despite being incarcerated on multiple convictions preceding the election.

The decision to block X was based on confidential reports from Pakistan’s intelligence and security agencies, which indicated nefarious intentions by hostile elements on the platform to create chaos and instability in the country. This move has raised concerns among rights groups and marketing advertisers, with activists arguing that such restrictions hinder democratic accountability and access to real-time information crucial for public discourse and transparency. Marketing consultants also highlight challenges in convincing Pakistani advertisers to use X for brand communications due to governmental restrictions on the platform.

Far-right party Chega challenges Meta over 10-year Facebook ban

Portugal’s far-right political party, Chega, has initiated legal action against Meta Platforms, the parent company of Facebook, following a 10-year ban imposed on the party’s Facebook account. The reasons behind the ban remain unspecified, raising concerns about potential political censorship across Meta’s platforms.

Led by André Ventura, Chega has gained traction in Portugal with its anti-immigration and anti-establishment rhetoric. Chega has responded by calling the restrictions ‘clearly illegal and of unspeakable persecution’ in a post on X.

Why does it matter?

Chega’s legal action against Meta Platforms underscores broader issues surrounding content moderation and political speech on social media platforms. The outcome of this case may establish precedents for how such platforms are held accountable for their moderation policies and their impact on political discourse. However, the lack of transparency regarding the reasons for Chega’s ban raises questions about the fairness and consistency of content moderation practices.

US rights groups push for limits on facial recognition tech

Rights groups are intensifying their calls for restrictions on the use of facial recognition technology (FRT) by the US government. The Electronic Frontier Foundation (EFF) has submitted comments to the US Commission on Civil Rights, asserting that FRT lacks the reliability needed for decisions that affect constitutional rights or social benefits, and that it poses risks to marginalised communities and to privacy. EFF advocates a ban on government use of FRT and strict limits on private sector use to safeguard against the perceived threats posed by this technology.

Joining EFF, the immigrant advocacy organisation United We Dream and over 30 civil rights partners have also submitted comments to the commission. They highlight concerns that a legal loophole has enabled agencies like ICE and CBP to use facial recognition for extensive surveillance of immigrants and people of colour. The alliance argues that FRT’s algorithmic biases often lead to incorrect identifications, unjust arrests, detentions, and deportations within immigrant communities.

The US Commission on Civil Rights has been conducting hearings with various stakeholders presenting their perspectives on FRT. While rights groups and advocates have raised concerns, government and law enforcement agencies, vendors, and institutions such as NIST have defended the technology. The Department of Justice emphasised that its interim facial recognition policy prioritises First Amendment rights, while HUD submitted written testimony in recent weeks.

Why does it matter?

Official data from 2021 reveals that 18 out of 24 federal agencies surveyed were employing facial recognition technology, predominantly for law enforcement and digital access purposes. This underscores the growing scrutiny surrounding the use of FRT in government operations and its impact on civil liberties and marginalised communities.

US House of Representatives votes to reauthorize surveillance program

The House of Representatives has approved the reauthorisation of Section 702 of the Foreign Intelligence Surveillance Act (FISA), allowing US intelligence agencies to conduct foreign communications surveillance without a warrant. The bill passed by a vote of 273–147, extending Section 702 beyond its 19 April expiration. The debate over amendments to the bill revealed unexpected alliances, with bipartisan efforts to impose a warrant requirement for surveillance of Americans narrowly defeated.

Speaker Mike Johnson faced challenges securing enough votes for reauthorisation, with former President Trump weighing in against FISA on social media. After earlier failures to advance the bill, a revised version shortened the extension to two years to gain support from reluctant Republicans. The amendment requiring a warrant for accessing Americans’ data did not pass, with concerns raised about privacy and national security implications.

The reauthorisation underscores ongoing debates over privacy rights and national security measures in the United States. Senator Ron Wyden strongly criticised the House bill, expressing concerns about increased government surveillance authority and the lack of oversight in accessing Americans’ communications data.

While some lawmakers argued that the bill expanded surveillance powers, supporters emphasised its role in disrupting activities like fentanyl trafficking. The Senate must still vote on the reauthorisation before the 19 April deadline.

UK MPs urge government action on TikTok misinformation

UK MPs urge the government to develop a TikTok strategy to tackle misinformation targeting young people. A cross-party committee emphasises the need for the government to adapt to new platforms like TikTok, which have become significant sources of news for the youth. The recommendation is part of a broader report advocating for the use of trusted voices, such as scientists and doctors, to combat conspiracy theories and misinformation spreading on social media.

Data from Ofcom reveals that TikTok is cited as the leading news source for one in 10 individuals in the UK aged 12 to 15, while 71% of 16 to 24-year-olds prefer social media over traditional news websites. TikTok welcomes the suggestion for government engagement on social media platforms, highlighting the rapid evolution of information sources and audience habits in the digital age.

The committee stresses the importance of broadcasters being active on social media to counter disinformation effectively. The government’s ban on TikTok from official electronic devices underscores security concerns, although some departments still utilise the platform. MPs advocate for a more transparent approach from the government, urging it to leverage experts and boost trust by publishing evidence used in policymaking, particularly in areas susceptible to misinformation.

EU Parliament approves controversial Asylum and Migration Pact amidst criticism

The European Parliament approved the Asylum and Migration Pact, a controversial measure that included reforms to the EURODAC biometric database and biometric data collection from minors. Three and a half years in the making, the document aims to bolster border security and streamline asylum processes.

However, critics fear it may usher in repressive policies and expand biometric surveillance, particularly regarding minors, as it provides for the collection of biometric data from children as young as seven. Despite these concerns, proponents argue it aids family reunification efforts and combats document fraud.

The pact’s complexity has sparked debate over its effectiveness and ethics. While some view it as progress, others see it as a missed opportunity for a more compassionate system. The implications of biometrics and facial recognition technology, which critics warn could grant authorities excessive control over migrants’ movements, are central to the discourse.

Why does it matter? 

The pact’s approval comes after years of intense debate among conservative and liberal lawmakers and between northern and southern EU member states, with accusations of disloyalty to Europe and internal dissent further complicating the voting process. As political tensions escalate amidst ongoing migrant detentions and deaths, exacerbated by global conflicts driving displacement, discussions on technological deployments at the EU borders in light of implementing the pact will persist.

US Department of Justice reveals facial recognition policy details

Despite not making the full policy public, the US Department of Justice (DOJ) has revealed insights into its interim policy concerning facial recognition technology (FRT). The testimony submitted to the US Commission on Civil Rights highlights key aspects of the policy announced in December, emphasising its adherence to protecting First Amendment activities. The policy aims to prevent unlawful use of FRT, establish guidelines for compliant use, and address various aspects, including privacy protection, civil rights, and accuracy.

Ethical considerations are integral to the interim policy, with measures in place to prevent discriminatory use of facial recognition and ensure accountability for its deployment. However, complexities arise due to evolving AI regulations and the proliferation of biometric algorithms, leading to stipulations that FRT systems must comply with DOJ policies on AI and that FRT results alone cannot serve as sole proof of identity.

The testimony acknowledged civil rights concerns, recognising the potential for bias in algorithms and the misuse of FRT, including unlawful surveillance. Nonetheless, the DOJ emphasises the benefits of FRT in enhancing public safety, citing its role in identifying missing persons, combating human trafficking, and aiding in criminal investigations. According to the DOJ, the key lies in harnessing FRT’s potential while implementing effective safeguards to mitigate potential harm.

Why does it matter?

In a related development, the US government has recently published new guidelines that require all federal agencies to appoint senior leaders as chief AI officers to oversee the use of AI systems. According to the guidelines, agencies must establish AI governance boards to coordinate usage and submit annual reports detailing AI systems, associated risks, and mitigation strategies. As a result, the US Department of Justice appointed Jonathan Mayer, an assistant professor specialising in national security, consumer privacy, and criminal procedure at Princeton University, as its first chief AI officer.

Israel deploys facial recognition program in Gaza

Israel has deployed a sophisticated facial recognition program in the Gaza Strip, according to reports. The program, initiated after the 7 October attacks, employs technology from Google Photos and a proprietary tool from Corsight AI, an Israeli facial recognition firm, to identify individuals linked to Hamas without their consent.

The facial recognition system, crafted in parallel with Israel’s military operations in Gaza, operates by collecting data from diverse sources, including social media platforms, surveillance footage, and inputs from Palestinian detainees. Israeli Unit 8200, the primary intelligence unit, played a pivotal role in identifying potential targets through these means.

Corsight’s technology, known for its claim to accurately identify individuals even with less than 50% of their face visible, was utilised to construct a facial recognition tool. Establishing checkpoints equipped with facial recognition cameras along critical routes used by Palestinians to escape southwards, the Israeli military aims to expand the database and pinpoint potential targets, compiling a ‘hit list’ of individuals associated with the 7 October attack.

Despite soldiers acknowledging the limitations of Corsight’s technology, particularly with grainy images or obscured faces, concerns persist over misidentifications. One such incident involved the mistaken apprehension of Palestinian poet Mosab Abu Toha, who faced interrogation and detention after being flagged by the system.

Convention on AI and human rights (draft July 2023)


Consolidated Working Draft of the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law

COMMITTEE ON ARTIFICIAL INTELLIGENCE (CAI)

July 2023

This document was prepared by the Chair of the CAI with the assistance of the Secretariat following the first reading of the revised Zero Draft to serve as the basis for further negotiations of the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law.

It contains provisions which have preliminarily been agreed to during the first reading of the revised Zero Draft, as well as proposals drafted by the Chair with the assistance of the Secretariat. The latter provisions are marked with an asterisk and have not yet been discussed by the Committee.

This document does not preclude the final outcome of negotiations in the CAI.


Preamble

The member States of the Council of Europe and the other signatories hereto,

1. Considering that the aim of the Council of Europe is to achieve greater unity between its members, based in particular on respect for human rights and fundamental freedoms, democracy and the rule of law;

2. Recognising the value of fostering cooperation between the Parties to this Convention and of extending such cooperation to other States sharing the same values;

3. Conscious of the accelerating developments in science and technology and the profound changes brought about through [by the design, development, use and decommissioning of] artificial intelligence systems which have the potential to promote human prosperity as well as individual and societal well-being, sustainable development, gender equality and the empowerment of all women and [children/girls], and other important goals and interests, by enhancing progress and innovation;

4. Recognising that artificial intelligence systems may be designed, developed and used to offer unprecedented opportunities to protect and promote human rights and fundamental freedoms, democracy and the rule of law;

5. [Concerned that the design, development, use and decommissioning of artificial intelligence systems may undermine human dignity and individual autonomy, human rights and fundamental freedoms, democracy and the rule of law;]

6. [Expressing deep concern that discrimination in digital contexts, particularly those involving artificial intelligence systems, prevents women, [girls/children], and members of other groups from fully enjoying their human rights and fundamental freedoms, which hinders their full, equal and effective participation in economic, social, cultural and political affairs;]

7. [Opposing the misuse of artificial intelligence technologies and] / [Striving to prevent unlawful and unethical uses of artificial intelligence systems] / [Condemning/concerned by the documented and ongoing use of artificial intelligence systems by some States for repressive purposes, often by leveraging private sector tools, in violation of international human rights law, including through arbitrary or unlawful surveillance and censorship practices that erode privacy and autonomy;]

8. Conscious of the fact that human rights and fundamental freedoms, democracy and the rule of law are inherently interwoven;

9. Convinced of the need to establish, as a matter of priority, a globally applicable legal framework setting out common general principles and rules governing the design, development, use and decommissioning of artificial intelligence systems effectively preserving the shared values and harnessing the benefits of artificial intelligence for the promotion of these values in a manner conducive to responsible innovation;

10. Recognising the need to promote digital literacy, knowledge about, and trust in the design, development, use and decommissioning of artificial intelligence systems;

11. Recognising the framework character of the Convention which may be supplemented by further instruments to address specific issues relating to the design, development, use and decommissioning of artificial intelligence systems;

12. [Noting relevant efforts to advance international understanding and cooperation on artificial intelligence by other international and supranational organisations and fora;]

13. Mindful of applicable international human rights instruments, such as the 1948 Universal Declaration of Human Rights, the 1950 Council of Europe Convention for the Protection of Human Rights and Fundamental Freedoms and its protocols, the 1966 United Nations International Covenant on Civil and Political Rights, the 1966 United Nations International Covenant on Economic, Social and Cultural Rights and their protocols, and the 1961 European Social Charter and its protocols and the 1996 Revised European Social Charter;

14. [Mindful also of the 1989 United Nations Convention on the Rights of the Child, and the principle of equality and non-discrimination, including gender equality and rights of discriminated groups and individuals in vulnerable situations;]

15. [OptionA][Mindful also of the [protections for] [right to] privacy and [the protection of]] personal data, as conferred, for example, by the 1981 Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data and its protocols;

[Option B] [Recalling also the need of ensuring respect of the right to respect for private and family life, and the right to the protection of personal data for Parties to the 1981 Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data and its protocols]

[Option C] [Recalling also the need of ensuring respect of the right to respect for private and family life and the right to the protection of personal data, as applicable and conferred, for example, by the 1981 Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data and its protocols]

16. Underlining that the present Convention is intended to [address] the specific challenges arising from the design, development, use and decommissioning of artificial intelligence systems;

17. [Option A] Affirming the commitment of Parties to protecting human rights and fundamental freedoms, democracy and the rule of law, [and to fostering lawful, ethical, responsible, fair, accountable and transparent design, development, use and decommissioning of these technologies];

[Option B] [Affirming the commitment of Parties to protecting human rights, democracy and the rule of law, including through fostering lawful, ethical, non-discriminatory, accountable, safe and transparent design, development, use and decommissioning of artificial intelligence systems;]

[Option C] [Recognising the need to promote transparency, explainability, accountability, human agency and oversight, technical robustness and safety, and privacy and data governance in the design, development, use and decommissioning of artificial intelligence systems;]

Have agreed as follows:

Chapter I: General provisions 

Article 1 – Purpose and object 

1. This Convention sets out principles and obligations aimed at ensuring that design, development, use and decommissioning of artificial intelligence systems are fully consistent with respect for human dignity and individual autonomy, human rights and fundamental freedoms, the functioning of democracy and the observance of the rule of law.

2. In order to ensure effective implementation of its provisions by its Parties, this Convention establishes a follow-up mechanism.

Article 2 – Risk-based approach*

In order to give full effect to the principles and obligations set out in this Convention, each Party shall maintain and take such graduated and differentiated measures in its domestic legal system as may be necessary and appropriate in view of the severity and probability of occurrence of adverse impacts on human rights and fundamental freedoms, democracy and the rule of law during design, development, use and decommissioning of artificial intelligence systems.

Article 3 – Artificial intelligence systems*

For the purposes of this Convention, “artificial intelligence system” means any algorithmic system or a combination of such systems that uses computational methods derived from statistics or other mathematical techniques and that generates text, sound, image or other content, or that assists or replaces human decision-making. The Conference of the Parties may, as appropriate, decide to give interpretation to this definition in a manner consistent with relevant technological developments.

Article 4 – Scope*

1. This Convention shall apply to design, development, use and decommissioning of artificial intelligence systems that have the potential to interfere with the respect for human rights and fundamental freedoms, the functioning of democracy and the observance of the rule of law.

2. This Convention shall not apply to research and development activities regarding artificial intelligence systems unless the systems are tested or otherwise used in ways that have the potential to interfere with human rights and fundamental freedoms, democracy and the rule of law.

Chapter II: General obligations 

Article 5 – Respect for human rights and fundamental freedoms* 

Each Party shall take the necessary measures to ensure that all activities in relation to the design, development, use and decommissioning of artificial intelligence systems are compatible with relevant human rights and non-discrimination obligations undertaken by it under international law, or prescribed by its domestic law.

Article 6 – Integrity of democratic processes and respect for rule of law*

1. Each Party shall take the necessary measures to protect the ability of anyone to reach informed decisions free from undue influence or manipulation through the use of artificial intelligence systems in the context of equal and fair access to public debate and democratic processes, in particular democratic participation, freedom of assembly and freedom of expression, including the freedom to seek, receive and impart information of all kinds, as well as the freedom to hold opinions without interference.

2. Each Party shall take the necessary measures to ensure that artificial intelligence systems are not used to undermine the integrity, independence and effectiveness of democratic institutions and processes, including respect for judicial independence and the principle of separation of powers.

Chapter III: Principles of design, development, use and decommissioning of artificial intelligence systems

Each Party shall observe the general common principles applicable to design, development, use, and decommissioning of artificial intelligence systems set forth in this Chapter, in a manner appropriate to its domestic legal system and the other obligations under this Convention. No restrictions, derogations, or exceptions to these principles shall be allowed, other than those already permitted under the domestic law of, and international legal obligations undertaken by, the Party in question in relation to the protection of national security, defence, public safety, health and morals, important economic and financial interests of the State, the impartiality and independence of the judiciary, or the prevention, investigation, and prosecution of disorder or crime, as well as the protection of the rights and freedoms of others. *

Article 7 – Transparency and oversight* 

Each Party shall take appropriate measures to ensure that adequate oversight mechanisms as well as transparency requirements tailored to the specific contexts and risks are in place in respect of design, development, use and decommissioning of artificial intelligence systems.

Article 8 – Accountability and responsibility* 

Each Party shall take measures necessary to ensure accountability and responsibility for violations of human rights and fundamental freedoms resulting from the design, development, use and decommissioning of artificial intelligence systems.

Article 9 – Equality and non-discrimination* 

1. Each Party shall take the necessary measures with a view to ensuring that the design, development, use and decommissioning of artificial intelligence systems respect the principle of equality, including gender equality and non-discrimination.

2. Each Party is called upon to adopt special measures or policies aimed at eliminating inequalities and achieving fair, just and equal outcomes, in line with its applicable domestic and international human rights and non-discrimination obligations.

Article 10 – Privacy and personal data protection*

Each Party shall ensure that as regards the design, development, use and decommissioning of artificial intelligence systems:

a. the privacy of individuals is protected including through applicable domestic and international personal data protection and data governance laws and standards;

b. appropriate guarantees and safeguards have been put in place for data subjects, in line with its applicable domestic and international legal obligations.

Article 11 – Safety, security and robustness* 

Each Party shall take appropriate measures to ensure that adequate safety, security, performance, data quality, data integrity, data security, cybersecurity and robustness requirements are in place for the design, development, use and decommissioning of artificial intelligence systems.

Article 12 – Safe innovation* 

When testing artificial intelligence systems for research and innovation, each Party shall provide for a controlled regulatory environment for testing artificial intelligence systems under the supervision of its competent authorities, with a view to avoiding adverse impacts on human rights, democracy and the rule of law through the testing.

Chapter IV: Remedies

Article 13 – Remedies* 

Each Party shall, in a manner appropriate to its domestic legal system and the other obligations under this Convention, take measures ensuring the availability of effective remedies for violations of human rights and fundamental freedoms resulting from the use of artificial intelligence systems, including through:

a. appropriate measures to ensure that the relevant usage of the artificial intelligence system is recorded, provided to bodies authorized in accordance with its domestic law to access that information and, where appropriate and applicable, made available or communicated to the affected person concerned;

b. appropriate measures to guarantee that the information referred to in paragraph (a) is sufficient and proportionate to give the affected persons an effective possibility to contest the use of the system and the decision(s) made or substantially informed by the use of the system.

Article 14 – Procedural safeguards* 

Each Party shall, in a manner appropriate to its domestic legal system and the other obligations under this Convention, ensure that:

1. where an artificial intelligence system substantially informs or takes decisions [potentially] impacting on human rights and fundamental freedoms, there are effective procedural guarantees, safeguards and rights, in accordance with the applicable domestic and international law, available to anyone affected thereby;

2. any person has the right to know that one is interacting with an artificial intelligence system rather than with a human unless obvious from the circumstances and context of use and, where appropriate, shall provide for the option of interacting with a human in addition to, or instead of, such system.

Chapter V: Assessment and Mitigation of Risks and Adverse Impacts

Article 15 – Risk and impact management framework*

1. Each Party shall take measures for the identification, assessment, prevention and mitigation of risks and impacts to human rights, democracy and rule of law arising from the design, development, use and decommissioning of artificial intelligence systems within the scope of this Convention.

2. Such measures shall take into account the risk-based approach referred to in Article 2 and:

a. contain adequate requirements which take due account of the context and intended use of artificial intelligence systems, in particular as concerns risks to human rights, democracy, the rule of law and the preservation of the environment;

b. take account of the severity, duration and reversibility of any potential risks and adverse impacts;

c. integrate the perspective of all relevant stakeholders, including any person whose rights may be potentially impacted through the design, development, use and decommissioning of the artificial intelligence system;

d. require the recording, monitoring and due consideration of adverse impacts resulting from the use of artificial intelligence systems;

e. ensure that the risk and impact management processes are carried out iteratively throughout the design, development, use and decommissioning of the artificial intelligence system;

f. require proper documentation of the risk and impact management processes;

g. require, where appropriate, publishing of the information about efforts to identify, assess, mitigate and prevent risks and adverse impacts undertaken;

h. require the implementation of sufficient preventive and mitigating measures to address the risks and adverse impacts identified, including, if appropriate, a requirement for prior testing of the system before it is made available for first use;

3. Each Party shall take such legislative or other measures as may be required to put in place mechanisms for a moratorium or ban or other appropriate measures in respect of certain uses of artificial intelligence systems where such practices are considered incompatible with the respect of human rights, the functioning of democracy and the rule of law.

Article 16 – Training* 

Each Party shall take appropriate measures, particularly in regard to training of those responsible for the design, development, use and decommissioning of artificial intelligence systems, with a view to ensuring that the relevant actors are capable of applying the relevant methodology or guidance to identify, assess, prevent and mitigate relevant risks and impacts in relation to the enjoyment of human rights, the functioning of democracy and the observance of rule of law.

Chapter VI: Implementation of the Convention

Article 17 – Non-discrimination

The implementation of the provisions of this Convention by the Parties shall be secured without discrimination on any ground such as sex, gender, sexual orientation, gender identity, race, colour, language, age, religion, political or any other opinion, national or social origin, association with a national minority, property, birth, state of health, disability or other status, or based on a combination of one or more of these grounds.

Article 18 – Rights of persons with disabilities and of children*

Each Party shall, in accordance with its domestic law and relevant international obligations, take due account of any specific needs and vulnerabilities in relation to respect of the rights of persons with disabilities and of children.

Article 19 – Public consultation* 

Each Party shall strive to ensure that fundamental questions raised by the design, development, use and decommissioning of artificial intelligence systems are the subject of appropriate public discussion and multi-stakeholder consultation in the light, in particular, of relevant social, economic, legal, ethical, and environmental implications.

Article 20 – Digital literacy and skills*

Each Party shall encourage and promote adequate digital literacy and digital skills for all segments of the population as well as for those responsible for the design, development, use and decommissioning of artificial intelligence systems, as set out in its applicable domestic law. 

Article 21 – Relationship with other legal instruments*

Nothing in the present Convention shall be construed as limiting or derogating from any of the human rights and fundamental freedoms as well as legal rights and obligations which may be guaranteed under the laws of any Party or under any other agreement to which it is a Party. 

Article 22 – Wider protection 

None of the provisions of this Convention shall be interpreted as limiting or otherwise affecting the possibility for a Party to grant a wider measure of protection than is stipulated in this Convention.

Chapter VII: Follow-up mechanism and cooperation

Article 23 – Conference of the Parties*

1. Parties shall consult periodically with a view to:

a. facilitating the effective use and implementation of this Convention, including the identification of any problems and the effects of any declaration [or reservation] made under this Convention;

b. considering the possible supplementation or amendment of the Convention;

c. considering matters concerning the interpretation and application of this Convention;

d. facilitating the exchange of information on significant legal, policy or technological developments of relevance for the implementation of this Convention;

e. facilitating, where necessary, the friendly settlement of disputes related to the application of this Convention.

2. The Conference of the Parties shall be convened by the Secretary General of the Council of Europe whenever the latter finds it necessary and in any case when a majority of the Parties or the Committee of Ministers request its convocation.

3. The Conference of the Parties shall adopt its own rules of procedure [by consensus].

4. Parties shall be assisted by the Secretariat of the Council of Europe in carrying out their functions pursuant to this article.

5. Any Party which is not a member of the Council of Europe shall contribute to the funding of the activities of the Conference of the Parties in an amount and according to modalities established by the Committee of Ministers in agreement with that Party.*

6. The Conference of the Parties may decide to restrict the participation in its work of a Party that has ceased to be a member of the Council of Europe under Article 8 of the Statute of the Council of Europe for a serious violation of Article 3 of the Statute. Similarly, measures can be taken in respect of any Party that is a non-member State of the Council of Europe and is concerned by a decision of the Committee of Ministers ceasing its relations with it on grounds similar to those mentioned in Article 3 of the Statute.

Article 24 – International co-operation* 

1. Parties shall co-operate in the realisation of the purpose of this Convention.*

2. Parties shall, as appropriate, exchange relevant and useful information between them[selves and others] concerning aspects of design, development and use of artificial intelligence systems which may have significant positive or negative effects on the enjoyment of human rights, the functioning of democracy and the observance of rule of law, including risks and effects that have arisen in research contexts.

3. [Parties are encouraged to, as appropriate, assist States that are not Party to this Convention in acting consistently with the terms of this Convention and becoming Party to it.]

4. [In its legal regime for implementing the obligations of this Convention, each Party shall consider the impact of domestic requirements on its ability to conduct intergovernmental cooperation with other Parties to this Convention and shall endeavour to avoid adverse impact upon such cooperation.]

5. [Parties are encouraged to, as appropriate, involve relevant non-State actors in the exchange of information referred to under Paragraph 2.]

Article 25 – Effective oversight mechanisms* 

1. Each Party shall establish or designate one or more effective mechanisms to oversee and supervise compliance with the obligations in the Convention, as given effect by the Parties in their domestic legal system.

2. Each Party shall ensure that such mechanisms exercise their duties independently and impartially and that they have the necessary powers, expertise and resources to effectively fulfil their tasks of overseeing compliance with the obligations in the Convention, as given effect by the Parties in their domestic legal system.

3. In case a Party has provided for more than one such mechanism, it shall take measures, where practicable, to facilitate effective cooperation among them.

4. In case a Party has provided for mechanisms different from existing human rights structures, it shall take measures, where practicable, to promote effective cooperation between the mechanisms referred to in paragraph 1 and those existing domestic human rights structures.

Chapter VIII: Final clauses

Article 26 – Effects of the Convention

Parties which are members of the European Union shall, in their mutual relations, apply European Union rules governing the matters within the scope of this Convention.

Article 27 – Amendments

1. Amendments to this Convention may be proposed by any Party, the Committee of Ministers of the Council of Europe or the Conference of the Parties.

2. Any proposal for amendment shall be communicated by the Secretary General of the Council of Europe to the Parties.

3. Moreover, any amendment proposed by a Party, or the Committee of Ministers, shall be communicated to the Conference of the Parties, which shall submit to the Committee of Ministers its opinion on the proposed amendment.

4. The Committee of Ministers shall consider the proposed amendment and any opinion submitted by the Conference of the Parties and may approve the amendment.

5. The text of any amendment approved by the Committee of Ministers in accordance with paragraph 4 shall be forwarded to the Parties for acceptance.

6. Any amendment approved in accordance with paragraph 4 shall come into force on the thirtieth day after all Parties have informed the Secretary General of their acceptance thereof.

Article 28 – Dispute settlement

[In the event of a dispute between Parties as to the interpretation or application of this Convention which cannot be resolved by the Conference of the Parties, as provided for in Article 23, paragraph 1, c, they shall seek a settlement of the dispute through negotiation or any other peaceful means of their choice, including submission of the dispute to an arbitral tribunal whose decisions shall be binding upon the Parties to the dispute, or to the International Court of Justice, as agreed upon by the Parties concerned.

The European Union and its member States in their relations with each other shall not avail themselves of Article 28 of the Convention. Nor shall the member States of the European Union avail themselves of that Article of the Convention insofar as a dispute between them concerns the interpretation or application of European Union law.]

Article 29 – Signature and entry into force

1. This Convention shall be open for signature by the member States of the Council of Europe, the non-member States which have participated in its elaboration and the European Union.

2. This Convention is subject to ratification, acceptance or approval. Instruments of ratification, acceptance or approval shall be deposited with the Secretary General of the Council of Europe.

3. This Convention shall enter into force on the first day of the month following the expiration of a period of three months after the date on which five Signatories, including at least three member States of the Council of Europe, have expressed their consent to be bound by the Convention in accordance with the provisions of paragraph 2.¹

¹ The question of how to count the number of signatures in the case of the European Union signing will be examined and revised at a later stage.

4. In respect of any Signatory which subsequently expresses its consent to be bound by it, the Convention shall enter into force on the first day of the month following the expiration of a period of three months after the date of the [deposit of its instrument of ratification, acceptance or approval].

Article 30 – Accession

1. After the entry into force of this Convention, the Committee of Ministers of the Council of Europe may, after consulting the Parties to this Convention and obtaining their unanimous consent, invite any non-member State of the Council of Europe which has not participated in the elaboration of the Convention to accede to this Convention by a decision taken by the majority provided for in Article 20.d of the Statute of the Council of Europe, and by unanimous vote of the representatives of the Parties entitled to sit on the Committee of Ministers.

2. In respect of any acceding State, the Convention shall enter into force on the first day of the month following the expiration of a period of three months after the date of deposit of the instrument of accession with the Secretary General of the Council of Europe.

Article 31 – Territorial application

1. Any [State or the European Union] may, at the time of signature or when depositing its instrument of ratification, acceptance, approval or accession, specify the territory or territories to which this Convention shall apply.

2. Any Party may, at a later date, by a declaration addressed to the Secretary General of the Council of Europe, extend the application of this Convention to any other territory specified in the declaration. In respect of such territory the Convention shall enter into force on the first day of the month following the expiration of a period of three months after the date of receipt of the declaration by the Secretary General.

3. Any declaration made under the two preceding paragraphs may, in respect of any territory specified in such declaration, be withdrawn by a notification addressed to the Secretary General of the Council of Europe. The withdrawal shall become effective on the first day of the month following the expiration of a period of three months after the date of receipt of such notification by the Secretary General.

Article 32 – Reservations

[No reservation may be made in respect of any provision of this Convention.]²

² While considering that reservations should in principle not be necessary, whether or not it is appropriate to provide for reservations will be considered as the CAI examines the other Chapters of the Convention.

Article 33 – Denunciation

1. Any Party may, at any time, denounce this Convention by means of a notification addressed to the Secretary General of the Council of Europe.

2. Such denunciation shall become effective on the first day of the month following the expiration of a period of three months after the date of receipt of the notification by the Secretary General.

Article 34 – Notification

The Secretary General of the Council of Europe shall notify the member States of the Council of Europe, the non-member States which have participated in its elaboration, the European Union, any Signatory, any [contracting State] [Party], and any other State which has been invited to accede to this Convention, of:

a. any signature;

b. the deposit of any instrument of ratification, acceptance, approval, or accession;

c. any date of entry into force of this Convention in accordance with Article 29, paras. 3 and 4, and Article 30, para. 2;

d. any amendment adopted in accordance with Article 27 and the date on which such an amendment enters into force;

e. [any reservation and withdrawal of reservation made in pursuance of Article 32];

f. any denunciation made in pursuance of Article 33;

g. any other act, declaration, notification or communication relating to this Convention.

In witness whereof the undersigned, being duly authorised thereto, have signed this Convention.

Done in [place], this … day of [month] 202[4], in English and in French, both texts being equally authentic, in a single copy which shall be deposited in the archives of the Council of Europe. The Secretary General of the Council of Europe shall transmit certified copies to each member State of the Council of Europe, to the non-member States which have participated in the elaboration of the Convention [enjoy observer status with the Council of Europe], to the European Union and to any State invited to [sign or] accede to this Convention.