Guidelines on the responsible implementation of artificial intelligence systems in journalism

Author: Council of Europe

Introduction

The aim of the Council of Europe is to achieve a greater unity between its members, and one of the means of achieving this aim is the adoption of common standards and the provision of guidance on the reach and scope of human rights and fundamental freedoms. Article 10 of the Convention for the Protection of Human Rights and Fundamental Freedoms (ETS No. 5, “the Convention”) enshrines the right to freedom of expression and is of fundamental importance as a cornerstone of the activities of the media and the rights of the audience. Journalism has an important democratic and societal role to inform the public, enable the free formation and expression of opinions and ideas, scrutinise the activities of public and private stakeholders, and to provide a forum for pluralistic debate. The protection of Article 10 of the Convention extends to the use of communication technology which can support journalists in their execution of this societal and democratic role. Artificial intelligence (AI) systems (defined below) can be usefully deployed across the entire journalistic value chain, from research and data analysis to the production and dissemination of news and engagement with audiences. Recommendation CM/Rec(2022)4 of the Committee of Ministers to member States on promoting a favourable environment for quality journalism in the digital age explicitly encourages media organisations to seize the opportunities of digital technologies, including AI systems.

The use of AI systems can be a competitive factor in the digital marketplace and for the resilience of journalism in that marketplace, but the ability to access and use AI systems in conformity with human rights and professional values such as editorial independence is essential. At the same time, AI as a notion is politically and societally loaded, and fraught with myths and common misconceptions that can be counterproductive or even harmful. It is important to demystify AI, and journalism has an important role in doing so.

Article 10 of the Convention confers rights on the news media and journalists, but also duties and responsibilities. These include the duty to use AI systems in ways that are compatible with human rights and public values, and that promote society’s interest in being informed and the functioning of the media as a forum for public discourse and a critical public watchdog. Other important rights in this context are the right to privacy (Article 8 of the Convention), human dignity, the right to freedom of thought (Article 9 of the Convention) and the prohibition of discrimination (Article 14 of the Convention). The ability of citizens, journalists and media organisations to exercise their human rights cannot be seen separately from the impact that other actors, such as technology companies and information intermediaries, have on the media ecosystem and the creation, dissemination and use of information.

Member States have an important role in protecting the human rights of journalists and audiences (as citizens and consumers), and in creating the conditions for journalists and the public to benefit from their human rights. As the use of AI is becoming more widespread, permeating and affecting society more broadly, citizens, civil society, representatives of societal interests, artists, content creators, and academics should be allowed and enabled to critically assess the impact of AI systems on users and society, voice their concerns, and be treated as legitimate participants in dialogues about where (not) to use AI, and how to develop standards for the responsible use of AI.

Practical guidance is needed for policy makers, technology providers, platforms, media professionals, and other relevant stakeholders in implementing and critically evaluating the use of AI in advancing the democratic and societal role of the media and journalism with a view to ensuring that the use of AI is compatible with the Convention, in particular its articles 10, 8 and 14.

The Council of Europe is preparing a framework convention – a legal instrument of general application – on the development, design and application of AI systems based on the Council of Europe’s standards on human rights, democracy, and the rule of law. Additional sectoral guidance on the use of AI can be beneficial for all stakeholders involved.

Definitions

The Guidelines use the general definition of artificial intelligence system provided in the Consolidated Working Draft of the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (“the Framework Convention on AI”), and complement it with other relevant definitions, including the more specific ones of “journalistic artificial intelligence systems” and “artificial intelligence audience”. The definition of artificial intelligence system is subject to change because the Framework Convention on AI has not yet been finalised; if the Convention definition undergoes further changes after the adoption of the Guidelines, the text will be adjusted accordingly by decision of the CDMSI.

– “Artificial intelligence system” means any algorithmic system, or a combination of such systems, that uses computational methods derived from statistics or other mathematical techniques and that generates text, sound, image or other content, or assists or replaces human decision-making. This definition is to be interpreted in a manner consistent with relevant technological developments, in line with any decision of the Conference of the Parties to the Framework Convention on AI.

– “Journalistic artificial intelligence system” means an artificial intelligence system directly related to the business or practice of regularly producing information about contemporary affairs of public interest and importance, including the research and investigation tasks that underpin journalistic outputs (this definition of journalism is adapted from: Schudson, Michael. 2012. The Sociology of News. Second Edition. New York: W. W. Norton & Company). This can include, but is not limited to, large language models and generative AI when used for journalistic purposes and/or by news organisations. Journalistic AI systems are not a single technology but a range of different, often interlinked, tools for automating specific tasks.

– “Artificial intelligence user” means any natural or legal person, public authority or other body using an artificial intelligence system in their own name or under their authority.

– “Artificial intelligence technology provider” means any natural or legal person, public authority or other body that designs and/or develops an artificial intelligence system with a view to putting it into service/commissioning it.

– “Artificial intelligence subject” means any natural or legal person whose human rights and fundamental freedoms or connected legal rights guaranteed under applicable domestic law or international law are impacted through the application of an artificial intelligence system, including by decisions made or substantially informed by the application of such system.

– “Artificial intelligence audience” means the group of natural or legal persons who are exposed to news, information and other media content from the artificial intelligence user based on outputs from the artificial intelligence system.

In the context of journalism, artificial intelligence users are typically news organisations and those that work for them. AI technology providers are typically technology companies or individual developers, but if news organisations develop artificial intelligence systems themselves then they are also the technology provider. For the purposes of these Guidelines, it is also useful to define the “artificial intelligence audience” as a specific subset of artificial intelligence subjects that are exposed to the outputs of journalistic artificial intelligence systems.

Scope and purpose

Journalistic AI systems can be used for many different tasks. AI systems more broadly can be used for other generic tasks common to other businesses and organisations (as well as being seamlessly embedded into office software, search engines, smartphones, and a wide range of other software and hardware). Journalistic AI systems, as defined above, can be used for news production, for example in data analysis for investigative journalism and fact-checking. They can be used for automated text, video and audio generation as well as routine support tasks like translation and transcription.

For dissemination, AI systems can be used to match content with audiences through personalisation and the use of news recommender algorithms, or to organise or customise content. Journalistic AI systems can also be a means to engage with the audience, for example through virtual assistants and chatbots, or new pricing models (for the source of this list, and for a more detailed description, see: Chan-Olmsted, Sylvia M. 2019. A Review of Artificial Intelligence Adoptions in the Media Industry. International Journal on Media Management, 21:3-4, 193–215).

Some journalistic tasks lend themselves to automation more than others. Highly repetitive tasks that can be executed by following explicit instructions are often amenable to automation, whereas tasks that are variable or require expert judgment, creativity and discretion are often less amenable to automation, or at least require more human oversight and approval. In order to do much of the above, the news media often depends on external technology providers of AI systems, data, and computational infrastructures.

The ability to innovate and use journalistic AI systems in accordance with professional ethics and human rights can contribute to the resilience of journalism in the digital age. The purpose of these Guidelines is therefore to carve out principles for media organisations and media professionals that implement journalistic AI systems. They also offer guidance for AI technology providers and platform companies. Finally, they provide guidance for States and national regulatory authorities on how they can create conditions for the responsible implementation of journalistic AI systems.

The Guidelines cover the decision to use journalistic AI systems, identifying and acquiring them, and incorporating them into organisational and professional practice within media organisations. They also address responsibilities towards the audience, and for external technology providers, platforms and States. The Guidelines do not comprehensively cover design and development as both are highly specialised and task specific, and there are too many different tasks to cover meaningfully. Furthermore, it is not feasible for many small- to medium-sized news organisations to design and construct their own journalistic AI systems, meaning that the main challenge is often effectively acquiring and implementing systems developed by, or in collaboration with others. However, some broad aspects of design and development are briefly addressed.

The decision to implement journalistic AI systems in the newsroom is a strategic choice with important consequences for internal processes and workflows. The decision to implement journalistic AI systems can also have broader implications for society and the journalistic profession, including public perceptions of journalism, the quality and fairness of working conditions in the broader media ecosystem (including freelancers, photographers and illustrators, content moderators, fact checkers, etc.), as well as the shape of digital communication infrastructure, and the formation of new relationships and dependencies.

Many different stakeholders, both inside and outside of news organisations, can be involved when journalistic AI systems are adopted, and a wide variety of AI subjects can be impacted. An assessment of the use of such systems should therefore recognise different perspectives and interests, and consider procedural aspects (e.g., who decides and how) as well as substantive aspects (e.g., what to optimise for). Fostering the conditions for open-source development, the sharing of best/worst practices, multi-disciplinarity, industry-academic collaborations, and room to experiment is also important for the development of responsible journalistic AI.

The Guidelines are informed by and are consistent with existing Council of Europe documents, in particular the upcoming Framework Convention on AI (CAI) on the development, design and application of artificial intelligence systems, Committee of Ministers’ recommendations CM/Rec(2022)4 on promoting a favourable environment for quality journalism in the digital age and CM/Rec(2020)1 on the human rights impacts of algorithmic systems, and the Declaration by the Committee of Ministers on the manipulative capabilities of algorithmic processes (13 February 2019). In preparation of the Guidelines, concrete experiences and insights from media professionals, best practices, key challenges, the state of the art of the academic literature and the expertise of the members of the Committee of Experts on Increasing Media Resilience (MSI-RES) were taken into account.

Guidelines

1. The decision by media organisations and journalists to implement AI systems

1.1. The decision to implement journalistic AI systems should not be purely technology-driven or commercially driven, but also mission-driven in that it should help achieve the goals and align with the values of the news organisation in question. This means it needs to be embedded within a broader vision of what the news organisation hopes to achieve, its business model, the challenges it faces, the democratic role of the media, the promotion of human rights and professional ethics, and the role of technology in each.

1.2. The decision to implement journalistic AI systems constitutes an editorial decision insofar as it is critical to the realisation of the editorial mission and the professional values of a news organisation, and as such there should be someone in the organisation who is clearly accountable for the implementation and outcomes of using journalistic AI. Typically, this will be the editor-in-chief. The editorial staff should also review and understand what AI systems are already in use.

1.3. The decision to implement journalistic AI in the regular workflow should be informed by the actual task or problem for which the system is a response.

1.4. Conducting a systematic risk assessment is an important precondition for the responsible development and deployment of journalistic AI. News organisations should have procedures in place to recognise, and where feasible, assess and mitigate risks that result from the way journalistic AI systems are implemented, including any risks to the rights of third parties (such as data protection, copyright, and non-discrimination) or risks to the environment, internal and external workers’ rights or rights of subjects, copyright holders and affected communities. Risk assessment procedures should include ways to integrate the experiences and perspectives of affected individuals and communities. It should be recognised that procuring AI systems can itself carry risks associated with not being in full control of data, methods and processes.

1.5. The decision to implement journalistic AI systems should, as far as the relevant corporate structure allows, be a participatory process that involves and balances different interests and perspectives including those of journalists, editors, technology developers, product owners, marketing and legal services, advertisers, and the audience. The decision to implement journalistic AI will not always be made by an individual (subsidiary) news organisation, but by the parent organisation that owns or controls it. This makes it more important to have procedures in place that make these decisions transparent and inclusive, allowing for the expression and balancing of different interests and perspectives.

1.6. An important actor in this process is a ‘problem owner’ who should be afforded the overview, responsibility, and ability to mediate and translate between the different stakeholders in the decision-making process. The problem owner should have a role in the core business of news production, the future strategy, as well as the implementation of the system itself. Particularly in smaller and less well-resourced media organisations, the problem owner may not be a separate role.

1.7. The decision to implement specific journalistic AI systems should be based on what is legally and technically feasible to automate. This knowledge needs to be continually updated to reflect changes to the legal framework, technological capabilities, and newsroom practices. All those involved in the decision-making process need to be equipped with the necessary skills, awareness and information, including at the leadership level, to make adequate and well-informed decisions. Discussion and dialogue between AI users and AI providers are essential for building a shared understanding of ethical and human rights standards, and of each other’s work, rights and duties.

1.8. The decision to implement should be informed by proof-of-concept or prototype testing to better understand what is feasible. This includes experimentation with new tools and ideas to discover opportunities.

1.9. Decisions about the implementation of a journalistic AI system should not be considered as a one-off, discrete decision, but as part of a circular process, in the sense that they should be based on regular monitoring of the performance of the AI system, its contribution to the editorial mission of the news organisation, and the changing legal and ethical framework in which it operates.

2. Identification and acquisition of AI systems by media organisations and professional users

2.1. Once automatable journalistic tasks have been identified, there are decisions to be made about the journalistic AI systems’ acquisition. Options include procurement from an AI technology provider (which can include subscribing or paying for access to a remote system), or in-house development. Responsible implementation and use of journalistic AI starts with responsible procurement. Annex 1 provides a checklist with relevant considerations for responsible AI procurement and aspects to consider in procurement contracts and negotiations.

2.2. Many journalistic AI systems need to be trained with data to work usefully. Therefore, data availability, data fairness and data quality should be rigorously evaluated. Where data pertain to subjects (which, as defined above, includes the audience), compliance with privacy and data protection rules is an essential requirement, and adequate measures to counter biases, stereotypes and other harmful differentiations should be applied for those systems to operate responsibly. Training data should respect the rights of others, including copyright holders – which, as legal systems evolve, could include, for example, asking for consent and offering compensation schemes. In some cases, news organisations will depend on technology providers to make assessments about data because they are not directly involved in, and do not have any direct influence over, the training process.

2.3. When choosing a particular AI technology provider, it is important to consider the extent to which the technology provider has made efforts to ensure the responsible use of data. This is because it affects whether the journalistic AI system can be used responsibly or not.

3. Incorporating AI tools into professional and organisational practice

3.1. Journalistic AI systems require both technical and organisational infrastructure to support them. It is therefore recommended that organisations build and maintain this infrastructure by hiring new staff or upskilling existing staff. News organisations should avoid simply replacing trained journalists with technical staff, and the introduction of AI roles should not come at the expense of developing routine AI competencies among other staff. It is also recommended that in making decisions about personnel, diversity and inclusiveness are carefully considered, with special consideration given to the representation of minorities, women and historically marginalised groups, as this can shape the use of AI and the resulting outputs.

3.2. Journalistic AI systems can be used to complete highly automatable tasks within existing workflows, freeing up time and resources for other activities. However, even for these tasks, and especially in the case of automation and use of generative AI, editorial oversight is required to avoid incorrect or biased processes and outputs. For example, well-written, plausible-looking automated text outputs will need to be properly checked for misleading, incomplete or factually incorrect claims, and their identification requires expert knowledge and editorial oversight. Oversight should go beyond checking outputs and extend to the processes that produced those outputs. Editorial oversight is particularly important for tasks where outputs are highly sensitive (e.g., those that have concrete consequences for individuals) or highly consequential (e.g., those that may impact society, such as election results), or where outputs are produced with the help of generative AI. In no event can the formalisation of professional values into code replace editorial oversight and control.

3.3. News organisations should continue performing risk assessments as defined in paragraph 1.4.

3.4. News organisations should disclose when and how they use AI systems to both subjects and the audience. Disclosure should be applied in situations where the use of AI systems might meaningfully affect the subject or audience’s rights or interpretation of the outputs. Information should also be made available within the news organisation about what systems have been implemented, what they are designed for, what values they reflect, and what is being done to train staff and ensure adequate oversight. Standardised forms of labelling that AI systems were used in the workflow (in natural language and machine-readable code) will enhance the utility of labelling to subjects and the audience. Improving the quality of metadata is another way to increase transparency and the responsible use of journalistic AI systems.
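
By way of illustration only, the following minimal sketch shows what a machine-readable disclosure label attached to a single article could look like. The field names and values are hypothetical assumptions, not an existing or recommended standard; any real labelling scheme would need to be agreed within the organisation and, ideally, across the industry.

```python
import json
from datetime import date

# Illustrative disclosure record for one article. All field names are
# hypothetical and chosen only to show the kind of information a
# machine-readable label could carry alongside the natural-language notice.
ai_use_disclosure = {
    "article_id": "example-2024-000123",  # internal identifier (hypothetical)
    "ai_systems_used": [
        {
            "task": "translation",  # what the system was used for
            "system": "external machine-translation service",
            "human_oversight": "output reviewed and edited by a journalist",
        },
        {
            "task": "headline suggestions",
            "system": "in-house generative model",
            "human_oversight": "editor selected and rewrote the final headline",
        },
    ],
    "disclosure_statement": (
        "Parts of this article were produced with the assistance of AI tools; "
        "all content was reviewed by the editorial team."
    ),
    "last_updated": date.today().isoformat(),
}

# Serialise the label so it can travel with the article's other metadata.
print(json.dumps(ai_use_disclosure, indent=2))
```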

3.5. Working with journalistic AI systems often requires skills that go beyond most journalists’ existing training. News organisations should therefore provide ongoing training on the use of journalistic AI systems for staff (including for CEOs and other relevant roles), with programmes that bring together technologists and journalists and stimulate awareness of human rights (such as privacy and the right to non-discrimination) and professional ethics. Programmes should also empower staff with the knowledge and skills they need to work in contemporary news organisations and prepare them for likely future developments. In so doing, news organisations should avoid techno-solutionism that comes at the expense of their values and mission.

3.6. Developing and implementing journalistic AI systems in accordance with the mission of the news organisation requires room, time and long-term investment. Some news organisations, such as well-resourced public service media and large commercial organisations are better positioned to offer this than others. In some cases, there may be options for voluntarily sharing research, methods and best practices. Where they exist, well-funded public service media can play an important pioneering role in the development and implementation of journalistic AI systems and can consider it part of their public mission to support research and innovation in value-sensitive technology development and deployment, while sharing their experiences, best practices, and technology (only where feasible) with other stakeholders (including other news organisations) and encouraging public debate of the role of AI in society. This will help develop shared standards of responsible AI implementation and development, strengthening the overall resilience of the media sector.

4. The use of AI tools in relation to users and society

4.1. While the news media enjoy wide protection under Article 10 of the Convention, this protection comes with responsibilities and duties towards citizens and the public at large. Both the rights and responsibilities under this provision extend to technology, thus also involving an obligation to use digital technology (including journalistic AI systems) responsibly and securely, i.e., in accordance with the ethics of journalism, aligned with professional codes, and in a way that does not impinge upon the human rights of others.

4.2. News organisations and journalists have an important role in developing and regularly updating standards on the responsible implementation and use of journalistic AI (also when using third-party technology). They should have a transparent vision, made explicit, for example, in self-regulatory and organisational codes, mission statements, and internal guidelines, informed by a dialogue with other relevant stakeholders. Ideally, such standards would be developed using a process that is inclusive and geared towards understanding how AI can affect different groups in society and different societal interests. These commitments are also an opportunity for the news media to distinguish themselves from other professions, and a means to be accountable to the public when using AI systems. The news media additionally have an important role in informing the public about AI and its implications for users and society.

4.3. Traditional journalistic values such as fairness, autonomy, accuracy, diversity, lack of bias, truthfulness, and objectivity remain relevant in the context of journalistic AI systems – but might require re-formulation or re-conceptualisation in the light of the new affordances and risks that come with the use of journalistic AI systems. In addition, it may be necessary to formulate and operationalise new priorities, for example concerning data quality and data fairness, security, and expert oversight.

4.4. The implementation and use of some journalistic AI systems could alter the relationship with the audience and should bring audience-centred values to the fore. Key audience-centred values are transparency and explainability (knowing if, where and how along the production chain journalistic AI is being used), accuracy, privacy and data protection, accessibility, diversity, audience members’ right to form opinions and take independent decisions, the ability to choose between different personalisation systems or opt out of them entirely, and the right to be informed about and question automated decisions as well as the opportunity to express concerns. In the case of automated content production using generative AI, developing procedures and organisational safeguards to guarantee authenticity and accuracy, human oversight of automatically generated content, and respect for the privacy and confidentiality of both audiences’ and subjects’ interactions with the system could be particularly important for public trust in news.

4.5. To the extent that recommendations, content, distribution models, or prices are personalised, subjects should have a right to accountable personalisation. This means receiving the necessary information and choices to be able to exercise control over their personal data, having the possibility to manage and adjust their profiles, being offered a real choice between different personalisation settings that take into account their short- and long-term interests (including the opportunity not to receive recommendations and personalised offers or prices), and being offered the opportunity to voice concerns and critique that are reflected in future implementations. In addition, users should be reminded on a regular basis that some news services are personalised, how and why this has been done, and how it can be undone by changing settings.
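
As a purely illustrative sketch of how such personalisation choices could be represented at the settings level, the following hypothetical structure records a reader’s preferences, including a complete opt-out, and generates the plain-language reminder from the same record the system uses. The class name, fields and option values are assumptions, not anything prescribed by these Guidelines.

```python
from dataclasses import dataclass, field

# Hypothetical per-reader personalisation profile. All names and option
# values are illustrative assumptions.
@dataclass
class PersonalisationSettings:
    mode: str = "editorial-only"    # e.g. "editorial-only" (opt-out), "balanced", "interest-based"
    allow_profiling: bool = False   # whether behavioural data may be used at all
    show_explanations: bool = True  # whether "why am I seeing this?" notices are shown
    topics_excluded: list[str] = field(default_factory=list)  # reader-chosen exclusions

    def describe(self) -> str:
        """Plain-language summary, usable as the regular reminder that a
        service is (or is not) personalised and how to change it."""
        if self.mode == "editorial-only":
            return "You receive the same, non-personalised selection as every other reader."
        return (f"Recommendations are personalised ({self.mode}); "
                f"profiling {'enabled' if self.allow_profiling else 'disabled'}; "
                f"excluded topics: {', '.join(self.topics_excluded) or 'none'}.")

# A reader who opts out entirely keeps the default, non-personalised mode.
print(PersonalisationSettings().describe())
```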

4.6. An important element of the responsible use of journalistic AI systems is the translation of editorial values into algorithm design. Value-sensitive design processes can be lengthy and complicated and require multidisciplinary expertise. Many values are not absolute, and instead require trade-offs with other values and concerns, or cannot be easily translated into code. Doing so also often requires combining diverse expertise: technological, but also legal, ethical and journalistic. Where news organisations rely on technology providers’ journalistic AI systems, the procurement decision should take account of the values these systems are optimised for, and be subject to the risk assessment referred to in paragraph 1.4 and the disclosure requirements in paragraph 3.4.

4.7. Implementing and using journalistic AI systems in accordance with values is a difficult task, and there are often no ready-made answers as the true challenge sits in the operationalisation. Part of the editorial responsibility of a news organisation is therefore to create, where possible, the room, time and resources for experimentation, and the responsible use of journalistic AI. This includes giving the problem owner the necessary leverage to steer this process.

At a minimum, this process includes:

  • A documented, organisation-wide process of identifying, negotiating and determining the core values.
  • A continuous assessment of how journalistic AI systems are operating, whether there are biases to remove or any other unintended risks and side effects that the technology may have on human rights.
  • Room for multi-disciplinary, diverse, cross-organisational, cross-professional, and participatory experimentation and sharing of best practices, within the boundaries of feasibility, particularly for smaller outlets.
  • As part of public accountability, clear communication and transparency of this process to the public, including mechanisms for members of the public to make their concerns heard and taken into consideration.
  • A review of what tools are already used by the editorial staff and sharing experience and best practices for using these tools.

4.8. The responsible implementation and use of journalistic AI systems means that not only editors and journalists, but also technology providers working on journalistic AI do so with professional values and the rights and interests of subjects and the audience in mind. As such, sufficient ethical and human rights literacy for all parties is required.

5. Responsibilities of external technology providers and platforms

5.1. Technology providers (including platforms to the extent that they develop AI systems for the news media)

5.1.1. The importance of editorial autonomy and of the ability to act in accordance with professional values for the democratic functioning of the news media also entails an obligation for third-party technology providers, when they work with news organisations, to respect these values, editorial autonomy and news media independence.

5.1.2. Technology providers should recognise that although automation can help with some tasks in the journalistic production chain, journalists will require some (if not most) tasks to be completed by humans – and will likely require expert oversight of the whole production chain. Furthermore, not all users are likely to possess strong technical skills, and some will require clear and understandable guidance on how tools work and how to use them.

5.1.3. Technology providers should also understand some of the unique or heightened risks faced by the news media in terms of how their output is interpreted, which can include close scrutiny, high ethical standards, low tolerance for mistakes, legal consequences, and high levels of political and economic pressure.

5.1.4. Technology providers may also consider, where commercially feasible, making some of their models, training data and other resources available to newsrooms looking to develop their own journalistic AI systems.

5.1.5. Technology providers should recognise that news organisations vary greatly in size, and smaller organisations may not realistically generate enough data for their journalistic AI systems to work optimally. Providers should therefore, where relevant, offer transparent and practical assessments of how well their systems will function at different scales and in different circumstances.

5.1.6. Technology providers should take steps to provide ample warning time and information about product shifts or adaptations and changes to key AI infrastructure and software. Technology developers should recognise and account for the fact that even small changes can sometimes have large consequences for the news organisations’ editorial autonomy, their realisation of professional values, and their ability to deliver on their mission.

5.1.7. To the extent that news organisations depend on technology providers to be able to use journalistic AI systems in a responsible, transparent and explainable way, providers have a responsibility to lend their assistance and cooperation, for example by being available for questions, or being transparent where necessary about the models and data used. Technology providers should be required to provide news organisations with adequate information to facilitate their risk assessment.

5.2. Platforms (that disseminate news)

5.2.1. Given that platforms that disseminate or intermediate news have long used AI systems to operate at a large scale, existing Committee of Ministers’ recommendations on media and communication governance, media pluralism and quality journalism remain applicable for the role of platforms in creating the conditions for the responsible implementation of AI systems in journalism, which includes the systems used by platforms for the dissemination of journalism.

This includes:

– the need to develop appropriate internal governance responses to ensure that content is universally available, easy to find and recognised as a source of trusted information by the public (CM/Rec(2022)4);

– the requirement that they should not restrict access to news based merely on political or other opinions (CM/Rec(2022)4);

– the need to reflect on their social impact (Guidance Note on the Prioritisation of Public Interest Content Online);

– avoiding interference with news content and refraining from overwriting editorial standards (CM/Rec(2022)11), and the need to collaborate with the news media, civil society and other relevant stakeholders like fact-checkers in tackling dis-/misinformation (CM/Rec(2022)4);

– the empowerment of users by offering both opt-out from news personalisation and alternative forms of personalisation (CM/Rec(2022)11);

– ensuring that algorithmic bias does not infringe human rights and fundamental freedoms (CM/Rec(2022)11);

– enhancing the transparency, accountability, explainability and inclusiveness of systems used for personalising the delivery of news content while providing information about their use, nature, purpose, and functionality (CM/Rec(2022)11).

6. Obligations of States

6.1. States have a positive obligation to protect and create favourable conditions for the realisation of human rights and media pluralism. There is a need for the diversification of funding schemes to support short- and long-term projects on the development of responsible journalistic AI systems, as well as more broadly alternative digital tools and communication infrastructures, particularly for smaller and local media organisations. Such schemes must not, however, compromise the independence of journalism. Possible initiatives could include funds for research and development in news organisations working in the public interest, combined with an obligation to invest in digital media innovation and make the resulting tools and applications available open source, create dedicated funding programmes, or stimulate and facilitate cooperation between academia, the technical community and news organisations. States should take effective steps to implement Recommendation CM/Rec(2022)4.

6.2. There is an important role for States to foster access and choice between technology providers that respect and promote the realisation of journalistic values and human rights. To this end, States should create enabling conditions for open-source solutions, access to training data, open data approaches and to ensure competition among technology providers, including European and specialised start-ups, while respecting the rights of others.

6.3. States should encourage independent regulatory authorities, or news media self-regulatory bodies, to help develop guidelines and standards for responsible use and development of journalistic AI, in line with existing Guidelines. This should also include clarifying the legal status of training data, best practice standards of fair data extraction and the attribution and labelling of synthetic content, best practice standards for transparency and human oversight, as well as situations in which the use of (generative) AI risks conflicting with human rights and public values. A particular focus should be on aiding the translation of abstract standards into concrete measures, for example through collecting best practice examples or creating safe spaces for experimentation. (Self-)regulatory bodies could also conduct or facilitate long-term research into the effects of journalistic AI systems on journalism and society in combination with other stakeholders. (Self-)regulatory bodies can also facilitate collaboration and best practice sharing among relevant stakeholders, including technology developers, platforms, journalists, academia and civil society and diverse societal groups and actors, to ensure emerging practices are scrutinised from a diversity of perspectives, and any guidelines updated to reflect these.

6.4. States should encourage independent regulatory authorities, news media self-regulatory bodies or standard setting bodies to help news organisations develop procurement guidelines, making available standard clauses for the responsible procurement of journalistic AI systems. This can assist smaller and local media organisations and strengthen their negotiation power vis-à-vis technology providers, thus helping to set a general standard for the development of responsible journalistic AI. Guidelines for the responsible procurement of AI should be the result of a dialogue between news organisations and providers of journalistic AI. Annex 1 makes a first suggestion parties could build on. Such standard clauses could include, but should not be limited to, the items listed in the checklist in Annex 1.

6.5. There is a role for independent and accountable regulators to create the conditions for the critical review of the fairness of commercial relationships and contractual agreements between news organisations, platforms and technology providers, with particular attention to addressing possible imbalances of negotiation power in the case of smaller or local news organisations.

6.6. (Self-) regulators can support transparency and accountability by facilitating independent reporting to allow public scrutiny of the use of AI in journalism and researching public attitudes and understanding of these issues and practices. In some cases, where regulators have existing powers to collect information from platforms about their systems and processes, this information gathering can inform such reporting, as well as feeding into the development of any common guidance or standards.

6.7. States should develop initiatives in collaboration with media organisations, journalists, platforms, communication scholars, and relevant NGOs designed to foster data, media and AI literacy among citizens, so that they are better able to understand the use of journalistic AI systems by news organisations and to make use of the control over personalisation that news organisations and platforms offer. Fostering AI literacy is a continual process that needs to respond to technological developments and people’s life stages.

Annex 1 – Procurement Checklist

The following checklist lists several central themes and questions that can be relevant in a) assessing the suitability of a particular provider, and b) scrutinising the fairness of a procurement contract with an external provider. Not all questions may be equally relevant for all organisations; the checklist is not exhaustive. The checklist should be seen as a living document and an attempt to kick-start a discussion on the fairness of conditions in procurement contracts for AI solutions in the news media sector.

Quality training data:

Explanation: The quality of the training data influences the functioning and quality of the output of a model.

Relevant questions to ask:

  • On which data has the system been trained?
  • Did the provider check the training data for bias and what steps have been taken to address problems with bias?
  • Does the training data include content protected by copyright and data protection law?
  • If so, what has been done to ensure the legitimacy of the training data?
  • What are the remaining legal risks?
  • What guarantees are offered to deal with the remaining legal risks?
  • Is there a way of assessing or reviewing the training data?

Quality model:

Explanation: Next to the quality of the training data, the functioning of a particular AI solution depends on the parameters and model weights used to train the model.

Relevant questions to ask:

  • How was the machine learning model trained?
  • What values has it been optimised for?
  • Can the model be easily trained or adapted?
  • Has the model been checked for bias and security?
  • Are there any additional steps that transform the output of the model?
  • Can the output modifiers or filters be easily adjusted?
  • How was the software tested or audited?
  • What issues were encountered, how were they mitigated, and what risks and problems remain?
  • What benchmarks were used to evaluate the functioning of the model?
  • How does the provider update and keep the system aligned with the state-of-the-art?

Ownership training data:

Explanation: Implementing an AI solution can involve inputting content or training the system on the media organisation’s own content. Such content is an important asset of a media organisation.

Relevant questions to ask:

  • If a system is trained on the content or data of a media organisation, will that content be re-used, and if so, for which purpose (improvement of the technology, development of competing products, etc.)?
  • What guarantees are offered to secure the confidentiality and lawfulness of that data?
  • (If it is desirable, from the media organisation’s perspective, to allow its content to be re-used): Is fair compensation offered (in terms of financial reward, access to technology and knowledge, ownership of a model)?
  • (In case re-use is not desired): What guarantees are offered to protect the content?
  • Will the data be deleted from the servers of the technology company in case a media organisation decides to go with another provider?
  • Who will have access to the outputs of the system?
  • Who owns the outputs and where will they be stored?
  • What level of journalistic AI system transparency is required to assess the outputs?

Data storage:

Explanation: The location of data storage is relevant to the applicable legal frameworks, such as data protection law. With the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (ETS No. 108) and the European Union’s General Data Protection Regulation, European countries have typically opted for a higher level of legal protection than, for example, the United States.

Relevant questions to ask:

  • Where will (personal) data be stored?
  • What guarantees are offered for compliance with legal requirements, e.g., flowing from data protection law, as well as data security?

Liability:

Explanation: An important role of the contract is to determine who is liable if things go wrong. Liability distributions must consider the ability to recognise and mitigate risks effectively and not create unfair or unrealistic burdens.

Relevant questions to ask:

  • Who is liable for what?
  • What guarantees are offered in case legal liabilities can arise from factors outside the control of a media organisation?
  • What information or mutual assistance is offered to identify and mitigate potential liability, e.g., copyright infringements?
  • Does the provider offer the necessary level of transparency for the user to comply with its ethical and legal responsibilities towards the audience?

Human oversight:

Explanation: The ability to exercise human oversight and control is an important ethical and legal requirement for the responsible deployment of journalistic AI. Different forms of journalistic AI can require different levels and forms of oversight, and in the case of externally procured technology, news media organisations may also depend on the technology provider to be able to organise effective human control.

Relevant questions to ask:

  • What skills are needed to oversee the system?
  • What are the possibilities to intervene and adjust the system?
  • What are the key performance indicators (KPIs)?
  • How can the overall success or failure of the implementation be evaluated?
  • What data is needed to properly evaluate the system, and is access to that data available?
  • What kind of support is being offered?

Responsible development:

Explanation: Technologies are never neutral but reflect, directly or indirectly, the values and KPIs that have informed their development. Being able to use journalistic AI responsibly requires understanding what has or has not been done to develop AI systems with an eye towards professional values and human rights.

Relevant questions to ask:

  • What efforts have been made by the technology provider to develop the technology in alignment with public values and human rights?
  • Has the system been developed specifically as journalistic AI?
  • Has it been designed for specific languages or audience needs?
  • Has a risk or human rights impact assessment been performed?
  • What are the commitments to environmental sustainability and the protection of workers’ rights?
  • What safeguards and guardrails are in place?
  • Are the systems compliant with European and national laws?
  • Do the providers define user guidelines and their own responsibilities?

Infrastructure and hardware requirements:

Explanation: Different AI solutions will have different needs in terms of hardware, access to cloud infrastructure and interoperability.

Relevant questions to ask:

  • Does the specific AI solution depend on particular infrastructure requirements (e.g., access to cloud technology, incompatibility with particular platforms)?
  • If so, what are the short and long-term additional costs?
  • Is it possible to switch to another infrastructure provider?
  • What guarantees are offered in terms of pricing, support and continuity?

Continuity:

Explanation: AI solutions can easily become outdated or no longer technically supported as the state of the art develops. Also, start-ups can fail, and even large operators tend to reserve the right to discontinue services without notice.

Relevant questions to ask:

  • What guarantees are offered in terms of continued support of the technology?
  • Is sufficient transparency and advance notice offered, or does the technology provider reserve a unilateral right to change, modify or discontinue service at any time?
  • Is the media organisation free to take the training data to another provider?
  • Is the code regularly updated to respond to security concerns, legal requirements, and state-of-the-art insights into risks and ethical requirements?

Pricing:

Explanation: Pricing transparency, including transparency on hidden costs, is necessary to be able to compare solutions.

Relevant questions to ask:

  • How is the pricing calculated?
  • Are there additional costs, e.g. in terms of infrastructure requirements?
  • How is the price expected to develop over time? For example, is the supplier prepared to offer advantageous payment terms such as instalment payments, free maintenance for a certain period, etc.?

Mutual support:

Explanation: Particularly with more sophisticated AI solutions, such as generative AI, professional users have only a very limited role in the training and development of the system. This, combined with a lack of transparency, skills and expertise, means that media organisations may rely on the cooperation of the provider to address particular issues.

Relevant questions to ask:

  • What does the provider do to help assess the accuracy of generated content and to detect disinformation?
  • What kind of assistance is offered in dealing with legal claims of third parties, particularly if the source of the claims is outside the control of a media organisation?
  • What does the provider do to address problems around disinformation, discrimination, security and abuse?
  • What kind of disclaimers and indemnification clauses are included in the contract?
  • What kind of technical support is offered, and for how long?
  • What additional resources are offered?

Environment:

  • What efforts have been made to reduce the ecological footprint (e.g., using green energy, reducing water consumption, dealing with CO2 emissions)?

Annex 2 – Overview of existing Council of Europe guidance

Relevant existing instruments and other texts:

Conventions

Convention for the Protection of Human Rights and Fundamental Freedoms (ETS No. 005)

Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (ETS No. 108), as updated by its amending Protocol (CETS No. 223, Convention 108+)

Other standards

Recommendation CM/Rec(2016)4 of the Committee of Ministers to member States on the protection of journalism and safety of journalists and other media actors

Recommendation CM/Rec(2018)1 of the Committee of Ministers to member States on media pluralism and transparency of media ownership

Recommendation CM/Rec(2018)2 of the Committee of Ministers to member States on the roles and responsibilities of internet intermediaries

Recommendation CM/Rec(2020)1 of the Committee of Ministers to member States on the human rights impacts of algorithmic systems

Recommendation CM/Rec(2022)4 of the Committee of Ministers to member States on promoting a favourable environment for quality journalism in the digital age

Recommendation CM/Rec(2022)11 of the Committee of Ministers to member States on principles for media and communication governance and its Explanatory Memorandum

Recommendation CM/Rec(2022)12 of the Committee of Ministers to member States on electoral communication and media coverage of election campaigns

Recommendation CM/Rec(2022)13 of the Committee of Ministers to member States on the impacts of digital technologies on freedom of expression

Declaration by the Committee of Ministers to member States on the manipulative capabilities of algorithmic processes (adopted by the Committee of Ministers on 13 February 2019 at the 1337th meeting of the Ministers’ Deputies)

Declaration by the Committee of Ministers to member States on the financial sustainability of quality journalism in the digital age (adopted by the Committee of Ministers on 13 February 2019 at the 1337th meeting of the Ministers’ Deputies)

Declaration of the Committee of Ministers on Risks to Fundamental Rights stemming from Digital Tracking and other Surveillance Technologies (adopted by the Committee of Ministers on 11 June 2013)

Guidance note on best practices towards effective legal and procedural frameworks for self-regulatory and co-regulatory mechanisms of content moderation (adopted by the Steering Committee on Media and Information Society (CDMSI) at its 19th plenary meeting, 19-21 May 2021)

Guidance note on the prioritisation of public interest content online

Research

Helberger, N., Eskens, S. J., Drunen, M. Z., Bastian, M. B., & Möller, J. E. (2019). Implications of AI-driven tools in the media for freedom of expression. Artificial Intelligence – Intelligent Politics: Challenges and Opportunities for Media and Democracy. Background Paper, Ministerial Conference, Cyprus, 28-29 May 2020, pp. 1–36.

Table of artificial intelligence (AI)/algorithmic systems related issues and guidance (with a focus on private actors)

Each entry below lists a key legal/ethical concern, guidance on how to use AI/algorithmic systems responsibly to address that concern, and the source of the Council of Europe’s guidance.

General

Concern: The impact of algorithmic systems on human rights, notably freedom of expression, privacy, data protection, IP rights, and the principle of non-discrimination.

Guidance:

– Requirement for the private sector to respect internationally recognised human rights and fundamental freedoms of their users and third parties affected by their activities;

– Existence of legislative and regulatory frameworks to ensure that (i) algorithmic systems are designed/developed/deployed in compliance with human rights (including the requirement to conduct human rights impact assessments, independent expert reviews, etc.) and (ii) media and communication governance is implemented in compliance with human rights and fundamental freedoms and especially Article 10 ECHR;

– Determination of areas of public services which, due to their effect on human rights, may not be determined, decided or optimised through algorithmic systems.
Source: CM/Rec(2020)1, part B: 1.4., 5.1.-5.3., 5.7., part C: 1.1.; CM/Rec(2022)11: 3.2., 6.4.
Concern: General risks involved in the use of algorithms: violations/circumventions of applicable laws and regulations, illegal access or system interference, discriminatory effects and bias, etc.

Guidance:

– Transparency to the public about the use of algorithms which can trigger significant human rights impacts, about their nature and functionality, managing settings and the availability of complaint/redress mechanisms;

– Continuous evaluation of the provenance and quality of data put into/extracted from algorithmic systems to identify bias or inappropriate use and to remedy or minimise adverse effects;

– Configuration of algorithmic systems in such a way as to prevent any illegal access, system interference and any misuse of devices, data and models, by developer’s/business user’s staff or third parties, in line with applicable standards.
Source: CM/Rec(2020)1, part C: 3.1., 3.3., 4.1., 4.4.
The impact of AI/algorithmic systems on (news/media) content production, curation, selection and prioritisation

Concern: Algorithmic control over the availability, findability and accessibility of (media) content.

Guidance:

– Requirements for platforms to respect internationally recognised human rights, notably articles 10, 8 and 14 ECHR, in the design, development and ongoing deployment of algorithmic systems used for content dissemination; enhancing the transparency and explainability of such systems, providing users with the necessary tools to understand the basic criteria and functioning of algorithms involved in the distribution of media content;

– Content moderation: requirement of transparency about platforms’ content restriction policies regarding illegal and harmful content, in an easily understandable language; restrictions to be carried out using the least restrictive technical means and be limited in scope and duration to what is strictly necessary. Contestability of decisions and provision of information to the public about the number and types of complaints, take-down notices and the results of content moderation; removal of content as a measure of last resort, alternative techniques such as promotion and demotion, monetisation and demonetisation, etc. are to be favoured. Regarding media content: a requirement for platforms to refrain from overwriting editorial standards and interfere with such content, insofar as it complies with human rights standards;

– Content curation/ranking/recommendation: requirement of transparency, explainability and accountability of algorithmic systems for content dissemination, providing users with meaningful and understandable information about which data are being processed, which criteria are used and why certain content was selected. Such selection should be carried out in full compliance with the right to non-discrimination, and no source of news or other content should be restricted merely on the basis of its political or other opinion. Co-regulatory frameworks to ensure independent oversight of algorithmic systems for content dissemination, with reporting duties to relevant regulators or other designated bodies;

– Ensuring individuals’ communication rights: the possibility to make use of the media and platforms without unjustified restrictions of freedom of expression or undue interferences with their right to privacy, easy access to affordable and effective complaint mechanisms in case of alleged violations of their rights, accompanied by opportunities for participatory governance (e.g., through public consultations) of the media and platforms;

– Requirement for platforms to give access to data to the research community for the purpose of analysing the impact of algorithmic systems on the distribution of media content.
Source: CM/Rec(2018)2: 2.1.1., 2.3.1.-2.3.6.; CM/Rec(2022)11: 12.3., 12.5., 13.3., 13.4., 14.3., 14.4.; CM/Rec(2022)13: 1.5., 6.1.-6.10.
Concern: Lack of (media) content diversity online; prioritisation of engagement over accuracy and diversity.

Guidance:
– Commitment of the media to offer all groups of the population easy access to a diversity of topics, actors and viewpoints, representing the diversity of society; to promote the balanced representation and equal participation of different societal groups in the news and in the media in general; and to strive for diverse teams in media management, newsrooms and production;

– Ensuring that diverse media content is available in different languages and suitable formats and is easy to find and use (e.g., through automated translations), and creating/promoting media and information literacy initiatives which can help individuals, especially those from minority or disadvantaged communities, to develop the skills and confidence to engage with the media and participate in the public sphere;

– Clarity about the nature of content (editorial, commercial), distinctions between factual information, opinion, analysis, promotional content, (political) advertising, etc., and between professional and user-generated content; information on the process behind specific stories, including efforts to include a plurality of perspectives, and encouraging audience feedback;

– Collaboration between platforms, the media, civil society and academia to improve exposure diversity by providing clear information to users on how to find and access a wide range of content, by offering both an opt-out from personalisation and alternative forms of personalisation compatible with the public interest that guarantee the prominence of quality journalism, and by reinforcing the role of public service media in offering personalised services.
Source: CM/Rec(2018)1: 2.5.-2.7.; CM/Rec(2022)4: 2.1.4.; CM/Rec(2022)11: 8.8., 13.5., 14.2.; Guidance note on the prioritisation of PI content, paras. 24 and 25
Concern: Filter bubbles (the problem of selective exposure).

Guidance: Empirical evidence of filter bubbles is scarce or inconclusive; in fact, research suggests that people who use social media for news are exposed to a more diverse range of sources. However, this diversity may have a polarising effect on users' attitudes, entrenching them in their beliefs rather than challenging them. While comprehensive solutions to this phenomenon go beyond the field of media governance, possible steps to counter it include:

– Media and information literacy (MIL) programmes empowering individuals to understand how communication is produced and disseminated online – by the media and platforms – and how ownership, funding and governance influence the (algorithmic) curation of content, and enhancing individuals' awareness of biases and inaccuracies;

– Enhancing individuals’ knowledge about the collection and use of their personal data by media and platforms and their related rights;

– Collaboration between platforms, the media, civil society and academia to improve exposure diversity by providing clear information to users on how to find and access a wide range of content, by offering both opt-out from personalisation and alternative forms of personalisation compatible with the public interest that guarantee the prominence of quality journalism, and by reinforcing the role of public service media in offering personalised services.

– Independent research and advice for decision-makers regarding the capacity of algorithmic tools to enhance or interfere with the cognitive sovereignty of individuals, taking account of existing diversity in societies and users’ backgrounds;

– Assessing the need for enhanced regulatory frameworks ensuring oversight over the design, development, deployment and use of algorithmic tools, with a view to ensuring that there is effective protection against unfair practices or abuse of position of market power.
Source: CM/Rec(2018)1: 2.5.-2.7.; CM/Rec(2022)11: 11.7., 13.5., 15.1.; Declaration on the manipulative capabilities of algorithmic processes; Report on use of AI-driven tools, pp. 11-13
Concern: Access to (reliable) information: difficulty of determining the source of information (also due to information overload) and, consequently, of assessing its credibility.

Guidance:
– Ensuring transparency of media content production (information about ownership, management, editors, journalists, editorial policies, codes of conduct, etc.) and transparent use of AI tools in content creation and distribution by media organisations;

– Clear attribution of media content and news sources on platforms to enable users to easily establish the provenance of stories discovered through search engines and social media;

– Collaborative initiatives by platforms, the media, civil society and other stakeholders (e.g., fact-checkers) to develop criteria for identifying reliable content, subject to independent review, and transparent use of such criteria by platforms; labelling of social bots and automated accounts on platforms;

– Introduction of non-commercial prominence regimes that seek to improve users’ exposure to diversity of media content online; enhancing the role of public service media in offering personalised services;

– Media and information literacy (MIL) programmes and activities oriented towards helping users to better understand the online infrastructure and economy and how technology can influence choices in relation to media, and to highlight the value of quality news sources.
Source: CM/Rec(2022)4: 1.4.1., 3.1.2.; CM/Rec(2022)11: 9.2.-9.4., 13.5.; Guidance note on the prioritisation of PI content, paras. 16, 24 and 25
Concern: Creation of bias and/or discriminatory effects of algorithms, notably on marginalised groups or minorities.

Guidance:
– Use of technology to ensure that diverse content is accessible to all groups in society, particularly disadvantaged/marginalised ones, by making such content available in different languages and suitable formats and making it easy to find and use;

– Platforms’ commitment to provide their products and services without any discrimination of their users or other relevant parties, including those with special needs or disabilities, which may require correcting existing inequalities;

– Continuous evaluation of data on which algorithmic systems are trained to identify and respond to errors, bias and potential discrimination in datasets and models; data checks to monitor the quality of data used for training of algorithmic systems;

– Transparent use of AI tools in content creation and distribution by media organisations; measures to level the playing field between large and small organisations regarding their access to and control of AI tools;

– Requirement for platforms to make enough data publicly available to ensure adequate and independent auditing capable of identifying any discriminatory or problematic approaches in content restriction decisions.
Source: CM/Rec(2018)1: 2.6.; CM/Rec(2018)2: 2.1.5.; CM/Rec(2020)1, part C: 3.1., 5.1.; CM/Rec(2022)4: 2.1.2., 2.2.4.; Guidance note on content moderation, para. 29
Concern: Spread of sensationalist, misleading and unreliable media content and/or disinformation.

Guidance:
– Content moderation by platforms to clearly distinguish between algorithmic responses to illegal content and to legal but harmful content; for the latter, alternative responses to restricting access should be sought that prioritise safeguards rather than restrictions on freedom of expression;

– Requirement of transparency regarding the sources of commercial and political advertising and the identity of actors on platforms, also with a view to preventing purveyors of (political) disinformation from generating revenue;

– Collaborative initiatives by platforms, the media, civil society and other stakeholders (e.g., fact-checkers) to develop criteria for identifying reliable content, subject to independent review, and transparent use of such criteria by platforms; labelling of social bots and automated accounts on platforms;

– Strengthening users’ media and information literacy (MIL) through programmes designed to help them better understand how online infrastructure and economy operate and how technology can influence their choices when dealing with digital media, including by enhancing their awareness of biases, inaccuracies and falsehoods;

– Rebuilding trust in the media through improved fact-checking and selection of sources to improve accuracy, especially when using user-generated content or anonymous sources; providing citations and references for sources, especially those behind assertions of fact or in-depth stories; disclosing the use of AI software tools, and especially robot journalism, in news production; and complementing transparency with effective self-regulatory mechanisms such as press/media councils or ombudspersons.
Source: CM/Rec(2022)13: 1.1.; CM/Rec(2022)4: 2.1.1.-2.1.3.; CM/Rec(2022)11: 12.6., 15.2.; CM/Rec(2022)12; Guidance note on content moderation, para. 16
Concern: (Political) targeting of users: large-scale monitoring and data collection, potentially with a view to manipulating (electoral) opinions and choices.

Guidance:
– Labelling of advertising/campaigning material on platforms and disclosure of the identity of the campaigners; archives of electoral advertisements to be kept by platforms and political parties;

– Information to be provided by platforms about the algorithms they use to rank and display digital campaigning material, as well as the algorithms that are used in content moderation practices;

– Clear information to be provided by platforms to their users on why they are being targeted with political advertisements, and the possibility to opt out of online political advertising;

– Codes of conduct to be adopted by political parties and other relevant actors with commitments to avoid the abuse of microtargeting techniques;

– Clear, transparent and foreseeable policies of platforms for ensuring the integrity of services and countering misrepresentation and the intentional spread of political disinformation; requirement for platforms to clearly label bots and fake accounts;

– Transparent content moderation by platforms, avoiding any discrimination based on political views and limiting restrictions to the least restrictive technical means and necessary scope and duration;

– Online content and data flows pertaining to electoral matters to be treated in an equal and non-discriminatory manner by internet service providers to comply with the principle of network neutrality;

– Fair, balanced and impartial media coverage of elections, with adequate safeguards to prevent interference with editorial independence of the media and ensure comparable levels of information across the political spectrum, protecting voters against unfair practices and manipulation.
Source: CM/Rec(2018)1: 1.4.; CM/Rec(2022)12: 2.1.-2.3., 4.1.-4.9., 5.1.-5.5., 6.1.-6.6.; Declaration on the manipulative capabilities of algorithmic processes
The impact of AI/algorithmic systems on users’ privacy and data protection
Concern: Privacy concerns: large-scale collection and processing of users' personal data and a lack of transparency and accountability.

Guidance:
– Collection and processing of data in accordance with international standards on the protection of personal data (Convention 108 and Convention 108+);

– Compliance with the principles of proportionality and data minimisation, the lawfulness of processing and privacy by design (integrated into algorithmic systems at the stage of architecture and system design);

– Keeping a user-benefit perspective by informing users about how their data are being used, ensuring a meaningful choice to give and revoke consent regarding all uses of their data, including within algorithmic datasets, and enabling them to object to the processing of their data and to opt out of personalisation;

– Disclosure of the use of data for journalistic purposes.
Source: Convention 108+: Article 10 (2) and (3); CM/Rec(2020)1, part C: 2.2.; CM/Rec(2022)11: 9.4., 11.7.; CM/Rec(2022)4: 2.3.1., 2.3.2.
Concern: Mass surveillance, tracking and targeting of journalists and the undermining of journalistic sources.

Guidance: Respect for human rights standards, notably those guaranteed by Article 8 ECHR as interpreted in the case law of the European Court of Human Rights; adequate and effective safeguards against abuse, including independent supervision.
Source: CM/Rec(2016)4, para. 38; Declaration on digital tracking and other surveillance technologies
Economic impact of AI/algorithmic dissemination of news/media content
Concern: Platforms disrupting traditional media business models:
– platforms' large-scale collection and processing of business and end users' data, driven by commercial considerations and resulting in successful monetisation of media content;
– platforms disseminate media content alongside other types of content that are not subject to the same regulatory/ethical frameworks, which is associated with declining trust in news (media);
– the media are tempted to gain advantage by moving towards a low-quality (clickbait) business model.

Guidance:
– Existence of media governance frameworks to ensure fair treatment of content providers and counter anti-competitive behaviour of platforms which may adversely impact media pluralism, notably:
– Data sharing obligations for platforms: a requirement to give media organisations access to the relevant audience data on the usage of their content, enabling them to optimise user experience and better monetise their products;

– Enhanced transparency of platforms' advertising systems and practices, and collaboration between platforms, media stakeholders and advertisers, with measures to avoid diverting advertising revenues to sources of disinformation and blatantly false content;

– Conditions/frameworks for the equitable sharing of revenues arising from the large-scale dissemination and monetisation of media content on platforms, and platforms' contributions to the preservation of quality journalism;

– Rebuilding trust in the media through improved fact-checking for accuracy, enhanced transparency about ownership and editorial processes, clarity about the nature of media content and citing of news sources, documenting efforts to include a plurality of perspectives, collaborative practices amongst (local) newsrooms, encouraging feedback from the audiences, effective self-regulatory mechanisms, etc.;

– Introduction of new technologies and the development of digital business skills, development of innovative, collaborative journalistic projects, also involving freelance journalists.
Source: CM/Rec(2022)4: 1.4.1.-1.4.5., 2.1.1.-2.1.3., 2.2.4., 2.4.2., 2.4.3.; CM/Rec(2018)1: 3.1., 3.3.; Declaration on the financial sustainability of quality journalism in the digital age
Other
Concern: Other negative impacts of algorithmic systems on freedom of expression.

Guidance: Availability of relevant data and meta-datasets to independent researchers, the media and civil society organisations, for the purpose of analysing the impacts of algorithmic systems on the exercise of human rights, notably the right to freedom of expression.

Source: CM/Rec(2020)1, part B: 6.1.-6.4., part C: 6.1.-6.2.; CM/Rec(2022)13: 6.1.-6.10.