Liability of intermediaries

AI and the liability of intermediaries

AI introduces new dimensions to intermediaries’ liability, necessitating a careful understanding and management of potential risks.

AI as a tool for intermediaries

One of the areas in which intermediaries use AI is content moderation. AI systems can automatically identify and filter out prohibited or harmful content, including hate speech, explicit material, and copyright-infringing material. Intermediaries also use algorithms to deliver recommendations and personalised content to users. Implementing AI tools, however, requires that intermediaries foresee and address the risks these systems raise around fairness, transparency, and accountability.
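To make the mechanism concrete, the sketch below shows, in very simplified form, how an automated moderation pipeline can score a piece of content and decide whether to publish it, remove it, or route it to a human moderator. The scoring function, the threshold values, and the placeholder term list are illustrative assumptions only; production systems rely on trained machine-learning classifiers rather than keyword matching.

# Minimal sketch of an automated content moderation pipeline.
# The term list, scoring rule, and thresholds are illustrative assumptions,
# not a description of any particular platform's system.

BLOCKED_TERMS = {"example-slur", "example-threat"}  # placeholder terms

def score_content(text: str) -> float:
    """Toy scoring function: fraction of words matching blocked terms."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in BLOCKED_TERMS)
    return hits / len(words)

def moderate(text: str, remove_threshold: float = 0.5, review_threshold: float = 0.1) -> str:
    """Return an action: 'remove', 'human_review', or 'publish'."""
    score = score_content(text)
    if score >= remove_threshold:
        return "remove"        # high-confidence harmful content is filtered out
    if score >= review_threshold:
        return "human_review"  # borderline cases go to a human moderator
    return "publish"

if __name__ == "__main__":
    print(moderate("an ordinary comment about the weather"))  # publish
    print(moderate("example-slur example-threat"))            # remove

Even in this toy form, the threshold choices indicate where fairness and accountability questions arise: content near the decision boundary is exactly where biased scoring has the greatest effect.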

Potential risks and their effect on users

While AI algorithms may be helpful in content moderation, a failure to effectively filter harmful content or to prevent the unauthorised distribution of content may expose intermediaries to liability. Additionally, AI algorithms are prone to bias, potentially producing discriminatory outcomes or reinforcing existing prejudices during content distribution and moderation. Algorithms may also lead to breaches of freedom of expression, given the inability of AI systems to reliably distinguish, for instance, certain illegal content from satire.

Moreover, the use of AI algorithms by intermediaries for analysing user data to provide personalised recommendations or targeted advertising gives rise to concerns regarding privacy and the security of personal information.

Learn more about AI Governance.


Intermediaries play a central role in our daily use of the internet: access to the internet, browsing, e-commerce, and the publishing of content are all made possible by intermediaries.

Although they are frequently referred to as internet service providers (ISPs) by national legal systems, online intermediaries comprise a variety of services, including internet access providers, hosting providers, search engines, e-commerce platforms, and social networking platforms.

Given that ISPs facilitate the flow of third-party information online, they are often involved in legal disputes over copyright infringement, the distribution of illegal content such as child sexual abuse images, breaches of freedom of expression, the right to be delisted from search indexes (the right to be forgotten), fake news, and defamation. ISPs are the most direct way for governments and courts to enforce laws online. From a legal perspective, ISPs might be liable for third-party illegal content in many contexts. To avoid legal responsibility for internet users’ activities, certain ISPs – such as host providers and search engines – have increasingly controlled third-party content over the past twenty years. Moreover, even when ISPs are headquartered in a particular country, they are subject to different liability rules for the same content when their services are available to users in multiple jurisdictions.

Intermediary liability is often discussed at international fora including the World Summit on the Information Society (WSIS) and the Internet Governance Forum (IGF). ISPs have been individually and collectively active in WSIS/IGF/Working Group on Internet Governance (WGIG) processes through sector-specific business organisations, such as the Information Technology Association of America (ITAA). Various regional ISP associations have been set up worldwide to enhance the ability of thousands of ISPs to reach governments and to influence public policy agendas with regard to their liability for third-party illegal activities online. The Organisation for Economic Co-operation and Development (OECD) recommends that governments minimise burdens on online intermediaries and provide legal certainty for them by ensuring clear regulatory rules and fair processes.

The following sections briefly introduce the role and responsibility of ISPs regarding copyright infringement, child online protection, spam, content policy, and defamation.

One of the main legal issues concerning online intermediaries is their liability for Internet users’ copyright infringements. National and regional enforcement mechanisms in the field of intellectual property have been further strengthened by making ISPs liable for hosting materials in breach of copyright, should the material not be removed upon notification of infringement. This has made the previously vague intellectual property rights (IPRs) regime directly enforceable in the online space.

The approach taken by the US Digital Millennium Copyright Act (DMCA) and the EU e-commerce and copyright directives is to exempt ISPs from liability for the information that is transmitted or stored at the direction of the users. However, both regimes require ISPs to remove materials in breach of copyright upon a notice-and-take-down procedure. This solution provides some relief to ISPs as they are exempted from legal sanctions when they are not aware of the illegal content, but also potentially transforms them into third-party content moderators. This type of regime has been implemented by most countries so far.
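As a simplified illustration of the notice-and-take-down procedure described above, the sketch below shows how a host provider might record a notice and disable the identified content. The notice fields, the ContentStore class, and the single decision rule are hypothetical simplifications; a real workflow would also cover counter-notices, verification of the notifier, and repeat-infringer policies.

# Minimal sketch of a notice-and-take-down workflow of the kind the DMCA and
# the EU e-commerce directive describe. All names and fields are hypothetical.

from dataclasses import dataclass, field

@dataclass
class TakedownNotice:
    content_id: str    # identifier of the allegedly infringing item
    claimed_work: str  # the copyrighted work said to be infringed
    notifier: str      # contact details of the rights holder

@dataclass
class ContentStore:
    items: dict = field(default_factory=dict)  # content_id -> hosted content
    removed: set = field(default_factory=set)  # content_ids taken down

    def handle_notice(self, notice: TakedownNotice) -> str:
        # The host is exempt from liability while unaware of the infringement;
        # once notified, it must act expeditiously to remove or disable access.
        if notice.content_id not in self.items:
            return "not_found"
        self.removed.add(notice.content_id)
        del self.items[notice.content_id]
        return "removed"  # counter-notice handling omitted in this sketch

store = ContentStore(items={"post-42": "allegedly infringing upload"})
print(store.handle_notice(TakedownNotice("post-42", "Some Copyrighted Work", "rights-holder@example.com")))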

Online intermediaries play a vital role in removing certain types of illegal content harming children, most notably child sexual abuse images, as soon as they become aware of it. There are generally two main processes leading to the removal of such content:

  1. Via notice-and-take-down measures, which are typically the first line of defence. Online intermediaries often have their own self-regulatory tools that allow users to flag content harming children. As soon as online intermediaries, such as ISPs, domain registrars, and web hosts, are alerted that their services are being used to host such content, they remove it or take down the user’s account in a timely manner.
  2. Via hotline reporting, through which online intermediaries can be notified of illegal content by their customers, members of the public, law enforcement, or hotline organisations. ISPs generally work hand-in-hand with law enforcement to ensure that the content is verified and that steps are taken to identify and locate those criminally responsible.

Other technical options may help prevent illegal content from being accessed. For example, a number of online intermediaries around the world, including ISPs and search engines, restrict access to URLs on lists of pages confirmed to contain illegal content.
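A minimal sketch of this kind of URL-based restriction is shown below, assuming a hypothetical blocklist; in practice such lists are compiled and verified by hotline organisations, and providers often store them as hashes rather than plain URLs.

# Minimal sketch of URL blocklist filtering by an access provider.
# The blocklist entries and hashing scheme are illustrative assumptions.

import hashlib

def normalise(url: str) -> str:
    """Lower-case and strip a trailing slash so equivalent URLs compare equal."""
    return url.strip().lower().rstrip("/")

def url_digest(url: str) -> str:
    """Store only hashes so the list itself does not reproduce the URLs."""
    return hashlib.sha256(normalise(url).encode("utf-8")).hexdigest()

# Hypothetical blocklist of confirmed-illegal URLs, kept as hashes.
BLOCKLIST = {url_digest("http://example.org/confirmed-illegal-page")}

def is_blocked(requested_url: str) -> bool:
    """Check an incoming request against the blocklist."""
    return url_digest(requested_url) in BLOCKLIST

if __name__ == "__main__":
    print(is_blocked("http://example.org/confirmed-illegal-page/"))  # True
    print(is_blocked("http://example.org/legitimate-page"))          # False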

The extent of ISPs’ and hosts’ liability may vary from country to country. In some national legal frameworks, a legal obligation is imposed; in many other cases, ISPs and hosts voluntarily develop and adopt processes to help protect children online.

Read more about child safety online.

ISPs are commonly seen as the primary entities involved with anti-spam initiatives. Usually, ISPs establish and maintain self-regulatory initiatives to reduce spam, including technical filtering or the introduction of anti-spam policies. The International Telecommunication Union (ITU) recommends that ISPs should be liable for spam and proposes an anti-spam code of conduct, which would include two main provisions:

  • ISPs must prohibit their users from spamming; and
  • ISPs must not peer with other ISPs that do not accept a similar code of conduct.

Under growing governmental pressure, host providers and search engines are gradually, albeit reluctantly, becoming involved with content policy. In doing so, they might enforce government regulation, including the moderation of fake news, defamatory comments, and illegal political campaigns. Governments have encouraged online intermediaries to implement self-regulatory measures to address illegal third-party content. While the control of third-party content by online intermediaries may tackle abuses, it might also inadvertently privatise content control, with ISPs taking over governmental responsibilities.

Read more about content policy.

Over the years, courts in Europe have increasingly imposed liability rules on online intermediaries, most notably with respect to the right to be forgotten and with respect to comments posted on online platforms.

Right to be forgotten

In 2014, following a decision of the Spanish data protection authority upholding a Spanish citizen’s request to delist from Google search results a webpage, linked to his name, detailing the recovery of his social security debts, the Court of Justice of the European Union (CJEU) imposed upon search engines the obligation to consider all right-to-be-forgotten requests. The case against Google Spain was brought by a Spanish man after he failed to secure the deletion of a 1998 auction notice for his repossessed home from the website of a Spanish newspaper. He argued that the matter, in which his house had been auctioned to recover his social security debts, had been resolved and should no longer be linked to him whenever his name was searched on Google.

Although many argued that this right amounts only to a right to be delisted or a right to erasure, the obligation imposed upon search engines in general – and not only upon Google, the claimant in Google Spain SL, Google Inc. v Agencia Española de Protección de Datos – triggered major debates.

Moreover, the EU’s General Data Protection Regulation (GDPR), which has applied since 2018, introduced a right for individuals to have their personal data held by online intermediaries erased. In 2019, the Court of Justice of the EU held in the Google v Commission Nationale de l’Informatique et des Libertés (CNIL) case that there is no obligation under EU law for Google to apply the European right to be forgotten globally. The decision established that, while EU residents have the legal right to be forgotten, the right only applies within the borders of the EU member states.

Read more about the Right to be forgotten.

Offensive comments posted on a news portal

In 2015, the European Court of Human Rights (ECtHR) confirmed a ruling by an Estonian court that had found the news portal Delfi liable for offensive comments posted by internet users on its website. The ECtHR held that the decision was justifiable and proportionate, given that the comments were extreme and had been posted in reaction to an article published by Delfi on its professionally managed news portal, run on a commercial basis.

In 2019, the Court of Justice of the European Union ruled in the Eva Glawischnig-Piesczek v Facebook case that EU law allows national courts to order host providers such as Facebook to remove ‘identical and, in certain circumstances, equivalent comments previously declared to be illegal’. In this case, Ms Glawischnig-Piesczek had asked Facebook to delete defamatory statements about her not only in Austria but worldwide. The Court decided that Facebook, in its capacity as a host provider, could be ordered by a national court to remove the information not only in the country where the order was issued but also worldwide. Following this precedent, national courts of EU member states can issue decisions that affect countries outside their jurisdiction and, therefore, have global effects. The ruling was criticised for putting the freedom of expression of Facebook users at risk.