Content policy

Updates

17 May 2017

Ukraine has blocked Russia's largest social media and Internet services as a sanction against Russia's annexation of Crimea and the ongoing conflict in eastern Ukraine. The targeted sites include social networks VK.com and Odnoklassniki, as well as search engine Yandex and e-mail service Mail.ru. According to Ukraine's president Petro Poroshenko, 'The challenges of hybrid war demand adequate responses. Massive Russian cyber attacks across the world...show it is time to act differently and more decisively.' The measure risks a backlash among the Ukrainian public, as these websites are widely popular in the country. 

16 May 2017

Earlier this month, Thailand passed a law that prohibits online content 'contrary to public order or public morality', leading Thai Internet service providers (ISPs) to request that Facebook block 600 Facebook pages. On 12 May, Thailand threatened to take legal action against Facebook unless the platform removed the remaining 131 pages considered illegal by the Thai authorities. The secretary-general of Thailand's National Broadcasting and Telecommunications Commission, Takorn Tantasith, told reporters that 'if even a single illicit page remains, we will immediately discuss what legal steps to take against Facebook Thailand', giving the company four days to comply. With the deadline approaching, ISPs claimed that they were pressured to shut down access to Facebook in the country. After the deadline passed on 16 May, Takorn Tantasith announced that Facebook was cooperating and had blocked the 131 pages.

12 May 2017

Access to the Chinese messaging app WeChat has been restored by the Russian government. WeChat was blocked in Russia following allegations that it had not registered with the Russian authorities. The Russian media regulator has now confirmed that WeChat has 'provided the information that is necessary to include them in the registry'.

Pages

One of the main sociocultural issues is content policy, often addressed from the standpoints of human rights (freedom of expression and the right to communicate), government (content control), and technology (tools for content control). Discussions usually focus on three groups of content:

 

  • Content that has a global consensus for its control. Included here are child pornography, justification of genocide, and incitement to or organisation of terrorist acts.
  • Content that is sensitive for particular countries, regions, or ethnic groups due to their particular religious and cultural values. Globalised online communication poses challenges for local, cultural, and religious values in many societies. Most content control in Middle Eastern and Asian countries, for example, is officially justified by the protection of specific cultural values. This often means that access to pornographic and gambling websites is blocked.
  • Political censorship on the Internet, often to silence political dissent and usually under the claim of protecting national security and stability.

How content policy is conducted

Governmental filtering of content

Governments that filter access to content usually create an index of websites blocked for citizen access. Technically speaking, this is done with the help of router-based IP blocking, proxy servers, and DNS redirection. Content filtering occurs in a growing number of countries (see opennet.net).
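As a rough illustration of two of these techniques, the following sketch (in Python, using an entirely hypothetical blocklist and sinkhole address) shows how a filtering resolver might answer queries for blocked domains with a redirection address and refuse connections to blocked IP addresses. In practice, this logic lives in routers and DNS infrastructure rather than in application code.

```python
import socket

# Hypothetical examples for illustration; real national blocklists are
# maintained by regulators and distributed to ISPs.
BLOCKED_DOMAINS = {"blocked.example.org"}   # domains subject to DNS redirection
BLOCKED_IPS = {"203.0.113.7"}               # addresses dropped by router-based IP blocking
SINKHOLE_IP = "192.0.2.1"                   # address returned instead of the real record

def resolve_with_policy(domain: str) -> str:
    """Resolve a domain name, applying the filtering techniques described above."""
    if domain in BLOCKED_DOMAINS:
        # DNS redirection: answer with a sinkhole address instead of the real record.
        return SINKHOLE_IP
    real_ip = socket.gethostbyname(domain)
    if real_ip in BLOCKED_IPS:
        # Router-based IP blocking: traffic to this address would simply be dropped.
        raise ConnectionRefusedError(f"{domain} ({real_ip}) is on the IP blocklist")
    return real_ip

if __name__ == "__main__":
    print(resolve_with_policy("example.com"))          # resolves normally
    print(resolve_with_policy("blocked.example.org"))  # returns the sinkhole address
```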

Private rating and filtering systems

Faced with the potential risk of the Internet disintegrating into various national barriers (filtering systems), the W3C and other like-minded institutions proactively proposed the implementation of user-controlled rating and filtering systems. In these systems, filtering mechanisms can be implemented by software on personal computers or at the level of the servers controlling Internet access. This method allows users to implement their own filtering without national intervention. It remains to be seen, however, whether governments will trust their citizens enough to let them create their own filters.
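The sketch below illustrates the general idea of user-controlled filtering, assuming that pages carry self-declared content labels (in the spirit of the W3C's PICS and POWDER work). The label categories, rating scale, and thresholds are hypothetical.

```python
from typing import Dict

# Labels a page declares about itself, on a hypothetical 0-4 scale.
page_labels: Dict[str, int] = {"violence": 3, "gambling": 0, "nudity": 1}

# The tolerance a user (or parent) has configured locally for each category.
user_policy: Dict[str, int] = {"violence": 2, "gambling": 0, "nudity": 2}

def allowed(labels: Dict[str, int], policy: Dict[str, int]) -> bool:
    """Allow the page only if every declared label falls within the user's limits."""
    return all(value <= policy.get(category, 0) for category, value in labels.items())

print(allowed(page_labels, user_policy))  # False: the violence rating exceeds the user's limit
```

The filtering decision is taken entirely on the user's side, which is what distinguishes this approach from state-mandated blocking.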

Content filtering based on geographical location

Another technical solution related to content is geo-location software, which filters access to particular web content according to the geographical or national origin of users. The Yahoo! case was important in this respect: the group of experts involved, including Vint Cerf, indicated that in 70–90% of cases Yahoo! could determine whether sections of one of its websites hosting Nazi memorabilia were being accessed from France. This assessment helped the court reach its final decision, which required Yahoo! to filter access from France to Nazi memorabilia. Since the 2000 Yahoo! case, the precision of geo-location has increased further through the development of highly sophisticated geo-location software.
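A minimal sketch of how geo-location-based filtering can work is shown below. The network-to-country mapping and the content identifier are invented for illustration; real geo-location software relies on large, frequently updated commercial IP databases.

```python
from typing import Optional
import ipaddress

# Hypothetical mapping of network prefixes to country codes.
GEO_DB = {
    ipaddress.ip_network("198.51.100.0/24"): "FR",
    ipaddress.ip_network("203.0.113.0/24"): "US",
}
# Hypothetical content identifiers mapped to the countries from which they must be blocked.
RESTRICTED = {"auction-1234": {"FR"}}

def country_of(ip: str) -> Optional[str]:
    """Return the country code for an IP address, or None if it cannot be placed."""
    address = ipaddress.ip_address(ip)
    for network, country in GEO_DB.items():
        if address in network:
            return country
    return None  # the residual cases that cannot be located

def may_access(ip: str, content_id: str) -> bool:
    """Allow access unless the visitor's country is blocked for this content."""
    return country_of(ip) not in RESTRICTED.get(content_id, set())

print(may_access("198.51.100.42", "auction-1234"))  # False: request located in France
print(may_access("203.0.113.9", "auction-1234"))    # True: request located elsewhere
```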

Content control through search engines

The bridge between the end user and web content is usually a search engine, and filtering search results is therefore often used as a tool to prevent access to specific content. The risk of search results being filtered, however, does not come only from the governmental sphere; commercial interests may interfere as well, in more or less obvious and pervasive ways. Commentators have started to question the role of search engines (particularly Google, given its dominant position among users) in mediating access to information, and to warn about their power to influence users’ knowledge and preferences. The issue is increasingly attracting the attention of governments, which call for greater transparency from Internet companies regarding the algorithms employed in their search engines. German chancellor Angela Merkel spoke out about this risk, claiming that: 'Algorithms, when they are not transparent, can lead to a distortion of our perception, they can shrink our expanse of information.'

Web 2.0 challenge: users as contributors

With the development of Web 2.0 platforms – blogs, document‑sharing websites, forums, and virtual worlds – the difference between the user and the creator has blurred. Internet users can create large portions of web content, such as blog posts, videos, and photo galleries. Identifying, filtering, and labelling ‘improper’ websites is becoming a complex activity. While automatic filtering techniques for texts are well developed, automatic recognition, filtering, and labelling of visual content are still in the early development phase.

One approach, sometimes taken by governments in an attempt to manage user-generated content that they deem objectionable, is to completely block access to platforms such as YouTube and Twitter throughout the country, or even to cut Internet access altogether, hindering all communication on social network platforms (as was the case, for example, during some of the Arab Spring events). However, this can seriously infringe on the right to free speech, and undermines the potential of the Internet in other areas (e.g. as an educational resource).

As the debate on what can and cannot be published online matures, social media platforms themselves have started to formalise policies on where they draw the line between content that should and should not be tolerated. For example, Facebook's Statement of Rights and Responsibilities specifies: 'We can remove any content or information you post on Facebook if we believe that it violates this statement or our policies.' Yet the implementation of such policies sometimes leads to unintended consequences, with platforms removing legitimate content.

Automated content control

For Internet companies, it is often difficult to identify illegal content among the millions of content inputs on their platforms. One possible solution lies in artificial intelligence mechanisms that detect hate speech, verbal abuse, or online harassment. However, relying on machine learning to decide what constitutes hate speech raises many questions, such as whether such systems would be able to differentiate between hate speech and irony or sarcasm.
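The sketch below shows, in highly reduced form, how such a machine-learning classifier might be trained, assuming scikit-learn is available and using a tiny, hand-labelled toy dataset. Real moderation systems are trained on millions of reviewed examples and, as noted above, still struggle with context, irony, and sarcasm.

```python
# A minimal sketch of machine-learning-based content classification (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "I will hurt you if you post again",   # abusive
    "go away, nobody wants you here",      # abusive
    "thanks for sharing, very helpful",    # benign
    "great point, I fully agree",          # benign
]
train_labels = [1, 1, 0, 0]  # 1 = abusive, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# The classifier has no notion of context, so sarcasm such as
# "oh sure, what a lovely thing to say" can easily be mis-scored.
print(model.predict_proba(["nobody wants your posts here"])[0][1])  # probability of 'abusive'
```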

The national legal framework

The legal vacuum in the field of content policy gives governments a high level of discretion in deciding what content should be blocked. Since content policy is a sensitive issue for every society, the adoption of legal instruments is vital. National regulation in the field of content policy could bring a more predictable legal situation beneficial for the business sector, ensure better protection of citizens' human rights, and reduce the discretion that governments currently enjoy. However, as the border between justified content control and censorship is delicate and difficult to enshrine in legislation, this tension is increasingly being resolved in the courtroom, for example with regard to the role of social media outlets in terrorist activities.

International initiatives

In response to the increased sophistication with which terrorists manage their activities and promote their ideologies online, multilateral forums have started addressing ways to limit harmful content (e.g. the G7, the UN Security Council, and the United Nations Office on Drugs and Crime). At the regional level, the main initiatives have arisen in European countries with strong legislation in the field of hate speech, including anti-racism and anti-Semitism laws. European regional institutions have attempted to impose these rules on cyberspace. The primary legal instrument addressing the issue of content is the Council of Europe Additional Protocol to the Convention on Cybercrime (2003), concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems. On a more practical level, the EU adopted the European Strategy to Make the Internet a Better Place for Children in 2012.

The Organization for Security and Co-operation in Europe (OSCE) is also active in this field. Since 2003, it has organised a number of conferences and meetings with a particular focus on freedom of expression and the potential misuses of the Internet (e.g. racist, xenophobic, and anti-Semitic propaganda, and content related to violent extremism and radicalisation).

The role of intermediaries 

The private sector is playing an increasingly important role in content policy. Internet service providers (ISPs), as gateways to the Internet, are often made responsible for implementing content filtering. In addition, Internet companies (such as Facebook, Google, and Twitter) are becoming de facto content regulators. Google, for example, has had to decide on more than half a million requests for the removal of links from search results, based on the right to be forgotten. These companies are also increasingly involved in cooperative efforts with public authorities to combat illegal online content.

Instruments

Resolutions & Declarations

Wuzhen World Internet Conference Declaration (2015)
Universal Declaration of Human Rights (1948)

Resources

Articles

Eric Schmidt on How to Build a Better Web (2015)
The Digital Dictator's Dilemma: Internet Regulation and Political Control in Non-Democratic States (2014)
Internet Content Regulation in Liberal Democracies: A Literature Review (2013)
Trends in Transition from Classical Censorship to Internet Censorship: Selected Country Overviews (2012)
Policy and Regulatory Issues in the Mobile Internet (2011)
The Impact of Internet Content Regulation (2002)

Publications

Internet Governance Acronym Glossary (2015)
An Introduction to Internet Governance (2014)

Papers

Internet Fragmentation: An Overview (2016)

Reports

One Internet (2016)
Freedom of the Press 2016 (2016)
2016 Special 301 Report (2016)
2016 World Press Freedom Index (2016)
The 2016 National Trade Estimate Report on Foreign Trade Barriers (2016)
The Impact of Digital Content: Opportunities and Risks of Creating and Sharing Information Online (2016)
Content Removal Requests Report (2016)
Global Support for Principle of Free Expression, but Opposition to Some Forms of Speech (2015)
Freedom on the Net 2015 (2015)
Government Request Report (2015)

GIP event reports

Report for Violent Extremism Online – A Challenge to Peace and Security (2017)

Other resources

The Twitter Rules (2016)

Processes

IGF 2016 Report

 

One important message resulting from IGF 2016 was that the Internet needs to be preserved as a global resource available to all (Dynamic Coalition on Core Internet Values). It was, however, stressed that the global nature of the Internet could be undermined by certain content control policies – ranging from blocking of access to specific online content (Internet Fragmentation: Getting Next 4 Billion Online - WS37) to complete Internet shutdowns (Analyzing the Causes & Impacts of Internet Shutdowns - WS109).

Content control was also discussed in relation to its impact on freedom of expression and other human rights (Sex and Freedom of Expression Online - WS164). As was underlined in several sessions, delicate balances need to be achieved between protecting the public interest (a concept whose understanding varies across cultures) and preserving the right to freedom of expression.

Furthermore, some sessions explored who should bear responsibility for dealing with illegal or harmful online content: governments, or rather the intermediaries – such as Facebook and Twitter – whose platforms are used for dissemination?

IGF 2015 Report

 

Although not always mentioned explicitly, content policy is often embedded in discussions on human rights, liability of intermediaries, intellectual property, child safety, jurisdiction, and more. Last week’s discussions were once again a vivid example of the intersecting nature of content policy.

Several sessions addressed the need for content control in different cases: from fighting violence against women online, to protecting children and adolescents, and safeguarding LGBT rights. At the same time, the discussions recognised the need to safeguard freedom of expression and other rights.

 

Although there was general consensus on the need to protect vulnerable communities, the extent of content control was not always agreed on. For example, during the Best Practice Forum on Practices to Countering Abuse and Gender-Based Violence against Women Online, several panellists spoke of the difficulty of establishing strong legal mechanisms that do not cause over-censorship.

The workshop on Tech-related Gender Violence x Freedom of Expression (WS 196) explicitly dealt with the tension between gender protection and the right to free speech. At the other end of the spectrum, several sessions addressed cases in which Internet content is censored by governments to establish digital control over their citizens. For example, Information Controls in the Global South (WS 224) addressed the challenges faced by civil society to have a meaningful impact when faced with information censorship.

New areas in content policy are being explored. For example, the emerging issue of content quality control was discussed during Open Education Resources (WS 58). What happens to our digital assets after we pass away? Death and the Internet (WS 70) looked at the issue of digital legacies… with a touch of humour. In a hypothetical set-up, panellists played the role of an online user who died testate without a valid power of attorney; his family were suing for the right to access his data, while legal experts applied different laws to the scenario. Although future planning is a topic many avoid, the amount of personal data we leave behind merits an in-depth discussion about privacy, personal data, conflicting policies and regulations, jurisdiction, and the role of policymakers. It is expected that more discussions on digital legacies will take place, especially among the legal community and the industry.

With regard to the right to be forgotten (RTBF), last year’s Court of Justice of the European Union (CJEU) ruling (Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González) had far-reaching implications, and created a ripple effect across different jurisdictions.

One of the main issues is with regard to the terminology, as the RTBF can generate false reassurances that an individual’s past can be forgotten. Panellists in The ‘Right to be Forgotten’ Rulings and their Implications (WS 31) suggested that the right be renamed to ‘the right to be de-indexed’. The main issues were reiterated in Cases on the Right to be Forgotten, What Have we Learned? (WS 142): the term is problematic, and policymakers and the judiciary need a better understanding of technology. The process of de-listing imposes an unnecessary burden on online media houses to continually update their published stories. The process is also likely to be abused in jurisdictions where the take-down notice system is implemented.

Both workshops discussed the risk that the RTBF affects other human rights, including the right to memory and the flow of ideas, the right to know the truth, and freedom of the press. These rights, essential to democracy, could be threatened by the RTBF. In fact, the representative of the United Nations Commission for Human Rights commented that the RTBF contrasts with the right to know the truth, which is a distinct right. The erasure of information could impact the right to truth, and thus create a need for due process.

Among the practical implications is the fact that different jurisdictions have ruled or legislated on the RTBF. These include a judgment by the Constitutional Court of Colombia; new legislation in Chile, Nicaragua, and Russia; and data authorities’ rulings on search engines. The CJEU ruling has therefore created a ripple effect, extending the European cyberlaw footprint to a global level.

 

 
