Internet for Trust: Regulating Digital Platforms for Information as a Public Good

The United Nations Educational, Scientific and Cultural Organization (UNESCO) will host a global multistakeholder conference on the topic of regulating digital platforms. The event will bring together UN entities, other intergovernmental organisations, ministers, regulators, judicial actors, the private sector, civil society, academia, and the technical communities to discuss challenges and ways forward in ensuring that regulatory approaches targeting digital platforms support freedom of expression and the availability of accurate and reliable information in the public sphere.

The conference will feature debates and consultations on the draft Guidance on regulating digital platforms: a multistakeholder approach, issued by UNESCO for public consultation in December 2022. The guidance is intended for actors seeking to regulate, co-regulate, and self-regulate digital platforms, and aims to assist them in developing approaches that support freedom of expression and the availability of accurate and reliable information in the public sphere, while dealing with content that potentially damages human rights and democracy.

Registration for the event is open until 17 February 2023. More details are available on the conference website.

China’s regulation on deepfakes to enter into force in January 2023

China’s regulation on deepfakes will come into force on 10 January 2023. Deepfakes are images or videos synthetically generated or altered using artificial intelligence. The technology can be used, for example, to alter an existing video so that a person appears to deliver realistic but fabricated speech.

Finalised at the end of 2022, the Provisions on the Administration of Deep Synthesis of Internet-based Information Services require providers of deep synthesis services to, among other things:

  • Strengthen data management by taking necessary measures for personal data protection according to the existing laws.
  • Establish guidelines, criteria, and processes for recognising false or damaging information, and devise mechanisms to deal with users who produce false or damaging material using deep synthesis technology.
  • Periodically review the algorithms used, and conduct security assessments when providing models, templates, and other tools with functions for editing faces, voices, and other biometric information, or objects, scenes, and other non-biometric information that may involve national security, the national image, national interests, or public interests.

Twitter dissolves its Trust and Safety Council

Twitter has dissolved its Trust and Safety Council, shortly before the council was scheduled to meet with company representatives.

The Trust and Safety Council, formed in 2016, was an advisory group of around 100 independent civil society, human rights, and other organisations that advised the company on addressing hate speech, child exploitation, self-harm, and other types of harmful content on Twitter.

Members of the council received an email informing them that the council was no longer ‘the best structure’ to bring ‘external insights into our product and policy development work’.

Russia blocks 14,800 websites in a single week

Roskomsvoboda, an NGO that focuses on protecting internet users’ digital rights, claims that almost 15,000 websites were blocked in Russia in a week.

Their research indicates that Russian authorities blocked 14,800 new websites between 5 and 11 December; the NGO asserts that most blockages (60%) resulted from court decisions. Additionally, the NGO calculated that throughout 2022, Russia’s media watchdog Roskomnadzor blocked an average of 4,900 websites per week.

The last such spike in blocks was seen in April 2021, during demonstrations in support of the politician Alexei Navalny, when 18,000 sites and pages were blocked, Roskomsvoboda noted.

Meta partners with Nigerian organisations to combat disinformation ahead of 2023 elections

Meta announced a partnership with the Independent National Electoral Commission (INEC), civil society groups, and local radio stations to combat the spread of disinformation and protect the integrity of Nigeria’s 2023 general elections. The approach has also been informed by conversations with human rights groups, NGOs, local civil society organisations, regional experts, and local election authorities, with the aim of making it easier for audiences to distinguish trusted content from dubious claims. For instance, the official Facebook page on the 2023 elections will carry a blue tick confirming the authenticity of the results posted on INEC’s official website. Additionally, Meta has quadrupled the size of its global safety and security teams to about 40,000 people, including over 15,000 content reviewers covering every major time zone. Collectively, these reviewers can review content in more than 70 languages, including Yoruba, Igbo, and Hausa.

Apple will not scan iCloud photos for CSAM

Apple has announced that it is abandoning its plans to scan photos in users’ iCloud accounts for child sexual abuse material (CSAM). Apple had paused the rollout of the feature in September 2021, following criticism from civil society and expert communities. The company will now focus on its Communication Safety feature, announced in August 2021, which allows parents and caregivers to opt into the protections through family iCloud accounts. Apple is also developing a new feature to detect nudity in videos sent through Messages, and will expand this to its other communication applications.

TikTok sued in a US state for security and safety violations

Indiana’s Attorney General has filed a lawsuit against TikTok for violating state consumer protection laws. The lawsuit alleges that the social media company failed to disclose that ByteDance, the Chinese company that owns TikTok, has access to sensitive consumer information. A second complaint claims that the company exposes children to sexual and substance-related content while misleading users with its 12+ age rating on the App Store and Google Play. Indiana seeks penalties of up to US$5,000 per violation and asks the Indiana Superior Court to order the company to stop making false and misleading representations to its users.

Russia bans LGBTQ ‘propaganda’, including on social media

A new Russian law bans ‘propaganda’ about ‘nontraditional sexual relations’ in the media, in advertising, and on social media. It extends the ban on ‘propaganda of nontraditional sexual relations’ among minors, in place since 2013, to adults: those found guilty face steep fines or the suspension of business activities, while foreigners face expulsion from the country. The law also prohibits the issuance of a rental or streaming certificate for films promoting nontraditional sexual relations and preferences.
Fines for such ‘propaganda’ run up to about US$6,400 for citizens and US$80,000 for organisations. Roskomnadzor, Russia’s internet regulator, is tasked with enforcing these rules.

The Federal Election Commission passes new digital ad transparency rule

The US Federal Election Commission (FEC) passed a new ad transparency rule enforcing ‘paid for by’ disclaimers on most paid political promotions online. The initial proposal was broader in scope, covering ‘communications placed or promoted for a fee on another person’s website, digital device, application, service, or advertising platform’. The proposal was criticised as ‘burdensome’ and as ‘expanding’ the FEC’s regulatory authority over online speech. The adopted version is almost identical, except that ‘or promoted’ has been removed from that sentence. This small change in wording significantly reduces the scope of the regulation, exempting those who promote digital political ads from disclosing whether they are being paid to do so.

New amendments introduced to UK Online Safety Bill

The UK Government has introduced amendments to the Online Safety Bill addressing the removal of online content. The new version of the Bill will not define types of objectionable content; rather, it will offer users a ‘triple shield’ of protection. Online platforms will be required to remove illegal content and content violating their community guidelines, and to give adult users greater control over the content they see. Online platforms will also be expected to be more transparent about online risks to children and to demonstrate how they enforce age verification measures. Another set of amendments aims to protect women and girls online, listing controlling or coercive behaviour as an offence under the Bill and requiring online platforms to be more proactive in tackling such content. The Bill is scheduled to return to the UK Parliament next week, with the first amendments tabled in the Commons for Report Stage on 5 December. Further amendments are expected at later stages of the legislative process.