Finland’s Institute for Health and Welfare withdraws from Twitter due to disinformation

The Finnish Institute for Health and Welfare (THL) has temporarily withdrawn from Twitter due to the large amount of disinformation and inappropriate remarks in the replies to its posts. Marjo Loisa, the director of communications at THL, explained that although the platform has always been prone to spreading disinformation, the situation worsened during the COVID-19 pandemic, especially as THL tweeted about the virus and vaccines. Consequently, the institute decided to leave Twitter because it currently offers few benefits as an official information channel.

Twitter urged to change policies to protect right to information and press freedom

Reporters Without Borders (RSF) and the Committee to Protect Journalists (CPJ) have urged Twitter to revise its policies in order to protect the right to information and uphold press freedom. The organisations sent a joint letter to Twitter’s management team expressing their concern about recent developments regarding the company’s policies and actions, noting that these ‘contribute to a hostile environment for journalists and threaten media freedom more broadly’.

The letter also outlines steps Twitter can take to ‘regain integrity and uphold the basic human right to information’. For instance, the company is invited to implement transparent corporate policies aligned with the UN Guiding Principles on Business and Human Rights, to preserve and update its annual transparency report, and to reinstate the Trust and Safety Council.

Turkish court releases first journalist jailed under new disinformation law

A Turkish court ordered the release of a journalist who was detained under the country’s new disinformation law. Sinan Aygul became the first journalist to be jailed pending trial under the law, approved by the Turkish parliament in October 2022. Aygul, a journalist in the Bitlis province, wrote on Twitter that a 14-year-old girl had allegedly been sexually abused by police officers and soldiers, but later apologised and retracted the posts because the story had not been confirmed with the authorities. He was nevertheless prosecuted and arrested.

Turkish authorities argue that the disinformation law – which mandates sentences of up to three years in prison for the spread of false or misleading information – is aimed at protecting the public. Critics, however, are concerned that the law can be abused to stifle dissent.

Internet for Trust: Regulating Digital Platforms for Information as a Public Good

The United Nations Educational, Scientific and Cultural Organization (UNESCO) will host a global multistakeholder conference on the topic of regulating digital platforms. The event will bring together UN entities, other intergovernmental organisations, ministers, regulators, judicial actors, the private sector, civil society, academia, and the technical communities to discuss challenges and ways forward in ensuring that regulatory approaches targeting digital platforms support freedom of expression and the availability of accurate and reliable information in the public sphere.

The conference will feature debates and consultations on the draft Guidance on regulating digital platforms: a multistakeholder approach, issued by UNESCO for public consultation in December 2022. The guidance is addressed to actors seeking to regulate, co-regulate, and self-regulate digital platforms, and aims to assist them in developing approaches that support freedom of expression and the availability of accurate and reliable information in the public sphere, while dealing with content that potentially damages human rights and democracy.

Registration for the event is open until 17 February 2023. More details are available on the conference website.

China’s regulation on deepfakes to enter into force in January 2023

China’s regulation on deepfakes will come into force on 10 January 2023. Deepfakes are synthetically generated or altered images or videos built using artificial intelligence. This technology can be used to alter an existing video, for example, by creating realistic fake speech.

Finalised at the end of 2022, the Provisions on the Administration of Deep Synthesis of Internet-based Information Services require providers of deep synthesis services, among other things, to:

  • Strengthen data management by taking necessary measures for personal data protection according to the existing laws.
  • Establish guidelines, criteria, and processes for recognising false or damaging information, and devise mechanisms to deal with users who produce false or damaging material using deep synthesis technology.
  • Periodically review the algorithms used, and conduct security assessments when providing models, templates, and other tools capable of editing faces, voices, and other biometric information, or objects, scenes, and other non-biometric information, where these may involve national security, the national image, national interests, or public interests.

Twitter dissolves its Trust and Safety Council

Twitter dissolved its Trust and Safety Council shortly before the council was scheduled to meet with company representatives.

The Trust and Safety Council, formed in 2016, was an advisory group of around 100 independent civil, human rights, and other organisations working to advise on issues related to addressing hate speech, child exploitation, self-harm, and other types of harmful content on Twitter.

Members of the council received an email informing them that the council was no longer ‘the best structure’ to bring ‘external insights into our product and policy development work’.

Russia blocks 14,800 websites in a single week

Roskomsvoboda, an NGO that focuses on protecting internet users’ digital rights, claims that almost 15,000 websites were blocked in Russia in a week.

The NGO’s research indicates that Russian authorities blocked 14,800 new websites between 5 and 11 December, with most blocks (60%) resulting from court decisions. Additionally, the NGO calculated that throughout 2022, Russia’s media watchdog Roskomnadzor blocked an average of 4,900 websites per week.

The last such spike in blocks was seen in April 2021, during demonstrations in support of the politician Alexei Navalny, when 18,000 sites and pages were blocked, Roskomsvoboda noted.

Meta partners with Nigerian organisations to combat disinformation ahead of 2023 elections

Meta announced a partnership with the Independent National Elections Commission (INEC), civil society groups, and local radio stations to combat the spread of disinformation and protect the integrity of Nigeria’s 2023 general elections. The approach has also been informed by conversations with human rights groups, NGOs, local civil society organisations, regional experts, and local election authorities, with the aim of making it easier for audiences to distinguish trusted content from dubious claims. For instance, the official Facebook page on the 2023 elections will carry a blue tick confirming the authenticity of the results posted on the INEC official website. Additionally, Meta has quadrupled the size of its global teams working on safety and security to about 40,000 people, including over 15,000 content reviewers covering every major time zone. Collectively, these reviewers are able to review content in more than 70 languages, including Yoruba, Igbo, and Hausa.

Apple will not scan iCloud photos for CSAM

Apple has announced that it has withdrawn its plans to scan photos stored in users’ iCloud for child sexual abuse material (CSAM). Following criticism from civil society and expert communities, Apple had already paused the rollout of the feature in September 2021. The company will now focus on its Communication Safety feature, announced in August 2021, which allows parents and caregivers to opt children into protections in Messages. Apple is also developing a new feature to detect nudity in videos sent through Messages and will expand this to its other communication applications.

TikTok sued in a US state for security and safety violations

Indiana’s Attorney General filed a lawsuit against TikTok for violating state consumer protection laws. The lawsuit alleges that the social media company failed to disclose that ByteDance, the Chinese company that owns TikTok, has access to sensitive consumer information. A second complaint claims that the company exposes children to sexual and substance-related content while misleading users with its 12+ age rating on the App Store and Google Play. Indiana seeks penalties of up to US$5,000 per violation and asks the Indiana Superior Court to order the company to stop making false and misleading representations to its users.