Updates


Internet companies stop providing services to 8chan after deadly El Paso shooting

The cybersecurity company Cloudflare and the domain name registrar Tucows have terminated 8chan as a customer after the shooting in El Paso. 8chan, an imageboard website of user-created message boards, has repeatedly been used to spread extremist content. Days before the deadly shooting in El Paso, Texas, the perpetrator of the massacre had posted his manifesto on the platform; the same happened before the Christchurch mosque and Poway synagogue shootings. 8chan maintains that each board is moderated by its owner, with no involvement from the website administration, and that the site therefore bears no liability for illegal content under US law. Cloudflare stated that “the action we take today won’t fix hate online. It will almost certainly not even remove 8chan from the Internet. But it’s the right thing to do”. Following Cloudflare’s and Tucows’ decisions, 8chan moved its services to BitMitigate, a provider that regards freedom of speech as a fundamental right in American society.

AI ‘is not a silver bullet’ for moderating online content, report stresses

A report on the 'Use of AI in online content moderation', prepared by Cambridge Consultants and commissioned by the UK Office of Communications (Ofcom), concludes that artificial intelligence (AI) 'shows promise in moderating online content, but raises some issues'. On the one hand, the report notes that AI can have a significant impact on the content moderation workflow: the technology can improve the pre-moderation stage by flagging content for review by humans, thus increasing moderation accuracy, and it can be used to synthesise training data to improve pre-moderation performance. AI can also assist human moderators by increasing their productivity and by reducing the potentially harmful effects of content moderation on individual moderators. On the other hand, the report stresses that AI 'is not a silver bullet': it raises issues such as unintentional bias and a lack of transparency in how decisions are made, and 'even if it can be successfully applied, there will be weaknesses which will be exploited by others to subvert the moderation system'. The report also highlights several policy implications, noting, for example, that the availability of online content moderation services from third-party providers should be encouraged, and that a better understanding is needed of the performance of AI-based content moderation by individual platforms and moderation services.
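To illustrate the kind of pre-moderation workflow the report describes, the sketch below routes content either to automatic action or to a human review queue depending on a classifier's confidence. It is a minimal, hypothetical example: the classifier, thresholds, and labels are placeholders, not taken from the report or from any platform's actual system.

```python
# Hypothetical pre-moderation triage: a classifier scores incoming posts and
# only the uncertain middle band is queued for human review. The classifier,
# thresholds, and labels are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class ModerationDecision:
    action: str   # 'approve', 'remove', or 'human_review'
    score: float  # model's estimated probability that the content is harmful

def score_content(text: str) -> float:
    """Stand-in for a trained classifier; returns an estimate of P(harmful)."""
    flagged_terms = {"attack", "kill", "destroy"}  # toy heuristic only
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.4 * hits)

def triage(text: str, remove_above: float = 0.9, approve_below: float = 0.2) -> ModerationDecision:
    score = score_content(text)
    if score >= remove_above:
        return ModerationDecision("remove", score)    # confident: act automatically
    if score <= approve_below:
        return ModerationDecision("approve", score)   # confident: publish
    return ModerationDecision("human_review", score)  # uncertain: queue for a moderator

print(triage("lovely weather today"))        # -> approve
print(triage("I will attack and destroy"))   # -> human_review
```

The interesting band is the middle one: content the model cannot classify confidently goes to humans, which matches the report's framing of AI as a productivity aid for moderators rather than a replacement.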

French National Assembly approves law to fight online hate speech

The French National Assembly approved an act against online hate speech. The act forces online platforms and search engines, such as Facebook and Google, to remove, within 24 hours, hateful content targeting ethnic origin, religion, sexual orientation, or disability; the deputies also added incitement to terrorism and child pornography to this list. An intermediary that refuses to remove illegal content can be ordered to pay up to 4% of its annual global profit. The act also requires platforms to implement a reporting button, common to all intermediaries, that will allow users to report any hateful content. The Superior Audiovisual Council (CSA), France's audiovisual regulator, will oversee online intermediaries' enforcement of the act.

Instagram launches new AI-powered feature against online bullying

The social networking service Instagram announced new measures aimed at fighting online bullying. A new feature powered by artificial intelligence (AI) notifies users if the comments they are about to post may be considered offensive, encouraging people to think twice before posting. According to Instagram, testing of the new tool showed that some users do reconsider their comments once they receive the notification, thus sparing the intended recipients harmful content.
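As a rough sketch of how such a pre-posting nudge could work (the classifier, threshold, and function names below are hypothetical illustrations, not Instagram's actual implementation):

```python
# Hypothetical pre-posting nudge: score a draft comment with a toxicity model
# and, above a threshold, warn the author instead of posting immediately.
# The classifier, threshold, and function names are illustrative placeholders.

def toxicity_score(comment: str) -> float:
    """Stand-in for a trained offensive-language classifier; returns P(offensive)."""
    offensive_terms = {"idiot", "loser", "stupid"}  # toy heuristic only
    return 1.0 if any(term in comment.lower() for term in offensive_terms) else 0.1

def submit_comment(comment: str, confirmed: bool = False, warn_above: float = 0.8) -> str:
    """Post the comment, or return a warning so the app can ask the author to reconsider."""
    if toxicity_score(comment) >= warn_above and not confirmed:
        return "warned: this comment may be considered offensive"
    return "posted"

print(submit_comment("you are an idiot"))                  # author is nudged to rethink
print(submit_comment("you are an idiot", confirmed=True))  # author insists; comment goes through
print(submit_comment("great photo!"))                      # inoffensive; posted directly
```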

TikTok under investigation for its handling of children's data

The Chinese video-sharing platform TikTok is being investigated in the UK over how it handles children's personal data and the security measures it takes for children on its platform. The investigation, which began in February this year, was triggered by a fine imposed on the company by the US Federal Trade Commission (FTC) for similar violations.

Addressing a parliamentary committee, Information Commissioner Elizabeth Denham said that the company was potentially violating the General Data Protection Regulation (GDPR), which “requires the company to provide different services and different protections for children”.

Civil rights audit concludes that Facebook’s policy against white supremacy is insufficient

Since March 2019, Facebook has enforced a policy banning white nationalist content from the platform. An external audit, conducted by former American Civil Liberties Union (ACLU) director Laura Murphy, found that Facebook's current white nationalism policy is too limited, because it bans only content that explicitly uses the terms 'white nationalism' or 'white separatism'. As a result, content that expressly supports white nationalist ideology without using those terms has flourished on the platform. Facebook responded to the audit by formalising a civil rights team at the company, which will identify hate slogans and symbols connected to white nationalism and white separatism. Regardless of Facebook's content moderation efforts, online intermediaries remain exempt from liability for illegal third-party content in the United States.

Tech companies lobby against UK proposal to enforce duty of care

The Internet Association (IA), an American lobbying group whose members include Google, Facebook, Microsoft, and Twitter, has publicly attacked the UK online harms proposal, which would impose on online platforms a duty of care to protect their users. The UK government published the regulatory proposal for consultation; it includes a statutory duty of care on online businesses for their users' safety, the appointment of an independent regulator to oversee and enforce the regulation, and extraterritorial application of the statute. The proposal was open for public consultation until 1 July.

G20 adopts statement on preventing exploitation of the Internet for terrorism and violent extremism conducive to terrorism (VECT)

The G20 leaders adopted the 'G20 Osaka leaders' statement on preventing exploitation of the Internet for terrorism and violent extremism conducive to terrorism (VECT)' after the Osaka Summit. The statement raises the bar of expectations for online platforms to protect their users from terrorist and VECT exploitation of the Internet, by not facilitating terrorism and VECT, and by preventing such content from being streamed, uploaded, or re-uploaded. Where such content appears, online platforms should address it in a timely manner to prevent its proliferation. G20 countries also reaffirmed their commitment to protect their citizens from terrorist and VECT exploitation of the Internet, including through international fora and initiatives such as the Global Internet Forum to Counter Terrorism (GIFCT). The statement also highlighted the need to counter terrorist propaganda with positive narratives. In relation to cybersecurity, the leaders' declaration called for enhancing cyber resilience, and for addressing security gaps and vulnerabilities.

Facebook accused of leaving 'children broken as collateral damage' by UK inquiry

During hearings of the Independent Inquiry into Child Sexual Abuse (IICSA), which took evidence from online companies such as Facebook, Apple, Microsoft, and Google on the initiatives they have taken to combat child abuse online, Facebook was accused of leaving 'broken children as collateral damage' in pursuit of its commercial aims.

Barrister William Chapman, representing victims of abuse, argued that the social media companies' business models kept them from taking adequate measures to prevent paedophiles from reaching children online, and that the time had come for these platforms to be 'fundamentally redesigned'. Recommendations shared by the victims before the inquiry included requiring the tech companies to pay compensation to children abused through their services and banning posing as a child online without reasonable excuse.
