Creating a safer internet while protecting human rights

1 Dec 2022 12:30h - 13:30h


Event report

During the pandemic, the number of people online increased and new safety issues emerged. The main goal of this discussion was to consider ways to make the internet safer and to advance digital rights across the globe. Laws that protect freedom of speech and expression, the right to privacy, and encryption are key to achieving this.

One issue is the gap between how laws are applied in practice and international human rights standards. Internet shutdowns do not happen out of context: they occur before elections, in the midst of conflicts, and when people are protesting in the streets.

Women in marginalised communities, LGBTQ+ groups, ethnic minorities, and others excluded from traditional society are the ones who use online communication to advocate for their rights. And yet, they are often the ones most at risk on social media platforms. More has to be done for these marginalised groups: without human rights there can be no safer internet.

It was noted that smart regulation of platforms is needed to make them more transparent and accountable. Governments must not use censorship as a tool for managing safety online.

When it comes to AI, it was stated that it should be used as a tool where it can make processes more efficient and effective. For example, AI can speed up the moderation and removal of unwanted content, which helps platform policies to be enforced more consistently. This is fundamental for the protection of human rights, especially freedom of speech. To realise these benefits, the right data, subject matter experts, and training sets are needed, as the sketch below illustrates.
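To make the role of training data concrete, the following is a minimal, hypothetical sketch in Python (using scikit-learn) of the kind of classifier-based triage that automated moderation builds on. The example texts, labels, and threshold are illustrative assumptions, not any platform's actual system.

# Hypothetical sketch of classifier-based content triage.
# All data, labels, and thresholds here are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Tiny toy training set; real systems need large, representative,
# expert-labelled datasets, as the panel stressed.
texts = [
    "Thanks for sharing, this was really helpful",
    "I will hurt you if you post here again",
    "Great article, very informative",
    "People like you should be driven off this site",
]
labels = [0, 1, 0, 1]  # 0 = benign, 1 = potentially harmful

model = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression()),
])
model.fit(texts, labels)

# Score new content; items above the threshold are queued for human
# review rather than removed automatically.
score = model.predict_proba(["I will hurt you"])[0][1]
if score > 0.5:
    print(f"flag for human review (score={score:.2f})")

Routing flagged items to human reviewers, rather than deleting them automatically, reflects the session's concern that automated removal alone risks over-censoring legitimate speech.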

Companies will be obliged by online safety regulations, such as the EU Digital Services Act and Australia's Online Safety Act, to consistently enforce their terms of service. They will not be able to arbitrarily remove content, which is a step towards protecting freedom of expression. Another special obligation would be the protection of journalistic content and content of democratic importance.

Today, companies that run online platforms use AI to automate content removal, given the sheer volume of content online.

Another part of implementing AI is ensuring that the people who code and develop it represent their geographical regions more proportionately, which helps diversify the AI developer community.

The session noted that responsibility for the harm we see online lies with both governments and companies. Civil society organisations, individuals, and even journalists are quite successful in flagging harmful content for companies, helping them moderate it and ensure it does not spread on online platforms. However, it is important for companies to lead and take responsibility, as the harmful content is hosted on their platforms.

By Anamarija Pavlovic

 

The session in keywords

[Word cloud of session keywords: WS395, Creating a safer internet, IGF 2022]