Platform Responsibility: Automated Decision Making and Artificial Intelligence (DCPR)


[Read more session reports and live updates from the 13th Internet Governance Forum]

The session, organised by the Dynamic Coalition on Platform Responsibility (DCPR), focused on artificial intelligence (AI), automated decision-making, and online dispute resolution (ODR). Stressing the role and responsibility of online platforms in the context of private decision-making, the session assessed the safeguards available for the protection of fundamental rights in automated decision-making. Along those lines, the concepts of effectiveness, fairness, and due diligence featured in the discussion of the application of AI to automated decision-making.

The session was moderated by Mr Nicolo Zingales, Sussex University, and Mr Luca Belli, Fundação Getulio Vargas (FGV), who introduced the report ‘Platform regulations: how platforms are regulated and how they regulate us’. The report aims to collect and identify best practices on how platforms’ terms of service can resolve disputes while respecting human rights, in particular the fundamental right to due process, which constitutes a positive obligation for states and a responsibility to protect for businesses. In this regard, two core pillars need to be further explored: alternative dispute resolution and AI. While alternative dispute resolution represents the classic way of settling disputes, the application of AI to this task is something that still needs to be continuously developed.

Mr Moez Chakchouk, UNESCO, stressed that AI is a major interest of UNESCO’s Communication and Information (CI) programme, which coordinates across different sectors in order to engage a range of stakeholders on the topic. Awareness needs to be raised, but public organisations cannot succeed on their own: the technical community has to be engaged in UNESCO’s analytical and normative efforts on artificial intelligence. In line with UNESCO’s mandate, the goal is to reinforce human rights online through digital skills, media and information literacy, and the use of AI. He concluded by highlighting the asymmetry between the rapid evolution of AI and the limited awareness of the related policy challenges, especially in developing countries.

Ms Nathalie Marechal, Ranking Digital Rights, addressed the topic of targeted advertising and automated decision-making, proposing a set of standards that information and communication technology (ICT) companies should follow. She underlined that general artificial intelligence does not exist: there are many kinds of narrow artificial intelligence, mostly used as analytic tools, which should not completely replace human decision-making. When AI is allowed to make decisions, it is crucial to understand that the humans who designed and deployed the system have delegated their decision-making responsibility to it; those people should still be the ones held accountable.

Mr Nic Suzor, Queensland University of Technology, stressed the need to reimagine due process and accountability in order to track and hold accountable massive systems. While the traditional way of ensuring due process is through an expensive court or judicial system, day-to-day governance supported by AI can assist large-scale decision-making. AI systems can be trained on past data to be remarkably consistent with decisions seen before; however, they cannot account for context that is not reducible to the training data. He then turned to constitutionalisation: there is a need to reimagine the system of holding power accountable. The state is not the only regulator; the pathway to decentralised accountability implies that decentralised systems should be able to hold themselves accountable, and a solution to this challenge still needs to be found.

Ms Marta Cantero, University of Helsinki, focused on procedural aspects of dispute resolution in the context of digital platforms, in particular effectiveness and fairness. Effectiveness is defined by the meaningful involvement of the parties in the dispute resolution mechanism, so that the resolution is effective and does not remain unimplemented. Fairness, on the other hand, requires that a procedure respect certain basic rules of due process. These are among the concepts that feature in the best practices developed around the right to an effective remedy. Focusing on content removal, she highlighted the importance of the human element, concluding that requests should only be acted upon after an internal human review, as a means of ensuring trust and human accountability in content removal and account deactivation.


By Stefania Grottola