Making Artificial Intelligence work for Equity and Social Justice (WS129)

Session: WS129 

20 Dec 2017 - 15:00 to 16:30

#IGF2017, #WS129



The moderator, Mr Parminder Jeet Singh, Executive Director of IT for Change at Just Net Coalition, framed the session by pointing out that the discussion focused on artificial intelligence (AI) as a social construct rather than as a technology. The panel also approached social justice as a structural concern. As a whole, the panel pursued the argument that equity and social justice need to be built into AI, instead of being considered separately.

The first speaker, Mr J Carlos Lara, Policy Manager at Derechos Digitales, explained that his organisation aims to defend human rights as they relate to the use of technology, for example, by encouraging the use of technology that is respectful of human rights. Looking at Latin America, he argued that there is a disconnect between development in the region and the AI technologies that have already been set up, in their most relevant features, for use and implementation. These do not necessarily consider the specific concerns of the region. He also expressed scepticism regarding ‘techno-solutionism’, which relies on technology to solve problems. He stressed that social problems have social, not technical, solutions. It is important not only to aim for better technologies, but to try to influence their development and deployment. This is a matter of participation and citizenship.

Mr Norbert Bollow, President of Digital Gesellschaft and Co-convener of Just Net Coalition, reminded the audience that AI is simply something that can recognise patterns and generate human-readable output based on those patterns. He argued that AI systems have many different components, but highlighted two in particular. First, there are algorithmic components that are comparable to traditional programs. They are based on the programmers’ understanding of human patterns and are focused on recognising them and acting accordingly. They work in a way very similar to how humans think consciously, reason, and doubt. Second, there are neural networks, which are closer to unconscious human thought. A neural network goes beyond the patterns that a human can recognise. For this type of algorithm, large amounts of data and a clear optimisation goal are needed. This also means that it can only work with things that can be expressed as numbers. Bollow cautioned that you cannot optimise social good, ‘because you cannot put that into a number’.

Ms Mishi Choudhary, Software Freedom Law Centre of India, argued that society is shifting permanently due to AI and that such shifts are no longer the stuff of science fiction. AI challenges the prospects not only of unskilled labour but also of experts, and it changes democracy. Looking at the developing world, she identified a great hope that new technology will allow countries to leapfrog stages of development. Yet, she pointed out that social consequences are often not considered and that a technological optimism predominates. She cautioned that better AI, a call often heard, does not necessarily produce better decisions; those also depend on the quality of the underlying datasets. She argued that we need greater transparency in how companies use our data and how algorithms operate. When developing AI, the crucial task is to reflect a rich portrait of humanity and to make sure that this diversity is reflected in datasets and algorithms.

Mr Preetam Maloor, Strategy and Policy Advisor in the Corporate Strategy Division of the International Telecommunication Union General Secretariat, highlighted a number of challenges that need to be addressed, including algorithmic transparency, data challenges (especially data security), and socio-economic transformation. He highlighted three angles that, from his perspective, are worth pursuing:

a) a structured response from the UN (such as providing a platform for global stakeholder dialogue, like the ‘AI for Good’ summit that took place in June 2017, inaugurating a panel of experts, or using inter-agency mechanisms),
b) research on the impact of AI on current frameworks and social issues (such as jobs), and
c) capacity building efforts related to the fair, equitable, and non-discriminatory distribution of AI.

Ms Malavika Jayaram, Executive Director of the Digital Asia Hub and faculty associate at the Berkman Klein Center for Internet & Society, reminded us that despite all the criticism, we should not forget the potential of AI for doing social good. She highlighted that concerns about discrimination apply to the online (AI) as well as offline worlds. She argued that we should be more specific and clearly analyse which social questions are solvable through AI. She wondered how fairness could even be encoded when there is a multiplicity of definitions and an absence of agreement. We should also think more specifically about what AI can add to and subtract from social issues. She concluded with a quote from science fiction author William Gibson, ’the future is already here — it's just not very evenly distributed’, and combined this with a call for equal access to technology and for not placing the burden of adaptation on the already marginalised.

By Katharina E Höne
