Building Blocks of Trust for a Sustainable, Evolving Internet (OF17)

Session: OF17 

21 Dec 2017 - 09:00 to 10:00

#IGF2017, #OF17

Report

[Read more session reports and live updates from the 12th Internet Governance Forum]

In this session, panellists addressed the topic of trust in technological developments like the Internet and artificial intelligence (AI). Moderator Mr Greg Shannon, Chief Scientist, CERT.org, introduced his co-moderator and fellow speaker, Ms Ichrak Mars, General Secretary of the Institute of Electrical and Electronics Engineers' Special Interest Group on Humanitarian Technology in Tunisia (IEEE SIGHT).

Mars posited that, in our everyday lives, ‘we place a tremendous amount of trust in technology, people and institutions’, like our alarms or weather forecasts. Likewise, we put similar trust in our online activities. This amounts to a ‘mechanical’ trust because, as with our cars, we automatically trust machines. Yet, Mars expressed her concern with how companies are collecting and applying our online data, to an extent of which many users are unaware.

Ms Danit Gal, Yenching Scholar, Peking University, offered a view of trust in technology in China. She said there is a strong incentive for the government to cultivate mutual trust with its citizens through social interaction. Even ideas like the social credit system are meaningful tools to cultivate it, because they bring the government closer to its enormous population, helping to ‘keep everyone’, government and citizens, ‘in check’. In this sense, Chinese tech companies have a pivotal role, developing the technologies that will enable these interactions, such as AI and blockchain.

On the topic of trust between humans and robots, Ms Arisa Ema, Assistant Professor, Tokyo University, suggested that the Japanese are more concerned than Europeans about the coexistence between the two beings. To them, robots are to be considered our partners, not just tools. She offered two examples. First, a robot-operated hotel, where robots substitute physical and emotional labour, challenging the concept of hospitality. Then, a minimally designed waste collector, which has wheels but no arms, compelling humans to collaborate to accomplish the task. Many companies think that creating trustworthy robots is paramount to their acceptance in society. Yet, the adorable appearance and behaviour of some machines may disguise more nefarious intents, such as personal data collection.

Ms Marina Ruggeri, Full Professor of Telecommunications Engineering, University of Rome, said trust is not a theoretical concept. After chairing IEEE’s technical activities, she realised that trust exists automatically when there is content. When content is lacking, we start asking whether the activities we engage in are trustworthy. Ruggeri then challenged the audience to connect ‘trust’ and ‘content’. If the Internet is an outlet for content, trust should not be a serious problem. Responsibility and human-centricity are essential for building online trust. Transparency, then, becomes almost automatic through the provision of that content. Trust, nonetheless, can only be addressed if we also consider matters of ethics and privacy, and ICT professionals can show other social groups how to do it. To conclude, Ruggeri expressed her optimism in the future of the Internet and in the role of ‘generation Z’ to lead this process.

Lastly, Shannon highlighted the importance of vulnerability as a definitional aspect of trust, as it concerns ‘willingly making yourself vulnerable to another, and trusting that they will not exploit that’. The opportunity of technology is to facilitate this, providing mechanisms to enable security, privacy, resilience, and accountability. As chief scientist for CERT.org, he sees that Computer Emergency Response Teams (CERTs) embody a great deal of trust, insofar as their work requires them to build relations with organisations in compromised positions, while also being transparent about their activities. Concerning AI, there are two related points to consider: the notions of user literacy and explainable AI, since users want to understand both how to best operate machines and the functioning of their decision-making processes.

After the panel, Mars opened the floor for questions, which covered topics like how ‘fake news’ affects trust, how to strike a balance between the benefits of AI and issues of ethics and trust, what the role of each social sector would be in building an Internet with greater trust levels, and whether the focus should instead be on the trustworthiness of those who provide a technology.

By Guilherme Cooper Vicente

 
