AI and the Future of Diplomacy: What's in Store?

13 Nov 2018 15:00h - 16:30h

Event report

[Read more session reports and live updates from the 13th Internet Governance Forum]

Mr Jorge Cancio, Swiss Federal Office of Communications (OFCOM), opened the session by noting that artificial intelligence (AI) is currently a prominent topic in international discussions. He continued that, alongside the numerous promises such technology offers, its possible pitfalls and challenges should also be carefully considered. What is the impact of AI on freedom of expression (FoE)? What is its role in democratic rights and democracy in general? Can lethal autonomous weapons systems (LAWS) be accepted by the international community? The UN Secretary-General has called for a total ban on LAWS. At the same time, AI promises to be a useful technology that helps practitioners accomplish specific tasks.

Ms Katharina Höne, DiploFoundation, moderated the workshop, which featured three thematic discussion groups on different aspects of AI.

Höne introduced the first group, on AI as a tool for diplomats and policymakers, and specified its focus: AI as a tool for diplomacy, in particular natural language processing that could simulate specific functions (speaking, writing, conducting research). She also clarified that the discussion looked at the issue from three perspectives: the possible pitfalls and limits of AI, what constitutes meaningful human control, and the possible dangers linked to its wide application.

The group discussion touched upon four issues. First, AI could help with language, especially translation (e.g. applications such as Google Translate). Second, AI and data analytics can help craft speeches that better target a specific audience through sentiment analysis. Third, AI can help conduct research and analyse large amounts of material in preparation for negotiations. Fourth, there was widespread reluctance towards applying AI to autonomous weapons, owing to ethical concerns and the role of private companies in developing such technology.

Mr Claudio de Lucena Neto, Paraíba State University, Brazil, and the Foundation for Science and Technology, Portugal, led the group on the impact of AI on the geopolitical environment. The discussion considered three institutional levels: the non-governmental perspective, governmental initiatives, and international interventions (e.g. the ITU's AI for Good initiative). He stressed that the discussion focused on the issues that governments consider relevant in their strategies, and on possible alternatives for consolidating their respective political positions in the future.

The group discussion raised three considerations. First, there is not much regulation around AI, either nationally or internationally, beyond discussions at the European level and specific declarations (e.g. the Montréal Declaration on Responsible AI and the Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems). Second, the crucial role of countries' digital capabilities should be taken into consideration, namely the risk that countries which struggle to obtain and develop such technology could be excluded from the international arena. Third, systems favouring access to data and fostering data partnerships should be envisaged: 'we have stock exchanges for the financial market, but we do not have stock exchanges for data'.

Mr Mike Nelson, Cloudflare, stressed the importance of using a correct definition of AI, considering that more than 37 different definitions have been elaborated since the term was coined. He therefore used the term machine learning, and focused on how big data and machine learning can be used to enhance human performance and consequently create 'superhuman capabilities'. He listed nine areas in which national governments are trying to come to grips with machine learning: targeting content (illegal or legal), competition and antitrust (vis-à-vis data dominance by big companies), privacy, cybersecurity, social media's influence on election results, the impact on the job market, the military strategic advantage of LAWS, policing techniques (i.e. detecting crime), and the Internet of Things (IoT).

The group discussion raised three additional issues. First, transparency is important when governments develop and use algorithms to deliver services, alongside education and digital literacy programmes. Second, accountability mechanisms should be developed to ensure that governments' use of machine learning is ethical. Third, participants considered successful applications in cybersecurity and police profiling, together with pitfalls when AI is used to determine school admissions and monitor public infrastructure (due to the lack of complete data sets).

 

By Marco Lotti