Privacy and consumer protection in the age of artificial intelligence

4 Oct 2018 02:00h

Event report

The moderator, Ms Burcu Kilic (Legal and Policy Director at Public Citizen, Access to Innovation, Knowledge and Information Program), opened by stressing that artificial intelligence (AI) is arriving fast, and that we need to engage with the questions it raises for policymaking and decision making, as well as with its broader societal consequences.

Mr Ansgar Koene (Senior Research Fellow, Horizon Digital Economy Research Institute at the University of Nottingham) spoke about his work, which focuses on bias in algorithmic systems. He cautioned against the term AI, which can be misleading in its imprecision and its focus on the future. He argued that all decision making is biased and that this is unavoidable; the important point is whether the bias is justified and intended. Koene went on to outline a number of causes of algorithmic bias, such as: insufficient understanding of the context; improper identification of optimisation criteria; lack of explicit justification for the chosen criteria; use of justifications that are not acceptable in the given context; and systems not performing as intended due to implementation errors or unreliable input data. He used a number of examples to highlight current issues with algorithmic decision making, such as face recognition systems that encountered problems due to an insufficient training data set, and algorithms used in medical decision making that failed to account for certain types of medical issues. He concluded by noting that there are no AI standards in place yet, although initiatives are under way at the Institute of Electrical and Electronics Engineers (IEEE), the International Organization for Standardization (ISO), and in China. Further, a number of countries have adopted or are initiating AI strategies. However, these focus on national leadership in the technology, while questions of ethics, regulation, and standards remain untouched.
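As a purely illustrative aside, not drawn from the session itself, the following minimal Python sketch (using hypothetical synthetic data and a deliberately simple threshold classifier) shows one of the mechanisms Koene described: when a training set under-represents one group, the learned decision rule can perform noticeably worse for that group even though aggregate accuracy looks acceptable.

```python
import random

random.seed(0)

def sample(group, label, n):
    """Draw n synthetic 1-D feature values. The two groups express the same
    label with differently centred feature distributions (hypothetical data)."""
    centre = {("A", 0): 0.0, ("A", 1): 2.0, ("B", 0): 1.0, ("B", 1): 3.0}[(group, label)]
    return [(random.gauss(centre, 0.5), label) for _ in range(n)]

# Training data is dominated by group A -- an unrepresentative sample.
train = (sample("A", 0, 475) + sample("A", 1, 475) +
         sample("B", 0, 25) + sample("B", 1, 25))

def centroid(label):
    values = [x for x, y in train if y == label]
    return sum(values) / len(values)

# "Learn" a single decision threshold halfway between the class centroids.
threshold = (centroid(0) + centroid(1)) / 2

def accuracy(data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

# Evaluated per group, the under-represented group fares markedly worse,
# even though the classifier looks accurate on the dominant group.
print(f"accuracy on group A: {accuracy(sample('A', 0, 500) + sample('A', 1, 500)):.2f}")
print(f"accuracy on group B: {accuracy(sample('B', 0, 500) + sample('B', 1, 500)):.2f}")
```

Measuring error rates per group, rather than only in aggregate, is what exposes this kind of bias.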

Mr Francisco Vera (Advocacy Officer, Privacy International) focused on digital trade in the context of privacy and consumer protection. He argued that it is important to address the power imbalances that are emerging due to new technology. AI is good at analysing data and at inferring new information from existing data; machine learning applications, more specifically, are useful for discovering patterns in data. The crucial point, according to Vera, is the input data: the data sets involved are built on personal and private information, and there is a lack of meaningful consent from users and consumers. A further problem relates to the lack of high-quality data and to biases in the data sets used. Vera cautioned against blind faith in technology and an uncritical approach to the free flow of information. He also pointed out that decisions should not be taken by machines without meaningful human involvement. The key task for the future is to recognise and address these problems in trade agreements.

Mr Felipe Sandoval (Senior Advisor, Trade Law and Negotiations, International Centre for Trade and Sustainable Development (ICTSD)) approached the topic from his experience as a trade negotiator and trade lawyer. He argued that trade negotiators do not approach the issue from a technological perspective, as they are not technology experts; their key concern relates to questions of data. There are concerns about the protection of private data and how to balance it with the free flow of data. Sandoval argued that these principles are not necessarily in contradiction, and that finding a balance between them is both important and possible. He stressed that regional and national policymakers need to act, but existing treaties give an uneven, at times even contradictory, picture when it comes to the appropriate treatment of private data. Sandoval pointed to the US-Mexico-Canada Agreement (USMCA), China, and the EU as examples that take substantially different approaches, which causes problems for relations between these and other countries that are parties to different treaties. There needs to be a dedicated process to look at privacy, consumer protection, and the value of the free flow of data, and stronger frameworks on consumer data need to be integrated into trade agreements.

Mr Rodrigo Polanco (Senior Researcher and Lecturer, University of Lucerne) pointed out that e-commerce and digital trade provisions in preferential trade agreements (PTAs) are not a new phenomenon, and that the number of PTAs with such provisions is growing substantially. These provisions are mostly found in bilateral treaties, not all of them are binding, and there are few general provisions on data flows. Polanco went on to compare the USMCA and the Korea-Central America Free Trade Agreement with regard to their provisions on e-commerce, digital trade, and data flows, highlighting that the USMCA contains provisions on open government data, a new element in this type of treaty.

In the discussion, panellists stressed that technological approaches alone cannot resolve political questions. Political debate and decision making, including in trade agreements, are needed to address privacy and consumer protection in the context of algorithmic decision making.