[Read more session reports and live updates from the OECD Going Digital Summit]
Mr Steve Lohr (Senior Writer and Reporter, New York Times) moderated the session. Lohr noted that, thanks to artificial intelligence (AI), there are many advances taking place in areas such as speech recognition, translation, and image recognition, while other areas are developing slowly or causing worry, such as the loss of jobs and effects on the labour market.
Mr Karol Okonski (Secretary of State for Digital Affairs, Poland) believes that no country can afford not to have an AI strategy implemented or in development. It is important to make sure that all stakeholders understand what AI brings. He emphasised the importance of three aspects:
- Data economy: Making sure to provide an environment with the best possible access to data in order to teach systems and have a high quality of algorithms
- Education: In 2025, Poland will need 200 000 specialists in AI, who are currently unavailable
- Legal and ethical aspects of AI: It will be important to create a network between specialists at different universities to combine their knowledge and make sure the movement is heading in the right direction
Okonski noted that digital innovation hubs are creating expert networks in terms of people, competences, and services. ‘It is key to make sure that AI will not remain a concept accessible to big companies only.’
Mr Adam Lusin (Director, State Department Economic Bureau Multilateral Affairs Office, United States) said that the US is excited about economic growth, innovation, and better standards of living. AI will continue to be an important industry in the future and will enhance the establishment and development of other industries, such as autonomous vehicles, medical research, and transportation. The US administration has launched the American AI Initiative with three pillars:
- Investment in AI research and development through a collaborative, multistakeholder approach
- Reducing barriers to help build trust in and adoption of AI, ensuring the protection of civil liberties, freedom, privacy, and the rule of law
- Investing in people in the workforce, focusing on their education
He noted that the European and American systems share the same core values regarding AI.
Mr Katsuya Watanabe (Vice Minister, Ministry of Internal Affairs and Communications, Japan) noted the importance of making rules around the social implementation of AI. When natural disasters occur, it is crucial for governments to get quick and accurate information about the situation on the ground and the affected areas. A lot of information comes from social media, such as Twitter. Japan is using natural language processing and AI to analyse real-time needs for food or medicine, and a special system is in place which can support the administration in case of natural disasters. Different types of AI are being implemented in Japan. As an example, Watanabe demonstrated to the audience a free, multilingual, speech-to-speech mobile application which translated his words from Japanese to English.
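To give a sense of what analysing relief needs from social-media posts can involve, the following is a minimal illustrative sketch, not Japan's actual system: it classifies posts into need categories (food, medicine) by simple keyword matching and tallies them. A real deployment would use trained language models; all category names and keywords here are assumptions for illustration.

```python
# Illustrative sketch only: keyword-based triage of social-media posts
# into relief-need categories. Not the system described by Watanabe.

NEED_KEYWORDS = {
    "food": {"food", "water", "hungry", "meals"},
    "medicine": {"medicine", "insulin", "bandages", "doctor"},
}

def classify_post(text: str) -> set:
    """Return the set of need categories a post mentions."""
    words = set(text.lower().split())
    return {need for need, kws in NEED_KEYWORDS.items() if words & kws}

def tally_needs(posts: list) -> dict:
    """Aggregate need mentions across posts into a situation overview."""
    counts = {need: 0 for need in NEED_KEYWORDS}
    for post in posts:
        for need in classify_post(post):
            counts[need] += 1
    return counts
```

A tally like `tally_needs(["We need water and food at the shelter", "No doctor here, send medicine"])` would report one post mentioning food needs and one mentioning medical needs, giving responders a coarse real-time picture.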
Ms Fanny Hidvegi (Access Now, Belgium) spoke on human rights challenges associated with AI. Access Now is looking into AI in connection with its longstanding work on privacy, data protection, freedom of expression, and anti-discrimination. Access Now has published three reports: the Toronto Declaration on equality and the right to non-discrimination in machine learning systems; a report mapping and comparing the European Union and national strategies; and a global report on Human Rights in the Age of Artificial Intelligence.
Hidvegi offered three main recommendations:
- Evidence-based policymaking
- That design, development, and deployment of AI should be human-centric and respectful of human rights
- More research to be done
When it comes to facial recognition, she gave several examples of people being targeted by such systems. In one Hungarian city, for example, Roma Hungarians were targeted: ‘Under a legal order, a whole data process was put under a national security umbrella in order to prevent any transparency.’ She urged companies to respect human rights.
Ms Margarete McGrath (Chief Digital Officer, Dell Technologies) said that, from the private sector and technology perspectives, the market has still not reached maturity in AI. The current stage is one of robotics and automation, where it is important to get the right data and governance. Many of Dell’s clients at a global level have fragmented data systems, and the fundamentals of backing up and securing data are not in place. She pointed out that, from Dell’s technology perspective, there is still a long way to go, especially for governments. Systems, infrastructure, and basic data sets ‘talking to each other’ need to be put in place first.
By Aida Mahmutović