Policy challenges for AI development
19 Dec 2017 09:00h - 10:30h
This session, sponsored by Internet Society of China, focused on the ethical, legal, and regulatory aspects that policies aimed at advancing artificial intelligence (AI) must consider. Ms Xinmin Gao, Vice President, Internet Society of China, moderated the panel.
First, Ms Akari Noguchi, Public Policy and Corporate Governance, Yahoo Japan, offered the perspective of the private sector. As Yahoo Japan is the country’s largest web portal, it believes it has a responsibility to lead the market not only in service development but also in policy suggestions. Noguchi highlighted three principles, each paired with a question, that should guide AI policymaking: responsibility (who should be making decisions on AI?), accountability (who should be responsible for the decisions robots make?), and maturity of the market (would regulation at this stage benefit or hamper advances in AI?).
Then, Mr Satish Babu, President, InApp Information Technologies, focused on the ethical dimension of AI from the standpoint of civil society. In his view, regardless of which modality of AI is being discussed (‘strong’ AI, ‘weak’ AI, or mind uploading), these developments will pose dilemmas that we must start discussing now. During his intervention, Babu posed thought-provoking questions on the matter, pertaining to:
- Job loss due to automation (how to protect workers)
- Fair redistribution of wealth created by machines
- Behavioural changes in humans interacting with robots
- ‘Artificial stupidity’ (machines may become biased owing to faulty coding)
- Security issues (how to keep AI in good hands)
- Unintended consequences (how to remain in control)
- Legal nature of machines (should they have rights?)
Next, Mr Claudio Lucena, Professor of Law, Paraiba State University, Brazil, presented an overview of AI policy initiatives around the world, explaining ‘who is leading them, what are they doing, and how are they deploying it’. The actors are very diverse, ranging from governments and international organisations, to professional organisations (such as the Institute of Electrical and Electronics Engineers – IEEE), NGOs, and networks of think tanks, like the Network of Centers. Despite their diversity, these actors operate in much the same way: issuing guidelines and recommendations, most of them based on in-depth analysis. Among the initiatives Lucena cited were the European Parliament’s resolution on Civil Law Rules on Robotics and the IEEE’s Model Process for Addressing Ethical Concerns During System Design.
Lastly, Mr Yi Ma, Professor of Electrical Engineering and Computer Sciences, University of California, Berkeley, provided a ‘more or less purely technological’ outlook on how future AI policies will impact the academic field. Drawing on his research background, Ma stated that he took part in a project on driverless cars as early as 20 years ago. Wondering why the technology took so long to reach the market, he posited that the reasons may be related to concerns from the insurance industry and ‘the fact that the US has too many lawyers’. To conclude, he enumerated three questions that must be considered by any future AI policy:
- How will we accommodate different markets and nations?
- How can issues of security, safety and privacy be solved?
- How can we encourage companies to work for society?
The moderator then opened the floor for questions from onsite and remote participants. The enquiries covered the issue of coordinating actors with different expertise and backgrounds; whether current regulatory efforts are beneficial to AI development; the role of AI within larger policies that include it (such as a universal basic income enabled by blockchain technology); and what changes can be anticipated in copyright and intellectual property law as a result of AI.
By Guilherme Cooper Vicente
12th Internet Governance Forum
18 Dec 2017 08:00h - 21 Dec 2017 17:00h