[Read more session reports and live updates from the WSIS Forum 2016.]
The session looked into the social consequences of emerging technologies and the Internet of Things and People, touching on ethics, privacy, surveillance, and security. The session moderator, Justin Caso (Technology Policy Advisor at the IEEE), opened by suggesting that the discussion should be led by the in situ and remote participants, and invited the panellists to give a round of short introductions on what they see as the main challenges and opportunities towards 2030.
Mr Oleg Logvinov (President and CEO of the IoTecha Corporation) outlined his work and the work of the IEEE in closing the divide between policymakers and the technical community, in order to strengthen cohesion between policy and technology. Privacy and security are incorporated in standards for the architectural framework of the Internet of Things (IoT), and peer review of code revisions is an important mechanism for ensuring that these standards are implemented. Looking towards 2030, he suggested that it might not be the IoT itself that will be the leading business innovation, but rather the interpretation of the vast amounts of information that the IoT will bring with it. Using this information in the proper social context may bring new business models.
Dr M.G. Michael (Honorary Associate Professor at the University of Wollongong School of Computing and Information Technology) used Argus Panoptes, the hundred-eyed giant of ancient Greek mythology who could see everything, as a metaphor for today's mass surveillance. Quoting Orwell's 'nothing was your own except the few cubic centimetres inside your skull', he warned of new technologies that will further invade privacy, especially brain-computer interfaces (BCI) and sub-dermal implants that could, and most likely will, be used to track everyone. Uberveillance - omnipresent electronic surveillance - is an imminent threat that is near a tipping point after which its effect on society would be irreversible, he pointed out. In this regard, he reminded us of the Universal Declaration of Human Rights, in which privacy is a fundamental right that underpins all other rights.
Dr Greg Shannon (Chief Scientist for the CERT Division, Software Engineering Institute at Carnegie Mellon University) agreed that new technologies bring immediate challenges with regard to security and privacy, and emphasised the responsibility of all stakeholders towards citizens in this regard. On the other hand, while it may not be easy to differentiate between the innovative and potentially risky aspects of technology, and mistakes will be made, he reminded us that technology creates many opportunities and transforms society.
Reflecting specifically on the IoT, Logvinov suggested that combining information from various sources to create knowledge that was not there before holds the potential for a multi-trillion dollar economy; for example, the 360-degree cameras used in modern cars may be of great help for self-driving, but capturing every single moment from every single angle can also be a business opportunity. Certainly, this brings privacy risks that should be taken into consideration as well. Michael warned that big data must be accompanied by 'big judgment'; otherwise, we may be leaving too many decisions to the technology.
A question from the remote participants focused the discussion back on the threat of uberveillance, asking how - and if - we can prevent it: through international agreements and regulation, ethics in technological innovation and business use, or community push-back. Michael reiterated that society is at the tipping point of an irreversible change, and suggested that communities need to become more aware of the imminent risks and push their regulators to ensure ethical governmental and business use of new technologies. Otherwise, he stated, 'we are all becoming monetised' and everything may become part of the economy. Shannon responded with a dilemma: if we still trust our civilisation, what are we really afraid of, bearing in mind all the benefits that technologies have brought us, such as longer and more comfortable lives?
A representative of the Global Network of Ethics asked how we can systematise ethics to keep it on the agenda - both in technology and in policy development processes - and suggested that community networks should be used to spread awareness and empower people to participate in this dialogue. Shannon confirmed that, when it comes to technology development, ethics is built into the research and development process. An ITU staff member in the audience asked whether the concept of privacy in a multicultural hyper-connected world could become flat in spite of different cultural environments, or even disappear entirely with the new generation of users. Runnegar reflected on the cultural diversity aspect by reminding us that the Asia-Pacific Economic Cooperation (APEC) countries managed to agree on a common privacy framework in spite of their cultural diversity, and that global experiences will shape the understanding of privacy regardless of different cultures.
Reflecting on the discussion on trust in civilisation, a remote participant asked whether today's society is ready to experiment with technology and policy at this level, and whether technology is being developed at a faster pace than society can adapt to. For example, school programmes cannot keep up with technologies; governments cannot keep up with citizens' needs for privacy and rights; and users cannot be sure they are choosing the right products, while struggling to understand privacy policies, net neutrality aspects, and security challenges. Logvinov responded that society was not, is not, and will probably never be ready for technological challenges, but it has always been able to adapt. While he admitted that the pace of technology development is faster than ever and almost exponential, he reminded us that we should trust that our civilisation is capable of figuring out how to deal with it. Shannon reminded us that the multistakeholder process is a societal response to technological change that equips us to address the challenges more efficiently. Michael, however, reminded us that we are not sure of the implications of these new technologies, and that it might be difficult or even impossible to go back and correct course once the current tipping point is crossed. While we want to keep trusting our civilisation, he warned, we have to proceed with caution.
Another ITU staff member asked the panel to look at the positive role of technology for security. Shannon recalled the case in which Google wanted to use browsers to monitor the behaviour of the population without tracking individual browsers; it came up with an innovation that allows this, showing that technological innovation can also support the public good. Logvinov gave the example of wrist devices that measure pulse, movement, and other private data to warn employees in industrial environments of potential risks before they enter certain jobs at certain moments. While this sacrifices some private data, such devices have been proven to decrease the number of injuries. Another example he brought up concerned catering for the ageing population, such as monitoring their everyday habits and issuing warnings for their particular needs - for instance, if they skipped a meal or did not take their medicine. Runnegar outlined the importance of raising awareness of collective responsibility, and of how securing one network or set of devices can prevent exposing other connected objects; she referenced the Routing Manifesto as one good example.
Finally, a lawyer in the audience asked how the IoT world can benefit from legal expertise to make the environment more trustworthy. Logvinov reminded us that technologies do not simply emerge; there is a long development cycle - sometimes longer than the policy development cycle! He invited lawyers to engage in all steps of developing a new technology, so that by the time the technology is implemented, we also understand its implications and are ready to react with proper regulation. Shannon added that such understanding should prevent possible unintended consequences of legislation, such as discouraging innovation and investment, and invited lawyers to be technology agnostic. Runnegar agreed that the technology is evolving, as are law and policy. Michael added that we should all be open to different narratives, ready to accept correction if needed, and certainly ready to criticise technology and laws that are counterproductive. He reminded us that big data needs to be followed by big judgment, and concluded that we have to work together - and love our civilisation while being careful.
by Vladimir Radunovic