[Read more session reports and updates from the 14th Internet Governance Forum]
The German word ‘Sicherheit’ means both safety and security. This session analysed the changes that artificial intelligence (AI) has brought to our lives with regard to security and safety, as information becomes increasingly digitalised.
Safety and security are major issues with AI; hence, talking about safety requires talking about AI as well. One of the co-founders of a Berlin-based start-up called Neurocat outlined three pillars of AI quality: robustness, comprehensibility, and functionality. Few companies or start-ups address the weaknesses of their systems, he noted, and humans do not have the cognitive capacity to inspect complex neural networks. Tools that test the robustness and functionality of a system before deployment are therefore necessary.
To help organisations assess their AI capabilities, Neurocat has developed a program called AidKit, which can deploy 70 AI attack simulations to help identify weaknesses and ensure system safety. In doing so, Neurocat addresses robustness, which is especially relevant to the airline, automotive, and healthcare industries, where maintaining the safe operation of AI is essential.
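To illustrate the idea of attack simulations for robustness testing (this is a generic sketch, not Neurocat's actual AidKit): one common probe applies a small adversarial perturbation to an input and checks whether the model's prediction flips. The toy logistic-regression model, weights, and input below are hypothetical, chosen only to demonstrate the technique.

```python
# Minimal adversarial robustness probe (FGSM-style), as a sketch of what an
# "AI attack simulation" does: nudge an input in the direction that increases
# the model's loss, then check whether the prediction changes.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    # Probability of the positive class for a logistic model.
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y_true, eps):
    # For binary cross-entropy, the input gradient is (p - y) * w.
    p = predict(w, b, x)
    grad_x = (p - y_true) * w
    # Step of size eps in the sign of the gradient (increases the loss).
    return x + eps * np.sign(grad_x)

# Hypothetical model and input, for illustration only.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])   # clean input, classified as positive
y = 1.0                    # true label

p_clean = predict(w, b, x)
x_adv = fgsm_perturb(w, b, x, y, eps=0.6)
p_adv = predict(w, b, x_adv)

print(f"clean confidence: {p_clean:.3f}, adversarial: {p_adv:.3f}")
```

A robustness tool would run many such perturbations at varying strengths and report how easily the model's decisions can be flipped, flagging the weaknesses a human inspector could not find by reading the network's weights.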
While robustness may be addressed by such technical means, how can we address the ethical dimension? The ideal answer would be for systems to be built and trained in such a way as to leave no scope for bias. In practice, this proves complex, and Neurocat has not yet managed to connect the ethical and technical ends of AI.
One way to address ethical risks may be to make consumers aware that products contain AI-enabled technology, by labelling AI products with a specific code or trademark. A seal of approval can assure consumers of safety, particularly if granted by public authorities - an option under consideration by the German government. To support this, certification processes and technical standards for classifying AI-enabled technology need to be developed. On the other hand, AI labelling may not be needed for trivial tasks that do not involve personal information.
While America is leading this discussion, Europe and Germany are developing their own standards, and China has yet another approach to developing AI. Canada, especially Quebec, has allocated considerable investment to stimulate research in the field. Canadian AI policy, which holds that AI should serve the public good, is grounded not in ethics - which may vary across cultures - but in universally accepted human rights.
By Mili Semlani