The application of Artificial Intelligence (AI) for malicious purposes can increase the impact of cyber threats on information and communications technology (ICT) networks. However, AI can also be used to strengthen cyber defence, improve cybersecurity, and create new competences, skills and jobs. The second session of GSR-18 focused on the positive application of AI to strengthen the security of ICT infrastructures and services, while having a positive impact on the workforce and end users. The session was moderated by Mr Stephen Bereaux (Chief Executive Officer of the Utilities Regulation and Competition Authority (URCA) of the Bahamas), who introduced the panel and stressed that a key aspect of the regulatory mandate is to understand what these new technologies are and how they will affect regulatory frameworks.
The first panellist was Mr Benedict Matthey (Account Executive at Darktrace). He explained that large, well-resourced organisations have long been able to launch attacks; the increased availability of machine learning, however, now enables smaller actors to launch attacks as well. Organisations therefore need complete visibility of all their devices, so that it is clear what is happening on the network. The application of AI can enable humans to go beyond their limits: although attackers use AI, defenders can also use it to tackle security issues, because it saves time and is efficient.
The second panellist was Mr Michael Nelson (Tech Strategy at Cloudflare). He discussed misconceptions about AI and machine learning that result in ineffective and counterproductive policies, framing them as a series of myths:
The term ‘artificial intelligence’ is often believed to be a useful term; however, its definition is too broad and covers too many different things.
One myth about the Internet of Things (IoT) is that it is different from the Internet. With regard to this, he argued on his Twitter account (@MikeNelson) that ‘We are not going to “fix” the IoT by replacing the Internet’.
There is a misconception that software can be controlled; in practice, this is impractical.
Regulating AI by controlling algorithms and requiring companies to disclose their algorithms and software does not work: software evolves minute by minute because of the amount of data fed into it.
Standards and checklists that prescribe how IoT devices must work, together with proposals to impose the same (often outdated) security solutions on all devices, should be seen as an additional cost and a reduction of incentives for innovation.
The final misconception is that we need a global framework for securing IoT devices. An alternative is to rely on the ‘programmable cloud’ to create techniques for securing the different types of IoT applications. To this end, the key is the interoperability of devices.
The third panellist, Mr Graham Butler (Chairman at Bitek Global Limited), stressed that the rapid evolution of the network means that 2.5 million attacks are carried out every 20 minutes. He underlined that rules on voice telecommunications exist and are applicable, while there are no rules on data, which results in an enormous loss of income. Moreover, policy and law enforcement actors face problems with encrypted traffic: 50–60% of attacks are encrypted, which creates challenges for law enforcement when it comes to prosecuting attackers. He concluded by saying that the World Wide Web in any country belongs to that country, and that it is that country's duty to protect it.
The fourth panellist, Mr Ilia Kolochenko (CEO at High-Tech Bridge), argued that from a large firm's perspective, the purpose of using AI rests on the idea that AI technologies solve problems and reduce costs. Thus, before implementing AI, it is important to understand its practical features within the context of the firm.
The fifth panellist, Mr Stefano Bordi (Vice President Cyber Security at Leonardo Company), argued that cyber defence capability can be described as the coexistence of technology, procedures, processes and people. With regard to the activities of cyber defence centres, he stressed that AI can be applied in the prevention phase. Despite the fact that the cybersecurity expert will always remain ‘in front of the monitor’ and the control system, new cybersecurity experts will need to change their competency package.
The sixth panellist, Ms Miho Naganuma (Manager, Regulatory Research Office and Cyber Security Strategy Division at NEC Corporation), argued that in order to fully leverage AI, we need to address four issues: data, information, knowledge and intelligence. AI gives intelligence features to the devices it is applied to; for this intelligent component to support human activities, it needs a broader view for solving issues. In line with this, she said that in the near future many processes will be automated, and highly skilled people will therefore be needed.
The last panellist was Mr Guido Gluschke (Co-Director of the Institute for Security and Safety, Brandenburg University of Applied Sciences). He began by recalling the history of nuclear weapons and the related discussions at the international level. He underlined that after the Stuxnet attack, nobody discussed the cybersecurity aspect of the topic. It took five years for regulators to feel confident in regulating cybersecurity; yet even today there is no clear understanding of cyber threats. In closing, he advised including cybersecurity in nuclear security plans and then holding a discussion on the topic. Regulators need to understand the topic in its specificity and to act on a cooperative basis, supporting nation states in implementing policies. Education is a key factor and has to be provided. Finally, a multistakeholder approach is necessary.