AI for application security testing and risk scoring

12 Apr 2019 09:00h - 10:45h

Event report


Session moderator Mr Ilia Kolochenko (CEO & Founder, High-Tech Bridge) began by giving a brief overview of the current state of affairs in artificial intelligence (AI) and machine learning (ML) technologies in cybersecurity.
Kolochenko mentioned that the New York Times had reported that AI experts make more than US$1 million a year. He added that statistics from Business Insider showed that in the first quarter of 2018, US AI startups had raised US$1.9 billion in venture capital. This, he said, was proof that people believed AI was the future.
Further, Kolochenko cited Gartner research estimating that the global business value of AI would reach US$1.2 trillion in 2018. Additionally, he cited a BBC study indicating that by 2030, robot automation will have displaced 800 million jobs.
Kolochenko outlined some of the fundamental issues in AI, among them bias, the predictability of logic-based outcomes, and the technologies' limitations.

In his intervention, Mr Paul Wang (Executive Advisory to CEO & Advisory Board Member, High-Tech Bridge) sought to answer the question of whether AI was capable of replacing humans. Wang argued that AI was ideal for undertaking complicated tasks, and should not be seen as a threat. He felt that AI was rather an enabler, and offered numerous benefits to humanity.

Prof Federico Varese (Board Member, Global Cyber Security Capacity Centre [GCSCC], Professor of Criminology, Director of EXLEGI) reiterated that AI depended on protocols created by humans. Varese suggested that AI should not be left solely in the hands of its designers. He further said that it was vital to incorporate ethics and human rights in AI design.

Mr Antoine Bichler (Sales Executive, High-Tech Bridge) answered Kolochenko’s question on what role AI was going to play in the next decade.
Bichler explained that AI algorithms could be trained to distinguish typical activities from malicious ones, and thereby anticipate cyber-attacks. Bichler mentioned that AI could process amounts of data far beyond human capabilities. He added that strong AI was not yet in use, as human intervention was still needed to determine whether errors were present in a given activity.
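The kind of training Bichler described can be illustrated with a minimal sketch. This is not the panellists' system; it is a purely hypothetical example, assuming hourly request counts from an application log, in which a baseline of "typical" activity is learned from history and strong deviations are flagged as potentially malicious:

```python
import statistics

def train_baseline(samples):
    """Learn what 'typical' activity looks like from historical counts."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag activity that deviates strongly from the learned baseline."""
    return abs(value - mean) > threshold * stdev

# Hypothetical hourly request counts taken from a web application log
history = [102, 98, 110, 95, 105, 99, 101, 97, 103, 100]
mean, stdev = train_baseline(history)

print(is_anomalous(104, mean, stdev))  # normal traffic -> False
print(is_anomalous(950, mean, stdev))  # possible attack spike -> True
```

Real systems use far richer features and models, but the principle is the same: the model encodes "normal", and humans still review what it flags, which is the human intervention Bichler referred to.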
With regard to AI in cybersecurity, Kolochenko argued that AI should not be a replacement for a cybersecurity strategy. He stated that while AI was incapable of training employees and teams, it was capable of supporting humans in effective decision-making.

Answering the question of whether he foresaw AI replacing law enforcement agencies, Varese stated that it could only supplement, but not replace, the human factor. Varese gave an example of the UK, where the police were using AI and big data to analyse crime patterns.

For his part, Wang highlighted the importance of AI in combating crime. Wang shared the case of China, where he noted that the Chinese government had installed over 200 million cameras. This, he said, had enabled the government to monitor citizens and give them social credit scores based on the way they conducted themselves in public, a factor that had helped combat social vices.

Based on the discussions, the moderator established the following as key takeaways:

  • Strong AI, capable of fully replacing people, does not exist yet and is unlikely to appear within the next decade.
  • AI will create new jobs, eliminate routines and trivial human work, and empower people to unleash their creativity.
  • AI will not fix fundamental problems; a risk-based information security strategy is a prerequisite for AI implementation.
  • It is always vital to measure and benchmark the economic practicality of ML/AI solutions, including related maintenance and support costs.

A workshop participant asked whether AI in China was only being used in the cities.
Wang mentioned that the use was mainly in cities, owing to the perceived high levels of crime there.


By Bonface Witaba