18th Global Symposium for Regulators

9 Jul 2018 to 12 Jul 2018
Geneva, Switzerland


Event reports:
Stefania Grottola

This high-level roundtable brought together experts from academia to present the ITU BDT AI For Development Series, highlighting its key findings and recommendations. It was moderated by Ms Régina Fleur Bessou Assoumou (Chair of the ITU-D Study Group 1) who introduced the panellists by asking about the key issues that can be encountered when dealing with policy makers.

The first panellist, Dr Urs Gasser (Executive Director of the Berkman Klein Center for Internet & Society at Harvard University and Professor of Practice at Harvard Law School), argued that policy makers and regulators are wrestling with how to approach the next wave of technology. Recurrent issues are the asymmetry of information and siloed conversations, and solutions that benefit everyone need to be considered. Questions about inclusiveness and the future of jobs should be part of the conversation, as should discussion of the governance instruments available.

The second speaker, Dr Gyu Myoung Lee (Adjunct Professor at KAIST) spoke about the use of data, algorithms and blockchain. In order to provide convenient and smart services, the application of AI is essential. Thus, there is a need for new ecosystems that facilitate data sharing. Moreover, concerns over technical issues and about trust related to the use of blockchain need to be addressed.

Dr Michael Best (Director of the United Nations University Institute on Computing and Society (UNU-CS), Professor, Sam Nunn School of International Affairs and the School of Interactive Computing, Georgia Institute of Technology) argued that AI inevitably falls under ethical and social implications. Thus, ethicists on the cutting-edge of AI are needed. Moreover, there is a critical need for a robust information sharing infrastructure.

AI creates both opportunities and risks; however, the best way to address these challenges is to have a fair and diverse all-round discussion.


Stefania Grottola

The second day of the Global Symposium for Regulators started with the opening remarks of Mr Houlin Zhao (ITU Secretary-General), who talked about regulation in relation to the digital economy. The agenda then moved to the leadership debate, which brought together leaders and experts to discuss the challenges of using artificial intelligence (AI), the opportunities it brings, and how emerging technologies are expanding regulatory frontiers to new horizons. The role of policy makers and regulators is being challenged by digital transformation and new categories of digital opportunities. This session explored the opportunities of AI for improving services such as e-government. With this in mind, regulators need to be able to address the different concerns related to the changing landscape by identifying both the challenges and the opportunities. The session was moderated by Mr Brahima Sanou (Director of the ITU Telecommunication Development Bureau (BDT)), who introduced the session topic by underlining the ‘huge’ opportunities of emerging technologies, while pointing out the need for awareness.

The first speaker, Mr Sorin Grindeanu (President of ANCOM (Romania) and GSR-18 Chair), talked about 5G technologies and the spectrum allocation needed to implement them. He used the example of Romania drafting its 5G strategy to highlight that the rapid growth of wireless broadband requires modern wireless electronic communications networks. Millions of people will be connected, and a new range of applications will be available. The regulatory process has to be able to harmonise standardisation.

The second speaker, Mr Ajit Pai (Chairman of the Federal Communications Commission (FCC) of the United States), recalled that the term ‘artificial intelligence’ was coined sixty years ago by Prof. John McCarthy, whose research on machines that could reason like humans proposed ‘to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it’. Speaking about the opportunities created by AI, he mentioned an FCC project dedicated to new technology that assists people living with disabilities, and one of this year’s winners, Seeing AI: a Microsoft app that uses AI and deep learning tools to narrate the visual world with spoken audio or real-time text for people with visual impairments. Pai said that he recognises that AI is changing every social and economic aspect of our society. With this in mind, the FCC will hold a forum on the impact of AI and machine learning on the communications market. He then proposed some guiding principles to set the stage for a policy environment that encourages the development of new technologies and high-speed networks: first, regulatory humility is needed to avoid forcing new technology into old frameworks; second, governments should facilitate innovation and investment; third, spectrum for wireless services should be freed up and made available for flexible use; and finally, access to new technology should be made universal.

The third speaker, Mr Mahmoud Mohieldin (Senior Vice President of the World Bank Group), argued that there is a need for strategies and policies to deal with the opportunities and challenges of information technology. He gave three examples of resistance to change and to technology: the reaction of the former Mexican President Santa Anna, who was against the introduction of steam engines; England’s prohibition of automated machines in sock production; and the initial concerns about Jacquard machines. He then moved to a more recent successful example: the M-Pesa mobile phone payment system used in Kenya. His main point was that, at the moment, it is not enough to have one specific strategy; there is a need for a global and comprehensive approach. He introduced the three ‘Bs’ concept: building, boosting and brokering through the implementation of public-private partnerships. Finally, he talked about some positive applications of emerging technologies, such as big data for social good and IT4D.

The fourth speaker, Ms Anastassia Lauterbach (author of ‘The Artificial Intelligence Imperative’, international technology strategy adviser and entrepreneur), argued that AI is one of the most powerful technologies. Indeed, she pointed out that among the top ten companies in the world, five are ‘AI first’: Google, Facebook, Microsoft, Apple and Amazon. Being ‘AI first’ means investing in their own semiconductors to provide the hardware capabilities for data mining; these companies are also investing in fundamental AI research. She talked about three main risks that could be encountered when dealing with AI: design mistakes – biases in technology reflecting the technology’s creators; malicious intent – unethical behaviour of the system; and the absence of humans in the collection and analysis of data. This led her to address concerns over the ethics of AI, related to the governance of AI safety, decision-making guidelines for autonomous systems, incentive design for autonomous systems, and goal alignment between autonomous agents and humans. Finally, she concluded by discussing social governance in AI, which includes actors such as municipalities, schools, AI companies and organisations.

The session was closed by Dr Kemal Huseinovic (Chief of the Department of Infrastructure, Enabling Environment and E-Applications at the ITU/BDT). He argued that everything we love about civilisation is a result of human intelligence, and AI can foster that. The more we rely on technology, the more we need to trust it; the question of how we can ensure this trust is not only essential, but also raises ethical issues that require the engagement of policy makers.

Stefania Grottola

The application of Artificial Intelligence (AI) for malicious purposes can increase the impact of cyber threats on information and communications technology (ICT) networks. However, AI can also be used to strengthen cyber defense, improve cybersecurity, and create new competences, skills and jobs. The second session of GSR-18 focused on the positive application of AI to strengthen the security of ICT infrastructures and services, while having a positive impact on the workforce and end users. The session was moderated by Mr Stephen Bereaux (Chief Executive Officer of the Utilities Regulation and Competition Authority (URCA) of the Bahamas), who introduced the panel by stressing that the key aspect of the regulatory mandate is to understand what these new technologies are, and how they will impact regulatory frameworks.

The first panellist was Mr Benedict Matthey (Account Executive at Darktrace). He explained that while large organisations have long been able to launch attacks, the increased availability of machine learning now enables small organisations to launch attacks as well. Thus, complete visibility of all of an organisation’s devices is needed; organisations need to make sure that it is clear what is going on in the network. The application of AI can enable humans to go beyond their limits: although attackers use AI, defenders can also use it to tackle security issues, because it saves time and is efficient.

The second panellist was Mr Michael Nelson (Tech Strategy at Cloudflare). He talked about misconceptions about AI and machine learning, which result in ineffective and counterproductive policies. He framed these misconceptions as myths:

  • The term ‘artificial intelligence’ is often believed to be a useful term; however, its definition is too broad and refers to too many aspects.

  • One myth about the Internet of Things (IoT) is that it is different from the Internet. With regard to this, he argued on his Twitter account (@MikeNelson) that ‘We are not going to “fix” the IoT by replacing the Internet’.

  • There is a misconception about the possibility of controlling software; however, this is impractical.

  • Regulating AI by controlling algorithms and making companies disclose their algorithms and software does not work. Software evolves minute by minute because of the amount of data that is put into it.

  • The call for standards and checklists defining how IoT devices should work – with the related proposal of imposing outdated security solutions on all devices – should be considered an additional cost and a disincentive to innovation.

  • The final misconception is that we need to create a global framework for securing IoT devices. An alternative solution is to rely on the ‘programmable cloud’ to create techniques for securing the different types of IoT applications. To this extent, the key is the interoperability of devices.

The third panellist, Mr Graham Butler (Chairman at Bitek Global Limited), stressed that the rapid evolution of the network means that 2.5 million attacks are carried out every 20 minutes. Moreover, he underlined that rules on voice telecommunications exist and are applicable, while there are no rules on data, which results in an enormous loss of income. Policy and law enforcement actors are also facing problems because of encrypted traffic: 50–60% of attacks are encrypted, which creates challenges for law enforcement when it comes to prosecuting the attackers. He finished by saying that the World Wide Web in any country belongs to that country, and that it is that country’s duty to protect it.

The fourth panellist, Mr Ilia Kolochenko (CEO at High-Tech Bridge) argued that the purpose of using AI from a big firm’s perspective is based on the idea that AI technologies solve problems and diminish the costs. Thus, before trying to implement AI, it is important to understand its practical features within the context of the firm.

The fifth panellist, Mr Stefano Bordi (Vice President of Cyber Security at Leonardo Company), argued that cyber defense capability can be described as the coexistence of technology, procedures, processes and people. With regard to the activities of cyber defense centres, he stressed that AI can be applied in the prevention phase. Despite the fact that the cybersecurity expert will always be ‘in front of the monitor’ and the control system, new cybersecurity experts will need to change their competency package.

The sixth panellist, Ms Miho Naganuma (Manager, Regulatory Research Office and Cyber Security Strategy Division at NEC Corporation), argued that in order to leverage AI, we need to face four issues: data, information, knowledge and intelligence. AI gives intelligence features to the devices it is applied to. Thus, for this intelligent part to support human activities, it needs to have broader views for solving issues. In line with this, she said that in the near future many processes will be automated, and thus highly skilled people will be needed.

The last panellist was Mr Guido Gluschke (Co-Director of the Institute for Security and Safety, Brandenburg University of Applied Sciences). He started by recalling the history of nuclear weapons and the related discussion at the international level. He underlined that after the Stuxnet attack, nobody discussed the cybersecurity aspect of the topic. It took five years for regulators to feel confident in regulating cybersecurity; yet, today there is still no clear understanding of cyber threats. In closing, he advised including cybersecurity in nuclear security plans and then having a discussion on the topic. Regulators need to understand the topic in its specifics and to act on a co-operative basis, supporting nation states in the implementation of policies. Education is a key factor and has to be implemented. Finally, a multistakeholder approach is necessary.


Stefania Grottola

Cybersecurity and privacy represent two interconnected aspects. Legal frameworks are mandatory in any cyber context because of the amount of personal information that needs to be protected while keeping up with the speedy evolution of technologies. Data is essential for Internet of Things (IoT) devices; indeed, by 2025, there will be over 20 billion connected devices. The session was moderated by Mr Marcin Cichy (President of the Office of Electronic Communications (UKE) of Poland). It focused on privacy considerations within the context of artificial intelligence (AI) and IoT, including references to the General Data Protection Regulation (GDPR).

The first speaker was Mr Mohammad N. Azizi (Chairman of the Afghanistan Telecom Regulatory Authority (ATRA)). He explained how the information technology landscape is in constant evolution and how most data is generated from online and offline platforms. Thus, IoT will further transform the way we think about data and the way we use it. With the application of AI to IoT devices, cybersecurity becomes a crucial aspect. As a result, law enforcement agencies and regulators cannot work in silos; they need to work together. Regulators need to focus on how data is collected, while law enforcement should focus on how data is used. Collaboration is necessary for going forward.

The second speaker, Mr Giampiero Nanni (Government Affairs at Symantec), talked about the impact on privacy in the context of Shadow IT, defined as information technology systems that live inside an organisation without explicit organisational approval. Privacy issues are raised when data is put into the cloud through these applications. Finally, he argued that the IoT is a ‘time bomb’ because it lacks provisions in terms of security.

The third speaker, Mr Aaron Kleiner (Director, Industry Assurance & Policy Advocacy at Microsoft), spoke from an industry perspective, explaining how technology companies think about security and adding Microsoft’s experience as an example. He argued that a change in mindset is needed: approaches need to move from bolting security on at the end of production to putting security at the core of production. In addition, an operational assurance framework should be taken into consideration. Over the years, society’s reliance on technology has reached policymakers’ agendas. From the technology sector’s perspective, it is up to the sector to understand how to improve cybersecurity. In this regard, he recalled Microsoft’s publication, The Future Computed: Artificial Intelligence and Its Role in Society. He finished by stating that time is needed to identify and articulate the key principles for making AI work and enabling people to achieve more. The tech industry is looking at AI collaboratively; to this extent, a public-private dialogue should be fostered. With regard to the GDPR, he argued that it has a significant impact on the private sector, as privacy represents the foundation of trust between the private sector and consumers.

The fourth speaker, Mr Luigi Rebuffi (Secretary-General of the European Cyber Security Organisation (ECSO)), argued that a universally right balance between monitoring activities and cybersecurity does not exist; it depends on various aspects, such as the cultural environment. Recently, surveillance has shifted from physical surveillance to digital surveillance of data and information – a kind of surveillance that we, as citizens, are providing to society. Moreover, society will evolve as the number of connected devices increases. With regard to privacy, a recurrent, still open question is: does privacy still exist? There is a need to find a pathway to balance increased security with the correct use of data. Furthermore, there is a need to educate both data protection practitioners and citizens.

The fifth speaker, Ms Raquel Gatto (Regional Policy Advisor of the Internet Society (ISOC)), recalled ISOC’s publication, the 2017 Internet Society Global Internet Report: Paths to Our Digital Future. She explained that the research identified six different drivers: cyber threats; AI; the IoT; the role of governments; network standards; and the Internet economy. Despite the apocalyptic view about the jobs that will be lost, there is room for optimism: technological evolution can be used for better social development. Cybersecurity has to be considered during the first stages of development, and it is up to regulators to change this mindset; she argued that this is already happening in the case of the IoT framework of the Online Trust Alliance (OTA). However, work should also be done on the prevention side. Finally, she concluded by trying to answer the question ‘does privacy still exist?’ She argued that yes, it does, and it is about being aware of your data. Thus, no law will bring a definitive solution, but an efficient way to achieve privacy is through a collaborative effort by all stakeholders.

The sixth speaker was Mr Ivo Lõhmus (Vice President, Public Sector, at Guardtime AS), who talked about the use of blockchain in the management of data. He explained how blockchain technology works, noting that one important feature of blockchain is the immutability of data. As a result, it can have negative implications for human rights such as the right to be forgotten.

The final speaker was Mr Vincenzo Lobianco (Chief Technology and Innovation Officer of the Autorità per le Garanzie nelle Comunicazioni of Italy). He talked about the Italian experience as a best practice example. There is a new paradigm in place: the use of the IoT means that several different actors are involved in the collection and elaboration of data. They all have a common feature: they need a communications infrastructure to send data directly to the centre, to the cloud. The telecom regulator has to understand the need to work with different sectors. In conclusion, he gave three main examples of collaboration: the energy sector, with smart metering; the transportation authority; and a large inquiry into big data and the economy.


Stefania Grottola

The final session of the day brought together experts from the private and public sectors and academia. The focus of the session was to identify the next steps that have to be taken in order to improve national policies and strategies, create opportunities to implement ICT services for citizens, and generate social impact and economic development.

The session featured the speeches of Mr Mika Lauhde (Vice President Cyber Security & Privacy of Global Public Affairs, Huawei Technologies Co., LTD), Mr Dan Tara (Vice President of Positive Technologies), Dr Ram-Sewak Sharma (Chairman, Telecom Regulatory Authority of India (TRAI) of India), who introduced the audience to the concept of ‘electronic consent artifact’, Mr Jacques de Werra (Professor of Contract Law and IP Law, Vice Rector of the University of Geneva), and Mr Alan Gush (Senior Director of Cyber Solutions, Comtech Telecommunications Corp.).

The private sector stressed the contradictory situation in which regulators ask for secure networks but do not provide exhaustive guidelines on how to achieve them; operators are often not ready. From an academic perspective, the future of education is deeply connected with the future of work, and it is crucial to prepare students for the challenges they will face in the work environment. However, formal higher education could and should be complemented with self-study and certification.

The session was closed with a speech by Mr Yushi Torigoe (Deputy to the Director and Chief of the Administration and Operations Coordination Department at the ITU). He stressed the need for collaboration between different stakeholders to effectively tackle emerging issues. He proposed a three-pillar approach based on co-operation, collaboration and coordination, while recalling the five pillars on which the ITU’s work is based: legal, technical, organisational, capacity building and international.

Stefania Grottola

Opening Session

The Opening Session of the 2018 Global Symposium for Regulators (GSR-18) began with speeches from Mr Brahima Sanou (BDT Director of the International Telecommunication Union (ITU)), Mr Sorin Grindeanu (President of the National Authority for Management and Regulation in Communications (ANCOM) of Romania, and Chair of GSR-18), Ms Nerida O'Loughlin (Chair and Agency Head of the Australian Communications and Media Authority), Mr Mahmoud Mohieldin (Senior Vice-President of the World Bank Group), and Mr Manish Vyas (President of Communications, Media and Entertainment Business, and CEO of Network Services at Tech Mahindra). They introduced the topic of the symposium, New Regulatory Frontiers, by stressing the need to understand how information and communication technologies (ICTs) and Internet of Things (IoT) devices can change our daily lives while also posing important challenges: the application and implementation of new technologies affects everything in the daily life of people and businesses.

Session 1: AI and Cybersecurity – The State of Play

The first session of the Global Symposium for Regulators (GSR) focused on emerging technologies such as artificial intelligence (AI), both in terms of emerging threats and of vectors strengthening and improving the effectiveness of cyber-attacks. The session was moderated by Mr Joe Anokye (Director-General of the National Communications Authority (NCA) of Ghana), who introduced the discussion by exploring the current situation and the relationship between AI, the Internet of Things (IoT) and cybersecurity. According to Anokye, AI should be considered with regard to its application in IoT devices: AI allows IoT devices to be intelligent. However, attention should also be given to the occurrence of cyber-attacks, which have increased in the past two years. As a result, questions are arising related to the regulation of technologies that are still hard to understand.

The first panellist was Dr Kemal Huseinovic (Chief of the Department of Infrastructure, Enabling Environment and E-Applications, ITU/BDT). He talked about the dual-use nature of AI: indeed, AI can be used for good, as well as being a means for cyber-attacks. Thus, it is necessary to support research and engage with different stakeholders using a multistakeholder approach.

The second panellist was Mr Philip R. Reitinger (President and CEO of the Global Cyber Alliance). He argued that AI can improve the chances and abilities of the defender. To this extent, the notion of risk has to be contextualised. The risk of cyber-attacks is growing because of three factors: complexity, criticality and connectivity. The IoT is going to push these factors exponentially. He proposed thinking about security, not in terms of securing things, but in terms of securing the Internet and the network on which things work and are connected. He argued that the current use of the domain name system is a good way to protect IoT. Moreover, in the long term, there is a need for strong authentication, use of automation, and interoperability.

The third panellist was Mr Manish Vyas (President of Communications, Media and Entertainment Business, and CEO of Network Services at Tech Mahindra). He followed the line of the previous argument: using AI to enable IoT systems. Currently, there is consensus on taking advantage of technology and balancing its negative implications. He further argued that ‘the world of innovation has changed – has changed for good and forever’. However, there is a need to gain the trust of intermediaries.

The fourth panellist was Ms Giedre Balcytyte (International Development Director at NRD Cyber Security). She started by explaining the concept of cyber resilience and how essential it is to have infrastructure in place to rely on for resilience purposes. Technology is often used as a means for development and modernisation; however, it must be understood that technology does not tackle issues by itself. Moreover, in order to have an effective system in place, there is a need to emphasise the capacity of organisations and to understand that knowledge has to move and adapt faster.

The fifth panellist was Mr Serge Droz (Director of the Board, Forum of Incident Response and Security Teams). He talked about the danger of the evolution of large-scale attacks and the effects they could have. The human component in the management of response situations has to be strengthened, and it has to be strengthened through collaboration on a large scale. Indeed, communication is necessary because of the global scale and extension of the various issues.

The sixth panellist, Mr Neil Sahota (IBM Master Inventor and WW Business Development Leader, IBM Watson Group), followed along the same lines. He stated that risk does not necessarily have a negative connotation, and that the main danger we should consider is the possibility of creating an AI that is the ultimate hacker.

The final speaker was Mr Aleksandar Stojanovic (Executive Chairman and Co-Founder of AVA). He argued that the missing key element of collaboration is trust. The market is more and more fragmented, and the combination of AI and other technologies is to some extent extremely new. Thus, the question of trust is migrating to the hardware level. There is a need to trust the impressive amount of information and data coming in; ensuring the trustworthiness of information will become the pillar of trustworthy AI.

Replying to questions from the audience, the panellists argued in favour of a regulatory framework that merges bottom-up and top-down approaches, stating that a micro-regulatory framework for technology would be dangerous. Further issues discussed were the concepts of trust and the interoperability of devices, and the fact that a framework does not necessarily have to come from the regulatory side, but could also come from the market side.

The 18th Global Symposium for Regulators (GSR), organised by the International Telecommunication Union (ITU), will be held on 9–12 July 2018, in Geneva, Switzerland.

Under the overarching theme 'New regulatory frontiers', the symposium will feature discussions about the impact of digital transformation on consumers, businesses, and citizens, as well as on the expansion of regulatory frontiers beyond traditional telecommunications/information and communications technology. 

The symposium will include three thematic events:

  • Global Dialogue on AI, IoT and Cybersecurity – Policy and regulatory challenges and opportunities:
    Discussions will revolve around issues such as: the current situation and the relationship between artificial intelligence (AI), the Internet of Things (IoT) and cybersecurity, and the potential impact at the global level; the potential use of AI to secure ICT infrastructures and services, and the impact on the workforce and users; and privacy within the context of AI and the IoT.
  • Chief Regulatory Officials / Industry Advisory Group for Development Meeting
  • Regional Regulatory Associations Meeting

The GSR main sessions will focus on the following topics: emerging technologies for digital transformation; AI for development; regulation for the IoT, AI and 5G; the need for algorithm regulation and the importance of strengthening transparency and accountability; digital identity across different platforms; and the protection of personal data in a smart, data-driven economy.

For more information, visit the event website.

