AI for Good Global Summit 2018

15 May 2018 to 17 May 2018
Geneva, Switzerland

Event report/s:
Stefania Grottola

This session addressed the application of artificial intelligence (AI) for achieving the sustainable development goals (SDGs), and especially SDG 4: ‘Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all’. The session was moderated by Mr Fengchun Miao (Chief of Unit for ICT in Education, UNESCO). He argued that AI can remove economic, cultural and language barriers. However, we often talk about ’responsible AI’ in the developed world, forgetting that in order to address the ethics of AI, we should focus on equality in access to these tools. He recalled UNESCO’s strategy of investing in youth literacy and improving access to education. However, today more than 260 million children and youth are out of school, more than 600 million people have not reached the minimum proficiency level in reading and mathematics, and more than 20% of primary schools in sub-Saharan Africa do not have access to electricity. He noted that the main barriers to accessing education are connected to poverty, conflicts, natural disasters and gender discrimination. In developing countries in particular, literacy rates are low even among those who have graduated from school. Thus, before opening the floor to the panellists, he asked the guiding question of the discussion: How can we use and facilitate the use of AI to manage and optimise education? Furthermore, he stressed the problem of ethics in AI: many people think about the negative aspects of AI, and this hampers its application for education purposes. In response to these trends and challenges, UNESCO is taking action by developing norms and building capacity for developing countries, for example through guidelines.

The session then moved to the keynote speech by Ms Jayathma Wickramanayake (UN Secretary-General's Envoy on Youth), who talked about growing up in the AI era. She structured her speech around the topics of education, gender, equality and meaningful participation and governance. The world has the largest generation of young people it has ever seen. Furthermore, three-quarters of these young people live in developing areas and, owing to technological developments and tools, can benefit from a competitive advantage in understanding and responding to the complex dynamics of the world we live in. However, access to digital technology is limited in developing and least developed countries. Thus, the impact of AI on education should be looked at in relative terms due to the inequality in accessing these tools. She also stressed the need for the education system to adapt to technological developments to leave no one behind and to ensure continuing education. Human beings, ethics and values have to be at the centre of the discussion about AI. She then introduced the concept of ‘digital citizenship’, meant to improve transparency, protection and overall equality across all genders, ages, and regions. She also stressed the need to engage young people in the discussion about AI because they are ‘essential actors in finding a solution to the issues faced by the world today’. Young women are the most discriminated against in access to technology. Gender equality in the digital age can promote greater partnerships in the international community. She concluded her speech with the following main remarks: the importance of making technology accountable to the system of human rights; the need to strengthen access to education without discrimination; and finally, the need for all AI-related programming to take into consideration the needs of the youth.

The next speaker was Mr Jeon Gue Park (Principal Researcher and Project Leader, Electronics and Telecommunications Research Institute (ETRI)), who demonstrated two applications meant to provide English language services. The applications resulted from a joint effort of the Korean Ministry of Education and Ministry of Science and ICT. The Korean government wants equal education opportunities nationwide, but the situation is difficult due to a lack of human resources. In this regard, AI can be a realistic alternative for English and foreign language teaching. Thus, it is necessary to combine AI and the educational context.

Ms Bosen Liu (Founder, Ladder Education Group) talked about her team's role and aim of bringing the most up-to-date technology to serve the educational needs of the most isolated populations in the world. The goal is to provide opportunities to step out of isolation into the global labour market. Her team focuses on English literacy and sustainable development, and its target audience is girls and women facing discrimination. She then explained the tools developed, mainly featuring solar-panel hardware and offline software. Finally, she concluded her remarks by stressing that when applying technology in education, technology is never and should never be the core. Education is.

Mr Jonnie Penn (Google Technology Policy Fellow, Pembroke College at the University of Cambridge) shared some insightful data on the world we live in, and stressed that there is a poverty of imagination in the way we talk about data. Indeed, we refer to data as we would to oil, coal or gold, which are resources that have been exploited. However, he pushed the audience to think about data as infrastructure: something invisible that connects us and from which all of us, including industry, can benefit. He suggested that we think about data not in terms of ownership, but in terms of access and control. He then moved to the possible applications of AI: it can be used to identify patterns of inequality. Finally, he concluded his speech by advocating for the following developments: incorporating citizenship education into young people's syllabuses; pushing for digital literacy; and teaching technology through the lens of history.

The next speakers were Ms Elena Sinel and Ms Sara Conejo Cervantes (Artificial Intelligence Task Force, Teens in AI), who talked about their experience in involving youth in developing solutions to the challenges faced by our society. They stressed that the future is not going to be based on how much knowledge one has, but on how one applies it to real-life situations. It is necessary to teach young people how to use this knowledge and apply it to real life.

The final speaker was Mr Matt Keller (Senior Director of Civil Society, XPRIZE Foundation). In introducing the structure and work of the foundation, he raised the question of whether the power of the crowd can be used to solve problems. Can answers come from the most unlikely places? He explained the guiding principles of their project on how to use technology for good in order to reach young people who cannot access these tools. They are working with the United Nations World Food Programme, UNESCO, the government of Tanzania, and Google, to test and prove the supposition that children on their own can teach themselves how to read, write and do basic math. He then launched a video that explained step by step the approach behind the tools used and the challenges faced, mainly regarding the need for personalised education.

Cedric Amon

The panel discussion was co-organised by the ITU and the UN Institute for Disarmament Research (UNIDIR) and moderated by Mr Thomas Wiegand (Professor, TU Berlin, Executive Director, Fraunhofer Heinrich Hertz Institute).

In his opening remarks, Wiegand mentioned two challenging aspects for the development of artificial intelligence (AI): the engineering perspective and the ethics perspective. AI should reflect what society expects from it, but it must also come equipped with important safety measures.

Mr Robert Kirkpatrick (Director, UN Global Pulse) opened the discussion by introducing five tools regarding refugees. These tools range from recognition software able to identify xenophobic content about refugees on social media, to early warning systems of vessels in the Mediterranean, to satellite imagery support.

The UN Global Pulse has been working on guidelines for the use of AI, which have been adopted by a variety of UN agencies. For Kirkpatrick, the widely accepted principle of ‘do no harm’ has two aspects that need to be taken into account in the development of AI tools. The first implication is that no direct harm should come from the use of a particular technology. But more importantly, the principle requires that every reasonable step to prevent harm from happening be undertaken. So far, privacy regulations fall short of establishing a satisfying level of protection. Indeed, the regulation of nuclear technology could serve as an example of how to regulate the use of AI.

Mr Rob McCargow (Programme Leader, Artificial Intelligence and Technology, PwC UK) foresaw that the greatest impact of AI on society will come when the private sector widely adopts AI technology. Its application will then range from the medical sector to the financial sector and truly change society.

He cited some figures from PwC’s CEO survey, which is presented at Davos every year, showing that:

  • 72% of CEOs believe that AI will be a business advantage in the future; and

  • 67% of CEOs believe that AI will have a negative impact on stakeholder trust.

Thus, according to the speaker, the cause of AI for good would be severely damaged if the disruptive aspects of AI are not addressed early on and concerns gain traction. He further noted that AI will fail if it is viewed solely as a standalone technology development. AI, alongside other technologies, will have severe workforce implications, and companies therefore need to prepare for it in a multidisciplinary and multistakeholder fashion.

Once it can be guaranteed that AI is safe, it will unlock its potential for good. McCargow said that, so far, there is not enough appropriate governance in businesses to ensure that the right questions are asked about the use and implementation of new technologies.

Mr Wojciech Samek (Head of Machine Learning Group, Fraunhofer Heinrich Hertz Institute) noted that one of the challenges of embracing AI for good stems from the fact that we do not understand how and why AI arrives at certain conclusions. To some extent, it can be viewed as a black box, where we fail to understand why certain methods work or fail. In order to build trust, it is therefore important to know and understand how these processes work and provide researchers with tools to interpret AI-generated results.

The interpretability of outcomes is also very important in terms of legal accountability and scientific progress. Results obtained through the use of AI need to be explainable and reproducible in order to unfold their full potential.

Tentative steps in that direction have been taken by Samek and his team, who developed an application that visualises how AI image recognition operates. The AI algorithms were fed images of animals to be recognised and classified automatically by the software. Through the application, the researchers were able to identify the areas of the image that the algorithm had analysed to recognise the animal. They discovered that the software did not analyse the shape and features of the depicted animal, but rather scanned the small copyright signs at the bottom of the images, a spurious shortcut that the deep learning system had picked up during training. Samek pointed to the importance of being able to verify the predictions made by AI and to know how it comes to its conclusions.
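
To make the kind of verification Samek described more concrete, the sketch below implements occlusion-based sensitivity analysis, one simple and generic way to visualise which image regions drive a classifier's prediction. It is an illustrative approximation rather than the method used by Samek's team (whose work centres on more sophisticated techniques such as layer-wise relevance propagation), and the `model` object, image shape, and class index are assumptions.

```python
# Illustrative sketch (not Samek's actual method): occlusion-based
# sensitivity analysis. Slide a grey patch across the image and record
# how much the classifier's confidence in the target class drops; large
# drops mark the regions the model actually relies on, which is how a
# spurious cue such as a copyright watermark can be exposed.
import numpy as np

def occlusion_map(model, image, target_class, patch=16, stride=8):
    """`model.predict` is assumed to map a (1, H, W, C) array to class
    probabilities; `image` is an (H, W, C) float array in [0, 1]."""
    h, w, _ = image.shape
    baseline = model.predict(image[None])[0, target_class]
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            occluded = image.copy()
            y, x = i * stride, j * stride
            occluded[y:y + patch, x:x + patch, :] = 0.5  # grey square
            score = model.predict(occluded[None])[0, target_class]
            heatmap[i, j] = baseline - score  # high value = important region
    return heatmap
```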

During his speech, Mr Toufi Saliba (AI Decentralized) indicated that the way in which we judge the data will always be subjective. Our expectations of the outcome will always be biased in a certain way, and we therefore have to look at feeding the learning patterns more precise data to teach the software how to come to our expected conclusions.

The criteria for AI’s operability should thus not be solely result-oriented but instead, should be focused on the input we provide it with.

Saliba further questioned our understanding of AI by asking what the audience would consider to be AI before stating that Bitcoin could be considered a form of AI because of its modus operandi: a machine that is incentivised to compete for resources and is not owned or directly controlled by humans.

According to Saliba, the question of regulating AI is central because it will define whether AI can liberate humanity or become one of its greatest challenges. Ethical considerations therefore need to be built in from the very inception of AI systems.

Mr Andy Chen (VP of Professional and Educational Activities, Institute of Electrical and Electronics Engineers (IEEE) Computer Society) spoke about the necessity of incentivising young professionals to build ethics into their AI developments.

He then introduced the Mind AI project, a linear qualitative research process whose results can be easily traced back by the researchers, and which works on the basis of natural languages. Through this open-source project, accessible to everyone, AI will help to democratise progress.

He informed the audience about some ethics projects surrounding AI from Stanford University in the US and the IEEE’s global initiative on ethical design for autonomous and intelligent systems. The IEEE’s initiative has launched a call for papers for its second edition.

Ms Susan Oh (Chair of AI, Blockchain for Impact UN GA, Founder & CEO, MKR AI) briefly introduced MKR AI which she developed as a fact-checking system that tracks patterns of deception. The platform operates through input from users who validate or invalidate information that has been analysed on the website. If certain facts or methodologies are proven to be less accurate than those of the platform, users are rewarded with tokens.

The speaker noted that machine learning and AI will heavily rely on blockchain as they progress. On the other hand, blockchain also needs AI in order to validate or signal anomalies of the ledger.

Furthermore, if people have sovereignty over their data, they can volunteer to share their data and be rewarded for it in the form of tokens that could be used for their personal benefit. This way, AI evolution would be easier to regulate than through existing methods such as hard laws because regulations tend to be unable to determine what to track and are difficult to enforce.

According to Oh, tokenising societal processes benefits the development of AI because it helps AI better understand human interactions all the while benefiting all the parties involved. AI systems in combination with cryptocurrencies and other types of blockchains will provide a more transparent way of operating within society and incentivise collaboration among users.

Katharina E Höne

In his introduction to the session, Mr Houlin Zhao (Secretary-General of the ITU) highlighted the ITU’s connection to space and the relevance of space for telecommunications, and expressed the ITU’s commitment to opening new opportunities for space exploration. He quoted Valentina Tereshkova, the first woman in space, who said that ‘a bird cannot fly with one wing only’, and reminded the audience of the Chinese saying that women hold up half the sky. Thus, he stressed that the active participation of women is needed in space exploration.

The first speaker, Ms Anousheh Ansari (Member & Chair of Management, XPRIZE Foundation Board of Directors, Space Ambassador), shared her very personal experience of becoming the first female private space explorer. As a young girl growing up in Iran, becoming an astronaut seemed impossible. Yet, after successfully selling her own company and starting to work with the XPrize Foundation, she fulfilled her dream by going to the International Space Station. In her speech, she also stressed the importance of democratising space, a key aim of the XPrize Foundation.

Ms Liu Yang (Pilot, Astronaut, and first Chinese woman in space) reflected on her own experience of becoming an astronaut and working on the Chinese space station, Tiangong-1. She argued that artificial intelligence (AI) is crucial for anticipating developments in space and supporting future (human) missions. While the essence of human space exploration will be improved through AI, the human astronaut can never be replaced.

Ms Samantha Cristoforetti (Astronaut, Pilot, and first Italian woman in space) shared her childhood experience and journey to becoming an astronaut. She then added reflections on AI and space exploration and stressed that ‘AI is pervasive in everything we do in space’. She mentioned, for example, satellite data, noting that the European Space Agency is interested in leveraging the potential of AI to make this data more usable. She also pointed out that robotic precursor missions will precede human missions to the moon and eventually to Mars.

All three women were presented with the World Telecommunication and Information Society Day Award by Zhao. In addition, Zhao awarded Dr Marko Jagodic an ITU 50-year medal for his contribution to the ITU.

Katharina E Höne

Mr Kenny Chen (Innovation Director, Ascender) moderated the debate, which focused on sharing key lessons from the four tracks of the conference.

Dr Stuart Russell (Professor of Computer Science, University of California, Berkeley) summarised the ‘AI + satellites’ track by highlighting four broad areas of projects: a) predicting deforestation before it occurs, b) tracking livestock to reduce cattle raiding, c) implementing capabilities to ensure micro-insurance, and d) providing an infrastructure platform to deliver continuous, permanent global services based on the autonomous analysis of satellite data. He also stressed that while there are many laudable pilot projects, there is a gap between these projects and the availability of the services to a majority of people, on a global scale. Hence, in order to ensure an easier transition from pilot projects to global services, he suggested building one single platform to facilitate this.

Dr Ramesh Krishnamurthi (Senior Advisor at the World Health Organization) summarised the findings from the ‘AI + health’ track. He outlined four work streams: a) AI for primary care and service delivery, b) outbreaks, emergency response, and risk reduction, c) health promotion, prevention, and education, and d) AI health policy. He then described the 15 projects that the group discussed throughout the conference: AI to detect vision loss, detection of osteoarthritis, AI and digital identity, an AI-based health portal, AI-powered health infrastructure, AI-powered public health messaging, AI-powered epidemic modelling, malnutrition detection based on images, AI-based child growth monitoring, strengthening the coordination of AI-powered resources, AI to improve predictive abilities based on EMR data, AI for public health in India, pre-primary care with AI, AI-powered snake bite identification for first responders, and AI-based social media mining to track health trends.

Dr Renato de Castro (SmartCity Expert) summarised the ‘AI + smart cities and communities’ track. He highlighted three key areas of this track. First, AI used for urban solutions should give voice to citizens in order to co-create their cities. It should also counter harassment and abuse. Second, AI should be used to foster smart governments. Examples of this came from Amsterdam and Brazil, and de Castro stressed that the experience of Amsterdam shows that being allowed to fail and learning from failure is a very important feature. Third, AI can be used to empower smart citizens. Many examples came from Barcelona, which focuses on using AI to empower people, not to replace them. Overall, de Castro stressed that it is important to focus not only on cities but also on the regions surrounding them. This was an important lesson from the African context, where it is crucial that benefits are shared across the region so that citizens can benefit without moving to the city.

Also speaking about the findings of the ‘AI + smart cities and communities’ track, Mr Alexandre Cadain (CEO at Anima, Ambassador AI XPRIZE) identified some of the key questions and challenges ahead. First, he argued that it is important to counter the fear, and the risk, that all smart cities will eventually look alike; tailored solutions that recognise history, cultural heritage, and linguistic diversity are important. Second, it is also important to get away from a top-down approach and to begin to view citizens as the problem owners who can identify areas of need and possible solutions. Third, connections and knowledge sharing between emerging smart cities are needed; as such, an ‘Internet of cities’ might be required.

Dr Stephen Cave (Executive Director of the Leverhulme Centre for the Future of Intelligence, University of Cambridge) summarised some of the findings of the ‘Trust in AI’ track. He outlined four crucial tasks for the future: addressing gender imbalances, reaching marginalised communities, addressing structural inequalities, and decolonising AI. He also identified three important themes of the track. First, developers must earn the trust of the stakeholder communities that are affected. Second, there is a need to build trust across borders. Third, AI systems must be demonstrably trustworthy. In addition, he highlighted that broader outcomes of the track include the realisation that the ideas of trust and trustworthiness need to be interrogated in order to find a common frame of reference; the importance of recognising cultural differences; and the importance of recognising and fostering diversity.

Dr Huw Price (Professor of Philosophy at the University of Cambridge and Academic Director of the Leverhulme Centre for the Future of Intelligence, University of Cambridge) and Dr Francesca Rossi (Research Scientist at the IBM T.J. Watson Research Centre and Deputy Academic Director of the Leverhulme Centre for the Future of Intelligence, University of Cambridge) emphasised that it is important to create and use synergies and enable everyone to be aware of and learn from existing projects. In order to achieve this, they introduced Trustfactory.ai, which they envision will address some of the concerns discussed in the track.

During the Q&A, Mr David Jensen (Head of Environmental Cooperation for Peacebuilding Programme at UNEP) mentioned the ‘planetary dashboard for global water monitoring’, which is a new partnership between UN Environment, Google, JRC, ESA, and NASA. The Q&A also raised the important question of how to meaningfully engage with GAFA (Google, Apple, Facebook, Amazon), which was addressed with a reference to creating diversity and implementing multistakeholder approaches.

Katharina E Höne

Dr Jess Whittlestone (Postdoctoral Research Associate, Leverhulme Centre for the Future of Intelligence, CFI, Cambridge) spoke about bridging the policy-technical gap for trustworthy AI. She stressed the importance of policy in shaping the way technology is used and the environment in which it is used.

She argued that AI policy-making is different from policy-making in other areas of science and technology because it needs to be much more focused on anticipating challenges. The related pitfall is twofold: policy should not be too reactive, but at the same time, it should not fall victim to the hype.

Whittlestone suggested that broad and general thinking is needed that recognises the complexities of the societies and the environments in which technology is used in order to establish policy. In order to achieve this, inputs from a wide range of stakeholders are needed. While technical experts cannot answer these questions alone, it is also obvious that there are few senior policy makers with the necessary technical expertise. These two communities need to improve their communication and tackle the challenges arising from the very different languages they speak. In this regard, we also need to ask what level of technical understanding is needed for policy makers to be able to ask and answer the right questions.

Whittlestone suggested a number of ways to bridge the policy-technical gap: digital and technical training for policy makers, digital coaches for members of parliament (MPs), data ethics frameworks within governments, and scientific advisors in government.

She also stressed that terms such as trust, fairness, privacy, and transparency mean different things to different groups of people and are discussed in a variety of ways in relation to technical solutions. It will be important to connect the various communities to bridge the gaps in mutual understanding.

The next speaker, Dr Rumman Chowdhury (Senior Principal of AI, Accenture), spoke about ‘Trustworthy data: creating and curating a repository for diverse datasets’. She highlighted that in a number of cases, biases already come in at the stage of data collection. For example, AI that engages in natural language training based on broad input from the Internet often results in sexist AI. Similarly, because of a lack of diversity in the data sets used to train facial recognition AI, this AI often works best for white and male persons while struggling with the rest of the population.
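
A standard first diagnostic for the kind of bias Chowdhury described is to disaggregate a model's performance by demographic group rather than reporting a single aggregate accuracy. The minimal sketch below illustrates this; the DataFrame and its column names are hypothetical, not part of any specific toolkit.

```python
# Minimal sketch of disaggregated evaluation: a single aggregate accuracy
# can hide the fact that a model works best for one group (e.g. white,
# male faces) while failing for others. Column names are hypothetical.
import pandas as pd

def accuracy_by_group(results: pd.DataFrame) -> pd.DataFrame:
    """Expects columns 'gender', 'skin_type', 'label', 'prediction'."""
    results = results.assign(correct=results['label'] == results['prediction'])
    return (results.groupby(['gender', 'skin_type'])['correct']
                   .agg(accuracy='mean', n='count'))

# A large accuracy gap between groups signals under-representation in the
# training data and argues against deploying the model as-is.
```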

As one solution, Chowdhury and her collaborator suggested building a repository for open data. Data scientists need to rely on ‘what is out there’, and the challenge with open data and ‘available data’ approaches is convincing people to make part of their data open. In order to work towards the repository, trust building and ethical principles need to be built into the process from the very beginning. Consent is of course an important aspect. However, with the rapid developments in AI, she argued that complications arise if people are asked to consent for their data to be used for purposes yet unknown.

Chowdhury and her collaborator argued that the question of what trustworthy data is does not have an easy answer. However, they noted that the AI hype sometimes leads researchers and developers to disregard the basic principles of data collection. Similarly, they stressed that data collection is affected by policies: changes in policies can change the data available and introduce further biases into the algorithm, which then needs to undergo several further development iterations before it yields useful outcomes.

Chowdhury also stressed that bias can come from other sources, not just data, but also the data scientists. This includes collection biases, measurement biases, and contextual society biases. In the Q&A part of the session, Chowdhury and her collaborator stressed that the focus is not on creating non-biased data, which is impossible given how contextual bias is.

Dr Krishna Gummadi (Head of Networked Research Systems Group, Max Planck Institute for Software Systems) focused on the question of assessing and creating fairness in algorithmic decision-making. He used the example of algorithms that are used in the US justice system (such as COMPAS) to assess the likelihood of relapse into criminal behaviour. These algorithmic predictions then play a role in making decisions about granting bail.
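
One widely used way to probe the fairness of such risk scores is to compare error rates across demographic groups, for instance the rate at which people who did not reoffend were nevertheless flagged as high risk. The sketch below shows this check in its simplest form; the threshold and data layout are illustrative assumptions, not details of COMPAS.

```python
# Minimal sketch of one common fairness check for recidivism risk scores:
# compare false positive rates (people flagged high-risk who did NOT
# reoffend) across groups. Threshold and data layout are assumptions.
from typing import Dict, List, Tuple

def false_positive_rate(cases: List[Tuple[float, bool]],
                        threshold: float = 0.5) -> float:
    """`cases` holds (risk_score, reoffended) pairs for one group."""
    negatives = [score for score, reoffended in cases if not reoffended]
    if not negatives:
        return 0.0
    flagged = sum(1 for score in negatives if score >= threshold)
    return flagged / len(negatives)

def fpr_by_group(groups: Dict[str, List[Tuple[float, bool]]]) -> Dict[str, float]:
    # Equal FPRs is only one of several mutually incompatible fairness
    # criteria, which is exactly why perceptions of fairness matter.
    return {name: false_positive_rate(cases) for name, cases in groups.items()}
```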

Gummadi and his collaborators were interested in perceptions of fairness in relation to these algorithms and conducted surveys with people affected as well as with the general population. In broad terms, perceptions of what is fair were similar among respondents. However, differences emerged with regard to the relevance and reliability of some of the questions. For example, there was no agreement among those surveyed on whether the criminal history of parents or the behaviour in the defendants’ youth should play a role in the assessment. The survey also showed that the causal mechanisms between these factors and the likelihood of relapse were assessed in diverse ways. One interesting finding of Gummadi and his collaborators is that differences in political position (liberal vs. conservative) lead to differences in the extent to which behaviour is viewed as volitional or as a feature of the environment and social group membership.

One conclusion is that it seems difficult to find agreement among survey respondents on the causal mechanisms that underlie algorithmic decision-making in this example. This raises the question of the extent to which we can actually settle societal disagreements in moral reasoning in order to build algorithmic decision-making tools.

Sorina Teleanu

Mr Frits Bussemaker (Chair, Institute for Accountability and Internet Democracy), acting as moderator, opened the session and explained that the aim is to showcase examples of how countries and organisations are approaching artificial intelligence (AI).

Dr Ahmed Al Theneyan (Deputy Minister for Technology Industry and Digital Capacities, Ministry of Communications and Information Technology, Saudi Arabia) started his intervention by underlining that technology is a key enabler for development. This is why Saudi Arabia has elaborated a comprehensive digitalisation strategy, which is built on several pillars: building resilient infrastructures to support all new technologies, developing the digital skills of the population (with a focus on youth), supporting innovation and entrepreneurship (through, for example, facilitating access to open data), and developing efficient electronic government services. The strategy focuses on promoting sustainable cities and communities, citizens’ health, decent work, economic growth, and gender equality, among other issues. Against this backdrop, the country is exploring the use of AI in innovative, responsible, and ethical ways, while supporting the development of this technological field, through key enablers: governance and legislation, investments, talent, and innovation.

Al Theneyan underlined that Saudi Arabia places high importance on revamping the education system to match technological progress: digital competencies are introduced in the curricula of primary schools, while universities create dedicated programmes and career paths focused on new technologies such as AI. The overall objective of these initiatives is to prepare the young generation for the skills needed in the future. In addition, Saudi Arabia has understood the importance of empowering more women to take active roles in technology fields. Its goal is to double the number of women in information and communication technologies (ICTs) over the medium term, and several programmes have been launched in this regard, mostly in collaboration with universities. Aiming to become one of the most attractive destinations for innovators and entrepreneurs, the country is building a network of innovation centres and tech accelerators to support these goals.

Amb. Amandeep Singh Gill (Permanent Representative of India to the Conference on Disarmament, and Member of the Task Force on AI for India’s Economic Transformation) argued that, while there is value in exploring the notion of beneficial AI, we should keep in mind that technology has multiple purposes and can be repurposed. He then went on to outline the work carried out by India’s Task Force on AI. Tasked with determining India’s vision with regard to AI, the task force reached several conclusions: AI should be treated as a tool for problem solving at scale; the country’s governance approach should be agile, sensitive and rooted in real needs; and enablers and safeguards should be put in place to avoid a backlash against AI, which would set the country back many years. Such enablers include expertise and awareness on AI, a positive social attitude and trust in AI, data literacy and policies for the proper use of data, and leveraging indigenous digital assets and local use-case scenarios. The task force took the approach of cautious optimism to the overall impact of AI on jobs, and outlined the need for AI to be transparent, explainable, and auditable. It also recommended the creation of a National AI Mission to coordinate AI-related activities in India and build public-private partnerships around concrete AI projects.

Singh Gill concluded his intervention by noting that collaboration and investments are key to supporting a country’s efforts to advance in the field of AI. Investments must be interdisciplinary, and all stakeholders need to be able to contribute to defining governance frameworks for AI.

Mr David Li (Founder, Shenzhen Open Innovation Lab) discussed the open nature of AI innovation. He noted that innovation is driven by access to knowledge, technology, and means of production. If these three elements are in place, innovation ‘can happen in the street’, and one does not have to be a large company to be able to innovate. To illustrate the concept of ‘AI from the street’, Li gave several examples of projects, such as an initiative focused on the development of machine translation tools within the framework of a school which teaches programming to refugees, and a Tibetan Buddhist centre dedicated to teaching young monks about digital technologies.

According to Li, beneficial AI will not necessarily come from Silicon Valley or from Shenzhen, but rather from every street corner where people leverage resources to help their neighbours and communities. This is why we should see AI as a global resource and encourage people to use AI to create solutions to the problems faced by their communities.

During the moderated discussion that followed, a point was made that a smart city is much more than technology: it is community, common space, and governance. AI’s potential lies in its ability to bring people together around problems and problem solving. The success of smart cities will depend on three main elements: a proper understanding of the technology, good collaboration among multiple stakeholders, and smart investments. Technology in itself will not deliver solutions, unless the environment around it enables this. Another concluding remark was that AI needs to be designed in such a way that there is transparency and understanding around it. We need to ‘take machines to schools and to streets’, to make everyone feel like they are part of the AI evolution. This, combined with proper governance and collaboration, will allow opportunities to be leveraged and risks mitigated.

Barbara Rosen Jacobson

This session explored the need for a common framework for data and artificial intelligence (AI), allowing stakeholders to work together to make AI for good a reality. The moderator, Mr Amir Banifatemi (AI Lead at XPRIZE Foundation) reminded the participants about the twofold aims of the summit: identifying practical applications of AI to accelerate progress towards the sustainable development goals (SDGs), as well as formulating strategies to ensure the trusted, safe, and inclusive development and dissemination of AI technology, and the equitable access to its benefits.

Connecting remotely, Mr Wendell Wallach (Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics) recommended focusing on agile and comprehensive governance for AI to ensure that its adoption benefits humanity and minimises its potential harms. Comprehensive governance, ranging from technological solutions and standards to corporate oversight and soft law, can provide an agile way of managing this challenge. In this context, Wallach presented the Building Global Infrastructure for the comprehensive governance of AI (BGI for AI) initiative to resolve not only the technological, but also the political and practical challenges raised by AI through agile, comprehensive governance.

After providing an overview of UN initiatives which touch upon AI-related issues (the High Level Committee on Programmes, the Internet Governance Forum, and UN-DESA’s Forum on Science, Technology and Innovation), Mr Vincenzo Aquaro (Chief of Digital Governance, Public Institutions and Digital Government Division, United Nations) explained that this global summit is already one of the most important international forums on AI, due to its multistakeholder and multidisciplinary nature and especially its aim to develop – rather than report about – concrete initiatives. Aquaro reminded the participants of the SDGs’ mission to leave no one behind, which should be applied to the work of the AI community as well, to be able to support the creation and promotion of AI solutions for the common good. AI should be a ‘universal resource for all of humanity, to be equally distributed, available to everyone, no matter the level of development and capacity’. Yet, he noted that one of the biggest challenges is to create a common framework to regulate the proper use of AI without stifling innovation, and addressing this challenge requires the involvement of all stakeholders.

Banifatemi then presented a common platform for AI for good, which would facilitate the collaboration between AI practitioners and ‘problem owners’ (governments, civil society, domain experts, etc.) and provide solutions in a systematic manner, moving beyond pilots and individual projects. Mr Stuart Russell (Professor of Electrical Engineering and Computer Sciences, UC-Berkeley) added that this collaboration between problem owners and engineers, and the convergence of pilot projects into global services, was the main stumbling block identified by the AI + Satellites track. Projects often result in publications that are filed away, while real problems on the ground continue to persist. As this is a common challenge among almost all AI projects, we need to develop standardised ways of collaboration and ‘shepherds’ with experience to avoid the roadblocks that AI researchers are not equipped to anticipate. After all, AI for good is not just a technical issue, but also has governance and sociological dimensions requiring different kinds of expertise.

Mr Trent McConaghy (Founder, Ocean Protocol; Founder & CTO, BigchainDB) presented a framework for AI Commons, a scalable, decentralised platform that brings together problem owners, AI practitioners, and suppliers of data and infrastructure. The platform contains a variety of data sources, provides incentives to share data, includes privacy provisions, and has built-in mechanisms for data governance (e.g. permissions, labels, and ontologies) and interoperability. McConaghy concluded that the SDGs are a great way to summarise global problems, and that a high-level approach to addressing them with AI would benefit from a common platform, which is not just something that could hypothetically be built, but is already in the process of being constructed.

Ms Francesca Rossi (Research Scientist at IBM Research and Professor at the University of Padova) highlighted the need for public involvement in creating AI, as AI will impact everybody. Besides practitioners and problem owners, it is important to include researchers, social scientists, data subjects, and policymakers. In addition, they need to be representative of different cultures, genders, disciplines, and stakeholders. Rossi emphasised the need for trustworthy AI, which should take into account fairness, values, explainability, and ethics; this work needs to be done in collaboration with existing initiatives around AI ethics and trust.

Mr Chaesub Lee (Director of Telecommunication Standardization Bureau, ITU) closed the session by highlighting the urgency of working towards AI for good, as AI technologies risk being hijacked by those using them with bad intentions. In addition, he reiterated the identified need for smoother transitions from pilot projects to global services.

Barbara Rosen Jacobson

Can artificial intelligence (AI) help predict the spread of diseases? Can machine learning help responders better allocate resources in emergencies? These questions were raised by the moderator, Mr Dominic Haazen (Lead Health Policy Specialist, World Bank), to introduce this session, which addressed the potential of AI in the context of epidemics and emergency response.

Mr Ingmar Weber (Research Director for Social Computing, Qatar Computing Research Institute) explored the potential of social media to provide targeted advertising for public health campaigns. Whereas social media is currently predominantly used by public health agencies to broadcast messages, often ‘preaching to the choir’, there is potential to adapt messages to different groups, for example based on age, gender, marital status, location, education level, or interests. While bearing in mind privacy concerns, this allows for the distribution of the right message to the right person at the right time in a very cost-effective way.

Ms Jeanine Vos (Head of SDG Accelerator, GSMA) highlighted the potential of mobile big data to accelerate impact in the context of the sustainable development goals (SDGs), as it can create powerful insights about the location and movement of populations. For example, mobile data could detect the movement of internally displaced persons after an earthquake or the spread of a disease, especially when combined with other sources of data. GSMA’s Big Data for Social Good project explores these opportunities and places them in a consistent framework of best practices.

Ms Clara Palau Montava (Technology Team Lead, UNICEF) presented some of the work of UNICEF’s innovation unit. For example, responding to the 2015 Ebola crisis, it launched an open source messaging platform and worked with mobile operators to detect patterns of movement, extrapolating the spread of the epidemic. In the context of the Zika crisis, the agency combined various data sources, such as mosquito prevalence, poverty, and weather data, to estimate the disease’s dynamics. Montava emphasised that there is a continued need for scientific studies to better understand the bias behind these methods, especially if they are to be combined with machine learning. In addition, innovation in emergency response requires collaboration among organisations, and cannot be done by one agency alone.

Ms Anita Shah (Managing Director of the Kenya office of Kimetrica) presented the Method for Extremely Rapid Observation of Nutritional Status (MERON), aimed at detecting malnutrition in children during humanitarian emergencies on the basis of facial recognition technologies. Traditional methods of measuring the nutritional status of children in emergency settings are plagued by a number of challenges, such as the skills required of the researchers, the bulky equipment that needs to be transported, and the degree of physical contact between the researcher and the child. The method can assist in the timely identification of children in need of nutrition support. The project is intended to scale up and be tested in different countries and emergency contexts.

Mr Marcel Salathé (Professor & Head of the Digital Epidemiology lab, École Polytechnique Fédérale de Lausanne (EPFL)) explained how health trends could be tracked using crowd-sourced social media data combined with machine learning. EPFL’s ‘Crowdbreaks’ monitors patterns in diseases in real-time across countries by collecting tweets with keywords that could be relevant to specific health issues. The algorithm is continuously updated with newly labelled tweets and feedback from users. As the application of AI to such projects often involves many actors, Salathé emphasised the need to harmonise the diverse incentives of different entities, adding that the failure of some projects is not due to a lack of ‘good’ incentives, but rather, due to their misalignment.
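
As a rough illustration of such a pipeline, the sketch below filters tweets by health-related keywords and trains a relevance classifier that can be refitted whenever users supply new labels. The keyword list, model choice, and data shapes are assumptions, not Crowdbreaks' actual implementation.

```python
# Illustrative sketch of a Crowdbreaks-style loop: keyword-filter a tweet
# stream, classify relevance with a simple model, and retrain as users
# label new tweets. Keywords and model choice are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

KEYWORDS = {'flu', 'fever', 'vaccine', 'measles'}  # hypothetical list

def keyword_filter(tweets):
    """Keep only tweets containing at least one tracked keyword."""
    return [t for t in tweets if KEYWORDS & set(t.lower().split())]

def train_relevance_classifier(texts, labels):
    """Labels come from human annotators; call again as labels accrue."""
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression(max_iter=1000))
    clf.fit(texts, labels)
    return clf
```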

Mr Jochen Moninger (Head of Innovation, Welthungerhilfe) focused on the potential of detecting child malnutrition using AI. He pointed out that nutrition during a child’s first five years is crucial for its development, and that while there is enough food in the world, it is not well distributed; ‘we don’t know where to bring it’. Welthungerhilfe’s tool is designed to identify malnutrition through a mobile app that uses augmented reality in combination with AI, calculating a child’s weight and height through a 3D scan. This allows for a rapid response in areas where malnutrition is prevalent; swift action is of vital importance, given the negative impact of sustained malnutrition on the rest of a child’s life.
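
Once height and weight have been estimated from the scan, the final classification step is conventionally based on the WHO weight-for-height z-score. The sketch below shows only that last step, assuming the reference median and standard deviation are looked up in the WHO growth-standard tables; it is a simplification, not the Welthungerhilfe app's actual implementation.

```python
# Minimal sketch of the conventional last step after height and weight
# are estimated: the WHO weight-for-height z-score (WHZ). The reference
# median and SD are assumed to come from WHO growth-standard tables.
def weight_for_height_z(weight_kg: float, ref_median_kg: float,
                        ref_sd_kg: float) -> float:
    return (weight_kg - ref_median_kg) / ref_sd_kg

def classify_wasting(whz: float) -> str:
    """Standard WHO cut-offs for acute malnutrition (wasting)."""
    if whz < -3:
        return 'severe acute malnutrition'
    if whz < -2:
        return 'moderate acute malnutrition'
    return 'not wasted'
```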

Barbara Rosen Jacobson

This session took stock of the role of data across the breakout themes discussed over the previous days, and it proposed a common framework for the way in which data can be addressed in the context of AI for good. The moderator, Mr Amir Banifatemi (AI Lead at XPRIZE Foundation), introduced the session and passed the floor to Mr Omar Bin Sultan al Olama (Minister of State for Artificial Intelligence, United Arab Emirates).

Bin Sultan al Olama noted the importance of having gatherings like these, as it is only through collaborating and unifying resources and knowledge that we can obtain the benefits of AI. He furthermore mentioned the Global AI Governance Forum, established by the United Arab Emirates, which brings together AI experts to discuss how to govern AI to be able to reap its benefits while avoiding its potential negative impacts.

Next, Mr Urs Gasser (Executive Director of the Berkman Klein Center for Internet & Society at Harvard University) explained that ‘AI for good is only possible when we have data for good’. A team of rapporteurs tracked the conversations in the plenary and breakout sessions of the conference to distil common themes related to data and to work towards a first framework for building data commons for AI for good. This framework provides a horizontal view across the vertical tracks of the conference, and consists of six layers:

  1. Technical infrastructure: servers, clouds, decentralised ledgers, etc.

  2. Data: qualitative/quantitative, structured/unstructured

  3. Formats and labels: metadata, taxonomies, interoperability

  4. Organisational practices: collaboration, incentives

  5. Institutions, law, and policy: accessibility, privacy, human rights

  6. Humans: knowledge and education

Rapporteurs from each of the tracks provided examples of ideas that fit within the framework:

  • AI + Satellites: there is a need for more on-the-ground data that is standardised and geo-referenced to be paired with satellite data.

  • Trust in AI: there is a need for greater transparency so that those who use the data know how, when, and by whom the data was collected. For example, labels similar to food nutrition labels, readable by humans or machines, would help prevent the data from being used in inappropriate ways or in ways that introduce unintentional bias.

  • AI + Smart Cities and Communities: there is a need to gather data from experiments and best practices, publicly accessible, so that everyone can take part in the ethical design of solutions to urban and community problems.

  • AI + Health: there is a need for transparency in diagnostic e-health tools, which are sometimes used as ‘substitute doctors’: how do they arrive at their decisions?

Gasser added that ideas, projects, and best practices could be categorised in a common analytical framework, to be able to understand what works best in what context.

Banifatemi then moved the discussion towards private and public data: what should be open and what should be controlled? Bin Sultan al Olama suggested that the answer to this question needs to be found in consultation with citizens, integrating their preferences related to the collection and use of their data.

Asked about ways to standardise data, Mr Chaesub Lee (Director of Telecommunication Standardization Bureau, ITU) noted the large variety of data types that can be distinguished, raising questions related to their interoperability. These questions are currently explored by an ITU focus group. Furthermore, he voiced his concern related to the exchange of data and the lack of transparency of how much data is shared, and with whom. While data sharing is essential for smart operations, it requires adequate protection. In addition, one of the participants in the audience suggested that we need to think about making the potential of AI available for all, preventing skewed distributions of its benefits.

In their concluding remarks, Bin Sultan al Olama emphasised the utility of platforms like the UN to push countries towards sharing data, and Lee added that it is the role of UN agencies to ensure the use of data for good. Gasser highlighted the importance of powerful narratives that demonstrate the potential of unlocking data silos. Finally, the rapporteurs of the four thematic tracks stressed the continued importance of qualitative data, of building a community around data commons, of demystifying stories behind AI, and of working on applied problems rather than abstract concepts.

Katharina E Höne

Mr Frederic Werner (Senior Communications and Membership Officer, International Telecommunication Union (ITU)) moderated this session. He highlighted some key areas of focus for the discussion: connecting those with a good understanding of the situation on the ground with artificial intelligence (AI) experts, making AI relatable for people with a non-technical background, and working towards the sustainable development goals (SDGs) with AI.

Dr Aimee van Wynsberghe (Co-Founder and Co-Director of the Foundation for Responsible Robotics) focused on ethics as a driver for innovation and explained that ethics relate to ideas of what we consider the good life, and help us distinguish between right and wrong, and good and bad. She emphasised that taking ethical considerations into account when designing new technology and especially robotics, should not be seen as a hindrance, but rather as a way to push engineers and developers a step further.

Wynsberghe argued that we should not perceive technology as being neutral. On the contrary, she pointed out that technology is creating and co-creating our societal norms, values, and meaning. Technology could change the elements of what we think is constitutive of the good life. In fact, technology already shapes how we get to what we perceive as the good life, such as how information and communications technology (ICT) helps us bridge geographical distances and connect with friends and family. Building on this, we see how robotics and AI can change our perception of what constitutes a ‘good life’ and how we can achieve it. Wynsberghe therefore suggested that ethical questions surrounding robotics and AI can be clustered into three main categories: regulations, users, and technology. Regulations touch on questions such as: What are the standards for training AI? How can we make robots that enhance rather than replace humans? Ethical considerations surrounding users include questions such as: How do users perceive robots? What could human-robot interactions do to human-human interactions? How should users be obliged to act towards robots? And last but not least, a key ethical question relating to the technology itself is whether or not robots and AI are (ethical) agents in themselves.

Mr Maurizio Vecchione (Executive Vice-President of Global Good and Research, Intellectual Ventures) focused on the role of technology in saving humanity and argued that population-scale problems need to be put into sharper focus. In order to do so, he argued, various disciplines need to work together and solutions need to recognise the complexities on the ground. One specific example Vecchione gave to illustrate the vast potential of technology relates to the so-called smallholder innovation paradox. He argued that it is generally recognised that agriculture is a way out of poverty in low-income countries; yet, in many cases, agricultural activities do not yield enough productivity. Here, better data combined with AI can produce analytics on soil and the environment and provide predictions on crop yields, along with planning advice. Similarly, it can contribute to much-needed financial services. The data and services can easily be accessed via smart devices, allowing smallholders to improve productivity and access new opportunities.

Dr Francesca Bria (Chief Technology and Digital Innovation Officer, Barcelona City Council) spoke about her work and gave concrete examples from the city of Barcelona. She emphasised data commons and ethical digital standards to solve urban challenges with clear rules and democratic control. She also stressed opportunities for collective action and citizens taking control and ownership. The role of the city is crucial for the future of AI for good. There are opportunities for the bottom up empowerment of citizens and re-thinking of technology to serve the city.

Bria stressed that in order for data and AI to serve the common good, we need to create trust and ownership. This means focusing on transparency regarding data and the algorithms used. This also includes offline and online consultations with citizens, and giving citizens control of their data. Data needs to be treated as a commons, and a new legal regime of data ownership needs to be created. She suggested using blockchain technology and attribute-based cryptography in order to give control back to citizens and to allow them to decide what data is private and what data can be shared and become a common good. She thus advocated that citizens regain data sovereignty.

Barbara Rosen Jacobson

This breakout session addressed the potential of satellite imagery, combined with artificial intelligence (AI) and machine learning, to help meet the sustainable development goals (SDGs). The session was opened by the moderator, Mr Stuart Russell (Professor of Electrical Engineering and Computer Sciences, UC Berkeley), who explained that an enormous amount of satellite data is being produced every day. This wealth of data could provide a snapshot of the entire world, at once, in real time. AI systems are needed to map all of this data effectively, as it goes beyond the capacity of human satellite interpreters.

Next, Mr Einar Bjørgo (Manager of the UN Operational Satellite Applications Programme) mentioned that the fusion of technologies could help accelerate the impact of earth observation, especially if there is open access to geospatial images. This would allow the distribution of the right data to the right people when they need it, particularly if we were able to directly ‘stream’ such images. Satellite imagery has particular advantages for monitoring the SDGs, as it provides global-scale data-sets, ‘covering everything, even the smallest islands’. At the same time, Bjørgo indicated that the use of these technologies still requires extensive capacity development and training.

Mr Mark Doherty (Head of Earth Observation Exploitation Development Division, European Space Agency) pointed to the ‘tsunami of satellite data’ that is being generated, especially with the rise of satellites from the private sector. Doherty explained that this data in particular could contribute to SDGs 2 (zero hunger), 11 (sustainable cities), 13 (climate action), 14 (life below water), and 15 (life on land). With investments in public infrastructure, global monitoring, and a direct link to government policy, the EU is addressing a lot of obstacles related to the use of satellite data for the public good. Doherty echoed Bjørgo’s emphasis on the importance of free and open access, and added that there are still a number of challenges to overcome related to universal standardisation, the ease of using satellite data, and the validation of this data. AI can help detect patterns and models that cannot be captured by humans, helping to identify the signals that provide predictive power and ‘deliver on full societal benefits’.

Mr Andrew Zolli (Vice President of Global Impact Initiatives, Planet Labs Inc.) explained that his organisation creates daily imagery of the entire world with a large number of small satellites, making global change visible. This allows for the exploration of patterns and changes in systems; for example, it could help track the growth of refugee camps or the number of solar panels in a country. As more information is generated than we can pay attention to, AI can assist in obtaining insights from continuously refreshed imagery, providing real-time analytics of social and ecological systems. To unleash this potential, the system as a whole needs to overcome its current fragmentation, requiring more co-ordination for ‘all of the groundtruth and satellite data to work together for the collective common good’.

Mr James Crawford (CEO and Founder of Orbital Insight) explained that AI can address the current bottlenecks caused by a lack of human capacity to interpret satellite imagery; it is simply impossible to look at all images every day. Crawford listed a number of examples of how this could be put into practice: AI could, for instance, detect patterns in deforestation, land use, and flooding. Ultimately, these technologies could be used to understand all socioeconomic and ecological processes.
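
To illustrate the kind of screening pipeline the speakers describe, the sketch below flags image tiles whose land-cover label flips from forest to cleared land, so that human analysts only review flagged tiles. Everything here is assumed for illustration – the classes, the tiny untrained network, and the random tensors standing in for real tiles; operational systems are far more sophisticated.

    import torch
    import torch.nn as nn

    # Hypothetical land-cover classes an SDG-monitoring pipeline might track.
    CLASSES = ["forest", "cleared_land", "water", "urban", "cropland"]

    class TileClassifier(nn.Module):
        """A small CNN that labels 64x64 RGB satellite tiles."""
        def __init__(self, n_classes=len(CLASSES)):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 16 * 16, n_classes)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    def flag_deforestation(model, tile_today, tile_last_month):
        """Flag a tile for human review if it flips from forest to cleared land."""
        with torch.no_grad():
            before = CLASSES[model(tile_last_month).argmax(1).item()]
            after = CLASSES[model(tile_today).argmax(1).item()]
        return before == "forest" and after == "cleared_land"

    model = TileClassifier().eval()
    # Random tensors stand in for two co-registered 64x64 RGB tiles (batch of 1).
    print(flag_deforestation(model, torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)))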

Mr David Jensen (Head of Environmental Cooperation for Peacebuilding and Co-Director of MapX, UN Environment Programme) raised some of the main governance questions related to the interplay between AI and satellite data for the SDGs. Jensen pointed out that the enormous complexity of measuring 232 indicators for 193 countries makes the task not unlike ‘solving a Rubik’s cube with 193 sides’. AI would be a perfect tool to navigate the balances and trade-offs inherent to the SDGs, yet it is hindered by a lack of harmonisation among data sources. In addition, there is a capacity, knowledge, and information gap between industry, on the one hand, and governments and civil society, on the other. This has created imbalances that risk being exacerbated. In conclusion, Jensen posed the question of ‘how to create incentives and standards to keep satellite data and related AI algorithms in the public domain’.

Sorina Teleanu

The session was opened by Ms Claire Craig (Director of Science Policy, the Royal Society), who explained that trust is an issue which crosses the boundaries of nations and countries; different cultures may have different understandings of the notion of trust, and it is important to understand these differences to be able to develop trusted applications.

Mr Liu Zhe (Professor, Peking University) spoke about cultural differences when it comes to trust in artificial intelligence (AI). Trust in AI, he argued, must be considered in the context of existing technology and possible progress in the foreseeable future; basing the discussion on science fiction is a dangerous thing. Liu then went on to discuss the issue of scoping the problem of trust in AI and robots.

He mentioned that in China and other Asian cultures people seem to be enthusiastic about AI and other emerging technologies. This may lead to an over-trust in technology, which involves a certain deception in the interaction between humans and technology. The risks of over-trust and misplaced trust are very high and we need to address such risks when we think about the relation between humans and AI and robots.

He emphasised the importance of making a distinction between mistrust and misplaced trust or over-trust. He then explained that, when we think about the notion of trust, we consider it largely from the perspective of personal relations. But is it appropriate to look at the relation between humans and technology as a type of interpersonal relation? Should we insist on using trust as an appropriate framework to conceptualise our relation to beneficial AI? If not, what is the alternative?

Answering a question from the audience about how we can measure trust, Liu noted that, before measuring, we should understand the relationship between humans and AI, and what it entails. In his view, it is not clear whether we should use ‘trust’ as a framework to assess this relation. In a follow-up comment, a participant asked whether trust in AI is not a question of trust in other human beings (i.e. the programmer or the engineers building the application, the company, the government, etc.) rather than a question of trust in the technology itself. The same goes when we talk about ethics in AI: the discussion is about ethics in how the engineer designs the system.

Ms Kanta Dihal (Research Project Coordinator, Leverhulme Centre for the Future of Intelligence, University of Cambridge) presented the AI Narratives project, which examines the stories we tell about AI and the impact they have on the technology and its use. The goal of the project is to understand the hopes and fears that shape how we perceive AI, and the relationship between our imaginings of reality and the technology itself.

Dihal noted that the impact of AI will be global and that, because of this, managing AI for the benefit of all requires international and multidisciplinary co-operation. But different cultures see AI differently. To build trust across cultures, we must understand the different ways in which AI, and what it could do, are perceived.

She also pointed out that there might be limitations in the way we talk about AI; for example, we might be distracted from the real problems by science fiction, fantasies, and the fear of ‘killer robots’. The narratives of rebellion seem to significantly shape our fears about intelligent machines. And this reveals a paradox: we want clever, ‘superhuman’ machines that can do things better than us (and for this we entrust machines with human attributes like agency and intellectual autonomy), but at the same time we want to keep them ‘sub-human’ in status. The perception of AI is influenced by both fiction and non-fiction, and this creates a goal-alignment problem: whose values and goals are actually represented in the development of AI?

Mr David Danks (Department Head and Professor of Philosophy and Psychology, Carnegie Mellon University) and Ms Aimee van Wynsberghe (Co-Founder and Co-Director, Foundation for Responsible Robotics) presented their project on ‘Cross-national comparisons of AI development and regulation strategies – the case of autonomous vehicles’. Danks noted that sometimes, when we think about trust, there is a feeling that we are not sure what we are talking about. However, trust is a very well understood notion, and there is no need to reinvent the wheel. When we speak about trust and technologies, there are several important questions to consider: What do we expect from technologies? How do we make ourselves vulnerable through the use of technology? And how do we find a middle ground?

We can think of trust in two ways. On the one hand, there is behavioural trust, based on reliability, predictability, and expectations grounded in history. This kind of trust is useful, but it can be fragile. On the other hand, there is trust grounded in an understanding of how the system works. This is the kind of trust we have in one another, based on our knowledge of people’s values, interests, etc. This trust is helpful because it can be applied to novel situations. Danks gave the example of how pedestrians in the city of Pittsburgh, USA (where Uber used to heavily test self-driving cars) interact with self-driving cars. There are many cases of people jaywalking in front of self-driving cars. When asked why they do this, they often say that they trust the car to stop, because they have seen cars stop when other pedestrians jaywalked. This is behavioural trust: the pedestrians trust the technology because they have seen it function a number of times.

Giving a brief overview of the project, van Wynsberghe explained that the aim is to explore the ways in which different states regulate AI technologies, and how these regulations impact the notion of trust. The project also looks at the differences between regulations and cultural norms across various countries. The hope is to be able to use the results of the project as a starting point to more systematically understand various best practices in terms of technology, regulations, and social norms.

The session concluded with an emphasis on the need to facilitate a better understanding of the interactions with AI and robots. In the case of self-driving cars, for example, mechanisms that indicate to pedestrians when a car is on autonomous mode could improve this understanding.

Sorina Teleanu

The breakout session on ‘AI + Smart Cities and Communities’ was opened by Mr Renato de Castro (SmartCity Expert) and Mr Alexandre Cadain (Co-Founder & CEO at ANIMA and XPRIZE Ambassador). De Castro explained that the session would explore smart city design projects and the technology behind them, with a focus on how artificial intelligence (AI) impacts cities and communities.

In de Castro’s view, smart cities are built on five key components: the underlying technology (information and communications technology, big data, algorithms, etc.), the citizen-centric dimension, the goal of improving citizens’ lives, the emergence of new economies (such as the sharing economy), and the promotion of resilience. Smart cities are not and should not be about building new cities, but about building better cities to live in. To achieve this, it is important that smart city projects move from a public-private partnership (PPP) approach to a public-private-people partnership (PPPP) approach, and involve citizens as equally important stakeholders in the development and implementation of such projects.

Cadain spoke about an evolution of ‘smart cities’ towards ‘intelligent cities’ and ‘ideal cities’. The ideal cities of the future could be cities in which there is a perfect harmony between humans and technology. Finding solutions for achieving such harmony is a task that requires collaboration across stakeholder groups and disciplines. He further noted that, when we look at successful smart cities projects, we should try to identify solutions and applications that could be replicated in other cities around the world. Moreover, we should consider how certain smart cities applications from developed countries could be replicated in cities in developing countries.

Mr Akihiro Nakao (Professor, University of Tokyo), acting as moderator and panellist, focused his intervention on 5G technology and its use in smart city applications. Speaking about the importance of resilient communications infrastructure for the development of future smart cities, Nakao presented a project which combines 5G technology with AI to deliver real-time video surveillance. In simple terms, the project uses drones, 5G technology, and machine learning to capture and transmit a real-time feed of city surroundings, and to analyse what happens on the ground through object-recognition technology. Such a project could help improve city safety, which is a growing concern nowadays.
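
As a rough sketch of the object-recognition step in such a pipeline (the drone link and 5G transport are elided, and the generic pre-trained detector used here is an assumption – a deployed system would use a model trained on aerial footage), a frame-by-frame detection loop might look like this:

    import torch
    import torchvision

    # A generic pre-trained detector; weights are downloaded on first use.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    def detect_objects(frame, score_threshold=0.8):
        """Run detection on one video frame (a 3xHxW float tensor in [0, 1])."""
        with torch.no_grad():
            output = model([frame])[0]
        keep = output["scores"] > score_threshold
        return output["boxes"][keep], output["labels"][keep]

    # A random tensor stands in for a decoded frame from the video feed.
    boxes, labels = detect_objects(torch.rand(3, 480, 640))
    print(f"{len(boxes)} objects above threshold")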

Nakao also touched upon issues related to privacy and data protection in the context of smart cities. He pointed out that companies do need data to be able to produce services that are beneficial for citizens, but that such services should be developed without violating privacy rights.

Mr Brian Markwalter (Senior Vice-President, Consumer Technology Association) started his intervention by mentioning that the Internet of Things (IoT) and AI technologies are continuously evolving, and that while this evolution comes with opportunities (such as supporting urbanisation processes around the world), it also brings challenges. Technology in itself might not necessarily evolve in a positive way, and this is something we should always keep in mind. Markwalter noted that, as people are already experiencing ‘AI technology coming to meet them and make their lives easier’ (through everyday applications such as smart speakers and other digital systems), they increasingly expect the same from their cities. There are many areas in which cities can put AI technology to use to improve people’s lives, from transport and finances to energy and the use of resources. Among the challenges that can impact the evolution of smart cities, Markwalter mentioned privacy and data protection concerns for end users, and cost and return-on-investment concerns for private companies and public entities.

Mr Chaesub Lee (Director, Telecommunication Standardization Bureau, International Telecommunication Union) spoke about the work carried out by the ITU-T Study Group 20 on IoT and smart cities and communities. The group has been working on developing international standards that leverage IoT technologies to address urban-development challenges. It has so far produced several key performance indicators (KPIs) for smart cities, which have already been applied in cities like Dubai and Singapore to assess the performance of smart city applications and identify areas for improvement. The group will also explore the use of AI for smart cities.

Lee also spoke about the UN initiative on United for Smart Sustainable Cities, which aims to encourage the development of public policies and the use of information and communications technologies (ICTs) to facilitate the transition to smart sustainable cities.

Lee further explained that there are multiple layers behind the notion of smart cities: infrastructure supports communication; connected devices produce data; data is collected through platforms; and the platforms provide capabilities to develop services and applications, which are then provided to citizens in line with regulations and operational principles. All these layers have multiple needs (for example, the infrastructure and devices need to be interoperable, while the regulations need to be based on shared knowledge). By considering all these needs, we can improve the technology and the ‘quality of smartness’.

Cities are, by nature, distinct in terms of geographic location, history, citizen behaviour, culture, etc. The goal of smart cities should not be to create uniform cities, but rather devise ‘smart solutions’ that are adapted to the specificities of each city. The main challenge at hand, therefore, is how to apply AI and other technologies to individualised smart cities.

Mr Andrejs Vasiljevs (Co-founder and Chairman of the Board, Tilde) spoke about the need to consider language diversity in the development of smart cities. Cities are increasingly multilingual nowadays, and AI technology can be put to use to create multilingual solutions for inclusive smart cities and societies, which empower people. Machine learning technology has seen significant progress over the past years and has helped advance the translation of smaller and more complex languages, demonstrating that the technology is ready to support all communities, irrespective of how big or small they are. In Latvia, for example, the government has deployed machine translation to allow people speaking different languages to access and use e-government services.
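
As an illustration of how such a multilingual front end could be wired up with open-source components (the model name below is an assumption, and Tilde’s production engines for the Latvian government are custom-built rather than public models):

    from transformers import pipeline

    # Hypothetical: an open English-to-Latvian model from the OPUS-MT family.
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-lv")

    def localise(message: str) -> str:
        """Translate one e-service message for display to a Latvian speaker."""
        return translator(message)[0]["translation_text"]

    print(localise("Your application has been received."))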

Vasiljevs also mentioned another example of AI used to promote more inclusive societies: chatbots or virtual assistants used by public bodies to facilitate interactions with citizens. He again gave the example of Latvia, where a virtual assistant, Una, provides guidance and support to citizens who want to set up a company.

During the final round of discussions, several points were made:

  • Combining technology fields – such as machine translation and telecommunications infrastructure to deliver simultaneous translation – has the potential to advance smart cities.

  • Every city is different and this needs to be considered when technologies are standardised and applied.

  • When it comes to privacy and data protection, what matters most is that people understand what happens with their data. Data is essential for smart services, and we might not have smart services without sharing and using data. But it is essential that users are able to make informed decisions about the data they disclose and stay in control of their data. Data should be used with the need for privacy in mind.

Stefania Pia Grottola

The session AI Empowering Smart Citizens, moderated by Mr Alexandre Cadain (Co-Founder & CEO at ANIMA; XPRIZE Ambassador), stressed the need for strengthening and expanding smart cities and smart communities.

The first speaker, Mr Joaquin Rodriguez Alvarez (Professor and Researcher, EPSI-UAB; Leading Cities Coordinator), said that technology holds a lot of promise, but that it does not come without its share of problems. It is therefore necessary to be prudent when it comes to the development and applicability of artificial intelligence (AI). Furthermore, he stressed the importance of data management through two radical examples: first, he recalled the use of data during the Holocaust; second, he spoke about the role that data played in the extermination of part of the population in Rwanda. He argued that it is not possible to completely trust either the private sector or the public one. Today, it is easy to manipulate public opinion and negatively affect democracy with the tools we have available. It is crucial to be careful with digital technologies and AI.

Rodriguez Alvarez further stated that the concept of ‘empowering people’ is based on the notion of sharing knowledge and awareness. Finally, he stressed that technology is not neutral, but that it ‘learns’ from the society it is installed in. The main concerns in this regard are related to, but not limited to, lethal autonomous weapon systems (LAWS). Human dignity has to be taken into account in the development and application of these technologies: technology is used by humans, who can have peaceful or hostile purposes.

Mr Jacques Ludik (Founder & CEO, Cortex Logic; Founder & President, Machine Intelligence Institute of Africa (MIIA)) focused on how to use technology for better development and problem-solving. His talk covered the topics of health, water, smart education, and smart technology services for African smart cities. He first discussed the inclusion of AI in the community, and a data platform for Africa. On the issue of health data and analytics ecosystems for Africa, MIIA is collaborating with private companies to operationalise health systems. Ludik also touched upon the need for smart education in Africa, in order for Africa to play an active role in the fourth industrial revolution. He then attempted to define smart citizens as those who put responsibilities first. Finally, he concluded by saying that empowering smart citizens in a new way involves bottom-up decision-making, non-linear approaches, encouraging complexity, embracing uncertainty, and enabling and boosting creativity.

The moderator then opened the floor for the presentation of three projects that were discussed by the panellists and the audience. The first project focused on empowering homeless people, in line with sustainable development goals (SDGs) 8 and 11; it plans to provide smartphones through which homeless people can supply data that governments and international organisations can use for more effective action. The second project focused on managing cars as a common good; one of its direct effects is saving urban space through car-sharing, as people change their concept of mobility and the way they behave. Finally, the third project presented an inclusive innovation roadmap, meant to incentivise investments in AI and covering the following points: enhance city operations, connect citizens with the city government, implement open data, close the digital divide, and make Internet access a public good. In line with the values of the SDGs, the project satisfies the P4 concept: people, planet, performance, and place. In a final comment, the opportunities of blockchain as a means of empowering citizens were discussed.

Stefania Pia Grottola

The session AI Fostering Smart Government was moderated by Mr Frans-Anton Vermast (Strategy Advisor & International Smart City Ambassador, Amsterdam Smart City). He started with an introductory speech about Amsterdam Smart City, its structure, and its purpose. Its team is currently focusing on digital transformation and social inclusion, on how to enhance public trust in local governments, and on the accountability of private tech companies operating in the public sphere. He argued that the digital city is inclusive: data and technology should not constitute limitations for people. Furthermore, the smart city approach includes the following concepts: inclusivity, control, the need to be tailored to the people, legitimacy, openness, and, finally, being by everyone for everyone. Vermast also talked about the work of the city’s DataLab, meant to create innovation through competition; its next project will focus on opening up algorithms. The final principle he talked about was the possibility for citizens to choose algorithms, in accordance with the concept of ethical integrity.

Ms Carla Dualib (Secretary of Communication and Press at Diadema City Hall, Brazil) talked about Diadema Open Evolution's work in engaging Brazilian cities to use artificial intelligence (AI) for the benefit of people. She also talked about ‘House Beth Loba’, a project for protecting women who are subjected to violence, and explained that AI can enhance the collection, analysis, and mining of the data gathered through the project, which can be used for social benefit. The Internet of Things (IoT) and AI are at the core of Diadema’s interests. Finally, she stressed that it is crucial to talk to the people using emerging technologies, to make them aware of how these technologies work and what they imply.

Mr Renato de Castro (SmartCity Expert) structured his speech around the current evolution of technologies for smart cities and argued that the general understanding of smart cities should be questioned: smart cities are not just for big cities. He went on to give examples of how AI can be used to address local issues. The first project he talked about was implemented in Brazil, in small villages faced with drought; IoT helped tackle the situation by providing better tools for weather forecasting and for sending alerts to citizens. The second example stressed the concept of public-private-people partnerships: the inclusion of people will indeed increase with the implementation of smart cities.

After the panellists’ speeches, the moderator opened the floor for discussion. Dualib commented that smart citizens are needed for smart cities. De Castro followed up by adding that the United Nations and its agencies should start speaking the language of smart cities in order to play a proactive role with global impact. On the question of defining ‘smart government’, de Castro argued that the concept is new for many countries; a starting point is thus to gather best practices from around the world. Dualib proposed creating an open, international, community-driven platform for sharing information about various initiatives and serving as a repository of ideas. Finally, the moderator added the need to share lessons learned, for a more constructive strategy.

One question posed by the audience regarded best practices in data management and data ownership, in the context of the governmental duty of data protection. It was highlighted that data protection represents a challenge that needs balanced regulation which does not limit the development of new technology. Another question was posed about the limitations in the making of infrastructure development policies. De Castro argued that public-private partnerships are a key solution; however, big global entities are playing a new role, challenging the creation and implementation of these policies. The concept of subsidiarity was also proposed by Vermast as a solution to the problem. Finally, the last comment introduced the concepts of fairness, accountability, and transparency (FAT) to be used in enhancing the ethical integrity of algorithms; de Castro called upon academia to take on a role in this regard.

Sorina Teleanu

The session was opened by Ms Anja Kaspersen (Director, United Nations Office for Disarmament Affairs), who introduced the speakers.

Mr Wolfram Burgard (Professor of Computer Science, Albert-Ludwigs-Universität Freiburg) started his intervention by noting that there is a need to transform the way we think about artificial intelligence (AI) and to take a more positive attitude. AI is already part of our lives and we see it in multiple applications, from web services and games to manufacturing and agriculture. As the technology continues to progress, it is expected to play an increasingly important role in several areas. For example, highly accurate navigation systems empower industrial robots to move with more agility from one place to another and thus enhance productivity. The same systems are crucial for companies working in the field of self-driving cars. In healthcare, big data, algorithms, and neural networks are used in multiple applications, from diagnosing certain diseases to neuro-robots which help people with disabilities perform daily tasks. In agriculture, AI brings precision farming, supporting a more efficient and sustainable use of resources. These are only a few examples which show that AI is an important tool for the well-being of society.

Responding to a question from the audience about the risks associated with AI, Burgard acknowledged that one of the main challenges with AI agents is that they need to operate in a world that they do not fully know. Taking the example of self-driving cars, the technology needs to be able to take into consideration the environment in which it operates, and there is still much work to be done by researchers to empower algorithms in this regard.

Ms Terah Lyons (Executive Director, Partnership on AI) spoke about the work the Partnership on AI plans to do to support the development of AI technology that benefits everyone. The partnership, which has over 50 members from both private companies and non-profit entities, is intended to serve as an open multistakeholder platform dedicated to fostering discussions and public understanding of the implications AI has for people and society, and to facilitating the development of best practices on AI technologies. Its members share the belief that AI holds the promise of raising the quality of people’s lives, and of helping humanity address some of its most pressing problems, such as poverty and climate change. The partnership will focus on six major areas of work: safety-critical AI; fair, transparent, and accountable AI; collaborations between people and AI systems; AI, labour, and the economy; social and societal influences of AI; and AI and social good.

Lyons underlined the need for an active understanding of the challenges associated with the development and use of AI. These challenges can only be addressed in a multistakeholder and multidisciplinary manner, and this is also the case when it comes to developing policies and regulations in the field of AI. Moreover, it is important to start addressing these concerns now, if we are to be able to develop AI for the benefit of social good.

Ms Celine Herweijer (Partner, Innovation and Sustainability, PricewaterhouseCoopers UK) started by stating that the Earth has never been under so much strain, with many species at risk of extinction, the chemistry of oceans changing at a rapid pace, air and water quality dropping, and climate change intensifying. This is the backdrop against which the fourth industrial revolution is happening, and technologies such as AI can be put to use to address some of the Earth’s major challenges. For example, smart transportation systems are crucial for managing climate change, while precision agriculture allows for a more efficient use of natural resources.

It is in this context that the Fourth Industrial Revolution for the Earth initiative was started. It functions as a multistakeholder platform dedicated to developing a research base for applications for the Earth, supporting breakthroughs in this area, and building an accelerator platform to support projects and ventures to address the use of technology for the benefit of the Earth.

Herweijer noted that sustainability and responsibility principles need to be embedded into AI systems. It is also important to consider the risks of AI leading to bias and deepened inequalities in the early stages of developing AI applications. In addition, once developed and put to use, these applications should be monitored constantly so as to identify possible negative implications that may not have been considered during the development stage.

Mr Wendell Wallach (Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics) spoke about the importance of looking not only at the benefits of AI, but also at the potential risks and undesirable consequences. He called for a distinction to be made between outwardly-turning and inwardly-turning AI for good. Outwardly-turning AI for good is about the potential of AI to help achieve the sustainable development goals (SDGs). But then we should also consider the impact of AI on areas such as decent work and global inequality, which are covered by the SDGs as well. While AI can help achieve the SDGs, it can also undermine our ability to achieve some of them. Inwardly-turning AI for good is about mitigating the harms that come with the progress of AI, and making sure that we do not go down a path we actually do not want. It is therefore important to look at both sides of AI for good, and devise technological and governance solutions to ensure appropriate oversight over the technologies we develop.

In response to a question from the audience about whether we should focus more on issues such as rights and responsibilities for AI systems, Wallach pointed out that while such issues could be considered by researchers, we should focus more on the real challenges we have today. We should put more emphasis on the AI implications that are truly feasible and require immediate attention, and maybe less on those related to technologies we do not yet have.

During the discussions, a point was made that there is a mismatch between the adoption rate of AI technology and the ability to understand it. To address this, emphasis should be placed on issues such as audits for AI systems, the ethics of AI, and AI explainability. At the moment, many of the processes behind AI applications function as ‘black boxes’, and it is not clear how they make certain decisions or reach certain conclusions. While work is being done to make algorithms more explainable, we might need to live with the fact that humans might not be able to understand some systems. In such cases, it is important to carefully assess the risks of such systems during the development phase, test them in simulation environments, and continue to monitor them while in use, to be able to correct possible negative implications.
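
As one small illustration of the audit tooling the discussion pointed to (the model and data here are synthetic; real audits of deployed systems go much further), a permutation-importance check asks how much a model’s accuracy drops when each input feature is shuffled, giving at least a coarse view into an otherwise ‘black box’ model:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    # Synthetic stand-in for a real decision system's training data.
    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Shuffle each feature in turn and measure the drop in accuracy:
    # features whose shuffling hurts most are those the model relies on.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, importance in enumerate(result.importances_mean):
        print(f"feature {i}: {importance:.3f}")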

The session ended with a discussion on education systems and the need to adapt them to an increasingly AI-driven society. Investments are needed to enhance education systems and make sure that they prepare the needed number of AI engineers and data scientists. At the same time, the nature of education needs to change, so that AI is taught from a multidisciplinary perspective, combining, for example, technology with ethics. Re-training the current workforce is also an important element to be considered, especially given that AI progress renders some jobs obsolete.

Kaspersen concluded the session by stating that the biggest transformation brought about by AI is about us, humans, and about how we adapt, evolve, govern, and educate ourselves and the world we live in.

Barbara Rosen Jacobson

The AI for Good Global Summit 2018 was opened with a keynote speech by Sir Roger Penrose (Emeritus Rouse Ball Professor of Mathematics, University of Oxford). Drawing on his experience in physics, mathematics, and philosophy, Penrose addressed the question of ‘Why Algorithmic Systems Possess No Understanding’.

Artificial intelligence (AI) has advanced tremendously, and these developments have coincided with questions of whether – or when – AI will reach the level of human intelligence. Penrose compared AI to the cerebellum, the part of the brain that receives input from sensory systems and integrates it to fine-tune movement, coordination, precision, and timing; he contrasted the cerebellum with the cerebrum, which initiates and coordinates activity in the body. Penrose explained that the relation between the cerebrum and the cerebellum is akin to that between the programmer and the program.

According to Penrose, just like the cerebellum lacks an understanding of why it does what it does, computers are unlikely to encapsulate understanding or consciousness any time soon. The gap between algorithmic computation and general understanding is visible in quantum physics, and Penrose quoted the example of Schrödinger’s cat to highlight the discrepancy between computational outcomes (the cat is both alive and dead) and understanding (the cat is either alive or dead).

So how can we conceive consciousness and understanding? Penrose suggested that it must be rooted in physics, and may be explained by microtubules, which are tiny tubes located within brain neurons. According to this theory, the fine-scale activities of these microtubules form the building blocks for consciousness. In the absence of these biophysical elements, computers are unlikely to attain consciousness and understanding.

Penrose’s lecture was followed by a Q&A moderated by Mr Stephen Ibaraki (Futurist and Social Entrepreneur), who started by asking Penrose about his thoughts on the term ‘AI’. Penrose explained that he felt particularly ‘nervous about the word intelligence’. Intelligence commonly requires understanding, and understanding commonly requires awareness. As AI devices are not aware, they are not intelligent in the normal use of the word; while such a system can achieve a lot, it ‘doesn’t seem to know what it’s doing’. He suggested that, instead of AI, we could adopt the term ‘artificial cleverness’.

At the same time, Penrose explained that there is still a lot of room for AI to further develop. We can continue to use our human understanding to improve the algorithmic system, integrating missing ingredients, and transforming it into something that ‘goes beyond what you had before’. Yet, without the quantum processes of microtubules taking place in human brains, could computers ever mimic conscious brain activities? Penrose explained that we might, some day in the far future, be able to construct such protoconscious elements in a laboratory. However, this would raise many ethical problems that we are not ready to face.

The ‘AI for Good Global Summit 2018’ will be held on 15–17 May 2018 at the International Telecommunication Union (ITU) headquarters in Geneva, Switzerland. The event, described as the leading UN platform for dialogue on artificial intelligence (AI), is being organised by the ITU, in partnership with the XPRIZE Foundation, the Association for Computing Machinery (ACM), and sister UN agencies.

Under the theme ‘Accelerating progress towards the Sustainable Development Goals (SDGs)’, the 2018 summit will identify practical applications of AI and supporting strategies to improve the quality and sustainability of life on the planet. The discussions will be structured along four tracks:

  • AI and satellite imagery
  • AI and health
  • AI and smart cities and communities
  • Trust in AI

Each track is presently curated by a team that will present an overview of its work on Day 1, and a summary of its findings on Day 3.

Participation will be free of charge; however, seats are limited. Registration will be carried out exclusively online.

For more information, visit the event website.
