Multi-stakeholder Discussion on issues about Generative AI

8 Oct 2023 08:00h - 09:30h UTC

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Moderator – Yoichi Iida

In the field of AI governance, the importance of interoperability between different policy frameworks was underscored. This recognition stemmed from the understanding that transparency and predictability are crucial for the effective governance and regulation of AI technologies. Japan, in particular, has pursued a non-binding, soft-law approach to AI governance.

The Hiroshima AI Process was launched at the end of May 2023 and aimed at establishing high-level guiding principles and a code of conduct for AI actors. The process consisted of working group meetings held online, where the priority risks, challenges and opportunities presented by generative AI were discussed. The initiative received positive sentiment and was perceived as a step towards shaping responsible AI development and deployment.

One of the key arguments put forward was the need for open and inclusive discussions on the risks and challenges associated with AI. Yoichi Iida emphasised that by openly addressing these concerns, it would be possible to harness the full potential of AI while ensuring its safe and responsible use. This argument received positive sentiment as it was seen as a way to promote innovation and economic development by making the best use of the technology.

Yoichi Iida emphasised the positive potential of new AI applications to improve society and drive economic development. The sentiment towards AI applications and systems was positive, with a focus on the benefits they can bring to various sectors of the economy. Panellists were invited to share their companies’ services and the benefits these can provide to society, further highlighting the potential of AI applications.

Adapting AI solutions to local conditions was deemed vital for their successful implementation in communities. Speakers provided examples illustrating how different regions and communities have diverse needs and priorities when it comes to technology and AI usage. Failure to consider the realities and contexts of different communities may result in AI solutions that fail to meet their intended purpose. This neutral argument stressed the importance of understanding and incorporating local conditions into AI development.

The concept of interoperability between different AI frameworks was identified as another important aspect of AI governance. The ability of different systems to work together effectively was seen as necessary to deliver comprehensive and efficient solutions. This neutral argument emphasised the need for compatibility and collaboration between different AI frameworks to address complex challenges and achieve desired outcomes.

The potential of AI among digital technologies was highlighted and received positive sentiment. It was acknowledged that AI presents far-reaching opportunities in various fields, with possibilities for transformation and innovation. This positive sentiment emphasised AI’s significance as a driving force in the digital age.

Support for collaborations among companies, governments, and international organisations to facilitate the benefits brought about by AI was expressed. The World Bank’s active involvement in development support activities in the digital field was cited as an example, reinforcing the importance of collaboration to expand impacts and create effective AI solutions. This positive argument emphasised the need for collective efforts in harnessing the benefits of AI.

Promotion of collaboration among different stakeholders for AI evolution was seen as crucial. Yoichi Iida highlighted the potential for collaboration among different AI players in the ecosystem, and the Hiroshima process was identified as a means to foster such collaboration. International organisations such as the World Bank were urged to promote collaboration and share their knowledge and experiences. This positive sentiment demonstrated the recognition of stakeholder involvement as a catalyst for AI advancement.

Efficient AI development was seen as dependent on multi-stakeholder involvement. It was argued that finding the best solutions for enhancing AI projects involved the collaboration of public and private sector entities, as well as the World Bank. The involvement of different types of players in the AI ecosystem was deemed necessary to pave the way for efficient AI development. This positive argument reflected the need for diverse perspectives and expertise in shaping AI initiatives.

The notion of capacity building programmes was discussed, with the suggestion of extending these programmes to players from other countries in collaboration with the World Bank. This positive sentiment indicated an openness towards multinational collaboration in capacity building, recognising the importance of knowledge sharing and skill development in AI.

In conclusion, this summary highlights key discussions and arguments regarding AI governance and the potential of AI applications. It underscores the importance of interoperability, open discussions on risks and challenges, adaptation to local conditions, stakeholder collaboration, recognising AI’s potential, and capacity building programmes. The involvement of various stakeholders and international organisations is advocated to harness the benefits of AI in a responsible and inclusive manner.

Luciano Mazza de Andrade

Luciano Mazza de Andrade, the Director for Science, Technology and Innovation and Intellectual Property at the Brazilian Ministry of Foreign Affairs, holds a significant position in driving technology and innovation in Brazil. He recognises the potential of Artificial Intelligence (AI) in addressing the challenges faced by different countries.

Andrade highlights the transformative power of AI in various sectors. He emphasises how AI solutions can improve the provision of public services and e-government, enhance agriculture and food security, and contribute to the advancement of healthcare and education. In Brazil, there is a specific focus on AI innovation, supported by a legal framework designed to boost start-up growth. However, Andrade also stresses the importance of adapting AI to the local needs and communities of developing countries, as AI models are primarily trained on English-language data, which may not reflect the realities and nuances of these nations. AI models can also contain biases that may perpetuate inequalities.

To fully harness the benefits of AI, developing countries must establish adequate infrastructure and robust governance frameworks. Andrade points out that without these foundational elements, it will be challenging for these countries to take full advantage of AI’s capabilities and potential.

Furthermore, Andrade highlights the significance of dialogue and cooperation in the global AI landscape. He particularly praises Japan’s leadership role in this field and suggests engaging with different development banks to leverage investments. He believes that dialogue and cooperation are vital for sharing experiences, best practices, and building the necessary national capabilities. Andrade suggests that Japan should strengthen dialogue with other international organizations to avoid fragmentation of initiatives. Incoherence in narratives and policies can hinder progress, so building momentum at the United Nations for an inclusive dialogue is essential.

In conclusion, Luciano Mazza de Andrade underlines the importance of AI in addressing global challenges. However, he emphasizes the need to adapt AI to local needs and communities, establish adequate infrastructure and governance frameworks, and foster dialogue and cooperation. By considering these aspects, countries can fully harness the potential of AI for sustainable development and inclusive growth.

Hiroshi Maruyama

Hiroshi Maruyama’s company is heavily invested in hardware development and aims to broaden the applications of generative AI. One notable accomplishment is the development of an energy-efficient supercomputer, which ranked highly on the Green500 list of the world’s most energy-efficient supercomputers. This demonstrates the company’s commitment to sustainability and to reducing energy consumption in the tech industry.

Additionally, Maruyama’s company utilized a deep learning method to improve the speed of new material discovery. By harnessing the power of AI, they have accelerated the process of discovering new materials, which has implications for industries such as medicine, manufacturing, and energy.

An interesting collaboration highlighted in the research is with Cowell Corporation, resulting in the creation of a virtual human generative model. This joint effort showcases Maruyama’s company’s expertise in combining AI technologies with virtual modelling, with potential applications in entertainment, virtual assistance, and virtual reality.

Maruyama himself is a strong advocate for pushing the boundaries of AI beyond human perception. He believes in the immense potential of this technology and its ability to transform various sectors. To support this, he cites the scaling laws behind models such as ChatGPT, which suggest that the capabilities of AI language models continue to grow as the models are scaled up. Additionally, his company’s development of the Matlantis software for materials informatics demonstrates its commitment to expanding AI into new technological domains.

In this collaboration, Maruyama’s company played a crucial role in the creation of the virtual human generative model, signifying the importance of its contributions to advancing virtual human technology and ensuring ethical and responsible development in this field.

Despite these achievements, Maruyama expressed concern that current hardware technology is too energy-consuming and expensive. His company has therefore developed its own accelerator technology, enabling the creation of one of the most energy-efficient supercomputers and demonstrating its commitment to addressing the challenges of energy consumption and cost in hardware development.

In conclusion, Hiroshi Maruyama’s company is at the forefront of hardware development and expanding the applications of generative AI. They have achieved significant milestones, including the creation of an energy-efficient supercomputer, collaborations in virtual human generative modelling, and breakthroughs in materials informatics. Maruyama’s passion for pushing the boundaries of AI beyond human perception is evident in his company’s accomplishments. However, he also recognizes the need to address energy consumption and cost issues in current hardware technology, taking steps to mitigate these challenges. Through their innovations and commitment to responsible development, Maruyama’s company is contributing to the advancement of AI and shaping its future.

Amrita Choudhury

The use of Artificial Intelligence (AI) in emerging economies has the potential to bridge the divide between these economies and developed countries, as well as improve public distribution systems. AI can be a powerful tool for development, helping emerging economies leapfrog traditional barriers. Its application in agriculture, for instance, can facilitate smart farming, maximize crop benefits, and assist in climate control. Furthermore, AI has the potential to enhance public distribution systems if accurate and relevant datasets are used.

However, it is important to acknowledge that AI algorithms, often developed in the Global North, can display inherent biases when applied in diverse global south contexts. This is attributed to the varying socio-cultural variables, environments, and genealogical distinctions across regions. Therefore, it becomes crucial to respect local cultures and customize AI solutions to specific localities to ensure inclusive and equitable outcomes.

Greater transparency, accountability, and local customization should be guiding principles in AI development. Respecting local cultures and conditions is essential, as is using AI responsibly to avoid or minimize biases. Promoting dialogue between industry innovators, regulators, and civil societies will allow for a collaborative shaping of the direction AI applications should take. This will contribute to sustainable development and progress, aligning with SDG 16: Peace, Justice, and Strong Institutions.

Regarding regulations, the development of AI should be guided by strong frameworks and best practices rather than restrictive regulations. This approach allows for innovation and growth, aligning with SDG 9: Industry, Innovation, and Infrastructure. Restrictive regulations could hamper the growth of small and medium enterprises and hinder innovation. Thus, collaboration, dialogue, and capacity-building around AI are encouraged.

Collaboration is necessary due to the cross-border nature of technologies. The interconnectedness of AI and its global reach requires cooperation among countries and stakeholders. By working together, they can address common challenges, share knowledge, and foster partnerships that contribute to achieving SDG 17: Partnerships for the Goals.

In developing countries, training and capacity-building play a crucial role in ensuring the effective and responsible use of AI. By investing in quality education, governments and organizations can equip individuals with the skills needed to leverage AI technologies for their benefit. This aligns with SDG 4: Quality Education.

It is also important for AI systems to respect rights and promote gender equality. Embedding these principles in AI systems ensures that they do not perpetuate discrimination or biases. This aligns with SDG 5: Gender Equality and SDG 16: Peace, Justice, and Strong Institutions.

Lastly, the security of AI systems must be prioritized to prevent misuse. State actors attacking different countries and bad actors infiltrating the system could have significant consequences. Protecting AI systems from these risks contributes to maintaining peace, justice, and strong institutions as outlined in SDG 16.

Entities like the World Bank can play a significant role in providing training and sharing best practices. By supporting capacity-building efforts and offering guidance to governments and stakeholders, they can help maximize the positive impact of AI development. This aligns with SDG 17: Partnerships for the Goals.

In conclusion, the use of AI in emerging economies has the potential to drive development and bridge the gap with developed countries. However, it is crucial to address biases, respect local cultures, and customize AI solutions accordingly. Transparency, accountability, collaboration, and capacity-building are important factors to ensure responsible and inclusive AI development. By addressing these aspects, AI can contribute significantly to achieving various Sustainable Development Goals.

Daisuke Hayashi

Daisuke Hayashi, a Senior Digital Development Specialist at the World Bank, focuses on digital infrastructure, international cooperation, and scaling up digital skills. The World Bank has been actively involved in digital development, supporting the growth of AI technology, and expanding digitalisation in developing countries. Hayashi’s work involves addressing the challenges associated with achieving consensus on AI within the international community.

Daisuke Hayashi recognises that getting consensus within the G7 countries on AI is a difficult task due to the potential and associated risks of AI technology. This acknowledgement highlights the intricate and complex nature of reaching an agreement on AI policies and regulations between different countries.

The World Bank has been actively working on building digital infrastructure and skills to bridge the gaps between connected and unconnected areas. They have supported infrastructure construction projects aimed at improving connectivity in developing countries. Additionally, the World Bank’s Digital Development Partnership focuses on capacity-building initiatives to expand and develop digital skills in these countries. This support is crucial for promoting development and reducing poverty in these regions.

The World Bank also emphasises the importance of establishing effective regulatory frameworks for AI. They collaborate with private companies and public sectors to find the best solutions for creating an efficient regulatory framework in the field of AI. This demonstrates their commitment to ensuring that AI technology is developed and implemented responsibly, taking into consideration ethical and legal considerations.

In conclusion, Daisuke Hayashi’s work at the World Bank focuses on digital infrastructure, international cooperation, and scaling up digital skills. The World Bank has been actively involved in digital development, supporting the growth of AI technology, and expanding digitalisation in developing countries. They acknowledge the challenges in achieving consensus on AI within the international community and are in favour of establishing effective regulatory frameworks for AI. Overall, the World Bank’s efforts in building digital infrastructure and supporting skill development play a crucial role in promoting economic growth and prosperity in developing regions.

Bonifasius Wahyu Pudjianto

Bonifasius Wahyu Pudjianto, a key figure at the Ministry of Communication and Informatics in Indonesia, is extensively involved in promoting IT literacy and nurturing the start-up ecosystem. His role focuses primarily on digital literacy and the start-up ecosystem: he is responsible for building the IT-sector capabilities of the population and promoting IT literacy. Pudjianto’s efforts align with SDG 9, which emphasizes industry, innovation, and infrastructure.

AI technology has rapidly gained prominence in Indonesia, contributing significantly to various sectors, including healthcare, education, skill development, poverty alleviation, and environmental humanitarian aid. AI solutions have improved remote healthcare access, and start-up companies have developed online courses for learners in rural areas. Moreover, AI solutions are being developed for disaster response and early warning systems. The widespread use of AI in Indonesian society has had a positive impact.

However, concerns regarding the ethical use of AI have been raised, and guidelines are being formulated to regulate its usage. These guidelines aim to ensure inclusivity, humanity, security, democracy, openness, credibility, and accountability in AI utilization, reflecting the need to consider individual rights and ethics.

Digital literacy is recognized as a vital component of societal capacity building, contributing to SDG 4 (quality education) and SDG 8 (decent work and economic growth). It is crucial for individuals to understand how to utilize AI and other technological advancements effectively and responsibly. Collaboration between industries and emerging start-ups is also seen as a key driver of innovation and economic growth.

To support the growth of start-ups, venture capital and financial engagement are essential. During the tech winter, many start-ups faced difficulties, leading to a decline in the start-up ecosystem. Venture capital and institutions like the World Bank play a crucial role in providing the necessary support and funding for start-ups to thrive, contributing to economic growth.

In conclusion, Bonifasius Wahyu Pudjianto’s work at the Ministry of Communication and Informatics in Indonesia focuses on promoting IT literacy and nurturing the start-up ecosystem. AI technology is widely utilized across various sectors, but ethical considerations and regulatory frameworks must be established. Strengthening digital literacy, fostering collaboration between industries and start-ups, and increasing financial engagement are identified as crucial factors for sustainable growth. Through these efforts, Indonesia can harness the potential of technology and innovation to drive progress and development in line with the relevant Sustainable Development Goals.

Melinda Claybaugh

Meta is actively developing AI products with a core focus on connecting people. These products utilize generative AI technology, allowing users to create and share images within its apps. This enhances the user experience and promotes interaction and engagement among users. Meta’s objective is to foster social networking and connect individuals.

In addition to connecting people, Meta is investing in open-source tools and products to democratise access to AI. It recently launched the Llama 2 open-source large language model, allowing developers and researchers to utilise it in the field of AI. Meta aims to empower individuals and communities by democratising access to AI tools, leading to a more inclusive and accessible AI landscape.

Furthermore, Meta’s commitment to making a positive impact extends to its Data for Good program, which tackles societal challenges through the use of AI. It supports translation between 200 languages, including low-resource languages, facilitating better communication and understanding across different cultural and linguistic backgrounds. The program also includes the Relative Wealth Index, which uses artificial neural networks to help governments increase the coverage of social protection programs. Meta thus leverages AI technology to assist governments in making informed decisions and addressing social inequality.

In conclusion, Meta’s development of AI products that connect people, its investment in open-source tools, and its Data for Good program demonstrate its commitment to creating a positive impact. These efforts contribute to achieving Sustainable Development Goal 9, focusing on industry, innovation, and infrastructure. Meta’s initiatives highlight the transformative power of AI for the benefit of individuals and society as a whole.

Natasha Crampton

Natasha Crampton, Microsoft’s Chief Responsible AI Officer, is responsible for implementing the company’s responsible AI principles. She works closely with engineering teams to ensure that Microsoft’s AI technologies adhere to ethical and responsible standards. Crampton also defines the policies and governance approach for AI implementation within the company.

In her external-facing role, Crampton takes the knowledge gained from Microsoft’s responsible AI practices and actively participates in public policy discussions. She advocates for the development of new laws, norms, and standards that promote responsible AI systems.

Microsoft is at the forefront of developing a suite of Copilots, AI-powered products that enhance productivity and creativity. These Copilots assist both experienced and novice coders in accomplishing tasks more efficiently. For instance, GitHub Copilot allows users to code in plain language, making coding more accessible to non-coders. Microsoft’s Copilots have been widely adopted for their ability to enhance productivity and foster creativity in coding.

Microsoft also believes that AI technology has the potential to bridge gaps in access and communication, particularly in linguistically diverse communities. Initiatives like the Be My Eyes platform, which uses OpenAI’s GPT-4 with vision (GPT-4V) model, aid blind or visually impaired individuals by providing textual descriptions of visual information. In India, Microsoft has deployed an AI-enabled chatbot that allows users to access government services in their local languages, overcoming linguistic barriers.

Effective multi-stakeholder collaboration is crucial for addressing the challenges posed by AI technologies. Microsoft advocates for identifying specific problems and directing resources towards finding solutions. The Christchurch Call serves as an example of a successful multi-stakeholder initiative, bringing together governments, civil society, and industry to work collectively towards addressing the issue at hand. Building on existing frameworks has also proven effective in multi-stakeholder collaborations, rather than reinventing the wheel.

Tackling the digital divide and focusing on upskilling are vital steps in fully harnessing the potential of AI technology. Many parts of the world still lack access to AI due to the digital divide. By addressing this divide and providing the necessary resources and training, individuals and communities will be better equipped to leverage AI’s benefits.

Overall, Natasha Crampton and Microsoft are committed to implementing responsible AI principles while driving innovation and inclusivity. Through their suite of co-pilots and efforts to bridge gaps in access and communication, they are demonstrating the positive impact AI technology can have. Effective multi-stakeholder collaboration, building on existing frameworks, and addressing the digital divide are essential steps in fully realizing the potential of AI.

Session transcript

Moderator – Yoichi Iida:
Good afternoon, everyone. My name is Yoichi Iida, assistant vice-minister at the Ministry of Internal Affairs and Communications of the Japanese government. This session is about the opportunities and challenges, but mainly the opportunities, brought by generative AI and foundation models, which we see as having great potential for the development of our society and economy. We have very prominent speakers and representatives from different communities, and we will have an interaction between these panellists on the potential uses of these technologies in different types of societies and economies, with different conditions and backgrounds. So I would like to start with an introduction by each speaker, from my side to the end. I will pass the microphone from one to another; please take two or three minutes to introduce yourself.

Amrita Choudhury:
Good evening, everyone. My name is Amrita Choudhury. I come from India and represent CCAOI, which is a civil society organisation. I am currently a MAG member and happy to be here. I also chair the Asia-Pacific Regional IGF, among the other hats I wear, and I pass it on to the next fellow panellist.

Melinda Claybaugh:
Good evening, everyone. I’m Melinda Claybaugh. I’m a director of privacy policy at Meta, and I look after AI and data regulation globally.

Hiroshi Maruyama:
Hi, I am Hiroshi Maruyama. I work for Preferred Networks, and my background is software. I spent 26 years at IBM Research. Now I work for Preferred Networks part-time as a director, and I also work for a corporation, a chemical company making daily products like shampoo and soap.

Natasha Crampton:
I’m Natasha Crampton, Microsoft’s Chief Responsible AI Officer. There are two parts to my job. The first is internal-facing, where I help our engineering teams implement our responsible AI principles and commitments by defining the policies and the governance approach that we have across the company. In my external-facing role, I try to take what we’ve learned from building AI systems responsibly and move that into the public policy discussion about what the new laws, norms and standards ought to be in this space.

Bonifasius Wahyu Pudjianto:
Good evening, ladies and gentlemen. My name is Bony, from the Ministry of Communication and Informatics, Indonesia. I have two areas of responsibility. The first relates to digital literacy, encouraging all people to build their capabilities in the IT sector; the second relates to the start-up ecosystem. That is my primary responsibility. Thank you.

Luciano Mazza de Andrade:
Hi, good evening. I’m Luciano Mazza. I’m with the Brazilian Ministry of Foreign Affairs, where I am the Director for Science, Technology and Innovation and Intellectual Property. Although the title of the job does not say it, ours is the department in the ministry responsible for all things digital: anything that relates to the digital economy, digital transformation, internet governance, and also disruptive technologies is part of our remit. It’s a pleasure to be here, and I’m looking forward to a good discussion. Thank you.

Daisuke Hayashi:
Hi, my name is Daisuke Hayashi, from the World Bank. My title is Senior Digital Development Specialist, and I’m very happy to be here with these excellent panellist colleagues. I work on digital infrastructure and international cooperation, as well as on scaling up digital skills. Thank you very much.

Moderator – Yoichi Iida:
Okay, thank you very much, panellists. As you see, we have an excellent set of panellists from different communities and different regions of the world. Before starting the question and answer between panellists, let me briefly introduce the efforts our government has been making to promote AI governance across the world, mainly through the G7 framework. As many of you are aware, Japan holds the G7 presidency this year. We had the digital and tech ministers’ meeting at the end of April, and through its preparation we have been discussing global AI governance. In the beginning, the objective of the discussion was to bridge the gaps between the different policy frameworks and regulations across G7 members, because, as you know, the EU and European countries are heading for a legally binding framework, while the US, Japan and other members were maintaining, at least at the time, a non-binding, soft-law approach to AI governance. My objective was to keep this group sharing the same policy direction. In the beginning, we encouraged our European colleagues to acknowledge the importance of an open and enabling environment for innovation through AI technology, based on a soft-law approach. As you may know, even under the EU AI Act framework, the proportion of regulated AI will be limited. According to their explanation, most AI technologies and AI systems will remain free to provide and free to use: they only regulate AI systems with high risks, and in some cases they consider the risks unacceptable, but in most cases AI systems are free in the market. Free doesn’t mean free of charge, but free from regulation, of course. So we wanted to share this direction, but while they were discussing internally the introduction of a legally binding framework, it was a little difficult for us to find a landing point between the different approaches.
So we changed the direction of the discussion, and in the end the G7 agreed on the importance of interoperability between different policy frameworks. Whether you take a legally binding approach or a soft-law-based approach, we believe interoperability and transparency between different frameworks and different jurisdictions are very important, so that the various players in the AI ecosystem can be assured of the predictability and transparency of the different legal and policy frameworks. That was the discussion at the G7 digital and tech ministers’ meeting. In the middle of that discussion, we saw the rapid rise of generative AI in the market and its rapid expansion across society. So we decided to discuss how we could improve the governance of this very powerful technology, but we didn’t have enough time, because it came up all of a sudden, probably in March or even April, and our ministers met at the end of April. So our ministers decided to continue the discussion and efforts beyond the ministers’ meeting and beyond the leaders’ summit in the middle of May this year. The leaders agreed to continue the work and directed the relevant ministers to carry it forward to the end of the year, naming this initiative the Hiroshima AI Process. The Hiroshima AI Process was launched at the end of May, and we have held dozens of working group meetings online from June up until now. We have been discussing the priority risks and challenges brought by generative AI, the opportunities, how we could address those risks and challenges, and what good approaches would be, in particular where we do not have a clear technological answer to issues such as lack of transparency or the spread of disinformation and misinformation, which are relatively new to us and brought by generative AI and foundation models.
So, we are continuing our discussion, but in the beginning of September, as some of you may know, our ministers met online to exchange views and confirm the interim outcome of the discussion. We had a ministers' statement, which included 10 items as priorities. They included countermeasures to the risks and challenges posed by generative AI, which companies and AI actors should consider before they develop and launch their models and systems and put them into the market. Those companies and organizations should also continue their efforts after the launch of AI systems, and so on and so forth. We have 10 key elements, which you can see on the website of our ministry, and these 10 elements are still being discussed at our working group to be elaborated with more content. We are now trying to agree a set of rather high-level guiding principles for players such as organizations developing generative AI and foundation models, and even the new types of AI systems which may come up in the near future. We are also discussing an action-level code of conduct, which will articulate how those AI actors can implement those high-level guiding principles. Our working group is now discussing those high-level principles and the action-level code of conduct with the organizations and players developing AI systems, because the working group believed the development stage of AI systems is the most urgent priority for us. But at the same time, we believe the different actors in the AI ecosystem, I mean AI service providers, AI deployers, and AI end users, should also be responsible in their engagement with generative AI and advanced AI systems.
So, in the second half of our work, we will be working on principles for AI actors other than AI developers, but up until now we have been more or less focusing on the players developing AI systems, including generative AI and foundation models. That is what we have been doing since the beginning of the year under the G7 framework, but at the same time we recognize the G7 is a small group in the world. In our discussion, everybody recognizes the importance of multi-stakeholder dialogue, and of dialogue with players and partners beyond the G7 group. So, this session is one of the very first steps for us to share our ideas and to start our discussion with different players in the AI ecosystem. This is just an introduction, and I'm very sorry I'm taking a little bit longer than I expected, but having introduced our efforts up to now, in this session we are trying to focus in particular on the positive side of new AI applications and new AI systems. We often talk about the risks and challenges, but when we do, the purpose of the discussion is that we want to know how we could make the best use of the benefits of this technology while addressing the potential risks and challenges. We all know that even if there were enormous benefits and potential, if risks and challenges are waiting for us, people are not comfortable actively using the technology. So, that is why we discuss risks and challenges, but the ultimate purpose is how we can make use of those new technologies, through innovation, to improve our society and develop our economy. And this is true not only for the developed countries, but of course for all the different communities and societies across the globe.
So now, I would like to invite our excellent panelists to share what kinds of benefits and potential your companies' services, technologies and systems have brought to society, and also what kinds of benefits or potential you are planning to bring to society. First, I would like to invite the three AI companies to share information on the current services or solutions you are providing in the market, and what types of new benefits, developments or advantages you are thinking of bringing through your newly developed services or technologies. I will go in order from my side to the end: first Melinda, followed by Maruyama-san, and then Natasha.

Melinda Claybaugh:
So Melinda, please. Thank you so much. So I want to share some of the AI products and developments that Meta has been developing, and they fall into a few buckets. The first bucket, probably not surprisingly, is what's core to our business in terms of helping people connect with each other, which is our mission. So we recently, a couple of weeks ago, released a suite of new generative AI products that you can use in our existing apps and services: WhatsApp, Facebook, Instagram. These are AI agents that you can interact with to have fun, ask questions, and get information. We also launched generative AI products that allow you to make images that you can share with your friends and family in our products, and you can make stickers and fun things that integrate with our products and allow you to just have fun with your friends and family. And that's really core to our business and furthering the experiences that people have in our apps. But there are also two other types of deep investments that we're making in AI that I want to highlight. Another area is investing in open-source tools and products. This is really about unlocking innovation globally, helping people take advantage of AI tools, and democratizing access to them. I first want to call out something that we released this summer, which is a large language model called Llama 2. This is a large language model that we made available on an open-source basis. Anyone can download it and use it; you can download it in different sizes depending on your computing capability, and you can build things on top of it, including generative AI products. And actually, a really exciting development is that a couple of days ago we launched what's called our Llama Impact Challenge, and we're seeking applications from anyone who wants to propose a compelling use of Llama to solve a societal challenge.
In particular, we're looking for applications in the areas of education, the environment, and open innovation generally. So think about, for example, in the area of education, how you might use our large language model to support teachers or students in a particular learning environment. In the area of the environment, how might you use our open-source model to understand how we can adapt to climate change, how we might prepare ourselves for climate effects, and how we might mitigate or remove greenhouse gases from the atmosphere. These are all things that can be propelled and powered by large language models. So we're very interested to see what people might come up with, and the most compelling ideas we will fund and provide grants to. That's just an example of how we're hoping to open up access to really powerful tools, particularly to solve societal challenges. And this is something that we committed to as part of the White House commitments that we signed in July, along with other companies, including Microsoft. One of the voluntary commitments that we agreed to is investing in research to understand and advance solutions to societal challenges, and we think this is a really powerful way to do that. The third bucket I wanted to raise around our approach to AI and investments in AI is our Data for Good program. Just a couple of things I want to highlight from there. One is a program we have called No Language Left Behind. This is a first-of-its-kind project that open-sources models capable of delivering high-quality translations directly between 200 languages, including low-resource languages. It aims to give people the opportunity to access and share web content in their native language and communicate with anyone, anywhere, regardless of their language abilities.
We then take the learnings from that program and feed them back into our products in order to improve our product experiences for communities around the world. I also wanted to share one final program, called the Relative Wealth Index. This leverages artificial neural networks to analyze imagery to help identify poverty at a sub-neighborhood level. That information is then used by governments to increase the coverage of social protection programs and make them available to a wider set of populations that need the support most. So from fun generative AI products on the one hand to really grappling with critical social problems that we face around the world, I think we can start to see the benefits of generative AI globally.
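(A practical note on the "different sizes" Melinda mentions: Llama 2 ships in 7B-, 13B- and 70B-parameter variants, and a rough back-of-envelope calculation of the memory their weights require shows why size matters for modest hardware. The sketch below is illustrative arithmetic, not official Meta sizing guidance; real usage is higher once activations and runtime overhead are counted.)

```python
# Back-of-envelope memory estimate for loading LLM weights.
# Assumes 16-bit (2-byte) weights; activations, KV cache, and
# framework overhead are ignored, so real usage is higher.

LLAMA2_SIZES = {"7B": 7_000_000_000, "13B": 13_000_000_000, "70B": 70_000_000_000}

def weights_gib(n_params: int, bytes_per_param: int = 2) -> float:
    """Approximate size of the weights alone, in GiB."""
    return n_params * bytes_per_param / 2**30

for name, n in LLAMA2_SIZES.items():
    print(f"Llama 2 {name}: ~{weights_gib(n):.1f} GiB of weights at fp16")
```

Roughly 13 GiB for the 7B model at fp16, and over 130 GiB for 70B, which is why the smaller variants (or quantized weights) are the realistic entry point on consumer machines.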

Moderator – Yoichi Iida:
OK, thank you very much, Melinda, for those very interesting examples. So now let me invite Maruyama-san to share your story.

Hiroshi Maruyama:
Thank you. So our company is a little bit late in terms of coming to large language models. We released our open-source large language model two weeks ago, and we are going to demonstrate applications of these language models in various domains next week at the CEATEC exhibition. But today, I would like to focus on two technological directions that we are investing in. The first one is hardware. ChatGPT is a significant breakthrough, but there are more innovations to come. One of the discoveries behind ChatGPT and large language models is the so-called scaling law, which means that more parameters, like billions of parameters, more data, and more computational power are the key to emergent capabilities, such as command of language in this case. This means that if we put in more computation, say 100 times larger computational power, then we may expect the next level of emergent properties, whatever they may be. That's the reason why we invest heavily in hardware. We started as a software company, but we found that current hardware technology is too expensive as well as too energy-consuming. So we developed our own accelerator, which enabled us to take first place as the world's most energy-efficient supercomputer in the Green500 ranking. Using our next-generation hardware, we will make the next breakthrough in AI. So that's the first area of our investment. The second area is the domains in which to apply generative-model thinking. Generative AI currently revolves around the world of human perception: language, text, image, voice, et cetera. But there are other domains that are not very familiar to human beings, for example, different scales. Looking at the molecular scale, we have a software service called Matlantis for materials informatics.
We use deep learning technologies to accelerate the search for new materials by 1,000 or 10,000 times compared to traditional first-principles simulations. Another example of a different domain is highly complex systems, like the biological systems of the human body. As I said, I also work for Kao Corporation, and in a collaboration between Kao and Preferred Networks we developed the so-called virtual human generative model. I think you are familiar with image generative models such as Midjourney. Such a model generates an image, for example 100 pixels by 100 pixels, where each pixel represents the brightness of that dot. But suppose you replace this image brightness with human body measurements, like age, sex, blood pressure, glucose level, and so on. We defined about 2,000 different attributes that are observable from the human body and created a generative model out of this data. This is a very interesting and general-purpose model, which can have many different applications. For example: I am a 65-year-old male; what is the average blood pressure at my age? That kind of question can easily be answered by this generative model. So we apply the technology to domains beyond human perception. Of course, it's fun to watch machines doing what a human can do. But letting the machine do what a mere human cannot do is, I think, another way of going forward. Thank you.
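(The "scaling law" Maruyama-san refers to is the empirical finding that a model's loss falls smoothly as a power law in parameter count. A minimal numerical sketch, using the power-law form reported in scaling-law studies with constants chosen for illustration rather than measured from any specific model:)

```python
# Illustrative power-law scaling of language-model loss with size:
#   L(N) = (N_c / N) ** alpha
# N_c and alpha below are illustrative values of the kind reported in
# scaling-law studies, not measurements from any particular model.

def loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Predicted loss for a model with n_params parameters."""
    return (n_c / n_params) ** alpha

# 100x more parameters at each step: the loss keeps dropping smoothly,
# which is the basis for expecting new capabilities to emerge at the
# "next level" of scale.
for n in (1e9, 1e11, 1e13):
    print(f"{n:.0e} parameters -> predicted loss {loss(n):.3f}")
```

The same multiplicative jump in scale buys a comparable multiplicative drop in loss at every step, which is why "100 times larger computational power" is treated as a plausible route to the next emergent capability rather than diminishing returns.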

Moderator – Yoichi Iida:
OK, thank you very much, Maruyama-san, for those various types of applications and solutions. And now I would like to invite Natasha to share your knowledge. Do you mean the previous speaker? Oh, okay, okay.

Hiroshi Maruyama:
So, this was a collaboration between Kao Corporation and Preferred Networks.

Moderator – Yoichi Iida:
Thank you very much.

Natasha Crampton:
So, I'm Natasha Crampton from Microsoft. I'm incredibly optimistic about AI's potential to help us have a healthier, more sustainable, and more inclusive future, and in fact that's what motivates me to do the work that I do within the company to ensure that the technology is safe and secure and trustworthy. I think what's exciting about the current moment is that you don't just have to imagine potential use cases for AI; there are real use cases today that are making a difference. At Microsoft, we've been building a suite of copilots. They're very intentionally called copilots, our products that incorporate the latest generation of AI, because they're all about combining the best of humans and machines. So, if you take the Microsoft products that many of us know and use every day, things like Outlook or Teams or Word, we're adding AI-powered assistants to those programs which allow you to do things like, instead of writing a long, lengthy email, just add three bullet points, and the copilot will help you expand those bullet points into a first draft, which you can then look at and decide what to do with. You can take a Word document and put it into the PowerPoint copilot, and it will generate a first draft of a slide deck based on that Word document. Or, if you're like me and sometimes run a little bit late to meetings and join a Teams meeting five minutes in, you can get a summary of what's already happened in that meeting using the Copilot in Teams. In addition to adding copilots to the Microsoft Office products that we all know well, we've also created whole new products, which our customers are very much enjoying right now. An example is a product called GitHub Copilot. This is a product that allows you to type in plain language and generate code. And it's an incredibly democratizing product in the sense that you no longer need to be a coder in order to code.
You simply need to be able to issue instructions and describe the outcome that you want to achieve, and the code will be generated. We're finding that this type of product is welcomed both by new-to-coding individuals, people who do not have expertise in coding, and by very experienced coders, at the level of those who work on, say, Tesla's Autopilot system, so very sophisticated AI operators, who tell us that they too find it very, very useful in their work. So we have that suite of products, the Copilot suite, and we think that together these products help users be more creative, help them do things that they might not have been able to do before, and make them more productive. Especially at a time when many countries are grappling with major population shifts, and the working-age population in many developed countries is shrinking, these types of productivity-enhancing applications of AI are really meaningful. In addition to those copilots, we also make available the basic building blocks of this technology. We're working very closely with our partner, OpenAI, who you may be familiar with as the developers of ChatGPT. OpenAI has made available a number of different models, which we make available as building blocks, and then our customers and our partners come up with all sorts of exciting applications on top of those. I just want to mention two examples now, which I think give you a flavor of some of the potential that lies ahead with these models. There's a Danish startup called Be My Eyes. They were established in 2012, and they have been providing services to people who are blind or have low vision. They set up a program whereby people who are blind or have low vision were partnered with sighted volunteers, so that the volunteers could help them navigate an airport or identify a product.
Microsoft was involved early in this program by making sure that our experts on Microsoft technology products were able to help explain how to use technology to people who are blind or have low vision. So this was a very successful program. But it really got a step change, and was able to be made available much more broadly, earlier this year, when OpenAI made available a model called GPT-4V. It's a vision model, and it allows an image to be ingested and then described in text. In practice, what you can do with this technology is something like open your refrigerator door and take a photo of what's inside your refrigerator. The model will analyze the image, recognize the items in your fridge, and then suggest recipes for what you might be able to cook that evening for your meal. Now, of course, this is not just helpful to people who are blind or have low vision; this has everyday applications for many of us. So I think that's one example of an exciting application where it's meeting a real community need. It's serving 250 million people who are blind or have low vision, but it also has broad application that we all benefit from. If we move from Denmark, where that startup is based, to India, to a town called Biwan that's about two hours outside of New Delhi: this is an arid farming village, and the farmers there are facing a number of challenges. They're facing challenges like applying for pensions on behalf of their aging parents when their government assistance payments have stopped, or wanting to apply, in some cases, for their children to get scholarships to go to university. But in reality, in this particular village, there's both a linguistic and a technology divide. English is often the language of public life, of government life, in India, and yet only 11% of the population speaks English. So into this situation enters a new offering based on OpenAI's ChatGPT technology and built on Microsoft's cloud, called Jugalbandi.
And it's a chatbot that is allowing much, much greater access to government services than was previously available. Users of this chatbot can ask questions in multiple languages. It turns out that India has 22 constitutionally recognized languages, but in practice somewhere between 100 and 120 spoken languages. This bot is able to operate in a language of the user's choosing. You can speak into the interface and it will convert your speech into text, or you can type, which again overcomes a literacy hurdle. The bot then retrieves the relevant information, which is usually made available in English, and translates it back into the local language. So there's one implementation of a bot in India that's helping those farmers meet their needs: to get pension payments, to get their government assistance stipends, and to make sure that university students are able to access funding. But you can really imagine how that framework could be used in many other parts of the world as well, and it's exactly those sorts of democratizing applications of AI that I'm really excited about.
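(The flow Natasha describes, local-language speech in, English-language government information retrieved, and an answer translated back out, can be sketched as a simple pipeline. Everything below is a stub for illustration: the function names, the tiny knowledge base, and the pass-through "translation" are invented stand-ins for real speech-recognition, search, and machine-translation services.)

```python
# Sketch of a multilingual government-services Q&A pipeline.
# Each stage is a stub; a real system would call speech-to-text,
# document retrieval, and machine-translation services here.

KNOWLEDGE_BASE = {  # hypothetical English-language source documents
    "pension": "Pension applications require form 10-D and proof of age.",
    "scholarship": "Scholarship applications close on 31 March.",
}

def speech_to_text(audio: bytes, lang: str) -> str:
    # Stub: a real system transcribes audio in the user's language.
    return audio.decode("utf-8")

def retrieve(query_en: str) -> str:
    # Stub retrieval: naive keyword match over the English documents.
    for key, doc in KNOWLEDGE_BASE.items():
        if key in query_en.lower():
            return doc
    return "No matching information found."

def translate(text: str, src: str, dst: str) -> str:
    # Stub: passes text through, tagging the target language.
    return f"[{dst}] {text}" if src != dst else text

def answer(audio: bytes, user_lang: str) -> str:
    question = speech_to_text(audio, user_lang)
    question_en = translate(question, src=user_lang, dst="en")
    info_en = retrieve(question_en)
    return translate(info_en, src="en", dst=user_lang)

print(answer(b"How do I apply for my father's pension?", "hi"))
```

The design point is that the knowledge base stays in one language while the user-facing ends of the pipeline adapt to whichever of the hundred-plus spoken languages the user chooses.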

Moderator – Yoichi Iida:
Okay, thank you very much for those very interesting examples across various fields and various regions. Having listened to the three speakers from the AI industry, we have learned a lot about the current situation in AI-based services and solutions and the possibilities in the near future. So now I would like to invite the speakers from emerging economies and developing countries, who are expecting potential solutions or future services to address the challenges and problems in their societies and economies. Maybe that will give us some hints for thinking about future collaboration. So to begin, again from my side, let me invite Amrita to share your ideas.

Amrita Choudhury:
Thank you. Thank you. So if I look at the developing-country perspective, and I think it's a global phenomenon, and correct me if I'm wrong, most countries understand the power of technology and they want to leapfrog their development. They do understand that technology can help them leapfrog, and they want to take technology, including AI, which is the flavor of the season, I would say, and use it in a better way, because we do see a trend: between the countries who are using technology, or even AI, and the countries who are not, the divide is increasing, and we don't want that to happen. If I look at countries such as India, as was just mentioned, AI is also being used for good, for example in agriculture. I would take that example, and just a correction: Indian government websites, the official ones, are bilingual or trilingual. The end-to-end process may not be complete, but they do have the local language. If you look at countries such as India, where the population is exploding, I would say, and land is decreasing because of urbanization and everything, agriculture is using AI. They are using it for smart farming, for using terrain better, and for deciding what kinds of crops to grow. It is even used for climate issues. For example, we are seeing global warming, and actual change is happening: you have unprecedented weather, et cetera, coming even for fishermen. So these are places where it can be used and can maximize benefits. You can use it in public distribution systems if you use the data sets correctly. I would add the caveat that it can be used for good provided the right data sets are being used. So governments understand this and they want to use it, but obviously the technology may not be with everyone, and there needs to be more information sharing. But I think the questions are: is the process transparent? Are the systems, the data, and the way they are used accountable?
What are, I would say, the algorithms being used? Because there are concerns that governments are raising, which are the biases in the system. It could be racial biases, it could be systemic biases, it could be any kind of biases coming up. And, you know, just as the example was given that medicine can use these data sets for health care, especially where you have limited doctors or physicians, we also need to realize that, for example, for someone in a particular region, let's take Japan, the constitution or genetic make-up of the person may be quite different from a European's or even an Indian's. So the same set of patterns may not work for everyone; it would have to be customized locally for that kind of population, and that goes for everything else. So the data sets need to be from that place. For example, many times we've seen that many of the algorithms in use are from the Global North, and they don't work in the Global South, or I would say the majority world. If I take Asia-Pacific, for example, we are very diverse: we have countries such as Japan, and we have the Pacific Islands, who are still in the process of development. We have different cultures and races. So when you have systems working, they need to respect the culture of the place. That's very important. What works in one place may not work in another. There is no right and wrong in this; this is how the places are, so we need to respect that. So I think those are the concerns that come in, but it can be used for good. And I think what is needed is, you know, if you speak to youngsters, they're using ChatGPT for their answers, they're using it for many things. But it can be used for much more, and I think those things need to be spoken about: how it can be used, perhaps even companies speaking to the regulators, policymakers, or even civil society, et cetera, and understanding what the needs are.
Everyone comes up with good intentions, but when it comes to reality, it may be used in different ways. For example, many countries are coming up for elections. I hope it's not being used, as you were saying, for spreading misinformation or disinformation. So how those can be avoided, and how it can be used in a proper way, is something to look at. And there needs to be more collaboration and, I think, capacity building, especially for decision makers: how technologies work, what the pluses are, what the concerns are. And as you mentioned, AI, or generative AI, is growing, and we really don't know how it will shape up. So I think having guidelines so that it is used properly makes more sense than trying to stifle it, because if we look at emerging countries, they want small and medium enterprises growing, they want innovation to happen in those places. So sometimes if we try to restrict things, it may work against the aspirations of those countries. So I think having more frameworks, more dialogues, and sharing best practices is a good way forward. And perhaps, if we have some time, I would share that at the IGF we have the Policy Network on AI, which is working on three main parameters, and we do have a discussion on the 11th. The first is interoperability, because you have different governance structures coming up, with the OECD and others coming up with frameworks, but each of them needs to have some converging points, and that is what it is trying to look at. The second is gender and race biases: how you can mitigate them somewhat is something it is looking at. And the third is how AI can be used for the environment. And this has a Global South lens, because it has been argued many times that much of the research coming out is more Global North, and the majority countries are not taken into consideration. So that's where it comes from.
And I think if the Hiroshima AI Process is expanding and trying to bring developing nations into the discussion, that's good, because otherwise it remains an exclusive club of seven countries, whereas power shifts are happening and other countries are coming up. So it would be good to have not only the countries but also different stakeholders. For example, if you have the industry who is innovating, the government who regulates, and civil society or academia who come up with the data, all in the same room, that helps. And I think those dialogues and that capacity building are important, because the train has left the station. It will go further; you can't stop it. But how you steer the movement of the train in a positive way is something that needs to be looked at. And I think I would end it there. Thank you.

Moderator – Yoichi Iida:
Okay, thank you very much. So we learned that AI is not almighty, but when it is tailored or localized according to the conditions of communities and societies, it can be a powerful instrument for bringing innovation and improvement to a community. And as was pointed out, the G7 has never tried to be an exclusive club of a small number of countries; we are always looking outward and always looking for collaboration with various partners. So thank you very much for the comment. And I would like to invite Mr. Mazza, Luciano, Director from the Foreign Ministry of Brazil. Oh, okay, I'm sorry, I skipped my order. So may I invite first Bonnie-san from Indonesia, and then Luciano. Bonnie-san is from the Ministry of Communication and IT of the Indonesian government. So Bonnie-san, the floor is yours.

Bonifasius Wahyu Pudjianto:
Okay, thank you so much. Yes, everybody knows that AI has now become very well known and very useful for our society, and it is also moving very fast, because the technology itself is evolving and having a huge impact on all of society. AI technology has begun to be applied in various sectors in Indonesia, starting with improved access to health care. Because Indonesia is an archipelagic country with thousands of islands, we have to have a solution for each individual citizen, even those in remote areas. Infrastructure was the initial obstacle, but in the end we have to provide solutions so that doctors can deliver medical care to patients in remote areas. Secondly, in education and skills development: because the young generation is also scattered across many areas, recently there have been solutions provided by startup companies offering online courses suitable for those who are not living in the major cities. These are dedicated to those in rural areas, with suitable content. We also have AI solutions to alleviate poverty. And an interesting area is environmental and humanitarian aid and disaster response, including early warning systems, because nowadays, due to the heat wave in Indonesia and surrounding Southeast Asia, there are quite a lot of fires in forest areas, and also other environmental problems. So solutions need to be developed not only by the government but also by the private sector. In the meantime, startups have delivered good and significant innovations and inventions by utilizing AI, and they have shown a significant contribution in solving those problems and increasing the quality of service as well as productivity. These are some examples of the implementation of AI, which is being used widely.
However, some of our stakeholders also have concerns about utilizing AI. From the regulatory perspective, academics, practitioners, as well as civil society are concerned to ensure that the utilization of AI pays attention to individual rights and ethics. Fortunately, we established the National Artificial Intelligence Strategy in 2020. It is now being prepared for formulation into a presidential regulation, and based on business activities during 2020 up to 2022, we are also preparing derivative regulations related to norms, standards, procedures, and criteria, a kind of code of conduct. So from the regulatory point of view, we are formulating a guide of ethical values that can serve as a reference for business actors. This is very essential, because companies and other institutions should comply regarding data and internal ethics in the field of artificial intelligence. So quite a lot of innovation has been made, but we have to pay attention to the ethical value guidelines, including inclusivity, humanity, security, democracy, openness, as well as credibility and accountability. These are the basic values and norms of the Indonesian nation. Thank you.

Moderator – Yoichi Iida:
Okay. Thank you very much, Bonnie-san. Sorry about my mistake, but we heard a very interesting report from your country. And now I invite Luciano to share your expectations and foresight. Thank you.

Luciano Mazza de Andrade:
Sorry, I was off. Thank you very much, Yoichi. Well, I think our colleagues and previous speakers covered some interesting issues that I wanted to touch upon a little bit as well. Of course, when we think about areas where AI can be most effective in addressing challenges and problems in our countries, in different countries, I think it's important to bear in mind that what is a priority in one country is completely different from others, and in this case, I think the priorities for developing countries are probably a lot different from the priorities for most developed countries. So considering specific areas where we see a lot of potential for deployment in Brazil, I think there are obvious topics, and one that is not much mentioned when we think about the concerns most salient to developed economies: food security and agriculture is certainly one of them, and I think Amrita mentioned this issue, along with health and education, where of course a lot of examples were brought up. Probably one area where we see a lot of potential is leveraging AI solutions to improve the provision of public services. So e-government in general, and also the very use of AI in the workings of the public service in some areas, increasing productivity and efficiency, and so on and so forth. Again, something that was referred to before that I think is important to mention: for developing countries, having adequate capabilities and governance frameworks, both in government and outside government, is crucial, and that must come first, because without this it would be very hard to make sure we can benefit from all the positive prospects that we see. Without this prerequisite infrastructure, it would be very hard to take full advantage of the benefits that AI can bring.
One aspect that was mentioned: in Brazil we have an AI strategy that is very much focused on innovation, and we also have a legal framework designed to boost start-up innovation, and Brazil has a dynamic innovation ecosystem. I think that fits in well with some of the comments I made before, because one big challenge is how to make AI more local, and we need to bring a sense of ownership of these models to countries that are probably not developing their own large language models. It's very unlikely that every developing country will have its own, or will have big firms that develop their own systems, so a crucial question will be how those models are adapted to local needs and local communities. And, as was mentioned, models based on open-source systems are important, because they are the entry point for local innovation and a way to engage with local innovation ecosystems, and that is where we see real potential. But something that must be taken into account as well, as Amrita mentioned, is that these models are trained on data that normally does not come from developing countries, and that is a challenge we have to face somehow. When we develop local solutions and applications, it's important to find ways to bring these perspectives into the data on which those models are trained, because we're talking about troves of data that do not necessarily reflect the realities of developing countries. They may contain a lot of biases; they are based mainly on the English language and are not normally trained on local languages.
They may contain biases, as I said, that don't reflect the realities of developing countries. So this process of adapting and adjusting these models when they are applied to developing countries is very important, and it's crucial to bring a sense of ownership to developing countries when these solutions are presented. That is what I would say at this point, and we look forward to discussing these issues further.

Moderator – Yoichi Iida:
Thank you. Thank you very much for the thoughtful comment. It seems that adaptation to local conditions will be one of the key elements for providing a good solution to the community, and in order to widen the possibilities for adaptation, interoperability between different frameworks will be very important. At the same time, local versus universal may be a different and complicated question, but we won't go too deeply into this element because of the limited time; maybe we need to discuss this point on a different occasion. So, having listened to excellent speakers from both the supply side of the AI economy and the demand side of the AI ecosystem, we learned that there will be many opportunities and a lot of potential for AI technology to provide benefits to different types of communities and societies. Now we have one speaker from the World Bank, which has been playing a very important role in international cooperation. To my knowledge, the World Bank has been very active in development support in the digital field, especially through the Digital Development Partnership, and we have been talking a lot about the leapfrogging potential provided by digital technology. I personally believe AI brings the biggest leapfrogging potential among the different types of digital technologies. So I would like to invite Daisuke to share your thoughts and experience, and perhaps your ideas for creating opportunities for collaboration among companies, governments, and international organizations to spread the benefits brought by AI technology in the global economy. So, Daisuke, please.

Daisuke Hayashi:
Okay, thank you very much. In this year's G7 process, the World Bank participated for the first time in the G7 framework, to discuss further collaboration with the G7 countries on expanding digitalization in developing countries and emerging economies. Of course, I recognize the difficulty of reaching consensus even within the G7 countries, so involving countries beyond the G7 on AI is more difficult still. But at the same time, we all recognize the potential of AI, which is why we are now discussing its potential and its risks. From our perspective at the World Bank, we have long supported economic development to reduce poverty and enhance prosperity globally. This agenda is very new compared to traditional infrastructure such as roads and energy, but more and more people are focusing on our activities in digital development. We have been working in these areas first of all through infrastructure construction support, which is very important to close the gap between the connected and the unconnected; this is the foundation for developing a country through digitalization. But also, and this is most important, as many people have indicated, is closing the skills gap. Within the framework of the Digital Development Partnership we have run many capacity-building projects, with the participation of many developed countries and also private companies. Microsoft, Meta, and other companies are working very actively with us to expand and develop these kinds of skills in developing countries.
And we believe that the expansion of these digital skills will promote development in developing countries and foster innovation, in the human-centric way you are discussing right now, toward what we call a more livable planet. Finally, I'd like to mention that the regulatory framework has become more and more important in terms of creating the right environment. Private companies are trying to promote AI in their own direction, but at the same time, public sectors are trying to preserve the rights of nations and their citizens, including human rights. So, as the World Bank, we are coordinating with private companies and the public sector to find the best solutions within the regulatory framework. These are just some examples; AI is a very new agenda, and we are trying to find the best ways to advance these AI projects. We would be very happy to discuss further with private companies as well as the public sector, and with multi-stakeholders as a whole, to improve this AI environment. That is our approach. Thank you.

Moderator – Yoichi Iida:
Okay. Thank you very much for your very proactive comments and the lessons from previous experience. It is good to know there is a lot of potential for collaboration among different stakeholders. So, having heard from the different types of AI players in the ecosystem, I hope we have a lot of potential to promote collaboration, and having listened to the presentations, I would like to ask one or two of you for your thoughts on what would be good ways for us to promote collaboration among the different types of players. Our government will stand close to the World Bank and other international organizations to promote collaboration, making use of our knowledge and experience to move forward together, and the Hiroshima process will be one of those instruments. So I would like to invite any speaker to volunteer a comment and share your thoughts. Who will volunteer? Amrita first.

Amrita Choudhury:
Thank you. I think collaboration is a must, because these technologies are cross-border and a lot of collaboration is required. What can be done, through people who are experienced in this together with the World Bank and others, is to provide the necessary training in developing countries on what is happening, what needs to be secured, and what rights-based approaches require. Many countries are still arriving at the consensus that AI has to be rights-respecting, as was mentioned, and gender-respecting; many times we also see gender bias in these systems. So training, capacity building, and passing on best practices are important, and you have all been doing this through GPAI and other initiatives, which all overlap with each other. More dialogue, more capacity building, and sharing best practices are important, not only on algorithmic bias, transparency, and accountability, but also on security, because these systems need to be secured. We see state actors attacking different countries, and bad actors hacking into systems, and if a system is hacked, a public good can become a public bad. So the security aspect is also important. And if that training is given, showing entrepreneurs how to use those best practices, I'm sure most governments would be willing to enter into those dialogues and benefit from them.

Moderator – Yoichi Iida:
Okay, thank you very much for your comment and proposal. Actually, we have been providing a capacity-building program in collaboration with the World Bank, which offers study tours to Japan, inviting government officials and other relevant people from Asian and African developing countries to share our knowledge and expertise, and also some of the practices at private companies in Japan. We have been doing this mostly with relevant Japanese companies and people, but maybe we can do it multinationally, together with players from different countries such as Meta or Microsoft, in locations not limited to Tokyo. We can think about that kind of capacity-building program provided with the World Bank, if possible. Anyway, that can be one idea, and thank you very much for your proposal. Is there any other volunteer? Luciano, please.

Luciano Mazza de Andrade:
No, thank you. Yes, I would go along the same lines. We commend the leadership that Japan is playing in this field, but we understand that dialogue with other initiatives and cooperation with different organizations and countries is crucial. Again, cooperation is important not only for sharing experiences and best practices, but crucially to help build the national capabilities that will be required to make sure countries around the world can benefit from this potential. Engaging with different development banks would be an interesting avenue, in the sense that it will be necessary to leverage investment for those countries that need to acquire those capabilities. From an international institutional perspective, and considering the leadership position Japan is playing right now, I think it's important to strengthen dialogue with other organizations, also to ensure coherence in narratives and policies and to avoid a fragmentation of the spaces where all these initiatives are being developed. In that sense, we see it as useful to build some momentum at the UN as well, in terms of achieving, let's say, an overarching narrative in this field. There, all countries are represented, and we can make sure we have a debate that is as inclusive as possible in this area. Thank you.

Moderator – Yoichi Iida:
Thank you very much. I believe avoiding fragmentation and promoting interoperability will be a very important agenda for us, and we have high expectations for your G20 presidency next year. So, any other volunteer, probably the last? Who wants to be the last volunteer?

Bonifasius Wahyu Pudjianto:
Okay, maybe just to repeat a little from the previous suggestions. First and foremost is digital literacy: building society's capacity to become more knowledgeable and to understand how to utilize AI, among other things. Secondly, I think we have to boost collaboration between industry, such as Meta, Microsoft, and others, and the start-ups in emerging countries; this is important to be able to leverage the solutions that the start-ups' technology will provide. And the last part is for the World Bank and perhaps venture capital to engage with start-ups and industry, because without venture capital or the World Bank, I think it's quite difficult during the current tech winter. We have suffered during the tech winter, because some of our start-ups have disappeared from the ecosystem. Thank you.

Moderator – Yoichi Iida:
Okay, thank you very much for such a comprehensive and concluding remark; you took over the role of the moderator with your comment. But before we end, let me add one volunteer from industry. Yes, please.

Natasha Crampton:
I think my fellow panellists have shared many good ideas here. One thing that works well in the multi-stakeholder context is when a specific challenge is identified, which allows you to direct resources into it and to make more than incremental progress. I can point to some other multi-stakeholder initiatives, not specifically in the AI context, where we've seen really significant progress in a short period of time, and one that I would call out is the Christchurch Call. In my home country of New Zealand there was a terrorist attack that was streamed online, the first attack of its kind involving terrorist and violent extremist material. What was so effective in the response to that tragic incident was that governments, civil society, and industry came together to work on a very specific problem: how do we avoid the proliferation of this terrorist and violent extremist content? It was a specific problem, but the solution was actually very multifaceted. Industry came up with a protocol to respond quickly and prevent the proliferation of that type of content, but it wasn't just a point solution like that; it also involved literacy campaigns and further study by academia into what the problem space really involved. So as we think about what's next for multi-stakeholder collaboration on AI, there are lessons we can glean from past successful multi-stakeholder initiatives. Often, they work best when there is a specific, targeted problem that everyone is coming together to address, and when they build on what already exists, as opposed to reinventing the wheel. So my hope is that we take a holistic approach here, which involves capacity building on the technology front.
We mustn’t forget that there is a huge digital divide we still need to close in order to even make access to AI possible in large parts of the world. But we need to remember that, fundamentally, this is about people, and so a lot of skilling work is needed for us to truly take advantage of this AI moment. So I hope we can take those sorts of lessons forward in our multi-stakeholder collaboration on AI.

Moderator – Yoichi Iida:
OK, thank you very much. In the end, we reconfirmed that human-centric AI, and a human-centric AI society, are very important, and that is what we should pursue all together, whatever different approaches or frameworks we take. Thank you very much for the very active discussion, and I'm sorry about the poor time management by the moderator; we wanted to have a little more time, but I still believe we had a very good discussion. Thank you very much to the audience for your attention. Unfortunately, our time is up, but we will stay in touch and continue our efforts together. So thank you, the session is closed. Thank you very much.

Amrita Choudhury

Speech speed: 176 words per minute
Speech length: 1674 words
Speech time: 570 secs

Bonifasius Wahyu Pudjianto

Speech speed: 105 words per minute
Speech length: 795 words
Speech time: 452 secs

Daisuke Hayashi

Speech speed: 121 words per minute
Speech length: 687 words
Speech time: 340 secs

Hiroshi Maruyama

Speech speed: 126 words per minute
Speech length: 688 words
Speech time: 329 secs

Luciano Mazza de Andrade

Speech speed: 169 words per minute
Speech length: 1174 words
Speech time: 417 secs

Melinda Claybaugh

Speech speed: 142 words per minute
Speech length: 866 words
Speech time: 365 secs

Moderator – Yoichi Iida

Speech speed: 113 words per minute
Speech length: 3025 words
Speech time: 1609 secs

Natasha Crampton

Speech speed: 151 words per minute
Speech length: 1910 words
Speech time: 760 secs