OPENING SESSION | IGF 2023

9 Oct 2023 02:00h - 04:00h UTC

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Ulrik Vestergaard Knudsen

Artificial intelligence (AI) holds immense potential to transform numerous sectors such as science, healthcare, education, and climate change. It has demonstrated its ability to contribute to scientific discoveries, enhance healthcare services, improve educational outcomes, and address environmental challenges. However, while AI presents numerous opportunities, it also carries significant risks that must be addressed.

The Organisation for Economic Co-operation and Development (OECD) has taken a crucial step in the development of international standards for digital policies, including AI. These standards are designed to align with human rights and democratic values, ensuring the responsible and ethical use of AI. By establishing these standards, the OECD aims to promote a global effort towards AI governance that prioritises the protection of human rights and democratic principles.

Governance and regulations play a vital role in managing the impact of AI. Generative AI, in particular, poses a risk of generating false and misleading content, which can undermine democratic values and social cohesion. Additionally, the use of generative AI raises complex questions related to copyright. Therefore, specific attention needs to be directed towards the governance and regulations of AI to prevent these potential challenges.

Furthermore, international cooperation and coordination are paramount in formulating effective AI policies. The OECD recognises the importance of bringing nations together to discuss AI-related issues and develop better policies for the benefit of all. By serving as a forum and leveraging its convening power, the OECD endeavours to facilitate global discussions on AI, fostering collaboration and partnership among countries.

In conclusion, while AI possesses great potential to revolutionise various sectors, there is a need to mitigate the risks associated with its adoption. The OECD’s efforts in setting international standards for AI, aligning with human rights and democratic values, are commendable. Additionally, proper governance and regulations are essential to prevent the spread of false content and ensure responsible AI use. By promoting international cooperation and coordination, the OECD aims to drive forward better policies for the responsible deployment of AI, ultimately benefiting societies worldwide.

Junji Suzuki

The G7 Gunma Takasaki Digital and Tech Ministers’ meeting discussed the opportunities and risks posed by generative AI and agreed to utilise the Organisation for Economic Co-operation and Development (OECD) as a framework to address these concerns. At the Hiroshima Summit, the G7 leaders decided to continue the discussions under the Hiroshima AI process. International guiding principles and a code of conduct for AI are considered essential for realising safe, secure, and trustworthy AI.

Promoting research and investment to technologically mitigate the risks posed by AI is seen as crucial; this includes the development and introduction of mechanisms for identifying AI-generated content, such as digital watermarking and provenance systems.

AI developers should prioritize the development of advanced AI systems for tackling global issues like climate change and global health. Minister Suzuki voiced the need for appropriately handling data fed into advanced AI systems.

Disclosure of information on the risks and appropriate use of advanced AI systems is necessary. Businesses should clarify the results of safety assessments and the capabilities and limitations of their AI models. Developing and disclosing policies on privacy and AI governance are considered important.

Generative AI was discussed at the Internet Governance Forum (IGF), where stakeholders from all over the world gathered. Generative AI provides services that transcend national boundaries and significantly impact lives worldwide. It involves both possibilities and risks and is a transformative technology.

The Hiroshima AI process will aim to reflect the opinions provided by various stakeholders. Opinions were collected from international organizations, governments, AI developers, corporations, researchers, and representatives of civil society.

Plans to establish an AI expert support center under the Global Partnership on AI (GPAI) are viewed positively: through project-based initiatives, the center will aim to tackle AI challenges and broaden the possibilities of the technology.

Listening to the views of various stakeholders and taking initiatives accordingly is an important aspect of AI governance.

In conclusion, the discussions held among G7 ministers have underscored the need for international guiding principles, research and investment, disclosure of information, and stakeholder engagement in realizing safe and trustworthy AI. The recognition of the transformative potential of AI, particularly in addressing global challenges, further highlights the importance placed on responsible AI development and implementation. The establishment of an AI expert support center under GPAI signifies a proactive approach to addressing AI challenges and exploring new opportunities. Overall, these discussions and initiatives contribute to advancing AI governance and ensuring its positive impact on society.

Vint Cerf

Understanding and sharing information about the development of Artificial Intelligence (AI) and Machine Learning (ML) is crucial for the advancement of these technologies. With the increasing dependence on software in various industries, a clear understanding of AI and ML is important to ensure their proper application and use. Bugs, the difference between what software is told to do and what it is intended to do, underscore the importance of understanding AI and ML development in order to minimize errors in software.

In high-risk applications such as healthcare, there is a need for greater scrutiny. Applications like health care, health advice, and medical diagnosis should receive more attention and evaluation to ensure accuracy and reliability. The European Union’s efforts to grade the risk factors of these applications are acknowledged and appreciated. This demonstrates the importance of careful evaluation and regulation of AI and ML systems in critical areas like healthcare to protect public health and well-being.

Transparency in the source and application of training material in ML and AI is necessary. Knowing where the content for machine learning systems comes from and the conditions under which these systems may misbehave promotes accountability and allows for better decision-making in the application of AI and ML models.

Large ML models primarily deal with probability rather than causality. While correlation is important, it is crucial for models to also appreciate causality. Understanding the causal relationships between variables can lead to better model performance and outcomes. Incorporating causality in the training and usage of ML models can have significant benefits.

Incorporating causality can also yield practical savings: applying causal reasoning in the training of Google’s machine-learning systems resulted in a 40% reduction in the power used for cooling data centers. This demonstrates the benefits of considering causality in AI and ML systems, not just for accuracy but also for efficiency and resource optimization.
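
To make the correlation-versus-causation point concrete, here is a minimal sketch in Python (illustrative only, not Google’s system; the variable names and coefficients are hypothetical) showing how a hidden common cause can make two quantities look strongly related even though neither drives the other, which is exactly the trap a purely correlational model falls into:

```python
import random

# Toy setup: a confounder (outdoor temperature) drives both server load
# and cooling cost, so the two correlate strongly even though neither
# causes the other. All names and coefficients are hypothetical.
random.seed(0)
temperature = [random.uniform(10, 35) for _ in range(10_000)]
server_load = [2 * t + random.gauss(0, 5) for t in temperature]
cooling_cost = [3 * t + random.gauss(0, 5) for t in temperature]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

print(f"corr(load, cost) = {pearson(server_load, cooling_cost):.2f}")  # roughly 0.9
# A purely correlational model would cut cooling whenever load drops;
# a causal model would intervene on the true driver, the temperature.
```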

Determining objective functions and measuring quality for language and ML models is challenging. Evaluating the responses and output utility of these models requires careful consideration to ensure their effectiveness and usefulness.

Safety in high-risk environments should be prioritized when evaluating the success of AI and ML models. Measuring the quality of responses in high-risk environments is crucial to identify areas where improvements are needed and ensure the safety and well-being of individuals who interact with these systems.

Measuring the quality of large language models requires a high level of creativity due to their complexity. Innovative approaches and metrics are needed to assess the quality and performance of these models accurately.

In conclusion, understanding and sharing information about the development of AI and ML are crucial for their effective and ethical application in various industries. There is a need for greater scrutiny in high-risk applications, such as healthcare, to ensure accuracy and reliability. Transparency in the source and application of training material is necessary for accountability and responsible use of AI and ML. While large ML models primarily deal with probability, appreciating causality can lead to better performance, power savings, and more accurate outcomes. However, determining objective functions and measuring quality for language and ML models pose challenges that require innovative solutions. Prioritizing safety in high-risk environments and measuring the quality of large language models also requires careful evaluation and creative approaches.

Jun Murai

The analysis explores multiple perspectives on the evolution and significance of Artificial Intelligence (AI) in various domains. It highlights how AI has progressed from analysing books in the 1970s to analysing social media and sensor data today, showcasing its ability to process information from many different sources.

The importance of data accuracy and trustworthiness is emphasised, with Jun Murai discussing the need for reliable information and mentioning the ‘originator profile’ initiative in Japan. This initiative aims to identify and authenticate the originators of information on the web, ensuring credible data sources for AI systems.

Additionally, it is stressed that AI goes beyond analysing text alone, as it can also utilise sensor-generated data. This type of data is commonly used in studies related to global warming and environmental science, enhancing AI’s capability to address complex issues.

The analysis also highlights the use of AI in disaster management, particularly in Japan, which frequently faces earthquakes impacting digital data networks and human lives. AI, combined with precise data, can greatly assist in effectively managing and recovering from such disasters.

Another issue brought up is Japan’s challenges with an ageing population and inadequate healthcare facilities, resulting in unprocessed hospital and medical data over the past 30 years. The application of AI is crucial in addressing the healthcare needs of an elderly society and improving the processing of medical data.

In conclusion, the analysis emphasises the importance of AI, data accuracy, data privacy, and hardware resources in healthcare and disaster management. The need to monitor and share accurate data among AI players is crucial for improved performance. It is also important to monitor the implementation of guiding principles and codes of conduct in the AI field, and involving third parties or independent entities in the monitoring process can contribute to better outcomes. Lastly, investments in research and education by governments and public sectors are essential for enhancing the quality of AI process monitoring and ensuring progress in the field.

Maria Ressa

The analysis delves deeper into the key points raised by the speakers, shedding light on the detrimental effects of disinformation, technology, and surveillance capitalism. It emphasises the need for truth, trust, and a shared reality in society.

The first speaker highlights the alarming rate at which lies spread on social media compared to facts. Citing a study by MIT, it is revealed that falsehoods propagate six times faster than accurate information, contributing to the widespread dissemination of disinformation. The speaker also brings attention to the role of emotions in accelerating the spread of disinformation, emphasising that fear, anger, and hate further amplify its reach. To support this argument, data from Rappler is presented, indicating that disinformation spreads even more rapidly when infused with these strong emotional elements.

The second speaker focuses on the negative impact of technology on human rationality. Referring to it as a biological hack, the speaker asserts that technology has found ways to bypass our rational minds, triggering the worst aspects of our human nature. However, no supporting evidence is provided to substantiate this claim.

The third speaker critiques the phenomenon of surveillance capitalism, contending that it has turned our world upside down and has been exploited by authoritarians. Unfortunately, no specific examples or evidence are provided to validate this argument, leaving it somewhat unsupported.

The fourth speaker emphasises the importance of facts, truth, trust, and a shared reality. They argue that without these essential elements, society is unable to function effectively. Democracy and the rule of law heavily rely on these foundations, and their absence can lead to the erosion of these principles. However, the speaker does not provide any specific examples or evidence to back up their claim.

The final speaker advocates for urgent action to combat the negative consequences of surveillance capitalism, address coded bias, and uphold journalism as a safeguard against tyranny. While no supporting evidence is provided, the speaker asserts that these actions are necessary to preserve peace, justice, and strong institutions. It is worth noting that the speaker’s stance aligns with the United Nations’ Sustainable Development Goals, particularly SDG 16 (Peace, Justice, and Strong Institutions) and SDG 10 (Reduced Inequalities).

Overall, this analysis highlights the grave concerns surrounding the proliferation of disinformation, the impact of technology on human rationality, and the exploitation of surveillance capitalism by authoritarians. It underscores the importance of truth, trust, shared reality, and the role of journalism in upholding democratic values. Urgent action is called for to combat these challenges and create a more just and informed society.

Kishida Fumio

Mr. Kishida Fumio, the Prime Minister of Japan, recognises the vast potential of Artificial Intelligence (AI) in driving socio-economic development. He firmly believes that Generative AI, in particular, will shape the course of human history. To support the growth and utilization of AI, the Japanese government is formulating an economic policy package that includes measures to enhance its development.

In addition to his stance on AI development, Mr. Kishida Fumio advocates for international solidarity and balanced AI governance. He emphasizes the importance of involving diverse stakeholders in shaping AI governance, and the international initiative known as the Hiroshima AI Process aims to establish guiding principles for responsible AI governance.

While Mr. Kishida Fumio remains optimistic about AI, he acknowledges the potential risks associated with its widespread use. Specifically, he is concerned about the dissemination of disinformation and the resulting social disruption. Sophisticated false images and misleading information pose significant threats. To address these risks, he calls for proactive measures that foster a secure digital environment.

In conclusion, Mr. Kishida Fumio’s contributions to the AI discourse underscore its potential for socio-economic development. He emphasizes the need for international cooperation and responsible governance. Furthermore, he addresses the risks of disinformation and social disruption, highlighting the importance of proactive measures to safeguard against them. Mr. Kishida Fumio aims to strike a balance between harnessing the benefits of AI and mitigating its potential downsides through his advocacy and policy initiatives.

Ema Arisa

The analysis highlights several key points regarding the development and implementation of AI systems. Firstly, it suggests that AI systems should be developed specifically to address some of the world’s most pressing challenges, including the climate crisis, global health, and education. The potential of AI in handling these challenges lies in its ability to analyse vast amounts of data and predict outcomes. By leveraging these capabilities, advanced AI systems can play a significant role in managing critical problems on a global scale.

Furthermore, the analysis emphasises the importance of organisations prioritising diverse fields in their AI activities and investments. While the climate crisis, global health, and education are crucial areas to focus on, it is also essential to explore other fields to deploy AI for maximum benefits. By investing in a broad spectrum of fields, organisations can unlock the full potential of AI and harness its capabilities to address a wide range of challenges and opportunities.

Transparency in AI technology is another key aspect highlighted in the analysis. It is argued that transparency plays a vital role in building trust in AI systems. To ensure public confidence in these technologies, there is a need for AI developers to prioritise openness and provide clear explanations of how AI systems operate. Additionally, the analysis mentions prominent figures, such as Nick Clegg and Ema Arisa, who support the idea of transparency in AI technology. Trusting AI systems becomes more attainable when their inner workings are transparent and easily understandable.

The analysis also highlights the need for countries, international organizations, and companies to uniquely frame their responses to AI based on their cultures and legal frameworks. Different initiatives are already underway in various countries and companies to develop personalized approaches to AI. This recognition of cultural and legal diversity is important to ensure that AI technology is implemented in a manner that aligns with the values, norms, and rules of different regions and entities.

Collaboration is also emphasized as a key factor in developing and implementing AI technology. By working together, entities can responsibly use and improve AI technologies. Ema Arisa specifically acknowledges the significance of entities joining forces to achieve this. Collaboration facilitates the exchange of knowledge, resources, and expertise, leading to the responsible and effective use of AI for the benefit of all.

In conclusion, the analysis points towards the potential of AI in addressing global challenges, the need for diverse fields in AI activities, the importance of transparency in AI technology, the significance of framing responses to AI based on cultural and legal frameworks, and the role of collaboration in developing and implementing AI technology. Taking these insights into consideration can pave the way for harnessing AI’s capabilities to tackle the world’s most critical problems and drive positive change in a wide range of sectors.

Nick Clegg

Large language models are considered a substantial advancement in artificial intelligence (AI) and require significant computing power and data. One exciting development is the potential for open-source sharing of these models, allowing researchers and developers to access and contribute to AI progress.

AI technology, including language models, has also been effective in combatting harmful content on social media platforms. Through the use of AI algorithms, hate speech on Facebook has seen a significant decrease. However, there is a need for industry-wide collaboration to accurately identify and detect AI-generated content, particularly text content.

The future of AI systems is predicted to become multimodal, incorporating both text and visual content, while also being trained in multiple languages, expanding their impact beyond English. Contrary to popular belief, future AI models may focus more on specific objectives and be more efficient with less data and computing power.

Transparency is crucial in AI, as it allows users to understand the processes and establish trust. AI technologies should serve people, and collaboration is necessary to ensure transparency and responsible use of AI across the internet.

Denise Wong

Singapore has played an active role in AI governance, continuously updating its AI model governance framework. In 2022, the framework was updated to ensure its relevance and effectiveness in regulating AI technologies. Additionally, Singapore has launched the AI Verify Open Source Foundation, a platform dedicated to discussing and addressing AI governance issues. This showcases Singapore’s commitment to responsible AI development and deployment.

A shared responsibility framework is necessary to establish clear roles and responsibilities between policymakers and industries in the model development life cycle. This framework helps ensure that adequate safeguards and measures are taken to mitigate risks associated with AI technologies. By clarifying responsibilities, policymakers and industries can collaborate effectively to uphold ethical AI practices and accountability.

Transparency is crucial in AI model development and testing. It is imperative to share information about the development process, testing procedures, and training datasets used. This transparency builds trust and confidence in AI systems. Similarly, end-users should be informed about the limitations and usage of AI models, enabling them to make informed decisions.

To enhance consumer awareness and choice, AI-generated content should be labeled and watermarked. This allows consumers to differentiate between AI-generated and human-generated content, giving them the ability to make informed decisions about the content’s authenticity and reliability.

There is strong support for a global and internationally aligned effort in AI governance. This approach aims to collaborate and harmonize AI regulations and standards across countries and regions, fostering responsible development and deployment of AI technologies.

In the consultative process for developing principles, codes of conduct, and technical standards for AI, involving thought leaders and countries outside of the G7 is beneficial. This inclusion enriches discussions, leading to more diverse and inclusive AI governance policies.

Singapore’s experience highlights the importance of testing technology through concrete projects with industry players and stakeholder bodies. Such projects serve as real-world experiments that identify risks and develop suitable measures and regulations. By involving industry players, policymakers ensure that regulations are practical and effective.

Overall, there is strong support for the multi-stakeholder effort in AI development. Collaboration among governments, industries, academia, and civil society is crucial in shaping responsible AI practices. This inclusive and collaborative approach considers diverse perspectives, fostering innovation while addressing ethical and societal concerns.

In conclusion, Singapore actively participates in AI governance, continuously updating its AI model governance framework and initiating discussions through the AI Verify Open Source Foundation. It establishes shared responsibility frameworks, promotes transparency in model development, informs end-users, labels AI-generated content, and advocates for global alignment in AI governance efforts. By involving thought leaders, conducting concrete projects, and encouraging multi-stakeholder collaboration, Singapore strives to shape a future of AI that benefits society as a whole.

Nezar Patria

Artificial Intelligence (AI) has had a significant impact on Indonesia’s workforce: in 2021 it contributed to the addition of 26.7 million workers, equivalent to 22% of the country’s workforce. This highlights AI’s substantial contribution to job creation and economic growth. However, along with these benefits, AI also brings various risks that need to be managed.

One of the major concerns related to AI is privacy violations. As AI systems gather and analyse vast amounts of data, there is a risk of personal information being misused or breached. Intellectual property violations are another concern, as AI technologies can potentially infringe on copyrights, patents, or trademarks. Additionally, biases in AI algorithms can result in unfair or discriminatory outcomes, and the occurrence of hallucinations by AI systems raises questions about the reliability and safety of their outputs.

To address these risks, the implementation of technical and policy-level mitigation strategies is imperative. The Indonesian government has taken steps in this direction by developing a National Strategy of Artificial Intelligence, which outlines a roadmap for AI governance in the country. Furthermore, Indonesia supports the G20 AI principles, which aim to establish a common understanding of the principles of AI.

It is recognised that effective AI governance requires collaborative efforts with stakeholders. The Indonesian government actively invites contributions from various parties to participate in policy development regarding AI. Moreover, efforts are being made to explore use cases, identify potential risks, and develop strategies to mitigate those risks. This collaborative approach ensures that the development of an efficient AI governance ecosystem takes into account diverse perspectives and expertise.

In the context of upcoming elections, there are concerns about misinformation and disinformation spread through AI-powered digital platforms. To address this issue, regulations are being issued to curb the spread of fake information and ensure the integrity of the electoral process. Collaborating with global digital platforms such as Google and Meta can prove beneficial in tackling this challenge.

The use of AI in political campaigns for the next election raises questions and potential ethical implications. The impact and consequences of AI’s involvement in election campaigns need to be carefully considered to ensure fairness, transparency and trust in the electoral process. It highlights the need for support for fair and safe elections, where AI is used ethically and responsibly.

Artificial Intelligence has become a topic of global concern, with countries engaging in discussions to define best practices for AI regulation. However, this remains an ongoing challenge, as AI continues to evolve rapidly and new ethical dilemmas and policy considerations arise. It underscores the need for international cooperation and collaboration to address the multifaceted issues associated with AI effectively.

Notably, Indonesia is working with UNESCO to develop AI implementation guidelines. This collaboration reflects the recognition that defining fundamental norms and guidelines for the responsible implementation of AI is crucial. Nezar, a key actor in this context, supports the collaboration with UNESCO and emphasises the importance of working together to establish ethical and sustainable AI practices.

In conclusion, while AI contributes significantly to workforce growth and economic development in Indonesia, it also poses several risks that need to be managed. Implementing technical and policy-level mitigation strategies, fostering collaboration with stakeholders, and addressing concerns such as misinformation in elections are crucial steps in achieving responsible and beneficial AI governance. Global discussions and collaboration, along with the development of guidelines, are essential to ensure the widespread adoption of AI that advances societal well-being.

Doreen Bogdan-Martin

The private sector, specifically companies such as Meta and Google, is considered a major driving force behind AI innovation. It is noted that the private sector plays a significant role in the International Telecommunication Union (ITU) membership. The sentiment towards the private sector’s involvement is positive, highlighting their contributions to AI development.

Incentives, both economic rewards and explicit recognition at national and international levels, are seen as significant motivators for the private sector to invest in socially beneficial initiatives. The argument presented is that offering incentives can encourage businesses to allocate resources towards projects that have a positive impact on society. This aligns with the goal of achieving sustainable development.

AI technology has shown great potential in school connectivity initiatives and disaster management. It is mentioned that AI techniques are utilised to find schools and explore different connectivity configurations to reduce costs. Additionally, AI has been proven useful in disaster management for tasks such as data collection, natural hazard modelling, and emergency communications. The positive sentiment towards incorporating AI in these areas suggests its potential for improving education accessibility and enhancing disaster response efforts.

The speaker identifies healthcare, education, and climate issues as key priorities for AI focus within the ITU. This indicates a recognition of the significance of addressing these sectors in order to achieve sustainable development goals. The sentiment towards this perspective is positive, emphasising the need to leverage AI in these areas.

It is highlighted that effective change can be driven by leveraging multi-stakeholder partnerships. The importance of partnerships in driving positive change is emphasised, acknowledging that collaboration between different entities is crucial for achieving common goals. The sentiment towards multi-stakeholder partnerships is positive and underscores their role in addressing complex challenges.

Universal connectivity is identified as a critical aspect of the AI revolution. The lack of connectivity affecting 2.6 billion people is highlighted, emphasising the importance of bridging the digital divide. This observation suggests that ensuring universal connectivity is essential for maximising the benefits of AI technologies.

Technical standards and addressing the gender gap are emphasised in the context of AI guidelines. The speaker highlights the importance of technical standards as prerequisites for effective implementation and emphasises the role of the UN as a catalyst for progress. Gender equality and reducing inequalities are also mentioned in relation to achieving AI goals. These aspects are presented with a positive sentiment, indicating their significance in guiding AI development.

Both the United Nations (UN) and the ITU are suggested to play a larger role in advancing AI initiatives. The ITU has already begun incorporating AI into their capacity development offerings. The ITU’s AI for Good Global Summit and the establishment of an AI advisory body are mentioned as examples of their efforts. This observation implies that the UN and ITU have the potential to drive innovation and promote collaboration in the field of AI.

In conclusion, the expanded summary provides a comprehensive overview of the main points made in the given information. It highlights the private sector’s role, the importance of incentives, the potential of AI in school connectivity and disaster management, the prioritisation of healthcare, education, and climate issues, the significance of multi-stakeholder partnerships, the need for universal connectivity, the emphasis on technical standards and addressing the gender gap, and the role of the UN and ITU. Overall, the sentiment presented is positive, reflecting the potential benefits and opportunities that AI brings in various domains.

Moderator

Generative AI, its potential, and associated risks have become the subjects of global discussion. This technology, which is comparable to the internet in terms of its transformative impact, is expected to bring massive changes to various fields. However, there are concerns about the risks of false information and disruption that could arise from the use of generative AI.

Recognising the significance of AI development, the Japanese government has unveiled plans to include AI support in its economic policy package. This move reflects the government’s commitment to strengthening AI development and promoting innovation. By incorporating AI support into its economic policies, Japan aims to foster an environment conducive to technological advancement and economic growth.

The Hiroshima AI process, endorsed by G7 leaders, focuses on building trustworthy AI. This initiative seeks to establish international guiding principles for the responsible and ethical use of AI. By promoting the adoption of these principles, the Hiroshima AI process aims to ensure the development and deployment of AI technologies that can be trusted by individuals, organisations, and governments alike.

The impact of generative AI extends beyond national boundaries, necessitating international collaboration. Recognising this, there is a growing need for multi-sector discussions on AI to address its global implications. Cooperation and coordination among countries, industries, and stakeholders are vital to effectively harness the potential of generative AI while mitigating its associated risks.

The issue of disinformation spreading through social media and AI technologies has gained attention due to its negative consequences. Studies have shown that lies spread six times faster than facts on social media platforms. The rapid dissemination of disinformation fueled by emotions such as fear, anger, and hatred poses a threat to truth and undermines public trust. Maria Ressa argues that social media and AI technologies are exploiting human emotions and attention for profit, leading to what she describes as “surveillance capitalism.”

The spread of disinformation also poses a threat to democratic processes, particularly elections. In the absence of factual integrity, elections can be manipulated through the dissemination of false information. As the year 2024 is seen as a critical year for elections, it is crucial to address and combat the spread of disinformation to safeguard the integrity of democratic processes.

Maria Ressa highlights the need to counteract surveillance for profit, eliminate coded bias, and promote journalism as a defense against tyranny. She has launched a 10-point action plan that addresses these issues and spoke about them at the Nobel Peace Summit in DC. Ressa believes that by taking these measures, society can protect individual privacy, reduce inequalities, and uphold the principles of peace, justice, and strong institutions.

In conclusion, the potential and risks of generative AI, the Japanese government’s plans for AI support, the need for trustworthy AI, and the global impact of generative AI underscore the importance of international collaboration. The detrimental effects of disinformation and the exploitation of human emotions and attention through social media highlight the urgency of addressing these issues. Through measures such as promoting journalism and combating surveillance for profit, society can work towards ensuring ethical and responsible AI development.

Kent Walker

Artificial Intelligence (AI) is a powerful and promising technology with the potential to revolutionise various sectors. Google has been utilising AI in applications like Google Search, Translate, and Maps for over a decade. Its impact goes well beyond chatbots, extending to fields like quantum mechanics, quantum science, material science, precision agriculture, personalized medicine, and clean water provision. The DeepMind team at Google has made significant advances in protein folding, work that would otherwise have taken the equivalent of every person in Japan working for three years to accomplish.

However, the development of AI must be accompanied by responsibility and security considerations. To strike a balance between the opportunities AI presents and the need for responsible and secure deployment, ongoing work is being carried out with industry partners like the Frontier Model Forum, Partnership on AI, and ML Commons to establish norms and standards. Governments, companies, and civil society must collaborate to develop the appropriate frameworks.

Security and authenticity are crucial aspects of AI that require a collaborative approach and global digital literacy. Google is taking measures such as SynthID, which identifies AI-generated videos and images at the pixel level, to support authenticity. Additionally, policies have been implemented to regulate the use of generative AI in elections, safeguarding democratic processes. Global digital and AI literacy are necessary to address security and authenticity concerns effectively.

AI has evolved significantly in search engine technology and language processing. Research efforts have resulted in mapping words in English and other languages into mathematical terms. The identification of ‘transformers’ has contributed to AI’s understanding of human language. The next challenge is to expand AI’s capabilities to understand and process thousands of languages worldwide.

Technical measures play a vital role in content authentication. Watermarking, content provenance mechanisms, and data input control measures are crucial for verifying content authenticity. Google’s “about this image” feature aids in understanding the origin of an image, while requirements to disclose the use of generative AI in election-related content support transparency.
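
As a rough illustration of how such provenance mechanisms can work, the sketch below is a hypothetical toy (not SynthID, “about this image”, or any real provenance standard; it uses a shared secret key where production systems use certificate-based signatures): it attaches a signed manifest to a piece of generated content and later verifies that the content has not been altered.

```python
import hashlib
import hmac
import json

# Shared demo key; real provenance systems use asymmetric keys and PKI.
SIGNING_KEY = b"demo-signing-key"

def attach_provenance(content: bytes, generator: str) -> dict:
    """Build a signed manifest recording who generated the content."""
    manifest = {
        "generator": generator,  # e.g. the name of the AI model (hypothetical field)
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check the signature and that the content is unmodified."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(manifest.get("signature", ""), expected)
            and unsigned["content_sha256"] == hashlib.sha256(content).hexdigest())

image_bytes = b"...generated image bytes..."
manifest = attach_provenance(image_bytes, generator="example-image-model")
print(verify_provenance(image_bytes, manifest))        # True
print(verify_provenance(b"tampered bytes", manifest))  # False
```

Pixel-level watermarks differ in that the signal is embedded in the content itself and can survive re-encoding, whereas a detached manifest like this one only proves origin and integrity for an exact copy of the bytes.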

Governments should collaborate to implement AI tools for public welfare. AI tools have shown promise in predicting natural calamities like earthquakes, floods, and forest fires, enabling better disaster preparedness and response.

While openness, security, and transparency are essential in AI, tradeoffs need to be considered. Achieving the right balance is necessary to ensure the ethical development and deployment of AI. Explainability in AI tools and the classification of AI models require careful consideration.

Encouraging investments in AI research is crucial to make tools and computation accessible worldwide, promoting innovation and equitable access.

AI has the potential to enhance productivity and employment opportunities. It can enable workers to perform tasks more efficiently, contributing to an improved quality of life.

International cooperation is key in harnessing the potential of AI for good. Efforts like the G7, OECD, and ITU emphasize collaboration and partnerships to ensure the responsible and beneficial use of AI.

In conclusion, AI holds immense promise as a transformative technology. However, its development must be accompanied by responsibility and security considerations. Collaboration, global digital literacy, and technical measures are vital for ensuring authenticity, security, and welfare-enhancing potential. Balancing openness, security, and transparency is crucial, along with encouraging investments in AI research for global accessibility. International cooperation is necessary to harness the positive impact of AI for societal betterment.

Luciano Mazza

The AI debate is calling for the inclusion of more voices from developing countries. The complexity of the issues at hand requires broader representation to ensure comprehensive understanding. It is emphasised that organisations, specifically those operating in developing countries, should be mindful of the importance of local ownership in the countries and communities where they operate.

One key aspect to consider in the AI debate is the adaptation of AI models to reflect local realities. This involves adjusting the training of the models with data that accurately represents the local circumstances. By doing so, AI models can better serve the needs of different regions and populations. This argument is supported by the sentiment that AI models should be adaptable to local contexts.

Another important element is the need to incentivise and strengthen local innovation ecosystems. Even if certain countries may not have their own OpenAI-style companies, creating dynamic AI ecosystems can democratise the market. This fosters economic growth, decent work, and infrastructure development. The sentiment is positive towards the importance of local innovation ecosystems in the AI debate.

However, there is a concern that AI has the potential to amplify economic, social, and digital divides between developed and developing countries. The negative sentiment highlights the risk of further widening the existing inequalities. To mitigate this, it is argued that new technologies, including AI models, should be designed with inclusivity as a primary consideration. The sentiment is positive towards the importance of inclusive design.

Ensuring diverse voices and constituents are actively involved in the AI debate is essential. It is important to have an inclusive discussion that values different perspectives. This sentiment reflects the argument that inclusivity is crucial to hear different viewpoints and to avoid bias.

Engagement with other organizations and stakeholders is seen as crucial for the long-term sustainability of efforts in the AI debate. Collaboration and partnerships are necessary to drive impactful progress. This positive sentiment highlights the importance of engaging with various stakeholders in the AI debate.

To reduce information and capability asymmetries between developed and developing countries, multilateral engagement is deemed necessary. This sentiment supports the argument that engaging in discussions at the international level, such as in the United Nations, can help bridge the gap between different countries. The positive sentiment recognizes the significance of multilateral initiatives in addressing inequalities and imbalances.

Additionally, concerns have been raised about fragmentation in the AI debate. It is important to address this issue to ensure consistency and cohesion in efforts. The negative sentiment highlights the need for a unified approach to maximize the effectiveness of AI developments.

Finally, placing energy and effort in the multilateral system is considered essential to provide ownership and inclusivity to everyone involved in the AI debate. This positive sentiment emphasizes the commitment to the global digital compact and the renewal of the WSIS mandate. It also encourages countries to invest more in multilateralism.

In conclusion, the AI debate requires the involvement of more voices from developing countries to address the complex issues at hand. Adaptation of AI models to reflect local realities, strengthening local innovation ecosystems, designing for inclusivity, and engaging with various stakeholders are all critical aspects. Multilateral engagement, consistency, and cohesion are also necessary to reduce inequalities and foster a more inclusive AI landscape.

Session transcript

Moderator:
Thank you very much for waiting. Now, I would like to welcome the guests of honor and the speakers for the high-level Panel 5, Artificial Intelligence, to the stage, and we will begin with the photo session. First, the official photographer at the front will take a photo. Thank you. Thank you. Thank you. And now, I would like to take time for the press at the back. Please stay put for the photo. Thank you for your cooperation. Thank you very much. Excellencies, please proceed to your individual seats on the stage. Thank you. Thank you. Ladies and gentlemen, may I draw your attention, please. We will now start the high-level Panel 5, Artificial Intelligence, of the 18th Annual Meeting of the Internet Governance Forum. I will now invite the guests of honor to deliver keynote speeches. First, I would like to welcome His Excellency Mr. Kishida Fumio, the Prime Minister of Japan, to deliver a keynote speech. His Excellency Mr. Kishida, please proceed to the podium.

Kishida Fumio:
On behalf of the host country, I would like to welcome you to the special session on AI at the Internet Governance Forum Kyoto 2023. As the potential and risks of rapidly developing generative AI are being debated around the world, it is gratifying that the topic of global AI governance is being discussed by representatives from diverse fields today here in Kyoto, Japan. I would like to thank you all for taking part in this session. Generative AI has been called a technological innovation comparable to the Internet. Just as the Internet brought about remarkable democratization and socio-economic development by connecting people beyond the constraints of time and space, generative AI is about to change the history of mankind. This year, I myself have participated in discussions with young researchers and AI developers, only to realize the unlimited possibilities that generative AI holds. Generative AI will not only improve operational efficiency but also accelerate innovation in various fields such as drug discovery and the development of new treatments, thereby bringing about dramatic changes in the world. The Japanese government is planning to compile an economic policy package by the end of this month that includes support for strengthening AI development, such as building computational resources and foundational models, as well as support for the introduction of AI by SMEs and for medical applications. We will incorporate strong support for both AI development and utilization in that package. On the other hand, risks of sophisticated false images and disinformation causing social disruption, and other threats to society, have been pointed out. A wide range of stakeholders need to play their roles in the development of AI. For example, in order to promote the distribution of reliable information, it would be effective to develop and promote the spread of technologies that can prove and confirm the originator of information, or provenance technologies. The international community as a whole must share this understanding and deal with these issues in solidarity. It is important that we now gather the wisdom of mankind to strike a balance between promotion and regulation, taking into account the possibilities and risks of generative AI, in order to reduce the risks it poses to the economy and society while maximizing its benefits to all of us. With this in mind, at the G7 Hiroshima Summit I proposed the creation of the Hiroshima AI process to further international discussions towards the realization of trustworthy AI. This was agreed upon by the leaders, and the G7 leaders instructed their ministers in charge to deliver results within this year. The Hiroshima AI process is to develop, by the end of this year, international guiding principles for all AI actors as common principles indispensable for the realization of trustworthy AI. In particular, as a matter of urgency, we are working on international guiding principles and a code of conduct for organizations developing advanced AI systems, including generative AI, in preparation for the G7 summit online meeting to be held this fall. Generative AI is a cross-border service and therefore concerns people all over the world. For this reason, the Hiroshima AI process will also take advantage of this IGF opportunity to incorporate a wide range of views through multi-sector discussions, including government, academia, civil society, and the private sector.
By being informed by the opinions of the diverse stakeholders beyond the G7 who are participating today, we will drive the creation of international rules that will enable the entire international community, including the Global South, to enjoy the benefits of safe, secure, and trustworthy generative AI and to achieve further economic growth and improvement of living conditions. Before closing, I would like to express my hope that this special session on AI will be a landmark meeting where meaningful discussions are held among representatives of international organizations, governments, AI developers, researchers, and civil society, and that it will later be remembered as a turning point in the discussion on generative AI. With this, I would like to conclude my remarks. Thank you very much for your kind attention.

Moderator:
Next, I would like to welcome Ms. Maria Ressa, CEO and President of Rappler Inc. and 2021 Nobel Peace Prize laureate, to deliver a keynote speech. Ms. Ressa, please proceed to the podium.

Maria Ressa:
I’m so sorry I’m short. I will tiptoe. Thank you so much. Thank you to our host country, to Japan, to the Internet Governance Forum. I am new to the Internet Governance Forum, and so I bow to your collective wisdom. I really hope to just be a voice to urge you to think about where we are today and to urge you to act. Thank you for this initiative on generative AI, but let me just remind you of the problems we face right now. Today, truth is under attack. We’re engulfed in an information war where disinformation, the bullets of information operations, spreads like wildfire to obscure and to change reality. What power used to consolidate power is technology, social media, the first human contact with AI. In 2018, and this has probably changed since then, MIT released a study that said lies spread six times faster on social media than these really boring facts. And what Rappler data has shown is that it spreads even faster when it’s laced with fear, anger, hate. Every human being, all of us, has two systems of thinking, and here I quote Daniel Kahneman: thinking fast, our emotional, instinctive side, and thinking slow, our rational side. This rational side is where conversations like this one happen, where rule of law, journalism, democracy happen. Technology hacked our biology to bypass our rational minds, to trigger the worst of who we are, and to keep us scrolling in our information economy. Attention, that is the prize. Your attention is commodified, changing how you feel, what you think, and how you act. That fundamental design choice, and this is the first social media contact, right? That lies spread faster. Surveillance capitalism, or surveillance for profit, turned our world upside down. And here, I’m sorry to be irreverent: Netflix’s Stranger Things, if you’ve watched it, you know how they go into the upside down? We are literally living in the upside down, and while it seems deceptively familiar, everything is covered with goo, and there are monsters in every corner. Because that design of the new gatekeepers to our public sphere was exploited by authoritarians. If you can convince people lies are facts, then you can control them. And the same three sentences I’ve said since 2016: Without facts, you can’t have truth. Without truth, you can’t have trust. Without these three, we have no shared reality, no rule of law, no democracy. So I have two minutes left to tell you what we should do. And actually, the internet we want has those five values. I thank the Secretary-General for appointing the Leadership Panel. We each have two years. It’s extremely honest, open, and we hope to urge you to act. But I’ll leave you with two last thoughts. One is the impact beyond the individual. This is what I’ve laid out for you, right? The behavioral aspect for us. If you don’t have integrity of facts, you cannot have integrity of elections. And 2024 becomes a critical year for elections, which is part of the reason everyone in this room, from civil society, parliamentarians, government officials, NGOs, journalists, we each have a role to play. I keep saying that we are in the last two minutes. If you play basketball, the last two minutes for democracy. In my last minute, I just want to tell you about an initiative that aligns with the Internet Governance Forum that was launched last year at the Nobel Peace Summit in DC. This year, over 300 Nobel laureates, civil society groups, the same kind of multi-stakeholder arrangement. We need to come together.
We launched a 10-point action plan that has three buckets. And these would be the same that you would need to operationalize in every single one of our agreements. The first, stop surveillance for profit. Give us back our lives. Two, stop coded bias. If you are a woman, LGBTQ+, you are further marginalized in the virtual world. And we want a secure, safe, and trustworthy internet. Third, journalism as an antidote to tyranny. Thank you so much.

Moderator:
Thank you very much, Ms. Ressa. Next, I would like to welcome Mr. Ulrik Vestergaard Knudsen, OECD Deputy Secretary-General, to deliver a keynote speech. Mr. Knudsen, please.

Ulrik Vestergaard Knudsen:
Thank you very much. It seems I have the opposite challenge compared to the previous speaker, so I will not be tiptoeing. What an honor it is to speak after a prime minister and a Nobel Prize winner. And what an honor it is, indeed, to join this high-level meeting on global AI governance and generative AI, convened in the context of the G7 Hiroshima AI process led by Japan. Thank you very much. Rapid technological transformation is heralding a brand new era of boundless opportunity and, at the same time, of great risks. Some even talk about existential threats. Now, my organization, the OECD, was founded over 60 years ago on a simple yet very powerful premise: that international cooperation is essential for economic growth and social prosperity. In the decades gone by, we have leveraged evidence-based policy expertise, mutual exchange, data, and analysis to keep ahead of global cross-border challenges. Key examples include the Codes of Liberalisation, the Guidelines for Multinational Enterprises, and of course the well-known Inclusive Framework on BEPS in tax, with almost 140 tax jurisdictions around the world. To sum it up, through international cooperation and shared values, the OECD’s goal has been to drive forward better policies for better lives. And let me be as frank as I can: digital policies are no exception to that. On the contrary, here too the OECD has delivered landmark international standards. For example, just last year, the Declaration on Government Access to Personal Data Held by Private Sector Entities. These standards, and many others in areas like broadband connectivity, data governance, and digital security, provide guidance to support countries in reaping the benefits of digital transformation, fostering innovation while addressing and mitigating risks, advancing responsibility, and promoting trust. In the last decade, we have increasingly dedicated our attention to artificial intelligence. With AI, and in particular with the public availability of generative AI applications, humanity is facing what is really a watershed moment. Our well-being, our economic prosperity, and our very identity, perhaps even as humans, will be affected by the collective action we take today. AI already demonstrates its revolutionary potential for productivity, scientific discoveries, health care, education, and climate change. However, AI also carries significant risks, including to privacy, safety, autonomy, and, to some extent at least, jobs. And as G7 members have underlined under the Japanese presidency, generative AI creates a real risk of false and misleading content, threatening democratic values and social cohesion, I guess what you could call the upside-down world of Stranger Things from Netflix. Generative AI also raises complex questions of copyright, and the computing power required for its training highlights issues of supply chains, access, and divides. What we need now, ladies and gentlemen, is a global effort for the governance, safe development, and deployment of AI aligned with human rights and democratic values. The OECD has helped lead the way on AI policymaking with the landmark 2019 OECD Recommendation on AI, the very first intergovernmental standard on AI. We are now gathering the evidence on AI through the OECD AI Policy Observatory, the Framework for the Classification of AI Systems, the Catalogue of Tools and Metrics, and, the latest addition, the AI Incidents Monitor. These achievements have gained traction and influenced AI policymaking around the world.
But with technology now developing at breakneck speed, we need to make collective decisions to ensure this technology will be safe and beneficial to societies. Unfortunately, as you all know, there are many, many questions and not too many answers. Do we need hard rules about the design of AI systems? How do we marshal the innovation, governance, and regulation of AI? Do we use existing approaches and frameworks that have proven effective, from, for example, airplanes to food safety? Or do we need radically new approaches? And how do we prepare society for this transition? How do we make sure powerful technology doesn’t rest solely in the hands of a few, be that countries or companies? And perhaps most importantly, how do we make sure that we seize the boundless opportunities for people and planet in a just, equitable, and democratic manner, and that we don’t answer the questions I raised above with policies that hamper progress? I don’t have the answers to all those questions, but I do think I know one thing: the decisions we make in response to these questions require international cooperation and coordination. And it is the ambition of the OECD to work with our international partners to provide the forum and convening power for these discussions, informed by the best possible evidence base. The G7 has a key role. We are here under the auspices of Japan’s G7 presidency. And Japan has been a true pioneer and visionary in identifying the policy importance of AI. Japan’s 2016 G7 presidency really kick-started development of the AI principles, which then served as the basis for the G20 AI principles under Japan’s G20 presidency in 2019. In this vein, the Hiroshima process has set an ambitious and necessary objective: international guiding principles applicable to all AI actors and a code of conduct for organizations developing advanced AI systems. The OECD is very proud to be informing this process in many ways, not least later this year by launching the Global Challenge alongside our key partner organizations like UNESCO and others. And we also look forward to providing comprehensive guidance across different actors and different aspects of AI. Before I end, let me say that we cannot advance the global effort on AI governance without effective stakeholder engagement. Multi-stakeholder participation has always been the OECD approach to policy development. Examples include our ONE AI expert group, with over 400 international experts from governments, from industry, from academia, and from civil society. The recently launched OECD Global Forum on Technology is another example of this way of building outreach and engagement. Only with your involvement, honestly, can we develop policies that work for all parts of society. Prime Minister, ladies and gentlemen, Stephen Hawking defined intelligence, and I quote, as “the ability to adapt to change”, unquote. Let us continue working together to ensure that our intelligence, both human and artificial, will keep pace with developments and continue to guide us responsibly. We simply cannot afford not to. Thank you.

Moderator:
Thank you very much, Mr. Knudsen. And unfortunately, His Excellency Mr. Kishida will not be able to stay due to his schedule, so please give him a round of applause. Thank you. Now the panel discussion shall commence. I would like to ask Ms. Ema Arisa, Associate Professor at the Institute for Future Initiatives, the University of Tokyo, to moderate the session. Ms. Ema, the floor is yours.

Ema Arisa:
So good morning, ladies and gentlemen. My name is Arisa Ema; Ema is my family name. I am Associate Professor at the University of Tokyo, and it’s a really great honor for me to moderate this great panel session. So first of all, I would like to introduce the panelists. I go from my side to the other end. So the person next to me here is Mr. Nick Clegg, President of Global Affairs, Meta. The second person is Mr. Luciano Mazza de Andrade, Director of the Department of Science and Technology and Intellectual Property at the Brazilian Ministry of Foreign Affairs, Brazil. The third person is Ms. Denise Wong, Assistant Chief Executive, Data Innovation and Protection Group, Infocomm Media Development Authority (IMDA), Singapore. The next panelist is Mr. Nezar Patria, Vice Minister of Communications and Informatics, Indonesia. The next panelist is Mr. Kent Walker, President of Global Affairs, Google and Alphabet. On the right-hand side, we have His Excellency Mr. Suzuki Junji, Minister of Internal Affairs and Communications. The panelist next to Minister Suzuki is Mr. Vinton Cerf, IGF Leadership Panel Chair and the so-called Father of the Internet. And next to Mr. Cerf, we have Professor Jun Murai from Keio University, also known as the Father of Japan’s Internet. And last but not least, the panelist on the other end is Ms. Doreen Bogdan-Martin, Secretary-General of the International Telecommunication Union. So we have an excellent lineup of panelists. But before we jump into the panel discussion, I would like to invite Minister Suzuki to share with us a brief overview of the current state of the Hiroshima AI process led by the Japanese government. So Minister Suzuki, the floor is yours.

Junji Suzuki:
Good morning, everyone. I am Suzuki Junji, Minister of Internal Affairs and Communications. I would like to extend my gratitude to all of those who are attending the Internet Governance Forum Kyoto 2023. I would also like to thank Mr. Tsuji Yamaguchi for his excellent presentation, and Ms. Maria Ressa and Mr. Ulrik Knudsen, OECD Deputy Secretary-General, for their very insightful keynote speeches. Now I would like to introduce the status of the discussion on the Hiroshima AI process, to set the stage for the panel discussion with multi-stakeholders to be held in this session. The rapid development and penetration of generative AI has made it imperative for us, the international community, to maximize its benefits to humanity while mitigating its risks to the economy and society. The G7 Gunma Takasaki Digital and Tech Ministers’ meeting held in April this year agreed to promptly discuss the opportunities and risks posed by generative AI, to utilize the OECD and the GPAI, and established a forum for international discussion on generative AI covering five issues, including AI governance and the promotion of transparency. And in the G7 Hiroshima Leaders’ Communique, it was decided to continue the discussion as the Hiroshima AI process. Subsequently, in September this year, the Hiroshima AI Process G7 Digital and Tech Ministers’ Statement was formulated, which agreed on the following points. First, the necessity to prioritize issues such as ensuring transparency. Two, the establishment of international guiding principles for all AI actors and a code of conduct for organizations developing advanced AI systems. Three, project-based cooperation, including the promotion of research that contributes to countermeasures against disinformation. Four, the importance of exchanging views with stakeholders other than G7 governments. In today’s session, we would like to receive opinions from panelists on the contents of the international guiding principles and the code of conduct for organizations developing advanced AI systems, which are currently under consideration. The international guiding principles for organizations developing advanced AI systems put together the principles that all AI developers are expected to follow to realize safe, secure, and trustworthy AI. They also provide a set of concrete actions as a code of conduct, the first point of which is measures to mitigate the risks that advanced AI systems pose to society. This includes measures to identify, evaluate, and mitigate risks before bringing AI models to market, as well as measures to address system vulnerabilities even after market placement. So what types of risks and vulnerabilities should AI developers bear in mind when implementing measures? The second point is to disclose information on the risks and appropriate use of advanced AI systems, and to share such information among stakeholders. To ensure that users can use AI systems with confidence, businesses should clarify the results of safety assessments and the capabilities and limitations of their AI models. This also includes developing and disclosing their own policies on privacy and AI governance, or establishing a mechanism to develop and share best practices among various stakeholders. 
The third point is to promote research and investment to technologically mitigate the risks posed by AI, for example, the development and introduction of mechanisms that enable users to identify AI-generated content, such as digital watermarking and provenance systems. In addition to these, I believe it will be necessary to prioritize the development of advanced AI systems that tackle global issues such as climate change and global health, and to ensure appropriate handling of the data fed into advanced AI systems. We believe that today’s AI special session is a valuable opportunity to directly hear the opinions of people from diverse backgrounds, and I am sincerely looking forward to the discussions that are about to take place. I would appreciate your frank opinions. I hope that today’s session will be meaningful not only for the panelists, but also for all of you who are listening to the discussions in the audience and online. Thank you very much for your kind attention.
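To make the watermarking and provenance idea above concrete, here is a minimal sketch in Python, assuming a hypothetical signing key and manifest format rather than any real standard or product: a generator signs a record binding the content's hash to its origin, and any later edit to the content invalidates the record.

```python
import hashlib
import hmac
import json

# Hypothetical provider key; a real system would use public-key signatures.
SIGNING_KEY = b"hypothetical-provider-key"

def make_provenance_manifest(content: bytes, generator: str) -> dict:
    """Attach a signed record stating which tool produced the content."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Recompute hash and signature; any edit to the content breaks both."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"...generated image bytes..."
record = make_provenance_manifest(image, generator="hypothetical-image-model-v1")
print(verify_manifest(image, record))         # True: intact and attributed
print(verify_manifest(image + b"x", record))  # False: content was altered
```

Real provenance systems add certificate chains and editing history, but the core idea, a verifiable link between a piece of content and its origin, is the same.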

Ema Arisa:
Thank you very much for this really informative presentation, Minister Suzuki. Now I would like to invite our panelists to share their views on some of the important aspects of the Hiroshima AI process from their perspectives. I would like to ask my first question. What types of AI systems, in particular advanced AI models such as foundation models, are you developing or placing on the market? How are they used? What solutions and benefits do they seek to offer? What do you see as the major risks and challenges associated with the advanced AI systems you are developing, and how are you addressing those risks and challenges? I hope the answers to this question will give us an overview of the current situation of generative AI and foundation models across countries. I would like to ask this question to two speakers from global AI developers, Google and Meta. First, President Kent Walker, what is your view on this? You have five minutes.

Kent Walker:
Thank you very much, Professor Ema, and thank you for the chance to be here today. The power of AI is vast, and that is exactly why we think it needs to be developed both boldly and responsibly. AI goes well beyond a chatbot. It has the potential to change the way we do science, the way we develop technology. I thought it was very nice that the rounds of applause for today’s panel went to our two technologists joining us here today, because that will be the foundation of the next great technological advance we have. Of course, you’ve been using AI for a dozen years if you’ve used Google Search or Translate or Maps, but it’s going to go well beyond that now. We are seeing dramatic advances that are going to change quantum science, materials science, precision agriculture, and personalized medicine, and bring clean water to people around the world. The potential is extraordinarily exciting. Just one example: our DeepMind team has helped fold proteins, understand how proteins express themselves, for the 200 million proteins known to science. That would have taken hundreds of millions of years for a biologist to do. It’s as though you took every man, woman, and child in Japan, trained them to be biologists, and then had them do nothing but fold proteins for three years. And as a result, these tools are now being used by more than a million researchers around the world to help advance the study of medicine. And there are many more advances like that coming. But at the same time, we recognize that all of the opportunity agenda must also be balanced by a responsibility agenda and a security agenda. And it’s not for one company or even one group of companies to do alone. We have worked together across the industry in groups like the Frontier Model Forum, the Partnership on AI, MLCommons, and more to develop frameworks and norms for the right kinds of research we need to do, the right standards that need to be applied. And beyond that, we need the role of government and all of you, civil society, for the frameworks that are going to matter to everybody on the planet. This is why we salute and appreciate the leadership of Japan and the Hiroshima process to drive forward with an innovation agenda that recognizes the opportunity, but also the need for thoughtful balance and the hard tradeoffs that will need democracies to exchange ideas about. How do you balance security versus openness? How do you balance the various notions of efficiency and equity in these different tools? These are fundamental and important questions, and we welcome the participation of groups like the IGF, who have brought wisdom to those debates over the Internet, as we apply them to this latest great round of new technology. Thank you.

Ema Arisa:
Thank you very much, President Walker. Then I would like to invite President Nick Clegg. What is your view? You also have five minutes.

Nick Clegg:
So as Kent said, AI, of course, is not new. It’s been talked about since the 1950s, and companies like Google, like Meta, like other research-heavy organizations, have been conducting research using AI and integrating it into their products for many, many years. But clearly, this latest development of these large language models is a qualitative and quantitative leap forward, because they’re very expensive: you need a lot of compute power and a lot of data. So we’re all asking ourselves, in forums like this and many others, is it good? Is it bad? How is it going to reshape the world? There’s been quite a lot of breathless, somewhat hyperbolic predictions about what might happen in the future, and I would really venture three points at this stage. Firstly, and I think the Deputy Secretary General asked this, is this technology going to be for the many or for the few? Where possible, it is desirable, in my view, that this technology should be shared, that there should be open innovation, that there should be open sourcing, as much as possible, of these foundation models. In Meta’s case, and I don’t want to speak for Google, over the last decade we have open sourced over 1,000 AI databases and models. It’s not always appropriate; sometimes there are reasons not to do it. But where possible, the more this technology can be shared, the better, because otherwise the risk is that it really is technology which is only developed by a very small number of highly resourced institutions, public and private, around the world, with deep enough pockets, enough GPU capacity, and enough large-scale data. That’s why we, for instance, have open sourced our large language model, Llama 2, and we have had around 30 million uses of it already from researchers, innovators, and developers around the world, including here in Japan. So that’s the first point. The second point is, it is human nature to worry about the worst, but I think it’s also worth remembering that AI is also a sword, not just a shield. If I look, for instance, at the work of Meta in social media and the constant adversarial work we have to do to try and minimise bad content, take hate speech: the prevalence of hate speech on Facebook, and this is publicly available, audited data, now stands at somewhere between 0.01% and 0.02%. So that means if you’re constantly scrolling on Facebook, for far longer than you should, you would find maybe one or two bits of hate speech out of every 10,000 bits of content. I wish it would be zero, but it’s never going to be zero, because remember, it’s legal speech. But here’s the point: that is down by about 60% just over the last 18 to 24 months, for one reason alone, which is AI. So AI, yes, of course, poses new challenges; it’s also a fantastic tool for us, as the Prime Minister himself said, to minimise the bad and amplify the good. 
And then the final thing I would say is this: as we grapple with risks, yes, there has been lots of talk about long-term, potentially existential risks, the prospect of so-called autonomous AI or general AI, which would develop an autonomy and agency of its own. But there are things we need to do now. As mentioned earlier, we need to have some kind of agreement across the industry, and with governments and with stakeholders, on how we identify the provenance of, and how we detect, AI-generated content; not text content, that’s not possible, but certainly visual content. The more you can have uniform standards developed quickly, the safer all those elections that people have talked about, which are taking place next year, will be. So I think it’s important to focus on the here and now, not just on the theoretical tomorrow.

Ema Arisa:
Thank you very much, Mr. Clegg. I believe everyone has lots of questions, and I have many questions myself today, so now I move on to the second question. In the previous question, we heard that AI companies are developing highly advanced AI systems and various applications. At the same time, they are making efforts to respond to risks and challenges brought by generative AI. The guiding principles and code of conduct for organizations developing advanced AI systems set out how organizations should take measures and actions against risks and challenges prior to model release and market placement, and should continue to work on addressing vulnerabilities and mitigating risks after release. What risks and challenges do you think are most important for those organizations to address in their efforts? What technical measures and actions do you think would be most effective? I would like to invite Mr. Nezar to answer this question and give your insight. You have three minutes.

Patria Nezar:
Thank you. Excellencies, distinguished speakers, ladies and gentlemen, good afternoon. First of all, allow me to thank the organizer of the session for the opportunity to be on the same stage with the honorable presenters today to share about artificial intelligence, a hot topic recently. The development of AI has greatly improved efficiency across commercial sectors. In 2021, AI supported 26.7 million workers in Indonesia, equivalent to 22% of the workforce. Yet we must acknowledge that AI also comes with various risks, such as privacy and intellectual property violations, potential biases, as well as hallucinations, that require our attention. Against such a backdrop, Indonesia believes we must intensify our approaches to mitigating the risks of AI, at both the policy and practical levels. One milestone evidencing such commitment was made four years ago in Japan, when we supported the G20 AI Principles during Japan’s G20 presidency to set a common understanding of the principles of AI. With the recently issued G7 Hiroshima process, as previously presented by Minister Suzuki, the effort to involve different stakeholders, even beyond G7 members, is applaudable. The urgently growing need to establish governance to mitigate the various risks of AI, specifically generative AI, demands that we, the global community, act promptly, but not recklessly. Indonesia is also not waiting in silence. We have been developing our AI governance ecosystem since 2020 through several policies: first, the National Strategy on Artificial Intelligence, outlining a roadmap for the development of the AI governance ecosystem in Indonesia; secondly, a standard classification of business lines for businesses developing AI-based programming; and third, provisions for automated personal data processing under our law on personal data protection, which, while not specifically addressing AI-based personal data processing, provides a foundation for more complex personal data processing activities. Last but not least, we are also in the process of developing a circular letter on artificial intelligence ethics that we hope will embody principles from prominent global references, infused with our local wisdom, to respond to the demand for AI governance. Ladies and gentlemen, we understand that the government cannot act alone. As such, in the process of improving our governance, we invite the involvement of various stakeholders to contribute to the development of our policies as well as our ecosystem. Specifically, we are in the process of exploring use cases and potential risks, as well as approaches and technologies to mitigate the risks of AI utilization. We also recognize that AI governance itself is not sufficient to mitigate the risks and threats of AI. We still need additional provisions to ensure a positive impact of AI for everyone. This includes the implementation of supportive policies encompassing areas such as content moderation, ensuring fairness and non-discrimination in the market, as well as digital literacy efforts. Against such a backdrop, Indonesia is ready to further the discussion of AI global governance, especially to play the role of bridge builder between various countries with different maturity levels of AI, to ensure our AI utilization advances the well-being of our society now and in the future. Thank you very much.

Ema Arisa:
Thank you very much, Mr. Nezar. I would like to ask the same question to Mr. Mazza.

Luciano Mazza:
Well, thank you. Thank you very much. First of all, one of the main things we must recognize as a challenge is how we can bring more voices from developing countries into this debate. And that’s very hard, because given the true complexity of the issues at hand, it’s not something that’s very simply done. I thank the Japanese government very much for the kind invitation to be here today, and I want to commend it for the effort it is making to keep this process as open and inclusive as possible. I think that’s very important. I think we must be realistic about the huge asymmetries in the AI landscape and how they affect the way different countries and actors approach this issue when it comes to discussing risks and mitigation measures. Large language models have been developed by a few companies based in very few countries, most of them G7 countries, so the Hiroshima process may be of particular relevance considering this status quo. In any event, we’re talking about a very concentrated market. That may change in the future; hopefully it will. But that’s the reality today. So from our perspective, and this was touched upon before by other colleagues, organizations should be mindful of the need to bring a sense of local ownership to the countries and communities where they operate, particularly in the developing world. In that sense, one of the main issues will be, I think, the adaptation of those models to local realities. And crucially, here, there is the issue of how to adjust the training of the models with data that is more reflective of local circumstances. I think that’s a main topic that must be addressed. It is also essential, in our view, to incentivize and strengthen local innovation ecosystems in order to allow for the development of a growing number of applications by domestic companies. Countries should strive to have dynamic AI ecosystems, even if they’re not able to have their own, let’s say, OpenAI-style companies; we know it would be unrealistic to expect that. So we believe that this effort to incentivize local ecosystems would be a possible way forward, with a view to democratizing this market that, as I said, is very concentrated today. Another topic I wanted to raise is that when it comes to risks and the mitigation of risks, we think it’s important to widen a little the scope of what we understand by risks. We should not lose sight of the big risk that AI could exponentially amplify economic, social, and digital divides between developed and developing countries. This should be counted as a risk. We have seen for some time now that there is a concept of safety by design that is well accepted by many actors working in this field. We should also work on the notion that new technologies, including AI models, should be inclusive by design, in the sense that social and digital inclusion should not be an afterthought but should be at the forefront of our considerations. Thank you very much.

Ema Arisa:
Thank you very much, Mr. Mazza. Now, I would like to move on to question three. We heard that the draft guiding principles and draft code of conduct include principles and actions for AI developers to responsibly share information on the security and safety risks posed by their models and the measures taken to address those risks, to publish transparency reports, and to establish and disclose privacy and AI governance policies. What information do you think those organizations should be encouraged to share, and with whom? What elements do you think should be included in transparency reports? How can information sharing best be done along the value chain, especially with downstream developers who further develop and fine-tune models? I would like to invite Chair Vint Cerf to give your answer to this. You have three minutes. The floor is yours.

Vint Cerf:
Thank you very much. First of all, I want to say I’m very, very grateful to the Prime Minister for his opening observations about AI and the Internet Governance Forum. I found them most hopeful and very encouraging. I also would like to point out to you some parallels. First of all, the Internet is simply a very large software artifact. So are artificial intelligence and machine learning. As a young programmer, I became fascinated by the idea that you could use software to create your own little universe and it would do what you told it to do. Then I discovered that it does what you told it to do, but not necessarily what you wanted it to do. And the difference between those two is called a bug. And I discovered how easy it was to create bugs and how hard it was to find them and fix them in the software. So why is that relevant? I think all the things that you are hearing about artificial intelligence and machine learning apply generally to software. And so we should be thinking about rules not just for AI and ML development, but for software generally. We have become intensely dependent on software. It is by far the most powerful and adaptable technology ever created, and I would argue that the machine learning world has taken a step beyond that. But with dependency comes risk, and you’ve heard that theme repeatedly. The result is that the risks are a function of the application to which the machine learning and AI models are put. And this leads to the question about single points of failure and the side effects of becoming increasingly dependent on these pieces of software. That leads to a very important point about responsibility and the responsible development and use of software. It leads to questions of ethics in research and academia: what kind of research do you perform, and under what conditions? How does business apply and use these artificial intelligence and machine learning tools, and software in general? And finally, how are these systems governed? We’ve been hearing about some major and important initiatives. Now, to come to your specific question about information sharing, there are several obvious things that we would want to share. The first one is the source of the training material. Where did this content come from? When these machine learning systems are actually used, it’s important to have some idea of how the source material was actually applied, so we can have some sense of judgment about the quality of the resulting system. We also need to be able to understand under what conditions these systems will misbehave. That has become more and more difficult to predict, because the systems are so complex, and their function is less like the if-then-else kind of software that I grew up with and more like a highly probabilistic system that has a probability of being correct and a probability of being incorrect. So if we’re going to share information, we should be able to share our experiences. We should be able to alert the consumers and users of these applications to the potential hazards they might encounter. I would like to applaud the European Union’s effort to grade the risk factors of applications. There are some high-risk applications, like health advice and medical diagnosis, where the software used to provide those services should get considerably more scrutiny, whereas if it’s just entertainment, perhaps the risk factor is lower. I suspect I’ve run way over my time, as I can see our moderator wielding her microphone. 
So I’ll stop there and thank you for your time.

Ema Arisa:
Thank you very much, Mr. Vint Cerf. I wish I had more time. However, now I would like to invite Ms. Wong to respond to the same question.

Denise Wong:
Thank you very much, and thank you to the Japanese government, our host country. Singapore has always cared a lot about AI governance. We had an AI model governance framework in 2018, which we updated in 2022 and are now working on the next update. In June of this year, we launched the AI Verify Foundation, an open-source foundation, to provide a global platform for discussion on AI governance issues, and we also wrote a discussion paper highlighting some of the risks and issues, as well as practical solutions, for dealing with generative AI, its risks, and potential pathways forward. Specifically on this question, we do think there is space for policymakers and industry to co-create a shared responsibility framework as a first step, in order to clarify the responsibilities of all parties in the model development life cycle, as well as the safeguards and measures they respectively need to undertake. There is some useful information that can be shared, especially by model developers: for example, information about how their models are developed and tested, as well as transparency on the type of training datasets used. Specifically for the end user, information can be provided on, for example, the limitations on the performance of models, as well as on how and whether data input by a user into the model will be used by developers to further enhance the model. We do think such a shared responsibility framework, which is common in the world of software development, will allow us to parse out the different responsibilities, even if there is immediately a layer of complexity because of the foundational nature of these models. For clarity, establishing standardized information to be shared about a model will allow deployers and end users to make proper risk assessments. We do agree that labeling and watermarking of AI-generated content will allow consumers of content to make more informed decisions and choices, and there is certainly much to commend in globally and internationally aligned efforts with many stakeholders involved in this process. Thank you.
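A minimal sketch of what such standardized model information might look like, with hypothetical field names chosen to mirror the items Ms. Wong lists (training data, testing, limitations, and use of user inputs); this is an illustration, not any official disclosure schema:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelDisclosure:
    """Hypothetical standardized record a model developer could publish."""
    model_name: str
    developer: str
    training_data_sources: list[str]      # transparency on training datasets
    evaluation_summary: str               # how the model was tested
    known_limitations: list[str]          # performance limits for end users
    user_inputs_used_for_training: bool   # whether prompts feed future training

disclosure = ModelDisclosure(
    model_name="hypothetical-model-1",
    developer="Example Labs",
    training_data_sources=["licensed text corpora", "public web crawl"],
    evaluation_summary="red-teamed for unsafe output; benchmarked on QA tasks",
    known_limitations=["may hallucinate facts", "weaker in low-resource languages"],
    user_inputs_used_for_training=False,
)
print(json.dumps(asdict(disclosure), indent=2))  # shareable along the value chain
```

A machine-readable record along these lines could travel with the model down the value chain, so downstream developers and deployers can make their own risk assessments.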

Ema Arisa:
Thank you, Ms. Wong. I would like to move on to the next question. The guiding principles and code of conduct include principles and actions for AI developers to invest in and develop security measures, as well as technical measures for content authentication, such as watermarking and content provenance mechanisms, and data input control measures. What types of measures do you think would be most effective for organizations to invest in or develop? So now I would like to invite President Walker again to respond to this question. President Walker, you have three minutes. Thank you.

Kent Walker:
The large language models that we’re seeing today came out of a problem in search. Originally in search, you’re trying to take a word and search the internet for matching words, and then you realize you need to search for synonyms, and then for related concepts. How does the king of England relate to the queen of Spain? Research that was being done about a dozen years ago mapped every word in English, and then ultimately in many languages around the world, in mathematical terms, in vectors. And then about five or six years ago, further research that was published helped identify something called transformers, an architecture which allowed us to understand all the richness in human language, and soon in a thousand languages around the world. Now, we’ve learned many things in working with content on the internet that will carry over to these new challenges of security and content authenticity. So for example, when it comes to security, we believe that we need to work collectively. We have proposed something called the Secure AI Framework, SAIF for short, that establishes an ecosystem approach to making sure that model weights and other core information are kept secure when necessary, but, to Nick’s point, made open and available when possible. When it comes to authentication, there are a number of efforts we are progressing. We have an effort called SynthID that watermarks images at the pixel level, so even if they’re transformed, turned upside down, or changed to different colors, you can still authenticate where they came from. A second effort, About This Image in Search, allows you to understand the provenance of an image: when it was first uploaded to the internet. And finally, we have adopted a new policy that requires the disclosure of the use of generative AI in election ads in ways that could be misleading or could change the results of elections. These efforts, and many more like them across the industry, will be an important part of answering the question of how we make sure we can trust the products of AI. But at the same time, I must say that some things can be authenticated and still be false. And so we collectively, and all the people around the world, need to educate ourselves. We need to become digitally literate, AI literate, about these new tools, so that we understand the underlying meaning and what we can and can’t trust.
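The word-vector idea Walker describes can be shown in a few lines of Python; the three-dimensional vectors below are hand-picked toy values purely for illustration, since real embedding models learn hundreds of dimensions from data:

```python
import numpy as np

# Toy 3-dimensional "embeddings"; hand-picked, purely illustrative numbers.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two vectors, independent of their length."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The classic analogy: king - man + woman lands nearest to queen.
target = vectors["king"] - vectors["man"] + vectors["woman"]
best = max(vectors, key=lambda w: cosine(vectors[w], target))
print(best)  # queen
```

Once words become points in a shared space, "related concepts" are simply nearby points, which is what lets search go beyond exact word matching.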

Ema Arisa:
Thank you very much. So now I would like to ask Professor Murai to respond to the same question.

Jun Murai:
Okay, yes. I remember visiting one of the U.S. universities that did philosophical research using a computer; that was in the early 70s. They typed in all the philosophy books, analyzed them, and tried to understand what a human being is, that kind of thing. So that was very much the beginning of AI working with language information. But that language came from very trustworthy sources, philosophy books and the like. What’s different today, with generative AI and related systems, is that the content is generated from all the people’s social networking posts, from IoT sensor data, and from much of the information on the web; it is generated from everywhere. And that is basically the central security question for AI today: the sources and accuracy of the data, and how trustworthy all that information is. So in Japan we started an industry effort called Originator Profile, which attaches to information on the web who originated it and who authorized it, for precisely that reason. To achieve that, IDs for the sources of information, and traceability back to the exact data, are also important. And it’s not only text messages: much of what AI learns from now consists of accurate numbers generated by sensors all around us, which are the learning sources for global warming and other environmental studies. So that kind of accuracy is going to be monitored, discussed, and also shared as wisdom among the AI players.

Ema Arisa:
Thank you very much, Professor Murai. I would like to move on to the next question. The draft guiding principles and draft code of conduct include requirements for AI developers to promote the development of advanced AI systems that address the world’s greatest challenges, such as the climate crisis, global health, and education. What fields do you think those organizations should prioritize in their activities and investment, other than those described in the previous questions? What are some proactive measures, including kinds of incentives for companies, that we could embed in a code of conduct to enable innovation rather than only mitigating risks? I would like to invite Ms. Doreen Bogdan-Martin to respond to this question. The floor is yours.

Doreen Bogdan-Martin:
Thank you, and good morning. It’s great to be here. Let me start by thanking the government of Japan for putting this topic so high on the agenda here at the IGF. To answer your question, the private sector really is the driving force behind AI innovation, and I’m happy to see how much they’re stepping up to address some of the world’s greatest problems. Nick, I think you mentioned this: minimizing the bad and, of course, trying to amplify the good. And I guess, Vint, that would be getting rid of the bugs so that we can amplify that good. The private sector is also a key constituent in the ITU membership, and I’m happy that two of the ITU members, Meta and Google, are part of the ITU family, because that kind of engagement also helps us understand what they’re looking for when it comes to providing insights in our engagements with policymakers and regulators. We have found that a combination of incentives is important, ranging from economic incentives to explicit recognition of contributions at national and international levels, to effectively motivate the private sector to invest in initiatives that ultimately benefit society. Of course, that includes innovative public-private partnerships; I think that’s key. You mentioned healthcare, education, and climate, definitely. I would give an example that stands out for me. We’ve been very focused on school connectivity. Of course, that’s linked to the WSIS process, which had a target to connect every school by 2015. We didn’t get there, but we do have an initiative together with UNICEF and many private sector partners, and we’re using AI to actually find schools. We’re using AI techniques for mapping, and we’re also using AI techniques to look at different connectivity configurations so that we can ultimately bring down cost. And perhaps just to share another example, I would say disaster management. That’s a key priority also for the government of Japan, and I think AI has shown lots of potential in that space. We’re part of the Early Warnings for All initiative, working closely with Japan, WMO, and UNEP. We’re looking at ways you can use AI when it comes to data collection and handling, when it comes to natural hazard modeling, and of course when it comes to effective emergency communications. So I think there’s probably nothing we cannot do if we actually manage to leverage multi-stakeholder partnerships to drive positive change. Thank you.

Ema Arisa:
Thank you very much, Ms. Bogdan-Martin. I would like to ask the same question to Mr. Nezar.

Patria Nezar:
Yes. We are concerned about misinformation and disinformation, actually, because next year we will have elections, and we are trying to issue some regulations on the spread of information through digital platforms using AI. We are collaborating with multi-stakeholders as well, and we work closely with global digital platforms like Google and Meta. Hopefully we can handle it, because this is really a big test of how AI will be used in the next election for political campaigns, and hopefully we will have fair and safe elections next year. Thank you.

Ema Arisa:
Thank you very much. So what about you, Professor Murai?

Jun Murai:
Yes, thank you very much for raising disaster management and natural disasters, earthquakes. That is a really important issue for this country; we are always facing big earthquakes and then the recovery from them. Every time we encounter an earthquake, a great deal of digital data and networking supports the response, and it saves people’s lives, which is very serious work. Now AI, with more precise and trustworthy data, would be a benefit for the next one, so Japan needs to be preparing for that. Another big issue for Japan is the very serious ageing-society problem. As people get older, there are a lot of healthcare issues, including hospital and medical data handling, which has never been processed in a proper way for the past, I should say, 30 years, and not only in Japan but everywhere in the world. So we have started to work in those areas, and it is very interesting that in such critical areas, data privacy, data accuracy, the amount of data to be processed, and the amount of hardware resources needed to process it all become very serious questions. Therefore, I think healthcare and disaster management are very important areas, because multiple responsibilities exist everywhere in them, and the parties need to work together. So self-assessment is going to be important, third-party checking is going to be important, and government involvement, of course, is going to be very important. These are exactly the important examples of the multi-stakeholder model for approaching AI and the future.

Ema Arisa:
Thank you, Professor Murai. I would like to move on to the next question. How do you foresee AI developing over the next few years, and what do you think organizations developing advanced AI systems should do in order to realize trustworthy AI across society? First, I would like to ask this question to President Clegg.

Nick Clegg:
Well, I think the tricky thing about the future is that it’s always very difficult to predict, particularly with technology which is evolving as fast as this. But I think some things are relatively predictable as far as the development of these large language models is concerned. One thing I think you’ll see fairly soon: a lot of these large language models, as the name implies, were focused on language, and then you had separate models based on visual content. I think those things will merge, so that you’ll have models which are what they call multimodal; they operate both in terms of text and visual content, and that will introduce significant additional versatility to those models. I think the issue of which languages are used in the training data is a very important one. A lot of these large language models, particularly the ones emanating from the big US tech companies, were originally trained in English. That doesn’t mean, by the way, that developers can’t take the models and redeploy them in their own language. For instance, here in Japan, a company called ELYZA has taken Llama 2, in its open-sourced form, and has developed a very high-performing large language model in Japanese. But I think you will see those models in future being trained, if I can put it this way, at a very foundational level in multiple languages at the same time. One of the big open questions, and I’d be fascinated to hear what some of the leading technologists think, and it’s very difficult to talk about these things when you have two godfathers of the Internet on the stage, so I defer to them completely, is this: there has been an assumption that these models just get bigger, and I’m not actually clear that it’s necessarily going to turn out like that. Firstly, there is going to be an incentive to be more efficient: to use less data, use less computing power, use less money. And also, the applications of these models, particularly in their fine-tuned form, will be most impactful not necessarily because they’re bigger, but because they’re fine-tuned to deliver particular objectives. So this assumption, which has certainly been there in the public debate, that they just get exponentially bigger all the time, I’m not sure that’s going to be the case. Anyway, with these large language models, there are only so many times you can reconsume and redigest all the public content on the Internet; you can only do that a few times, and after a while you run out. So I don’t think size is the only determinant of capability here, and nor do I think risk is only associated with size.

Ema Arisa:
Thank you very much. So I would like to ask the same question to the Father of the Internet, Dr. Cerf.

Vint Cerf:
I’m sure everyone will recognize that just because I’ve had a lot to do with the Internet doesn’t necessarily mean I know anything about artificial intelligence, so you should be careful about my answers. I will tell you something I’ve learned from a colleague at UCLA. His name is Judea Pearl. He is one of the winners of the Turing Award, which is the top award in computer science, for his work in machine learning and AI. He has written two books: one is called Causality, and the other, if I remember right, The Book of Why, as in w-h-y. What was his point? His point was that large machine learning models are all about probability. They deal in probabilistic performance; they don’t necessarily deal with causality. And so you can’t conclude anything from them unless you have a causal model to go along with the correlations that these large machine learning models incorporate. And I’m using “machine learning” here, rather than “large language model” or “artificial intelligence”, very deliberately. If you don’t appreciate causality versus correlation, you’ll appreciate the story that some parties, looking at the statistics, would conclude that flat tires cause babies. The reason for this is that there’s a high correlation between the number of flat tires occurring near a hospital and the number of babies that are born there. And you can quickly appreciate that the real reason there are flat tires is that someone is racing to get the mother to the hospital, so the baby can be born there and not in the car, and the result of the fast driving is sometimes flat tires. To give you one other example, where causality was really important: at Google, you can imagine we consume a lot of power cooling the data centers, because running all those computers generates a lot of heat. Once a week, we used to have an engineer who tried to adjust the valves in order to figure out how to minimize the amount of power required to cool the data center. We trained a machine learning system to perform that task. It saved 40% of the power requirement compared with what we had been able to achieve manually. So causality is going to be our friend here, and we need to incorporate it into the way in which we train and use these models.
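Cerf's flat-tires story can be reproduced as a small simulation. The toy Python sketch below (all numbers hypothetical) plants a hidden common cause, a rush to the hospital, behind both flat tires and births, producing a clear correlation between two events that do not cause each other; conditioning on the common cause makes the correlation collapse toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hidden common cause: on some days there is a rush to the hospital.
rush_to_hospital = rng.random(n) < 0.05

# Neither outcome causes the other; both depend only on the rush.
flat_tire = rush_to_hospital & (rng.random(n) < 0.30)
baby_born = rush_to_hospital & (rng.random(n) < 0.95)

# Naive view: flat tires and babies look clearly correlated.
print(f"overall correlation: {np.corrcoef(flat_tire, baby_born)[0, 1]:.2f}")

# Conditioning on the common cause: the correlation falls toward zero,
# showing the original link was never causal.
mask = rush_to_hospital
print(f"given a rush:        "
      f"{np.corrcoef(flat_tire[mask], baby_born[mask])[0, 1]:.2f}")
```

The same trap awaits any purely correlational model, which is why Pearl argues a causal model must accompany the statistics before conclusions are drawn.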

Ema Arisa:
Thank you, Dr. Cerf. So I would like to move on to the next question. Do you think there should be consideration given to developing tools or mechanisms to help organizations monitor the implementation of the guiding principles and the code of conduct, and to hold organizations accountable for their progress in doing so? I would like to invite Professor Murai to answer this question.

Jun Murai:
Sorry, I missed the order. It’s question number seven: how do we think about monitoring the implementation of the guiding principles and the code of conduct? Okay, yes, this partly repeats what I said, but self-assessment is a really important form of monitoring, and it is the responsibility of any entity processing things in code to do this, including the individuals who are involved. That is going to be very important. Also, beyond self-assessment, monitoring by a third party or an independent entity is going to matter, and then sharing the wisdom gained from that process is going to be really important. The public sector needs to address what it should do, but investing in researchers and in education is very much a role for government and the public sector, to increase the quality of the monitoring of AI processes.

Ema Arisa:
Thank you, Professor Murai. I would also like to invite Dr. Cerf to respond to the same question.

Vint Cerf:
Of course, the key question here is: what do we measure, and what objective function are we trying to achieve? It takes a great deal of creativity to figure out how to assess the quality of large language models and machine learning models. Concrete objective functions, like the one I mentioned earlier of reducing the cost of cooling our data centers, are a pretty obvious kind of measurement. But it’s a much more complex question to answer how well a large language model responded to your prompt and whether the output it produced was usable or not. I don’t have a deep notion right now of how to apply an objective function to those kinds of applications. The one thing I will say is that if we can assess the quality of responses coming back in high-risk environments, that might be a top priority for us, to make sure that where high-risk applications are being used, we measure safety as the most important metric of success.

Ema Arisa:
Thank you, Dr. Cerf. I would like to ask the next question to Ms. Bogdan-Martin. As the head of the UN agency working hard to bridge the digital divide, what needs to be done to ensure that the Global South is not left behind on AI developments, while doing so responsibly? What are some of your recommendations to the Hiroshima AI process in this regard?

Doreen Bogdan-Martin:
Thank you. I think, Luciano, you partly answered this before, so I’m going to pick up from where you left off. You can’t be part of the AI revolution if you’re not part of the digital revolution. So this is a reminder about the 2.6 billion people that are still offline today, the 2.6 billion people that are digitally excluded. That, I would say, is a clear message: let’s not lose sight of some of the fundamentals, the fundamentals around universal meaningful connectivity and its building blocks, from infrastructure to digital skills, Kent, you mentioned that before, to affordability, cybersecurity, and of course much more. In terms of specific recommendations, perhaps I would share three. The first is the role of universal meaningful connectivity: to embrace it in the context of the guiding principles and the code of conduct, including perhaps targeted commitments from companies in different areas like capacity development. The skills piece would be great, also focused on the gender gap; we have that big gender gap. The ITU is very focused on that capacity development piece in the space of AI. We’re working to incorporate it in our capacity development offerings together with other UN partners, from UNDP, UNESCO, and others. I guess my second recommendation would be in the space of technical standards, and His Excellency the Minister, I think you laid that out, I think it was point ten: to ensure that technical standards are actually a sort of prerequisite when we look at effective implementation of the guidelines. And again, on ITU’s side, we’ll do our part working with other UN agencies in technical standards areas. And I guess the last piece, and this is kind of a plea picking up on the UNSG’s comments in the opening, linked to the governance gap perhaps: use the UN as a catalyst for progress in this context. I think that’s really important. This morning, the Vice Minister from Japan pleaded with us to ensure collaboration amongst these different discussions, so I think that’s really important. Many things are happening. Many countries are taking different approaches, and I think it’s important that we share experiences and work together. The ITU has the AI for Good Global Summit. We work with some 40 different UN agencies and many of the partners up here on stage. I think that’s also a good space to exchange experiences and best practices. And of course there is the upcoming AI advisory body that we heard mentioned by the SG; I think the UN tech envoy was in the crowd. That’s also another important element, because that group will lay out recommendations that we can take forward in the context of the Global Digital Compact and the Summit of the Future. So, three pillars: universal connectivity, technical standards, and of course seeing the UN as a process that can be leveraged. Thank you.

Ema Arisa:
Thank you very much, Ms. Bogdan-Martin. I would like to ask the next question to Vice Minister Nezar. How can we engage a wide range of stakeholders on the guiding principles and the code of conduct?

Patria Nezar:
Yes, this is still a big question for us in Indonesia as well, because, as you know, artificial intelligence has become a hot discussion among countries, and at the global level we are still looking for best practices that can inspire us in regulating AI in our country. But we know UNESCO is also working on this, and we share insights with UNESCO and try to set fundamental norms and guidelines for the implementation of artificial intelligence. Thank you.

Ema Arisa:
Thank you very much. Next, I would like to ask the same question to Mr. Mazza.

Luciano Mazza:
Thank you. As I mentioned before, I think it’s important to make sure the discussion is as inclusive as possible and allows for a very diverse range of voices and constituents to be heard. So full engagement with other organizations and different stakeholders is, I think, essential to ensure the long-term sustainability of this effort. We believe it’s important, in particular, to ensure that this process is carried out in dialogue with, and is consistent with, efforts being developed in other organizations and other fora, as I said, to ensure this exercise is effective in the long term. So we believe it’s important that in due course those discussions are somehow expanded to multilateral spaces, which can make the process more representative and also more sustainable. We have a concern about fragmentation. We think institutional fragmentation runs in parallel with fragmentation of the digital world; these two things go somehow hand in hand. So we think it’s important to look for consistency and cohesion in the discussion, both in terms of the overall narrative about challenges, risks, and opportunities, and in terms of policies and regulatory approaches. We believe also that multilateral engagement will be necessary to reduce a little those asymmetries I mentioned before in terms of capabilities and information between developed and developing countries, and so to help countries acquire the expertise and capabilities they need to navigate this landscape with a minimum sense of autonomy and ownership of the process, so that they can fully enjoy the benefits AI can bring to everyone. I think the Secretary-General mentioned discussions in the UN, and I think that’s a way forward. We encourage countries to double their bet on multilateralism and give it a chance. We have important debates in the UN in the context of the Global Digital Compact and, looking forward, in terms of how we engage in the renewal of the mandate of the WSIS. So I think we have a chance to place our energy and effort once again in the multilateral system, and I think that’s a great opportunity to give a sense of ownership to everybody in that debate. So that would be my comment. Thank you.

Ema Arisa:
Thank you very much, Mr. Mazza. I would like to ask the same question to Ms. Wong.

Denise Wong:
I think we very much agree with the comments of my colleagues so far. A consultative process is absolutely important, both for developing the principles and the code of conduct, and for the technical standards that will eventually hold companies and organizations accountable. It would be useful to hear from other thought leaders and countries outside of the G7, in key groupings. For example, we have the Association of Southeast Asian Nations, which a number of us are part of. There’s also the Forum of Small States, which is able to bring some of the voices of the Global South into this conversation, and this will allow the principles, the codes of conduct, and the standards to be richer and more textured, and able to account for the rich cultural diversity that we have in the world. The other experience we would share, from Singapore’s perspective, is working very closely with industry players, such as those on the stage, as well as others, and with international and multilateral stakeholder bodies, on concrete projects to test out this technology and understand firsthand, getting your hands dirty, what responsible AI really means in context-specific, domain-specific applications. Drilling down into the details is very important in this multi-stakeholder process. We’re very supportive of the effort. Thank you.

Ema Arisa:
Thank you, Ms. Wong. This is the last question. In addition to our work on the guiding principles and code of conduct for organizations developing advanced AI systems, the Hiroshima AI process will seek to elaborate guiding principles for all AI actors and promote project-based cooperation with international organizations. Do you have any views you wish to share on potential outcomes for these future streams of work? What is the most urgent deliverable? I would like to ask this question to President Walker.

Kent Walker:
Thank you. I think today’s discussion has illustrated the incredible importance of highlighting both the responsibility side and the opportunity side of this tool. As Vint says, we probably shouldn’t call it AI; it’s computational statistics. But what an amazing tool it is proving to be. It is giving us ways to help predict the future in different ways. We can now forecast the weather a week away as well as we used to be able to predict it a day away. Consider issues like earthquakes: today, I understand, Japan experienced a tragic earthquake, and in Afghanistan, just in the last couple of days, thousands of people were killed. Imagine if we could provide just a little bit more warning for events like that. We are already predicting forest fires and where they might spread, and we have tools to predict flooding that now cover 80 different countries around the world. So governments working together to understand how they can implement those tools and make them available to their citizens is an important agenda. There are hard tradeoffs, of course, between openness and security, and on transparency: how do we have more explainability for these tools, how do we define which tools should be regulated, how should different models be classified? Governments are at the forefront of trying to figure out how to get this right. And then there are additional steps we need to take to understand how to invest in research, to make both the research tools and the computation broadly available around the world, and to imagine the future of work. Many countries, like Japan, desperately need more productivity for their citizens, but that also means that jobs will change. How do we help workers throughout the world imagine a new AI-enabled future where they are more productive and live better, healthier, wealthier lives? So collectively, through efforts like the G7, the OECD’s work, and the ITU’s work on AI for Good, we’re confident that we can actually achieve that potential, and we encourage the international community to take an optimistic and forward-looking view in doing just that. Thank you.

Ema Arisa:
Thank you. Next, I would like to invite President Clegg for the same question.

Nick Clegg:
The most impactful deliverable? Well, I think, in the broadest sense of the word, transparency. I think one of the reasons the debate has swirled around so much in recent months is that a sort of mystique and mystery has built up around this technology. It is very powerful, and it will be very transformative in many respects, but in other respects, well, what did you call it, Kent? Computational statistics is one way of putting it. My cruder version of that is that it’s like a sort of giant autocomplete system, particularly the large language models, because they’re literally just guessing what the next word, or rather the next token, should be in response to a human prompt, by processing huge amounts of data across vast numbers of parameters. But I think sometimes in the debate we have anthropomorphised the technology and conferred on it a certain kind of power and intelligence which, oddly enough, it doesn’t actually possess. These systems don’t know anything inherently. They’re just extremely good at guessing and predicting along the probabilistic logic that was described earlier. So we need to make it as transparent as possible: transparent in terms of how big companies like Meta and Google developed these models in the first place. How is the data being used? How do the model weights operate? What red teaming do we do to make sure it’s safe? How do we make it accessible to researchers? But also transparent to users, and this is why I stressed earlier the work on provenance, detectability, and watermarking that is in the draft Hiroshima Code of Conduct. You can’t trust something if you can’t detect it in the first place, and there is a lot of very difficult technical detail involved in that, because in the future quite a lot of content will be a hybrid of AI and human creativity, so how do you identify that? And once you have detected something that has been generated by AI, how does that label travel from one platform to another? The internet is not just balkanised into separate silos; content flows across the internet around the world. So: transparency, transparency, transparency, to give people greater comfort that this technology is there to serve them, and that they are not there to serve the technology.
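[Editor's note: To make Clegg's "giant autocomplete" description concrete, here is a minimal, purely illustrative Python sketch of next-token sampling. The probability table is invented for the example; a real large language model derives such distributions from billions of learned parameters at every step, not from a hand-written lookup. But the loop structure, picking one likely token at a time and feeding it back in as context, is the probabilistic logic he describes.]

```python
import random

# Toy next-token distributions, keyed by the two most recent tokens.
# These numbers are invented for illustration only; a real model
# computes such a distribution from its learned parameters.
NEXT_TOKEN_PROBS = {
    ("the", "internet"): {"flows": 0.4, "is": 0.35, "governance": 0.25},
    ("internet", "flows"): {"across": 0.7, "freely": 0.3},
}

def sample_next_token(context):
    """Sample the next token from the model's probability distribution."""
    dist = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Autoregressive generation: extend the prompt one token at a time
# until the context falls outside the toy table.
prompt = ["the", "internet"]
while tuple(prompt[-2:]) in NEXT_TOKEN_PROBS:
    prompt.append(sample_next_token(tuple(prompt[-2:])))
print(" ".join(prompt))  # e.g. "the internet flows across"
```

Scaled up by many orders of magnitude, this sampling loop is the substance of Clegg's point: the system has no inherent knowledge, only a learned distribution over which token is likely to come next.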

Ema Arisa:
Thank you, Mr. Clegg. That was our last question, and as moderator I should now summarise this discussion. However, I think I would need an additional hour to summarise it all, so I will just make one or two comments on this panel discussion. Once again, I really enjoyed it. We have discussed the guiding principles and code of conduct, but what I heard is that, beyond such guiding principles, there are many initiatives ongoing: each company, each international organisation, and each nation has its own legal framework and its own culture, and each has been developing measures towards this newly developed technology, generative AI. And it is not only about the technology itself but also about AI systems and services, including not only the machines but also their interaction with human beings. That is really important; it makes things confusing, but it is also essential. So today, as Mr. Clegg lastly mentioned, transparency is really important, and the other keyword we repeatedly heard is collaboration. That is why we have this IGF conference, and it is really important that we continue to discuss this topic and how we can be responsible, as developers, deployers, or actual users, in facing this new technology and making society better. Today’s panel discussion will be very effective guidance for the Hiroshima AI process, but also for all of us who are interested in this topic. I will stop here, before I keep speaking indefinitely. Lastly, I would like to invite Mr. Suzuki, Minister of Internal Affairs and Communications, for the closing remarks.

Junji Suzuki:
So thank you very much for your valuable discussions today. As Prime Minister Kishida mentioned in his keynote address, generative AI provides services that transcend national boundaries and touch the lives of people across the world. I think it is most beneficial that we were able to engage in these discussions at the IGF, where stakeholders from all over the world have gathered. Generative AI entails possibilities as well as risks, and it is a technology that will transform society in a major way. I am convinced that today’s discussions will deepen our awareness of the risks of generative AI, and that they will be a step forward in sharing its possibilities across regions, standpoints, and positions. We will aim to reflect the valuable opinions offered by international organizations, governments, AI developers and corporations, researchers, and representatives of civil society in the Hiroshima AI process going forward. Moreover, under the Global Partnership on AI, GPAI, we plan to newly establish an AI expert support center to tackle the challenges of AI and broaden its possibilities through project-based initiatives. With regard to these project-based initiatives to resolve social issues, we heard the hopes and expectations of governments of the global south yesterday, on day zero, in a session hosted by the Ministry of Internal Affairs and Communications. Today’s discussions were most meaningful, and as we continue our discussions on AI governance, it will be important to listen to the views of the various stakeholders concerned, and we will make sure to take such initiatives. Thank you very much for your presentations and for your attendance.

Ema Arisa:
Thank you very much. This concludes the opening session, Global AI Governance and Generative AI. Please give a last round of applause for all the panelists. Thank you very much. The panel is now closed, and I will hand the microphone back to the master of ceremonies.

Moderator:
Thank you very much. Ladies and gentlemen, we have now come to the end of High-Level Panel Five, Artificial Intelligence. I would like to extend our appreciation for your presence here today. Thank you very much to all the speakers and panelists. Thank you so much.

Speech statistics

Speaker                      Speech speed (words per minute)   Speech length (words)   Speech time (secs)
Denise Wong                  165                               609                     222
Doreen Bogdan-Martin         160                               1057                    396
Ema Arisa                    151                               2026                    806
Jun Murai                    147                               890                     364
Junji Suzuki                 144                               1037                    432
Kent Walker                  169                               1357                    481
Kishida Fumio                95                                713                     452
Luciano Mazza                177                               974                     330
Maria Ressa                  126                               742                     353
Moderator                    103                               404                     236
Nick Clegg                   179                               1734                    581
Patria Nezar                 114                               749                     396
Ulrik Vestergaard Knudsen    172                               1211                    422
Vint Cerf                    164                               1290                    473