OPENING SESSION | IGF 2023

9 Oct 2023 02:00h - 04:00h UTC

Event report

Speakers and Moderators

Denise Wong

Doreen Bogdan-Martin

Ema Arisa

Jun Murai

Junji Suzuki

Kent Walker

Kishida Fumio

Luciano Mazza

Maria Ressa

Nick Clegg

Patria Nezar

Ulrik Vestergaard Knudsen

Vint Cerf


Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.


Session report

Ulrik Vestergaard Knudsen

Artificial intelligence (AI) holds immense potential to transform fields such as science, healthcare, education, and climate action. It has demonstrated its ability to contribute to scientific discoveries, enhance healthcare services, improve educational outcomes, and address environmental challenges. However, while AI presents numerous opportunities, it also carries significant risks that must be addressed.

The Organisation for Economic Co-operation and Development (OECD) has taken a crucial step in the development of international standards for digital policies, including AI. These standards are designed to align with human rights and democratic values, ensuring the responsible and ethical use of AI. By establishing these standards, the OECD aims to promote a global effort towards AI governance that prioritises the protection of human rights and democratic principles.

Governance and regulations play a vital role in managing the impact of AI. Generative AI, in particular, poses a risk of generating false and misleading content, which can undermine democratic values and social cohesion. Additionally, the use of generative AI raises complex questions related to copyright. Therefore, specific attention needs to be directed towards the governance and regulations of AI to prevent these potential challenges.

Furthermore, international cooperation and coordination are paramount in formulating effective AI policies. The OECD recognises the importance of bringing nations together to discuss AI-related issues and develop better policies for the benefit of all. By serving as a forum and leveraging its convening power, the OECD endeavours to facilitate global discussions on AI, fostering collaboration and partnership among countries.

In conclusion, while AI possesses great potential to revolutionise various sectors, there is a need to mitigate the risks associated with its adoption. The OECD’s efforts in setting international standards for AI, aligning with human rights and democratic values, are commendable. Additionally, proper governance and regulations are essential to prevent the spread of false content and ensure responsible AI use. By promoting international cooperation and coordination, the OECD aims to drive forward better policies for the responsible deployment of AI, ultimately benefiting societies worldwide.

Junji Suzuki

The G7 Gunma Takasaki Digital and Tech Ministers’ meeting discussed the opportunities and risks posed by generative AI and agreed to use the Organisation for Economic Co-operation and Development (OECD) as a framework for addressing these concerns. The G7 leaders in Hiroshima decided to continue the discussions under the Hiroshima AI Process. International guiding principles and a code of conduct for AI are considered essential for realising safe, secure, and trustworthy AI.

Promoting research and investment in technical measures that mitigate AI risks is seen as crucial, including the development and introduction of mechanisms for identifying AI-generated content, such as digital watermarking and provenance systems.
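As a minimal sketch of the provenance idea mentioned above (purely illustrative: the G7 discussions do not prescribe any particular scheme, and deployed provenance standards use public-key signatures and standardised manifests rather than a shared secret), a generator could attach a keyed tag to each output so that origin claims can later be verified:

```python
# Minimal provenance sketch: a generator attaches a keyed tag to its
# output so a verifier holding the same key can confirm the origin claim.
# Illustrative only; real systems use signed manifests, not a shared secret.
import hashlib
import hmac
import json

def sign_provenance(content: bytes, key: bytes, generator: str) -> dict:
    """Build a provenance record binding the content hash to a generator name."""
    record = {"generator": generator,
              "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["tag"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, key: bytes, record: dict) -> bool:
    """Recompute the keyed tag over the claimed origin and content hash."""
    claim = {"generator": record["generator"],
             "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

key = b"shared-secret"
record = sign_provenance(b"generated image bytes", key, "example-model")
print(verify_provenance(b"generated image bytes", key, record))  # True
print(verify_provenance(b"tampered bytes", key, record))         # False
```

Any tampering with the content changes its hash, so the recomputed tag no longer matches and verification fails; this is the basic property a provenance mechanism must provide, however it is implemented.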

AI developers should prioritize the development of advanced AI systems for tackling global issues like climate change and global health. Minister Suzuki voiced the need for appropriately handling data fed into advanced AI systems.

Disclosure of information on the risks and appropriate use of advanced AI systems is necessary. Businesses should clarify the results of safety assessments and the capabilities and limitations of their AI models. Developing and disclosing policies on privacy and AI governance are considered important.

Generative AI was discussed at the Internet Governance Forum (IGF), where stakeholders from all over the world gathered. Generative AI provides services that transcend national boundaries and significantly affect lives worldwide; as a transformative technology, it carries both possibilities and risks.

The Hiroshima AI process will aim to reflect the opinions provided by various stakeholders. Opinions were collected from international organizations, governments, AI developers, corporations, researchers, and representatives of civil society.

Plans to establish an AI expert support center under the Global Partnership on AI (GPAI) to tackle AI challenges and broaden possibilities are positive. The center will aim to broaden the possibilities of AI through project-based initiatives.

Listening to the views of various stakeholders and taking initiatives accordingly is an important aspect of AI governance.

In conclusion, the discussions held among G7 ministers have underscored the need for international guiding principles, research and investment, disclosure of information, and stakeholder engagement in realizing safe and trustworthy AI. The recognition of the transformative potential of AI, particularly in addressing global challenges, further highlights the importance placed on responsible AI development and implementation. The establishment of an AI expert support center under GPAI signifies a proactive approach to addressing AI challenges and exploring new opportunities. Overall, these discussions and initiatives contribute to advancing AI governance and ensuring its positive impact on society.

Vint Cerf

Understanding and sharing information about the development of Artificial Intelligence (AI) and Machine Learning (ML) is crucial for the advancement of these technologies. With the increasing dependence on software across industries, a clear understanding of AI and ML is important to ensure their proper application and use. Bugs, the difference between what software is told to do and what it is intended to do, underline why understanding AI and ML development matters for minimising errors in software.

In high-risk applications such as healthcare, there is a need for greater scrutiny. Applications such as health advice and medical diagnosis should receive more attention and evaluation to ensure accuracy and reliability. The European Union’s efforts to grade the risk factors of such applications are acknowledged and appreciated. This demonstrates the importance of careful evaluation and regulation of AI and ML systems in critical areas like healthcare to protect public health and well-being.

Transparency in the source and application of training material in ML and AI is necessary. Knowing where the content for machine learning systems comes from and the conditions under which these systems may misbehave promotes accountability and allows for better decision-making in the application of AI and ML models.

Large ML models primarily deal with probability rather than causality. While correlation is important, it is crucial for models to also appreciate causality. Understanding the causal relationships between variables can lead to better model performance and outcomes. Incorporating causality in the training and usage of ML models can have significant benefits.

Incorporating causality in training and usage of ML models can also lead to power savings. Using causality in training Google’s machine-learning system resulted in a 40% saving in power for cooling data centers. This demonstrates the practical benefits of considering causality in AI and ML systems, not just for accuracy but also for efficiency and resource optimization.

Determining objective functions and measuring quality for language and ML models is challenging. Evaluating the response and output utility of these models requires careful consideration to ensure their effectiveness and usefulness.

Safety in high-risk environments should be prioritized when evaluating the success of AI and ML models. Measuring the quality of responses in high-risk environments is crucial to identify areas where improvements are needed and ensure the safety and well-being of individuals who interact with these systems.

Measuring the quality of large language models requires a high level of creativity due to their complexity. Innovative approaches and metrics are needed to assess the quality and performance of these models accurately.

In conclusion, understanding and sharing information about the development of AI and ML are crucial for their effective and ethical application in various industries. There is a need for greater scrutiny in high-risk applications, such as healthcare, to ensure accuracy and reliability. Transparency in the source and application of training material is necessary for accountability and responsible use of AI and ML. While large ML models primarily deal with probability, appreciating causality can lead to better performance, power savings, and more accurate outcomes. However, determining objective functions and measuring quality for language and ML models pose challenges that require innovative solutions. Prioritizing safety in high-risk environments and measuring the quality of large language models also requires careful evaluation and creative approaches.

Jun Murai

The analysis explores multiple perspectives on the evolution and significance of Artificial Intelligence (AI) in various domains. It highlights how AI has progressed from analysing books in the 70s to now analysing social media and sensor data, showcasing its ability to process information from different sources.

The importance of data accuracy and trustworthiness is emphasised, with Jun Murai discussing the need for reliable information and mentioning the ‘originator profile’ initiative in Japan. This initiative aims to identify and authorise information on the web, ensuring credible data sources for AI systems.

Additionally, it is stressed that AI goes beyond analysing text alone, as it can also utilise sensor-generated data. This type of data is commonly used in studies related to global warming and environmental science, enhancing AI’s capability to address complex issues.

The analysis also highlights the use of AI in disaster management, particularly in Japan, which frequently faces earthquakes impacting digital data networks and human lives. AI, combined with precise data, can greatly assist in effectively managing and recovering from such disasters.

Another issue brought up is Japan’s challenges with an ageing population and inadequate healthcare facilities, resulting in unprocessed hospital and medical data over the past 30 years. The application of AI is crucial in addressing the healthcare needs of an elderly society and improving the processing of medical data.

In conclusion, the analysis emphasises the importance of AI, data accuracy, data privacy, and hardware resources in healthcare and disaster management. The need to monitor and share accurate data among AI players is crucial for improved performance. It is also important to monitor the implementation of guiding principles and codes of conduct in the AI field, and involving third parties or independent entities in the monitoring process can contribute to better outcomes. Lastly, investments in research and education by governments and public sectors are essential for enhancing the quality of AI process monitoring and ensuring progress in the field.

Maria Ressa

The analysis delves deeper into the key points raised by the speakers, shedding light on the detrimental effects of disinformation, technology, and surveillance capitalism. It emphasises the need for truth, trust, and a shared reality in society.

Ressa first highlights the alarming rate at which lies spread on social media compared to facts. Citing a study by MIT, she notes that falsehoods propagate six times faster than accurate information, contributing to the widespread dissemination of disinformation. She also draws attention to the role of emotions in accelerating this spread, emphasising that fear, anger, and hate further amplify its reach. Data from Rappler is presented in support, indicating that disinformation spreads even more rapidly when infused with these strong emotional elements.

She then turns to the negative impact of technology on human rationality. Calling it a biological hack, she asserts that technology has found ways to bypass our rational minds, triggering the worst aspects of human nature, although no supporting evidence is offered for this claim.

Ressa also critiques the phenomenon of surveillance capitalism, contending that it has turned our world upside down and has been exploited by authoritarians. Here too, no specific examples or evidence are provided, leaving the argument somewhat unsupported.

She goes on to emphasise the importance of facts, truth, trust, and a shared reality, arguing that without these essential elements society cannot function effectively. Democracy and the rule of law rest heavily on these foundations, and their absence can lead to the erosion of both principles.

Finally, Ressa advocates urgent action to combat the negative consequences of surveillance capitalism, address coded bias, and uphold journalism as a safeguard against tyranny, asserting that these actions are necessary to preserve peace, justice, and strong institutions. It is worth noting that her stance aligns with the United Nations’ Sustainable Development Goals, particularly SDG 16 (Peace, Justice, and Strong Institutions) and SDG 10 (Reduced Inequalities).

Overall, this analysis highlights the grave concerns surrounding the proliferation of disinformation, the impact of technology on human rationality, and the exploitation of surveillance capitalism by authoritarians. It underscores the importance of truth, trust, shared reality, and the role of journalism in upholding democratic values. Urgent action is called for to combat these challenges and create a more just and informed society.

Kishida Fumio

Mr. Kishida Fumio, a prominent figure in the Japanese government, recognises the vast potential of Artificial Intelligence (AI) in driving socio-economic development. He firmly believes that Generative AI, in particular, will shape the course of human history. To support the growth and utilization of AI, the Japanese government is formulating an economic policy package that includes measures to enhance its development.

In addition to his stance on AI development, Mr. Kishida Fumio advocates for international solidarity and balanced AI governance. He emphasizes the importance of involving diverse stakeholders in shaping AI governance, and the international initiative known as the Hiroshima AI Process aims to establish guiding principles for responsible AI governance.

While Mr. Kishida Fumio remains optimistic about AI, he acknowledges the potential risks associated with its widespread use. Specifically, he is concerned about the dissemination of disinformation and the resulting social disruption. Sophisticated false images and misleading information pose significant threats. To address these risks, he calls for proactive measures that foster a secure digital environment.

In conclusion, Mr. Kishida Fumio’s contributions to the AI discourse underscore its potential for socio-economic development. He emphasizes the need for international cooperation and responsible governance. Furthermore, he addresses the risks of disinformation and social disruption, highlighting the importance of proactive measures to safeguard against them. Mr. Kishida Fumio aims to strike a balance between harnessing the benefits of AI and mitigating its potential downsides through his advocacy and policy initiatives.

Ema Arisa

The analysis highlights several key points regarding the development and implementation of AI systems. Firstly, it suggests that AI systems should be developed specifically to address some of the world’s most pressing challenges, including the climate crisis, global health, and education. The potential of AI in handling these challenges lies in its ability to analyse vast amounts of data and predict outcomes. By leveraging these capabilities, advanced AI systems can play a significant role in managing critical problems on a global scale.

Furthermore, the analysis emphasises the importance of organisations prioritising diverse fields in their AI activities and investments. While the climate crisis, global health, and education are crucial areas to focus on, it is also essential to explore other fields to deploy AI for maximum benefits. By investing in a broad spectrum of fields, organisations can unlock the full potential of AI and harness its capabilities to address a wide range of challenges and opportunities.

Transparency in AI technology is another key aspect highlighted in the analysis. It is argued that transparency plays a vital role in building trust in AI systems. To ensure public confidence in these technologies, there is a need for AI developers to prioritise openness and provide clear explanations of how AI systems operate. Additionally, the analysis mentions prominent figures, such as Nick Clegg and Ema Arisa, who support the idea of transparency in AI technology. Trusting AI systems becomes more attainable when their inner workings are transparent and easily understandable.

The analysis also highlights the need for countries, international organizations, and companies to uniquely frame their responses to AI based on their cultures and legal frameworks. Different initiatives are already underway in various countries and companies to develop personalized approaches to AI. This recognition of cultural and legal diversity is important to ensure that AI technology is implemented in a manner that aligns with the values, norms, and rules of different regions and entities.

Collaboration is also emphasized as a key factor in developing and implementing AI technology. By working together, entities can responsibly use and improve AI technologies. Ema Arisa specifically acknowledges the significance of entities joining forces to achieve this. Collaboration facilitates the exchange of knowledge, resources, and expertise, leading to the responsible and effective use of AI for the benefit of all.

In conclusion, the analysis points towards the potential of AI in addressing global challenges, the need for diverse fields in AI activities, the importance of transparency in AI technology, the significance of framing responses to AI based on cultural and legal frameworks, and the role of collaboration in developing and implementing AI technology. Taking these insights into consideration can pave the way for harnessing AI’s capabilities to tackle the world’s most critical problems and drive positive change in a wide range of sectors.

Nick Clegg

Large language models represent a substantial advance in artificial intelligence (AI) and require significant computing power and data. One exciting development is the potential for open-source sharing of these models, allowing researchers and developers to access and contribute to AI progress.

AI technology, including language models, has also been effective in combating harmful content on social media platforms: through the use of AI algorithms, hate speech on Facebook has seen a significant decrease. However, industry-wide collaboration is needed to accurately identify and detect AI-generated content, particularly text.

The future of AI systems is predicted to become multimodal, incorporating both text and visual content, while also being trained in multiple languages, expanding their impact beyond English. Contrary to common expectations, future AI models may focus more on specific objectives and be more efficient, using less data and computing power.

Transparency is crucial in AI, as it allows users to understand the processes involved and establishes trust. AI technologies should serve people, and collaboration is necessary to ensure transparency and responsible use of AI across the internet.

Denise Wong

Singapore has played an active role in AI governance, continuously updating its AI model governance framework. In 2022, the framework was updated to ensure its relevance and effectiveness in regulating AI technologies. Additionally, Singapore has launched the AI Verify Open Source Foundation, a platform dedicated to discussing and addressing AI governance issues. This showcases Singapore’s commitment to responsible AI development and deployment.

A shared responsibility framework is necessary to establish clear roles and responsibilities between policymakers and industries in the model development life cycle. This framework helps ensure that adequate safeguards and measures are taken to mitigate risks associated with AI technologies. By clarifying responsibilities, policymakers and industries can collaborate effectively to uphold ethical AI practices and accountability.

Transparency is crucial in AI model development and testing. It is imperative to share information about the development process, testing procedures, and training datasets used. This transparency builds trust and confidence in AI systems. Similarly, end-users should be informed about the limitations and usage of AI models, enabling them to make informed decisions.

To enhance consumer awareness and choice, AI-generated content should be labeled and watermarked. This allows consumers to differentiate between AI-generated and human-generated content, giving them the ability to make informed decisions about the content’s authenticity and reliability.
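As a toy illustration of the labelling idea (not a description of any platform's actual scheme), a short identifying tag can be hidden in generated text using zero-width Unicode characters; production watermarks use far more robust statistical techniques, but the sketch shows how a label can travel invisibly with the content:

```python
# Toy text label: hide an identifying bit string in generated text using
# zero-width Unicode characters. Illustrative only; real watermarking
# schemes are statistical and robust to editing, which this is not.

ZW0 = "\u200c"  # zero-width non-joiner encodes bit '0'
ZW1 = "\u200d"  # zero-width joiner encodes bit '1'

def embed_label(text: str, bits: str) -> str:
    """Append an invisible bit string to the text."""
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract_label(text: str) -> str:
    """Recover any hidden bits; an empty string means no label was found."""
    return "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))

labelled = embed_label("This paragraph was machine-generated.", "101101")
print(extract_label(labelled))  # recovers "101101"
```

The labelled string renders identically to the original on screen, yet a detector can still read the tag, which is the property a consumer-facing label or watermark relies on. Its fragility (any stripping of invisible characters removes the tag) is also why the industry collaboration discussed here focuses on sturdier mechanisms.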

There is strong support for a global and internationally aligned effort in AI governance. This approach aims to collaborate and harmonize AI regulations and standards across countries and regions, fostering responsible development and deployment of AI technologies.

In the consultative process for developing principles, codes of conduct, and technical standards for AI, involving thought leaders and countries outside of the G7 is beneficial. This inclusion enriches discussions, leading to more diverse and inclusive AI governance policies.

Singapore’s experience highlights the importance of testing technology through concrete projects with industry players and stakeholder bodies. Such projects serve as real-world experiments that identify risks and develop suitable measures and regulations. By involving industry players, policymakers ensure that regulations are practical and effective.

Overall, there is strong support for the multi-stakeholder effort in AI development. Collaboration among governments, industries, academia, and civil society is crucial in shaping responsible AI practices. This inclusive and collaborative approach considers diverse perspectives, fostering innovation while addressing ethical and societal concerns.

In conclusion, Singapore actively participates in AI governance, continuously updating its AI model governance framework and initiating discussions through the AI Verify Open Source Foundation. It establishes shared responsibility frameworks, promotes transparency in model development, informs end-users, labels AI-generated content, and advocates for global alignment in AI governance efforts. By involving thought leaders, conducting concrete projects, and encouraging multi-stakeholder collaboration, Singapore strives to shape a future of AI that benefits society as a whole.

Patria Nezar

Artificial Intelligence (AI) has had a significant impact on Indonesia’s workforce, with 26.7 million workers being added in 2021, equivalent to 22% of the country’s workforce. This highlights the substantial contribution of AI in terms of job creation and economic growth. However, along with these benefits, AI also brings various risks that need to be managed.

One of the major concerns related to AI is privacy violations. As AI systems gather and analyse vast amounts of data, there is a risk of personal information being misused or breached. Intellectual property violations are another concern, as AI technologies can potentially infringe on copyrights, patents, or trademarks. Additionally, biases in AI algorithms can result in unfair or discriminatory outcomes, and the occurrence of hallucinations by AI systems raises questions about the reliability and safety of their outputs.

To address these risks, the implementation of technical and policy-level mitigation strategies is imperative. The Indonesian government has taken steps in this direction by developing a National Strategy of Artificial Intelligence, which outlines a roadmap for AI governance in the country. Furthermore, Indonesia supports the T20 AI principles, which aim to establish a common understanding of the principles of AI.

It is recognised that effective AI governance requires collaborative efforts with stakeholders. The Indonesian government actively invites contributions from various parties to participate in policy development regarding AI. Moreover, efforts are being made to explore use cases, identify potential risks, and develop strategies to mitigate those risks. This collaborative approach ensures that the development of an efficient AI governance ecosystem takes into account diverse perspectives and expertise.

In the context of upcoming elections, there are concerns about misinformation and disinformation spread through AI-powered digital platforms. To address this issue, regulations are being issued to curb the spread of fake information and ensure the integrity of the electoral process. Collaborating with global digital platforms such as Google and Meta can prove beneficial in tackling this challenge.

The use of AI in political campaigns for the next election raises questions and potential ethical implications. The impact and consequences of AI’s involvement in election campaigns need to be carefully considered to ensure fairness, transparency and trust in the electoral process. It highlights the need for support for fair and safe elections, where AI is used ethically and responsibly.

Artificial Intelligence has become a topic of global concern, with countries engaging in discussions to define best practices for AI regulation. However, this remains an ongoing challenge, as AI continues to evolve rapidly and new ethical dilemmas and policy considerations arise. It underscores the need for international cooperation and collaboration to address the multifaceted issues associated with AI effectively.

Notably, Indonesia is working with UNESCO to develop AI implementation guidelines. This collaboration reflects the recognition that defining fundamental norms and guidelines for the responsible implementation of AI is crucial. Nezar, a key actor in this context, supports the collaboration with UNESCO and emphasises the importance of working together to establish ethical and sustainable AI practices.

In conclusion, while AI contributes significantly to workforce growth and economic development in Indonesia, it also poses several risks that need to be managed. Implementing technical and policy-level mitigation strategies, fostering collaboration with stakeholders, and addressing concerns such as misinformation in elections are crucial steps in achieving responsible and beneficial AI governance. Global discussions and collaboration, along with the development of guidelines, are essential to ensure the widespread adoption of AI that advances societal well-being.

Doreen Bogdan-Martin

The private sector, specifically companies such as Meta and Google, is considered a major driving force behind AI innovation. It is noted that the private sector plays a significant role in the International Telecommunication Union (ITU) membership. The sentiment towards the private sector’s involvement is positive, highlighting their contributions to AI development.

Incentives, both economic and explicit recognition at national and international levels, are seen as significant motivators for the private sector to invest in socially beneficial initiatives. The argument presented is that offering incentives can encourage businesses to allocate resources towards projects that have a positive impact on society. This aligns with the goal of achieving sustainable development.

AI technology has shown great potential in school connectivity initiatives and disaster management. It is mentioned that AI techniques are utilised to find schools and explore different connectivity configurations to reduce costs. Additionally, AI has been proven useful in disaster management for tasks such as data collection, natural hazard modelling, and emergency communications. The positive sentiment towards incorporating AI in these areas suggests its potential for improving education accessibility and enhancing disaster response efforts.

The speaker identifies healthcare, education, and climate issues as key priorities for AI focus within the ITU. This indicates a recognition of the significance of addressing these sectors in order to achieve sustainable development goals. The sentiment towards this perspective is positive, emphasising the need to leverage AI in these areas.

It is highlighted that effective change can be driven by leveraging multi-stakeholder partnerships. The importance of partnerships in driving positive change is emphasised, acknowledging that collaboration between different entities is crucial for achieving common goals. The sentiment towards multi-stakeholder partnerships is positive and underscores their role in addressing complex challenges.

Universal connectivity is identified as a critical aspect of the AI revolution. The lack of connectivity affecting 2.6 billion people is highlighted, emphasising the importance of bridging the digital divide. This observation suggests that ensuring universal connectivity is essential for maximising the benefits of AI technologies.

Technical standards and addressing the gender gap are emphasised in the context of AI guidelines. The speaker highlights the importance of technical standards as prerequisites for effective implementation and emphasises the role of the UN as a catalyst for progress. Gender equality and reducing inequalities are also mentioned in relation to achieving AI goals. These aspects are presented with a positive sentiment, indicating their significance in guiding AI development.

Both the United Nations (UN) and the ITU are suggested to play a larger role in advancing AI initiatives. The ITU has already begun incorporating AI into their capacity development offerings. The ITU’s AI for Good Global Summit and the establishment of an AI advisory body are mentioned as examples of their efforts. This observation implies that the UN and ITU have the potential to drive innovation and promote collaboration in the field of AI.

In conclusion, the discussion highlighted the private sector’s role, the importance of incentives, the potential of AI in school connectivity and disaster management, the prioritisation of healthcare, education, and climate issues, the significance of multi-stakeholder partnerships, the need for universal connectivity, the emphasis on technical standards and addressing the gender gap, and the roles of the UN and ITU. Overall, the sentiment was positive, reflecting the benefits and opportunities that AI brings across these domains.

Moderator

Generative AI, its potential, and associated risks have become the subjects of global discussion. This technology, which is comparable to the internet in terms of its transformative impact, is expected to bring massive changes to various fields. However, there are concerns about the risks of false information and disruption that could arise from the use of generative AI.

Recognising the significance of AI development, the Japanese government has unveiled plans to include AI support in its economic policy package. This move reflects the government’s commitment to strengthening AI development and promoting innovation. By incorporating AI support into its economic policies, Japan aims to foster an environment conducive to technological advancement and economic growth.

The Hiroshima AI process, endorsed by G7 leaders, focuses on building trustworthy AI. This initiative seeks to establish international guiding principles for the responsible and ethical use of AI. By promoting the adoption of these principles, the Hiroshima AI process aims to ensure the development and deployment of AI technologies that can be trusted by individuals, organisations, and governments alike.

The impact of generative AI extends beyond national boundaries, necessitating international collaboration. Recognising this, there is a growing need for multi-sector discussions on AI to address its global implications. Cooperation and coordination among countries, industries, and stakeholders are vital to effectively harness the potential of generative AI while mitigating its associated risks.

The issue of disinformation spreading through social media and AI technologies has gained attention due to its negative consequences. Studies have shown that lies spread six times faster than facts on social media platforms. The rapid dissemination of disinformation fueled by emotions such as fear, anger, and hatred poses a threat to truth and undermines public trust. Maria Ressa argues that social media and AI technologies are exploiting human emotions and attention for profit, leading to what she describes as “surveillance capitalism.”

The spread of disinformation also poses a threat to democratic processes, particularly elections. In the absence of factual integrity, elections can be manipulated through the dissemination of false information. As the year 2024 is seen as a critical year for elections, it is crucial to address and combat the spread of disinformation to safeguard the integrity of democratic processes.

Maria Ressa highlights the need to counteract surveillance for profit, eliminate coded bias, and promote journalism as a defence against tyranny. She has launched a 10-point action plan addressing these issues, which she presented at the Nobel Peace Summit in DC. Ressa believes that such measures can protect individual privacy, reduce inequalities, and uphold the principles of peace, justice, and strong institutions.

In conclusion, the potential and risks of generative AI, the Japanese government’s plans for AI support, the need for trustworthy AI, and the global impact of generative AI underscore the importance of international collaboration. The detrimental effects of disinformation and the exploitation of human emotions and attention through social media highlight the urgency of addressing these issues. Through measures such as promoting journalism and combating surveillance for profit, society can work towards ensuring ethical and responsible AI development.

Kent Walker

Artificial Intelligence (AI) is a powerful and promising technology with the potential to revolutionise various sectors. Google has been using AI in applications like Google Search, Translate, and Maps for over a decade. Its impact goes well beyond chatbots, extending to fields such as quantum mechanics and quantum science, materials science, precision agriculture, personalised medicine, and clean water provision. Google’s DeepMind team has made significant advances in protein folding, accomplishing work that would otherwise have taken every person in Japan three years.

However, the development of AI must be accompanied by responsibility and security considerations. To balance the opportunities AI presents with the need for responsible and secure deployment, ongoing work with industry bodies like the Frontier Model Forum, the Partnership on AI, and MLCommons aims to establish norms and standards. Governments, companies, and civil society must collaborate to develop the appropriate frameworks.

Security and authenticity are crucial aspects of AI that require a collaborative approach and global digital literacy. Google is taking measures such as SynthID, which watermarks images and videos at the pixel level so that AI-generated content can be identified. Policies have also been implemented to regulate the use of generative AI in elections, safeguarding democratic processes. Global digital and AI literacy are necessary to address security and authenticity concerns effectively.

AI has evolved significantly in search-engine technology and language processing. Research efforts have made it possible to map words in English and other languages into mathematical representations, and the Transformer architecture has advanced AI’s understanding of human language. The next challenge is to extend these capabilities to the thousands of languages spoken worldwide.
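The idea of mapping words into mathematical terms can be illustrated with a toy sketch. The vectors and words below are invented for illustration only; real embedding models (word2vec, Transformer encoders, and the like) learn vectors with hundreds of dimensions from large corpora:

```python
import math

# Hypothetical 3-dimensional "embeddings"; real models learn these values.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Words with related meanings end up closer together than unrelated ones.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["apple"]))
```

Once words are vectors, "understanding" language reduces in part to geometry: similarity, analogy, and context can all be expressed as operations on these representations.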

Technical measures play a vital role in content authentication. Watermarking, content-provenance mechanisms, and data-input controls are crucial for verifying content authenticity. Google’s “about this image” feature helps users understand the origin of an image, while disclosure of the use of generative AI in election-related content supports transparency.
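The content-provenance idea can be sketched minimally with a cryptographic hash recorded at creation time and checked later. This is an illustration only, not Google’s SynthID pixel-level watermarking or any specific provenance standard; all names and values below are hypothetical:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest identifying this exact byte content."""
    return hashlib.sha256(data).hexdigest()

# At creation time, record a fingerprint alongside the asset's origin.
original = b"example image bytes"
record = {"sha256": fingerprint(original), "source": "original-capture"}

# Later, verification succeeds only if the bytes are unchanged.
print(record["sha256"] == fingerprint(original))         # unmodified content
print(record["sha256"] == fingerprint(original + b"x"))  # tampered content
```

Real provenance systems layer signatures and edit histories on top of this, so that consumers can trace not just whether content changed, but who changed it and how.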

Governments should collaborate to implement AI tools for public welfare. AI tools have shown promise in predicting natural calamities like earthquakes, floods, and forest fires, enabling better disaster preparedness and response.

While openness, security, and transparency are essential in AI, tradeoffs need to be considered. Achieving the right balance is necessary to ensure the ethical development and deployment of AI. Explainability in AI tools and the classification of AI models require careful consideration.

Encouraging investments in AI research is crucial to make tools and computation accessible worldwide, promoting innovation and equitable access.

AI has the potential to enhance productivity and employment opportunities. It can enable workers to perform tasks more efficiently, contributing to an improved quality of life.

International cooperation is key to harnessing the potential of AI for good. Efforts within the G7, OECD, and ITU emphasise collaboration and partnerships to ensure the responsible and beneficial use of AI.

In conclusion, AI holds immense promise as a transformative technology. However, its development must be accompanied by responsibility and security considerations. Collaboration, global digital literacy, and technical measures are vital for ensuring authenticity, security, and welfare-enhancing potential. Balancing openness, security, and transparency is crucial, along with encouraging investments in AI research for global accessibility. International cooperation is necessary to harness the positive impact of AI for societal betterment.

Luciano Mazza

The AI debate needs to include more voices from developing countries, as the complexity of the issues requires broader representation to ensure comprehensive understanding. Organisations, particularly those operating in developing countries, should be mindful of the importance of local ownership in the countries and communities where they work.

One key consideration is adapting AI models to reflect local realities, for instance by training them on data that accurately represents local circumstances. Models adapted in this way can better serve the needs of different regions and populations.

Another important element is the need to incentivise and strengthen local innovation ecosystems. Even in countries without their own OpenAI-style companies, dynamic AI ecosystems can democratise the market and foster economic growth, decent work, and infrastructure development.

There is, however, a concern that AI could amplify economic, social, and digital divides between developed and developing countries, further widening existing inequalities. To mitigate this risk, new technologies, including AI models, should be designed with inclusivity as a primary consideration.

Ensuring that diverse voices and constituents are actively involved in the AI debate is essential: an inclusive discussion values different perspectives, allows different viewpoints to be heard, and helps avoid bias.

Engagement with other organisations and stakeholders is seen as crucial for the long-term sustainability of these efforts; collaboration and partnerships are necessary to drive meaningful progress.

To reduce information and capability asymmetries between developed and developing countries, multilateral engagement is deemed necessary. Discussions at the international level, such as within the United Nations, can help bridge the gap between countries and address existing inequalities and imbalances.

Concerns have also been raised about fragmentation in the AI debate. Addressing this issue is important for ensuring consistency and cohesion, since a unified approach maximises the effectiveness of AI governance efforts.

Finally, investing energy and effort in the multilateral system is considered essential to give all participants in the AI debate ownership and inclusion. This reflects a commitment to the Global Digital Compact and the renewal of the WSIS mandate, and encourages countries to invest more in multilateralism.

In conclusion, the AI debate requires the involvement of more voices from developing countries to address the complex issues at hand. Adaptation of AI models to reflect local realities, strengthening local innovation ecosystems, designing for inclusivity, and engaging with various stakeholders are all critical aspects. Multilateral engagement, consistency, and cohesion are also necessary to reduce inequalities and foster a more inclusive AI landscape.
