DC-Sustainability Data, Access & Transparency: A Trifecta for Sustainable News | IGF 2023

10 Oct 2023 01:30h - 03:00h UTC

Event report

Speakers and Moderators

Speakers:
  • Ana Cristina Ruelas, United Nations Educational, Scientific and Cultural Organization (UNESCO), Intergovernmental Organization, Latin America and Caribbean Group
  • Arun Venkataraman, Google News Initiative, Private Sector, WEOG
  • Nompilo Simanje, International Press Institute (IPI), Civil Society, Africa Group
Moderators:
  • Daniel O’Maley, Center for International Media Assistance (CIMA), Civil Society, WEOG
  • Waqas Naeem, International Media Support (IMS), Civil Society, Asia Pacific Group

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

David Leslie

The discussion on AI governance focuses on the need for resilient and responsible assistive technologies that prioritize human agency and social relationships. It highlights the importance of adopting a human-centred approach and involving stakeholders in the innovation process. This approach is seen as crucial to ensure that AI technologies align with the values and needs of society.

Resilience, in this context, is viewed as a societal phenomenon that requires a technologically embedded and communicatively rich infrastructure. The co-evolution of society and technology is emphasised, with AI being seen as a tool that is utilised by societal stakeholders. The discussion rejects the notion of technological determinism and instead emphasises the importance of considering the social impact of AI technologies.

However, concerns are raised about the potential drawbacks of data-driven systems. It is argued that if biases, discrimination, and prejudice are not mitigated, data-driven systems will reproduce and reinforce existing patterns of inequality and prejudice. The impact of data-driven systems on social and demographic data is particularly emphasised, highlighting the challenges of processing such data without perpetuating biases.

Another point of contention is the role of human creativity and innovation in shaping agency. It is argued that data-driven systems do not have the ability to create new worlds in the same way that human beings do. Human agency is driven by our creative capacity and our ability to imagine and create things that did not previously exist.

The role of AI as critical infrastructure is acknowledged, with an increasing reliance on AI in sectors such as energy systems management. However, there is a concern about control over AI infrastructure lying predominantly in the hands of large tech companies. This raises questions about the public interest and social good being served by AI technologies, as private interests may prioritise profit over public welfare.

The need to regain control over AI technologies for the public interest and social good is emphasised. It is argued that the control of AI infrastructure by private firms may not necessarily serve the broader public interest. This is considered one of the central problems of our generation when it comes to technology.

In terms of governance, global collaboration and inclusivity are seen as crucial. The current generative AI moment is described as a commercialisation revolution, which has raised several issues that are now being addressed at a global level. The importance of thinking globally and inclusively about AI governance is underscored.

It is suggested that AI governance should be approached from the perspective of global public interest, rather than relying solely on voluntary agreements by tech companies. Comparisons are made with governance structures for nuclear energy and climate change, highlighting the need for multilateral international bodies to govern consequential science and innovation. The concept of a CERN (European Organization for Nuclear Research) for responsible AI is raised as one way to address the challenges of AI governance.

In conclusion, the discussion on AI governance highlights the need for resilient and responsible AI technologies that prioritize human agency and social relationships. It emphasises the importance of involving stakeholders and adopting a human-centred approach in the innovation process. Concerns are raised about the potential biases and discrimination in data-driven systems, and the need to foster human creativity and innovation. The role of AI as critical infrastructure and the control of AI technologies by private firms are also points of contention. The importance of global collaboration, inclusivity, and a perspective of global public interest in AI governance is emphasised.

Arisa Ema

The discussion on AI governance emphasises the importance of various principles such as privacy, fairness, and accountability. These principles serve as a foundation for the responsible implementation and use of artificial intelligence. It is acknowledged that AI governance should consider the ethical implications and potential risks associated with AI technologies. The need for technology to aid in disaster recovery is also highlighted, showcasing the potential of AI to restore and support affected areas during crises.

Agile governance is put forward as an approach to enhance resilience in the face of crisis. This concept suggests that actors involved in governance need to be flexible and adaptable while adhering to established principles and rules. By adopting an agile approach, it becomes possible to effectively deal with situations where outcomes are difficult to predict.

However, caution is raised about the challenges of overdependence on technology. It is argued that relying too heavily on technology can increase vulnerability during crises. Unexpected situations can become difficult to manage if there is excessive reliance on technology. Therefore, striking a balance between technology and human interaction is deemed necessary to ensure a coexistence that considers the limitations of technological convenience.

The rapid growth of AI gives rise to concerns about control, particularly in the context of critical infrastructure. It is stated that AI can emerge in any domain of human experience, and as society becomes more connected, reliance on AI technologies also increases. The fact that large tech companies currently control key AI infrastructures raises concerns about a potential disconnect between private interests and the functioning of AI as a public utility.

The issue of responsibility and costs in AI governance is deemed significant. Creating multiple layers of resilience comes at a significant cost. It is emphasised that the control of AI infrastructure should not be the responsibility of one company or organisation alone. Instead, governmental and possibly global dialogue is required to address the shared responsibilities and costs associated with AI governance.

Furthermore, the importance of addressing AI governance not only at the governmental level but also at the international level is highlighted. The existence of international bodies such as the Global Partnership on AI and the Partnership on AI demonstrates the recognition of the need for global collaboration in addressing AI governance. The UK’s organisation of an AI Safety Summit and Japan’s creation of the Hiroshima AI process further exemplify the commitment to international cooperation on AI governance.

Arisa Ema advocates for an inclusive, multi-stakeholder approach to tackling AI governance issues. She believes that all issues should be discussed collectively rather than separately to ensure a comprehensive and inclusive dialogue. This approach encourages the involvement of various stakeholders, including the Orihime pilots, in the discussion on AI governance. This inclusive approach aligns with the broader goals of achieving peace, justice, and reduced inequalities.

In conclusion, the discussion on AI governance highlights the importance of principles, the need for technology in disaster recovery, the adoption of agile governance, challenges of technological dependence, the balance between technology and human interaction, concerns about control and critical infrastructure, the global discussion on responsibility and costs, and the significance of addressing AI governance internationally. The inclusive, multi-stakeholder approach advocated by Arisa Ema further underscores the need for a comprehensive and inclusive dialogue to navigate the complex landscape of AI governance.

Audience

During the discussion, the speakers focused on several key topics relating to technology, inclusion, and sustainability. One important topic addressed was the use of technology to promote greater inclusion and accessibility in society. The speakers emphasised that emerging technologies have the potential to be valuable tools in achieving this goal. They argued that technology should be harnessed to enable people of all abilities to fully participate in society by providing equal access to information, services, and opportunities. This perspective was supported by technology's capacity to bridge gaps and create more inclusive environments.

Another topic discussed was the need for a more holistic approach to resilience solutions. The speakers highlighted the importance of offering optional solutions that provide individuals with more choice and agency. By considering diverse perspectives and accommodating individual needs, resilience solutions can become more effective and sustainable. This argument was reinforced by the evidence that a one-size-fits-all approach to resilience may not adequately address the varied challenges faced by different communities and individuals.

The need for more comprehensive impact assessments was also highlighted during the discussion. The speakers pointed out that existing impact assessments often focus solely on technical, ethical, or human rights aspects, which may overlook other dimensions of sustainability. To ensure more viable and sustainable solutions, the speakers argued for the adoption of holistic impact assessments. These assessments should take into consideration a wide range of factors, including environmental, social, economic, and cultural dimensions. By incorporating a broader perspective, decision-makers can make more informed choices that align with the principles of sustainability and promote long-term well-being.

Another important point emphasised by the speakers was the potential risk of forgetting core skillsets due to the increasing reliance on technology. They warned against taking for granted the support provided by AI systems and other technologies. The speakers urged individuals and society as a whole to maintain and nurture their core skillsets to prevent dependence on technology from eroding essential capabilities. This argument reminded the audience of the value of fundamental skills such as critical thinking, problem-solving, and creativity in a rapidly changing technological landscape.

Additionally, the speakers discussed the potential view of AI as part of critical infrastructure. They highlighted an audience member’s insightful comment and question on this topic. This perspective suggests that AI, with its increasing presence and impact, should be considered a vital part of critical infrastructure. This view implies that the integration of AI systems into critical infrastructure planning and management is necessary for maximising efficiency and resilience, particularly in areas such as industry, innovation, infrastructure, and climate action.

In conclusion, the discussions held by the speakers underscored the importance of technology, inclusion, and sustainability. By harnessing emerging technologies for inclusion and accessibility, adopting a holistic approach to resilience solutions, conducting comprehensive impact assessments, and maintaining core skillsets, societies can work towards more viable and sustainable solutions. Furthermore, considering AI as part of critical infrastructure can potentially enhance efficiency and resilience in various sectors. These insights shed light on the multifaceted challenges and opportunities we face in achieving a more inclusive, resilient, and sustainable future.

Inma Martinez

Inma Martinez is an expert in technology and artificial intelligence (AI) who advocates strongly for a human-centric approach to AI development. She believes that AI should prioritise meeting human needs and improving people’s lives. Martinez emphasises the importance of values, common sense, and mindset in addressing challenges that arise from the use of AI. While AI can provide comfort and convenience, Martinez highlights that it is ultimately human values and cultural teachings that enable individuals to navigate significant hurdles.

Martinez points out that resilience is a cultural value that can be learned from family and school. She draws on her experience of living in Scandinavia, where individuals are taught survival skills in nature, contributing to their empowerment and resilience. Additionally, building infrastructure with self-healing mechanisms is crucial to ensuring the stability of AI systems. Telecom sectors, for example, include multiple fallback plans, enabling nodes to shift in case of breakdown. Martinez suggests that AI services should have similar resilience built into their design.
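The fallback pattern Martinez attributes to the telecom sector can be sketched in a few lines. This is a hypothetical illustration only (the node names, handler signatures, and ordering are assumptions, not any real telecom or AI service design): a request is routed through an ordered chain of nodes, shifting to the next node when one breaks down.

```python
def call_with_fallback(handlers, request):
    """Try each handler node in order; shift to the next on failure.

    A minimal, illustrative sketch of a telecom-style fallback chain,
    not a production failover implementation.
    """
    last_error = None
    for handler in handlers:
        try:
            return handler(request)
        except Exception as exc:  # a real system would catch narrower error types
            last_error = exc      # record the failure and shift to the next node
    raise RuntimeError("all fallback nodes failed") from last_error


# Hypothetical nodes: the primary breaks down, so the backup answers.
def primary(req):
    raise ConnectionError("primary node down")


def backup(req):
    return f"handled by backup: {req}"


print(call_with_fallback([primary, backup], "status?"))  # handled by backup: status?
```

The point of the pattern is that the caller never sees the primary node's breakdown; the shift happens inside the chain, which is the kind of self-healing behaviour Martinez suggests AI services should build in by design.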

In disaster response and recovery, Martinez stresses the importance of providing tools and information at the community or family level to foster resilience. During the Fukushima disaster, personal Geiger counters were distributed to families to effectively monitor radiation levels. Martinez notes that positioning these devices at the height of pets and children, closer to the ground where radiation levels tend to be higher, proved highly effective.

However, Martinez emphasises the importance of developing AI in a safe and responsible manner. She raises concerns about companies that have released AI systems in unsafe ways, highlighting the need for proper training, testing, and commercialisation to ensure the safety of AI for human use. She believes that AI should be carefully developed and regulated to protect individuals and society as a whole.

Martinez also highlights the power of public awareness and demand in shaping the use and regulation of AI. She believes that collective action is more impactful than government regulation and underscores the need for individuals to demand what they believe they deserve in terms of AI safety and ethics.

Finally, Martinez advocates for collaboration between governments and the people in creating solutions. She highlights that governments should listen to the needs and requests of the public to ensure the development of AI aligns with societal expectations and values.

In conclusion, Inma Martinez argues that AI should be human-centric, with a focus on meeting human needs and improving lives. She emphasises the importance of values, common sense, and mindset in navigating challenges associated with AI. Resilience, learned from cultural teachings, is also crucial. Building AI systems with self-healing mechanisms and providing tools and information at the community level fosters resilience in disaster response and recovery. However, she underlines the need for safe and responsible development, testing, and commercialisation of AI, as well as public awareness and demand to shape its use and regulation. Collaboration between governments and the public is seen as key to creating solutions that align with societal expectations.

Rebecca Finley

The analysis features speakers discussing various aspects of AI development and deployment, with a focus on inclusivity, economic inclusion, safety, monitoring, and responsibility. They emphasise the importance of designing AI systems in an inclusive way to avoid negative impacts on communities. The Partnership on AI, a global non-profit organisation, is mentioned for its focus on developing and deploying AI that prioritises people and society. The speakers note that AI systems have had negative effects on communities when not deployed inclusively.

Regarding economic inclusion, the speakers argue that workers’ perspectives should be at the centre of AI-driven economic inclusion. They highlight that while AI systems provide economic opportunities, they may also be fragile and non-resilient. The argument is made that what might be seen as augmentations for some individuals could be perceived as automations for others, emphasising the need for a balanced approach.

The topic of AI safety and resilience is also discussed. The emergence of large-scale AI models has brought attention to the question of AI safety. Ensuring the safety of AI systems is crucial, particularly as they interact with various other systems of infrastructure. There is a recognition that safety and responsibility are key considerations in managing and ensuring the resilience of emerging technologies, including AI. Collaboration among industry, civil society, and academia is noted as pivotal in understanding the management and safety of AI.

The speakers stress the significance of post-deployment monitoring of AI systems. They suggest that monitoring can reveal the differential impact these systems may have on various communities. It is highlighted that not enough attention has been given to the impacts and safety of deployed AI systems.
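Post-deployment monitoring for differential impact is commonly operationalised as disaggregated evaluation: computing a quality metric separately per community or subgroup and flagging large gaps. The sketch below is a toy illustration of that general idea (the group labels, the error-rate metric, and the 0.1 threshold are assumptions for illustration, not any panel organisation's methodology).

```python
from collections import defaultdict


def error_rates_by_group(records):
    """Compute per-group error rates from (group, was_correct) records.

    A toy illustration of disaggregated post-deployment monitoring.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}


def flag_disparity(rates, max_gap=0.1):
    """Flag when the gap between best- and worst-served groups exceeds max_gap."""
    return max(rates.values()) - min(rates.values()) > max_gap


# Hypothetical logged predictions from a deployed system: (subgroup, correct?)
log = [("A", True), ("A", True), ("A", False),
       ("B", False), ("B", False), ("B", True)]

rates = error_rates_by_group(log)   # error rate per subgroup
print(rates)
print(flag_disparity(rates))        # large gap between groups triggers the flag
```

Even this toy version makes the speakers' point concrete: aggregate accuracy alone would hide the fact that one group is served markedly worse than another, which only disaggregated monitoring reveals.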

The analysis also highlights the challenges associated with the complexity of AI. With AI being a critical system, it is noted that it has many different meanings and applications, which may lead to potential confusion. The importance of system mapping and understanding implications is emphasised to better comprehend how AI interacts within a system.

Mapping the AI ecosystem and determining points of intervention is seen as key to understanding the role and significance of AI. By focusing on model providers due to their outsized impact, scholars and researchers have aimed to identify areas for regulation and intervention.

While regulation for AI is crucial, the speakers argue that measures around responsibility and safety should be implemented in the meantime. They appreciate the need for regulation but advocate for interim measures to ensure that AI technology remains accountable and protective of societies.

Interestingly, the analysis highlights the collaborative efforts of civil society and stakeholders in defining what constitutes “good” in the context of AI. The speakers note a moment of convergence, where multiple stakeholders are coming together to establish standards and guidelines for AI, ensuring its responsible and inclusive development.

The potential of AI technology for greater inclusion and opportunities across different sectors is acknowledged. Generative AI is mentioned as a driver of greater inclusion in some workplaces, highlighting its positive impact.

Lastly, there is an emphasis on making AI technology accountable to humans. The importance of ensuring technology is responsible and protective of societies is stressed, aligning with the goal of promoting peace, justice, and strong institutions.

Overall, the analysis provides comprehensive insight into various dimensions of AI, urging for its inclusive development, economic inclusivity, safety, monitoring, responsibility, and accountability. The collaboration among different stakeholders and the drive to establish guiding principles for ethically and responsibly using AI are noteworthy takeaways from the analysis.

Ayako Kitano

The analysis focuses on the increasing influence of Artificial Intelligence (AI) and stresses the crucial need for its responsible use. It suggests that AI has the potential to achieve groundbreaking discoveries on par with Nobel Prize-level accomplishments. Mr. Kitano, who advances this claim, highlights that AI is not merely a tool but is capable of high-level autonomy. In other words, AI can make discoveries of a quality that matches or even surpasses those recognised by Nobel Prizes.

Furthermore, the analysis highlights the potential of AI to fast-track solutions for challenges that are currently considered unsolvable. By leveraging AI, problems that currently elude human comprehension and existing solution methods can be addressed more effectively. This exemplifies the vast potential and impact of AI in addressing grand challenges across various fields of scientific discovery.

However, a cautious approach is necessary in the context of natural disasters. The analysis argues that over-reliance on AI during such events can have catastrophic consequences. It underscores the vulnerability of AI systems in the face of major calamities, such as earthquakes. If a substantial earthquake occurs, causing power and server disruptions, the reliability of AI for survival during and after the disaster becomes precarious. Professor Kamata at Kyoto University supports this perspective, highlighting that AI relies on stable power supply, telecommunications, and functioning devices like PCs or mobile phones to operate optimally. These assumptions would be swiftly dispelled in the aftermath of a significant earthquake, rendering AI partially or entirely useless.

Consequently, the analysis asserts the necessity of preparing for major catastrophic events. It supports Professor Kamata’s prediction that a catastrophic earthquake is likely to occur around 2035, with some room for variance. The analysis stresses that AI requires a stable power supply and telecommunication infrastructure to be fully functional and effective; neglecting this factor and relying too heavily on AI during natural disasters can have detrimental consequences.

In conclusion, the analysis showcases the increasing influence of AI and the critical need for its responsible use. It highlights the potential of AI to contribute to groundbreaking discoveries equivalent to Nobel Prize-level achievements and to fast-track solutions for grand challenges. Furthermore, it warns about the potential calamities that over-reliance on AI during natural disasters can pose, emphasising the requirement for comprehensive preparation for major catastrophic events. Overall, this analysis provides a nuanced perspective on AI’s role in society and the considerations essential to harness its potential effectively while avoiding potential pitfalls.
