Intelligent Society Governance Based on Experimentalism | IGF 2023 Open Forum #30

9 Oct 2023 08:45h - 09:45h UTC

Event report

Speakers and Moderators

Speakers:
  • Zhang Peng, Deputy Director General of Information Technology Bureau, Cyberspace Administration of China [online]
  • Su Jun, Dean of the Institute of Intelligent Society Governance, Tsinghua University [online]
  • Huang Cui, Professor, Dean of the Institute of National Intelligent Society Governance, Zhejiang University [onsite]
  • Zhu Songchun, Director, Institute of Artificial Intelligence, Peking University [online]
  • Kelly Sims Gallagher, Professor of Energy and Environmental Policy, Tufts University / Former Senior Policy Advisor to the White House Office of Science and Technology Policy [onsite]
  • Simon Jonathan Marvin, Professor, Urban Institute, University of Sheffield [onsite]
  • Zhang Xiao, VP of China Internet Network Information Center (CNNIC) and the Executive Deputy Director of China IGF [onsite]
Moderators:
  • Zhang Fang, Associate Director of Center for Science, Technology & Education Policy, Tsinghua University

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

Title: An In-Depth Analysis of the Main Arguments and Evidence Presented in the Text

Summary:

The following summary provides an overview of the main points, arguments, evidence, and conclusions presented in the original text, together with notable observations and insights gained from the analysis.

The text under analysis argues that advancements in technology have had a profound impact on the modern world. The author asserts that these advancements have not only shaped our society but have also brought about significant changes in various sectors such as healthcare, education, and communication.

One of the main points highlighted in the text is the positive impact of technology on healthcare. The author argues that technological advancements have improved the accuracy and efficiency of medical diagnoses and treatments. They provide evidence by citing examples of cutting-edge medical devices that aid in diagnoses and advanced surgical procedures that have significantly improved patient outcomes. Moreover, the author discusses how telemedicine has revolutionized healthcare by making healthcare services more accessible to remote areas and underserved communities.

Another key argument put forward in the text is the transformative effect of technology on education. The author contends that technological tools and online learning platforms have enhanced the learning experience for students. They supply evidence by referencing studies that demonstrate improved academic performance and engagement among students who utilize technology in their studies. The author also highlights the potential of virtual reality and augmented reality in creating immersive educational experiences.

Additionally, the text addresses the impact of technology on communication. The author argues that advancements in communication technology have broken down physical barriers and enabled instant communication across the globe. They present evidence in the form of statistics on the rise of social media platforms and the increasing ease of global collaboration. However, the author also acknowledges the drawbacks of technology, such as the potential for privacy breaches and the negative effects of excessive screen time on individuals’ well-being.

In conclusion, the text asserts that technology has revolutionized multiple aspects of our lives, including healthcare, education, and communication. While presenting compelling evidence to support this claim, the author acknowledges the potential downsides of technology. Overall, the analysis provides a well-rounded view of the impact of technology, acknowledging both the benefits and challenges it brings to our society.


Frank Kirchner

The development of AI and robotics is seen as increasingly necessary because of demographic change and the complexity of certain tasks. Robots are already being used in production facilities and private households, and there will be a greater need for automation in the future. However, AI and robotics development is largely controlled by a small number of private companies, which limits access and understanding. This concentration of control raises concerns about the diffusion and democratization of these technologies.

Advocates argue for the establishment of standards and regulated frameworks to democratize the design, understanding, and programming of AI systems. This would make them accessible to a wider range of individuals and organizations and foster a more inclusive AI landscape. A standardized design and programming framework would enable cradle-to-grave tracking of robotic components, ensuring accountability and sustainability in production. Transparency is also highlighted, including validation of the source, carbon footprint, and material composition of AI components. An International Data-Based Systems Agency (IDA) could play a role in monitoring AI and robotics development worldwide to promote inclusivity, transparency, and sustainability.

A further concern is the concentration of control in a few big companies; efforts should be made to prevent monopolies and ensure access for a wider range of stakeholders. The risks associated with AI and robotics, including hacking and misuse, cannot be entirely prevented, but they can be minimized and regulated. Open access to, and open contribution of, knowledge help safeguard data and technology by minimizing misuse and promoting responsible use.

In conclusion, the development of AI and robotics requires addressing issues of access, control, transparency, and accountability. Standards, regulated frameworks, and monitoring by an organization such as the IDA can democratize AI, foster innovation, and ensure a more inclusive and sustainable future.
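To make the cradle-to-grave tracking idea more concrete, the sketch below shows one possible shape for a component provenance record, covering the attributes mentioned above (validated source, carbon footprint, material composition) plus a simple lifecycle log. It is an illustrative assumption, not a format proposed in the session; the class and field names are invented for the example.

```python
# Illustrative only: a hypothetical provenance record for cradle-to-grave tracking
# of robotic/AI components, loosely following the attributes named in the discussion
# (validated source, carbon footprint, material composition).
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ComponentRecord:
    component_id: str                  # unique identifier assigned at manufacture
    manufacturer: str                  # validated source of the component
    manufactured_on: date
    carbon_footprint_kg: float         # declared embodied emissions in kg CO2e
    materials: dict[str, float] = field(default_factory=dict)   # material -> share by mass
    lifecycle_events: list[str] = field(default_factory=list)   # e.g. "integrated", "repaired", "recycled"

    def log_event(self, event: str) -> None:
        """Append a lifecycle event so the component can be followed from cradle to grave."""
        self.lifecycle_events.append(event)


# Example usage
record = ComponentRecord(
    component_id="ARM-0042",
    manufacturer="ExampleRobotics GmbH",
    manufactured_on=date(2023, 5, 1),
    carbon_footprint_kg=12.4,
    materials={"aluminium": 0.6, "copper": 0.2, "polymer": 0.2},
)
record.log_event("integrated into a production cell")
record.log_event("recycled at end of life")
```

A registry maintained by a body such as the proposed IDA could, in principle, aggregate records of this kind to audit accountability and sustainability claims across the supply chain.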

Audience

Suji, a PhD student from Seoul, Korea, inquired about the model of governance that AIDA is considering for AI, specifically whether AIDA is looking towards models such as the International Atomic Energy Agency (IAEA) or the Food and Drug Administration (FDA). She raised the question of whether AI, like nuclear energy, requires stringent governance because of its potential risks, and asked what authority and power such a governance body should possess, as well as its specific roles and responsibilities.

Furthermore, the advancement of technologies such as AI, IoT, and blockchain is producing a significant increase in data generation, which in turn has led to the creation of international databases. The proliferation of these technologies has heightened the need for international regulations and rules to govern data transactions that occur across borders. One example cited was SWIFT, a system for international transactions which, the questioner noted, involves 835 different banks from various nations. Establishing international standards and guidelines for data transactions is crucial to ensure the efficient and secure exchange of data globally.

In addition to governance and data transactions, ethics was also considered with regard to cybersecurity, with a particular focus on hacking. The ethical implications of cybersecurity breaches are a cause for concern, and safeguarding against hacking incidents is crucial for maintaining the security and integrity of data systems. This highlights the importance of incorporating ethical considerations into cybersecurity practices.

Overall, Suji’s inquiries shed light on the growing need for robust and comprehensive governance frameworks to regulate AI, as well as the importance of establishing international standards for data transactions. Furthermore, her observations underscore the significance of ethics in the realm of cybersecurity. Addressing these concerns is vital to ensure the responsible and secure development and deployment of AI technologies.

Evelyne Tauchnitz

In this session on promoting human rights through an International Data-Based Systems Agency (IDA), the speakers explored the role of the IDA in strengthening human rights and ensuring responsible innovation. The session was moderated by Evelyne Tauchnitz, a Senior Researcher at the Institute of Social Ethics, University of Lucerne, Switzerland, and a MAG member at the UN IGF.

Peter Kirchschläger, Director of the Institute of Social Ethics at the University of Lucerne, provided an overview of the IDA and its purpose. He emphasised that the IDA aims to create standards and monitor compliance with them in the design and development of robots and artificial intelligence (AI) systems. The goal is to promote responsible practices and prevent misuse or negative consequences of AI technology.

Kutoma Wakunuma, a Professor at De Montfort University in the UK, originally from Zambia, discussed the importance of responsiveness, inclusivity, and proactiveness in responsible innovation. She highlighted the need for AI systems to be inclusive of diverse voices and to respond to the needs and concerns of different communities. Additionally, she emphasised that responsible innovation should be proactive in addressing potential risks and negative impacts.

Frank Kirchner, a Professor at the German Research Center for Artificial Intelligence (DFKI), joined the session online and added a new aspect to the discussion. He highlighted the need for a tracking system that can monitor the use of robots and AI and ensure compliance with established standards. By creating a system for monitoring and evaluating AI technologies, potential risks and negative consequences can be identified and addressed more effectively.

Yong Jo Kim, a Professor at Chuang University in Korea, focused on the role of education and knowledge in promoting human rights. He emphasised the importance of transparency, fairness, and embedding human rights in their specific contexts. By integrating human rights principles into education and promoting transparency in AI systems, the potential for violations can be minimised.

Migle Laukyte, a Professor at Pompeu Fabra University in Barcelona, Spain, highlighted the challenges associated with handling the negative consequences and risks of AI. She stressed the need for robust mechanisms to address and mitigate these risks, particularly for high-risk AI technologies. She also mentioned the importance of impact assessments and of using the information they generate to predict and prevent future risks.

Yuri Lima, from the Federal University of Rio de Janeiro, Brazil, focused on the inclusion of the Global South in discussions on labour rights and inclusive living. He emphasised the need to involve diverse perspectives and to ensure that any discussion of human rights and technology includes the voices of those in the Global South.

During the Q&A session, participants raised questions about the concrete functions and powers of IDA, as well as the regulation of data. The panelists addressed these questions, highlighting the importance of regulation and proactive prevention of misuse and risks associated with AI. They emphasised the need for the inclusion of the Global South in discussions and decision-making processes related to AI and human rights.

In conclusion, this session emphasised the importance of responsible innovation and the role of IDA in promoting human rights. The speakers highlighted the need for inclusivity, proactiveness, and transparency in the development and use of AI systems. They also stressed the significance of education, knowledge, and regulation in addressing the risks and negative consequences associated with AI technology.

Kutoma Wakunuma

The analysis of the speakers’ viewpoints on AI technology and its social and ethical concerns reveals several key points. Firstly, there is a strong call for a proactive approach to addressing these concerns. The speakers advocate for responsiveness and the need to actively consider the potential threats and consequences associated with AI technologies. They argue that current AI technologies often focus on the positive aspects and neglect to address these important issues. This proactive stance is seen as crucial to avoid potential negative impacts and ensure the responsible development and use of AI technologies.

Inclusivity and understanding of the impact of technologies on different subjects is another key theme that emerges from the analysis. The speakers assert that technologies can have diverse impacts depending on the cultural and geographical context of their usage. They emphasize the need for diverse representation in decision-making processes and the development of AI technologies. This inclusivity is seen as essential to ensure that the technologies are designed and used ethically and consider the needs and perspectives of different groups.

The establishment of an agency like the IDA to oversee ethical concerns in AI technologies is also supported by some of the speakers. They argue that such an agency could oversee, supervise, and monitor the ethical and social concerns associated with AI technologies. Inclusive decision-making can be facilitated through the existence of an entity like the IDA, ensuring that the perspectives of various stakeholders are taken into account. This would help set global standards and ensure the responsible and ethical development and use of AI technologies.

In addition to these points, one of the speakers suggests that employment should not be tied to borders, allowing individuals to earn a living globally. This viewpoint highlights the need to adapt to the changing nature of work in the digital age and to consider the global impacts of AI technologies on employment opportunities.

Furthermore, health and education are identified as key focus areas in AI policy. These sectors are seen as crucial for social development and well-being, and AI technologies can play a significant role in improving access and quality of healthcare and education. The speakers argue for greater emphasis on these areas in AI policy discussions and decision-making processes.

The analysis also brings to light the idea that different continents and countries may require different AI regulatory policies or acts. This recognition emphasizes the importance of considering the diverse contexts and needs of different regions when formulating AI policies and regulations.

The establishment of a global AI act that can protect everyone is a point of consensus among the speakers. They argue that this would ensure a universal standard for the responsible development and use of AI technologies, safeguarding individuals from potential harmful consequences.

Proactive measures and policies are seen as necessary to regulate AI technologies like ChatGPT, which is highlighted as an example of a technology with widespread effects but inadequate policies in place. The speakers emphasize the urgency of taking proactive steps to regulate such AI technologies, particularly in sectors like education, where the responsible use of AI is crucial.

Another noteworthy observation from the analysis is the emphasis on global inclusivity in discussions and decision-making processes related to AI regulation. Currently, more developed nations dominate these discussions, which can lead to a lack of representation and consideration of the perspectives of the Global South. The speakers stress the importance of including voices from both the Global South and North to ensure a comprehensive and inclusive approach to AI regulation.

In conclusion, the analysis highlights the need for a proactive approach to address the social and ethical concerns associated with AI technologies. Inclusivity, the establishment of an oversight agency like IDA, and the development of global policies and standards are seen as essential steps towards ensuring the responsible and ethical use of AI technologies. Additionally, the analysis emphasizes the importance of considering the diverse needs and contexts of different regions and the need for proactive measures and policies to regulate AI technologies. Overall, the speakers advocate for a comprehensive and inclusive approach that takes into account the potential impacts and concerns associated with AI technologies.

Yuri Lima

The rapid advance of new technologies has brought about significant challenges in our ability to comprehend and effectively integrate them into our economic systems. This has resulted in an uneven distribution of the advantages these technologies provide. The digital economy, as it currently stands, showcases a stark contrast between the international flow of profits and the conditions of labour.

Many individuals across the globe find themselves working under poor circumstances, with meagre pay and minimal labour rights or protections. This divergence from the ideals outlined in Article 23 of the Universal Declaration of Human Rights, which emphasises fair and favourable working conditions, poses a significant concern in the modern digital economy. The insufficiencies in addressing these issues further highlight the need for more comprehensive and inclusive approaches.

It is paramount to acknowledge the vital role that underdeveloped countries play in the global exchange of technology and wealth. Disregarding their importance hinders progress and sustains an unequal global value chain. For a fair and just digital economy, it is crucial that the global South, where much of this exploitative digital sweatshop labour occurs, has a say in shaping the global rules that govern the digital economy.

To address these challenges and foster collaboration, an International Data-Based Systems Agency (IDA) could be established at the United Nations level. This agency would shed light on hidden inequities, identify best practices, and propose actionable solutions. By providing transparency and serving as a platform for engagement between governments, workers, businesses, and civil society, an IDA could contribute to the achievement of a fairer digital economy. The goal would be to create a system that benefits all, promoting technical cooperation and ultimately shaping a just and equitable digital future for everyone.

In conclusion, the fast-paced introduction of new technologies has created a gap between our comprehension of these technologies and their integration into our economies. The current digital economy falls short of embodying principles such as fair working conditions and an equal distribution of benefits. To rectify this, it is essential to consider the role of underdeveloped countries and ensure their inclusion in shaping global rules for the digital economy. Establishing an International Data-Based Systems Agency at the UN level can facilitate transparency and cooperation and pave the way towards a more equitable digital future.

Hyung Jo Kim

The discussions centre around the incorporation of Artificial Intelligence (AI) within education, the necessity of an agency to regulate the use of AI, the importance of handling data with transparency and fairness, and the consideration of cultural contexts in discussions pertaining to human rights.

In the sphere of education, Korea's Ministry of Education has decided to introduce AI education for all children and high school students by 2025, using AI tools to teach fundamental subjects such as mathematics and English. The argument made is that including AI in education is essential for enhancing learning and equipping students with the skills required for the future. This move is viewed positively, as it will enhance educational quality and prepare students for an increasingly digitalised world.

Transitioning to the regulation of AI, it is asserted that establishing an agency or control tower to oversee its usage is imperative. It is acknowledged that AI technology has both positive and negative aspects. While it has the potential to revolutionise various industries and foster innovation, concerns regarding its ethical implications and potential risks have arisen. The proposed agency would assume responsibility for regulating the use of AI, ensuring it is implemented responsibly and ethically. It is noted that such an agency would inevitably amass substantial amounts of data, highlighting the necessity for cautious consideration and transparent handling of this information.

The significance of data transparency and fairness is additionally underscored in the context of AI regulation. In the age of AI, the issue of data ownership has become progressively intricate, emphasising the need for transparent and just treatment of data. The trustworthiness of the agency responsible for regulating data is emphasised, as it plays a critical role in upholding public trust and confidence in the use of AI. This is regarded as crucial for accomplishing SDG 16: Peace, Justice, and Strong Institutions.

Lastly, the consideration of cultural contexts is regarded as imperative in discussions encompassing human rights. Within regions such as Africa and Asia in particular, it is necessary to concretise the concept of human rights by taking cultural diversity into account. It is asserted that research should strive to strike a balance between universal and diverse values in order to establish a comprehensive understanding of human rights that respects diverse cultural perspectives. This is deemed important for the achievement of SDG 10: Reduced Inequalities.

In conclusion, the discussions and arguments presented revolve around the integration of AI in education, the need for an agency to regulate its usage, the significance of data transparency and fairness, and the consideration of cultural contexts in discussions concerning human rights. The inclusion of AI in education is seen as a positive move towards improving educational quality and equipping students for the future. The regulation of AI is deemed necessary to address potential risks and ensure responsible implementation. Data transparency and fairness are emphasised as significant aspects in the age of AI, while cultural contexts are underscored for attaining a comprehensive understanding of human rights.

Melina

During the discussion session, Ayalev Shebeji raised a valid concern regarding the protection of international database information. The question focused on whether advanced technology or other methods could effectively safeguard sensitive data from hackers and potential security breaches.

Whether sophisticated technological advancements can effectively protect international database information from unauthorized access is a complex question. While advanced technology can enhance data security, it is not foolproof: hackers continually develop innovative strategies to bypass technological barriers, so technology alone cannot be relied upon for complete protection.

In addition to advanced technology, other measures can be employed to safeguard international database information from hackers. Implementing strict security protocols and utilizing encryption techniques can make it more difficult for hackers to gain access to sensitive data. Regular security updates and patches should also be applied promptly to address potential vulnerabilities. Furthermore, educating and training individuals who interact with the database on best practices for data protection can significantly reduce the risk of security breaches.
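As one concrete illustration of the encryption measure mentioned above, the snippet below encrypts a record at rest with the Python `cryptography` package. It is a minimal sketch, assuming symmetric (Fernet) encryption and an externally managed key; it represents only one layer of the multi-layered approach discussed in this section, not a complete solution.

```python
# Illustrative only: symmetric encryption of a sensitive record at rest using the
# "cryptography" package. This is a single layer of defence, not a complete solution.
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # in practice the key would live in a key-management service
cipher = Fernet(key)

record = b'{"id": 42, "email": "user@example.org"}'
token = cipher.encrypt(record)    # ciphertext is what gets stored in the database
restored = cipher.decrypt(token)  # reading the data back requires access to the key

assert restored == record
```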

It is important to be aware that no security measure can provide absolute protection against hacking. Cybersecurity is an ongoing battle, as hackers continuously adapt and evolve their techniques. Thus, a multi-layered approach is necessary, combining advanced technology, robust security protocols, encryption techniques, regular updates, and ongoing training and education.

In conclusion, protecting international database information from hackers requires a comprehensive strategy that incorporates advanced technology and complementary security measures. While advanced technology plays a crucial role, it should be accompanied by robust security protocols, encryption techniques, regular updates, and continuous education and training. By adopting this multi-layered approach, organizations can reduce the risk of security breaches and protect sensitive data to the best of their ability.

Peter Kirchschläger

The International Data-Based Systems Agency (IDA) is a research project that originated at Yale University and was finalized at the University of Lucerne. Its primary objective is to identify the ethical opportunities and risks associated with artificial intelligence (AI) in order to promote the well-being of humanity and the planet. The IDA's vision extends beyond AI regulation to encompass the entire value chain of AI, from resource extraction to the production and use of AI technologies.

The IDA aims to foster peace, sustainability, and human rights while promoting the responsible development and deployment of AI. Drawing inspiration from the International Atomic Energy Agency, the IDA is seen as a necessary step towards addressing the ethical concerns of AI, with the goal of preventing AI-based products that violate human rights from reaching the market.

Peter Kirchschläger, a supporter of the IDA, argues for stronger enforcement mechanisms in the field of AI. He notes that despite the existence of numerous guidelines and recommendations, businesses continue to operate as usual, highlighting the necessity for a structure similar to the International Atomic Energy Agency. This would provide orientation and ensure that AI is developed and deployed in an ethical and human-rights-respecting manner.

In addition, it is suggested that the IDA should not only enforce regulations but also have the power to sanction both states and non-state actors that fail to fulfill their obligations. This would further strengthen the IDA’s effectiveness in promoting responsible AI practices and holding those who undermine ethical principles accountable.

The IDA also has the potential to address cyber security concerns by promoting technological cooperation and enforcing legally binding actions. It is believed that the IDA’s enforcement capabilities and global reach could contribute to the development of a global consensus on cyber security issues, given the significant risks cyber attacks pose to societies worldwide.

Overall, the IDA’s research project seeks to identify the ethical opportunities and risks associated with AI, with the aim of promoting the well-being of humanity and the planet. By fostering peace, sustainability, and human rights throughout the AI value chain, the IDA strives to ensure that AI is developed and deployed in an ethical and responsible manner. Drawing inspiration from the International Atomic Energy Agency, the IDA advocates for stronger enforcement mechanisms, including the power to sanction actors that violate ethical principles. Furthermore, the IDA could play a pivotal role in addressing cyber security concerns through technological cooperation and the enforcement of legally binding actions. The IDA’s mission is to shape a future where AI benefits society while respecting ethical standards and human rights.

Migle Laukyte

The European Parliament has recently proposed conducting an assessment to evaluate the impact of high-risk artificial intelligence (AI) systems on fundamental human rights. This assessment would take into account various factors, such as the purpose of the AI system, its geographical and temporal scope of use, and the specific individuals and groups likely to be affected. The aim of this assessment is to ensure that AI technologies are developed and deployed in a manner that respects and safeguards fundamental human rights.
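As a rough illustration of what such an assessment might capture, the sketch below models the factors listed above (purpose, geographical and temporal scope of use, and affected individuals and groups) as a simple record. The structure and field names are assumptions made for this example; they are not taken from the European Parliament's proposal.

```python
# Illustrative only: a hypothetical record for a fundamental-rights impact assessment
# of a high-risk AI system, mirroring the factors mentioned in the discussion.
from dataclasses import dataclass, field


@dataclass
class FundamentalRightsImpactAssessment:
    system_name: str
    intended_purpose: str
    geographic_scope: list[str]        # regions or countries of deployment
    temporal_scope: str                # e.g. "six-month pilot" or "continuous operation"
    affected_groups: list[str]         # individuals and groups likely to be affected
    identified_risks: list[str] = field(default_factory=list)
    mitigation_measures: list[str] = field(default_factory=list)


# Example usage
assessment = FundamentalRightsImpactAssessment(
    system_name="CV-screening model",
    intended_purpose="ranking job applications",
    geographic_scope=["EU-wide"],
    temporal_scope="continuous operation",
    affected_groups=["job applicants", "recruiters"],
    identified_risks=["indirect discrimination against protected groups"],
    mitigation_measures=["bias audit before deployment", "human review of rejections"],
)
```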

There is a growing consensus that the Artificial Intelligence and Data Agency (AIDA) should play a central role in addressing the potential threats and risks associated with AI. Supporters argue that AIDA should gather and share knowledge on AI risks and harms with international organizations to prevent harm on a global scale. Making this information readily available and accessible can help protect against AI-related harm worldwide.

Furthermore, proponents advocate for AIDA to become the focal point for addressing AI risks and harms to protect individuals and prevent misuse of AI beyond Europe’s borders. They argue that by leveraging AIDA’s capabilities, the rest of the world can also benefit from the prevention of negative effects and potential abuses related to AI. This perspective aligns with the goal of reducing global inequalities, as AI can have far-reaching implications for societies and individuals in different regions.

In summary, the European Parliament’s proposal to assess the impact of high-risk AI systems on fundamental human rights acknowledges the importance of ethical and responsible development and deployment of AI technologies. The support for AIDA to play a central role in this endeavour aims to share knowledge and collaborate to mitigate potential threats and risks associated with AI within and outside of Europe. The ultimate goal is to protect people globally and foster a more equitable and inclusive AI landscape.
