Can (generative) AI be compatible with Data Protection? | IGF 2023 #24

10 Oct 2023 08:00h - 09:30h UTC

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Kamesh Shekar

The analysis examines the importance of principles and regulation in the field of artificial intelligence (AI). It highlights the need for a principle-based framework that operates at the ecosystem level, involving various stakeholders. The proposed framework suggests that responsibilities should be shared among different actors within the AI ecosystem to ensure safer and more responsible utilization of AI technologies. This approach is seen as crucial for fostering trust, transparency, and accountability in the AI domain.

Additionally, the analysis emphasizes the significance of consensus building in regard to AI principles. It argues for achieving clarity on principles that resonate with all stakeholders involved in AI development and deployment. International discussions are seen as a crucial step towards establishing a common understanding and consensus on AI principles, ensuring global alignment in the adoption of ethical and responsible practices.

Furthermore, the analysis explores the role of regulation in the AI landscape. It suggests that regulation should not only focus on compliance but also be market-oriented. The argument is made that enabling the AI market and providing businesses with a value proposition in regulation can support innovation while ensuring ethical and responsible AI practices. This market-based regulation approach is believed to be beneficial for industry growth (aligning with SDG 9: Industry, Innovation, and Infrastructure) and economic development (aligning with SDG 8: Decent Work and Economic Growth).

Overall, the sentiment towards implementing principles and regulation in AI is positive. Although the analysis does not provide specific principles or regulations, it emphasizes the importance of a principle-based framework, consensus building, and market-based regulation. These insights can be valuable for policymakers, industry leaders, and other stakeholders in developing effective and responsible AI governance strategies.

Jonathan Mendoza Iserte

Artificial intelligence (AI) has the potential to drive innovation across sectors, but it also poses challenges in terms of regulation, ethical use, and the need for transparency and accountability. The field of AI is rapidly evolving and has the capacity to transform development models in Latin America. Therefore, effective regulations are necessary to harness its benefits.

Latin American countries like Argentina, Brazil, and Mexico have taken steps towards AI regulation and have emerged as regional leaders in global AI discussions. To further strengthen regulation efforts, it is proposed to establish a dedicated mechanism in the form of a committee of experts in Latin America. This committee would shape policies and frameworks tailored to the region’s unique challenges and opportunities.

The adoption and implementation of AI will have mixed effects on the economy and labor. By 2030, AI is estimated to contribute around $13 trillion to the global economy. However, its impact on specific industries and job markets may vary. While AI can enhance productivity and create opportunities, it may also disrupt certain sectors and lead to job displacement. Policymakers and stakeholders need to consider these implications and implement measures to mitigate negative consequences.

Additionally, it is crucial for AI systems to respect fundamental human rights and avoid biases. A human-centric approach is necessary to ensure the ethical development and deployment of AI technologies. This includes safeguards against discriminatory algorithms and biases that could perpetuate inequalities or violate human rights.

In conclusion, AI presents both opportunities and challenges. Effective regulation is crucial to harness the potential benefits of AI in Latin America while mitigating potential harms. This requires international cooperation and a human-centric approach that prioritizes ethical use and respect for human rights. By navigating these issues carefully, Latin America can drive inclusive and sustainable development.

Moderator – Luca Belli

The analysis delves into various aspects of AI and Data Governance, shedding light on several important points. Firstly, it highlights the significance of comprehending AI sovereignty and its key enablers. AI sovereignty goes beyond authoritarian control or protectionism and involves understanding and regulating technologies. The enablers of AI sovereignty encompass multiple elements, including data, algorithms, computation, connectivity, cybersecurity, electrical power, capacity building, and risk-based AI governance frameworks. Understanding these enablers is crucial for effective AI and Data Governance.
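To make the enabler framework more concrete, the following is a minimal illustrative sketch (not taken from the coalition’s report) that models the eight enablers as a readiness checklist; the scoring scale, threshold, and field names are hypothetical assumptions added here.

```python
from dataclasses import dataclass, field

# The eight AI sovereignty enablers named above, modelled as a checklist.
ENABLERS = [
    "data", "algorithms", "computation", "connectivity",
    "cybersecurity", "electrical_power", "capacity_building",
    "risk_based_governance",
]

@dataclass
class SovereigntyAssessment:
    country: str
    # hypothetical readiness score per enabler, on a 0.0-1.0 scale
    scores: dict = field(default_factory=dict)

    def gaps(self, threshold: float = 0.5) -> list:
        """Enablers scoring below the threshold. Because the elements are
        interrelated, a weakness in any one of them undermines the stack."""
        return [e for e in ENABLERS if self.scores.get(e, 0.0) < threshold]

assessment = SovereigntyAssessment(
    country="Example",
    scores={"data": 0.7, "connectivity": 0.4, "electrical_power": 0.3},
)
print(assessment.gaps())
# ['algorithms', 'computation', 'connectivity', 'cybersecurity',
#  'electrical_power', 'capacity_building', 'risk_based_governance']
```

The structure makes the chapter’s point in miniature: a risk-based governance framework is only one entry in the list, and scoring well on it alone does not make a jurisdiction sovereign over AI.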

Secondly, the analysis underscores the need to increase representation and consideration of ideas from the Global South in discussions about data governance and AI. The creation of the Data and AI Governance Coalition aims to address issues related to data governance and AI from the perspective of the Global South. It highlights the criticism that discussions often overlook ideas and solutions from this region. To achieve comprehensive and inclusive AI and Data Governance, it is imperative to involve diverse voices and perspectives from around the world.

Moreover, the analysis emphasizes that technological self-determination should be treated as a fundamental right. The right of all peoples to self-determination is enshrined in Article 1 of the United Nations Charter and of the International Covenants on Civil and Political Rights and on Economic, Social and Cultural Rights, and the analysis extends it to technology: individuals and communities should possess agency and control over their own technological destiny, including how AI is governed.

Furthermore, the analysis notes that the development of an international regime on AI may take between seven and ten years. This estimate is influenced by the involvement of tech executives who advocate for such an agreement. Due to the complexity of AI and the multitude of considerations involved, reaching international consensus on an AI regime requires ample time for careful deliberation and collaboration.

Lastly, the examination reveals that the process of shaping the UN Convention on Artificial Intelligence could be protracted due to geopolitical conflicts and strategic competition. These external factors introduce additional challenges and intricacies into the negotiating process, potentially prolonging the time required to finalize the convention.

In conclusion, the analysis offers valuable insights into AI and Data Governance. It emphasizes the importance of understanding AI sovereignty and its enablers, advocates for increased representation from the Global South, asserts AI governance as a fundamental right, highlights the time-consuming nature of developing an international regime on AI, and acknowledges the potential delays caused by geopolitical conflicts and strategic competition. These findings contribute to a deeper understanding of the complexities surrounding AI and Data Governance and provide a foundation for informed decision-making in this domain.

Audience

The analysis explores various topics and arguments relating to the intersection of AI and data protection. One concern is whether generative AI is compatible with data protection, as it may pose challenges in safeguarding personal data. There is also an interest in understanding how AI intersects with nationality and statelessness, with potential implications for reducing inequalities and promoting peace and justice. Additionally, there is a desire to know if there are frameworks or successful instances of generative AI working in different regions.

Privacy principles within Gen-AI platforms are seen as crucial, with 17 initial principles identified and plans to test them with 50 use cases. However, the use of AI also raises questions about certain data protection principles, as generative AI systems may lack specified purposes and predominantly work with non-personal data for profiling individuals.

There is a call for a UN Convention on Artificial Intelligence to manage the risks and misuse of AI at an international level. However, the analysis does not provide further details or evidence on the feasibility or implementation of such a convention. Potential geopolitical conflicts and strategic competition between AI powers are also highlighted as potential barriers to developing a UN Convention on Artificial Intelligence.

The “Brussels effect” is mentioned as a factor that may have negative impacts in non-European contexts. Concerns are raised about premature legislation in the field of AI and the need for clear definitions when legislating on AI to ensure comprehensive regulation. The analysis covers a broad range of topics and arguments, though some lack supporting evidence or further exploration. Notable insights include the need for privacy principles in Gen-AI platforms, challenges to data protection principles posed by AI, and the potential hindrances to global cooperation on AI regulation.

In conclusion, the analysis offers valuable insights into the complex relationship between AI and data protection.

Giuseppe Claudio Cicu

Artificial intelligence (AI) is reshaping the corporate governance framework and business processes, revolutionizing society. Its integration is viewed positively, as it enhances strategy setting, decision making, monitoring, and compliance in organisations. However, challenges arise in terms of transparency and accountability.

To address this, an ethical approach to AI implementation is proposed, such as the AI by Corporate Design Framework, which blends business process management and the AI lifecycle. This framework incorporates ethical considerations such as the human-in-the-loop and human-on-the-loop principles. Furthermore, it is suggested that corporations establish an Ethical Algorithmic Legal Committee to regulate AI applications. This committee would act as a filter between stakeholders and AI outputs, ensuring ethical decision-making.

Additionally, there is a call for legislators to recognise technology as a corporate dimension, as it has implications for accountability, organisation, and administration. By developing appropriate regulations and norms, responsible and ethical use of AI in corporate governance can be ensured. Overall, AI has potential benefits for corporate governance and business processes, but careful consideration of transparency, accountability, and ethics is necessary.
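As a rough illustration of the ‘filter’ role such a committee could play, here is a minimal human-in-the-loop gating sketch; the decision structure, the risk field, and the threshold are assumptions for illustration, not part of Cicu’s framework.

```python
from typing import Callable, Optional

def gate_output(ai_decision: dict,
                reviewer: Callable[[dict], bool],
                risk_threshold: float = 0.3) -> Optional[dict]:
    """Release an AI output automatically when its estimated risk is low
    ("human on the loop": humans monitor), but require explicit human
    approval when risk is high ("human in the loop": humans decide).

    `ai_decision` is assumed to carry the output and a risk estimate,
    e.g. {"output": ..., "risk": 0.8}; both fields are hypothetical.
    """
    if ai_decision.get("risk", 1.0) <= risk_threshold:
        return ai_decision  # low risk: automated release, monitored
    # high risk: blocked until the committee approves
    return ai_decision if reviewer(ai_decision) else None

# Usage: the committee supplies the reviewer function.
approved = gate_output({"output": "credit denied", "risk": 0.8},
                       reviewer=lambda decision: False)
print(approved)  # None: the committee withheld the output
```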

Armando José Manzueta-Peña

The use of generative AI holds great potential for the modernisation of government services and the improvement of citizens’ lives. By automating the migration of legacy software to flexible cloud-based applications, generative AI can supercharge digital modernisation in the government sector. This automation process can greatly streamline and enhance government operations. AI-powered tools can assist with pattern detection in large stores of data, enabling effective analysis and decision-making. The migration of certain technology systems to the cloud, coupled with AI infusion, opens up new possibilities for enhanced use of data in government services.
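As a deliberately oversimplified illustration of the kind of pattern detection mentioned above, the sketch below flags statistical outliers in a column of figures; the ledger data and the z-score rule are invented for the example.

```python
from statistics import mean, stdev

def flag_outliers(values: list, z: float = 2.0) -> list:
    """Return the indices of entries more than z standard deviations
    from the mean; a simple stand-in for far richer AI-based tools."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) > z * sigma]

# e.g. scanning payment amounts in a (fictional) government ledger
amounts = [100, 102, 98, 101, 99, 5000]
print(flag_outliers(amounts))  # [5] -- the anomalous payment
```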

To successfully implement AI in the public sector, attention must be given to key areas. Firstly, existing public sector workers should receive training to effectively manage AI-related projects. Equipping government employees with the necessary skills and knowledge is essential. Citizen engagement should be prioritised when developing new services and modernising existing ones. Involving citizens in the decision-making process fosters inclusivity and builds trust. Government institutions must be seen as the most trusted entities holding and managing citizens’ data. Strong data protection rules and ethical considerations are crucial. Modernising the frameworks for data protection safeguards sensitive information and maintains public trust.

The quality of AI systems is heavily dependent on the quality of the data they are fed. Accurate data input is necessary to avoid inaccurate profiling of individuals or companies. Effective data management, collection, and validation policies are vital for meaningful outcomes. Strong data protection measures, collection, and validation processes ensure accurate and reliable AI-driven solutions. Developing nations face challenges in quality data collection, but good quality data and administrative registers are necessary to leverage AI effectively.
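A minimal sketch of what a validation gate under such a policy might look like follows; the record fields and rules are hypothetical examples of the checks a collection-and-validation policy could mandate, not a prescribed standard.

```python
import re

def validate_record(record: dict) -> list:
    """Return a list of problems found in one administrative record."""
    problems = []
    if not record.get("national_id"):
        problems.append("missing national_id")
    year = record.get("birth_year")
    if not isinstance(year, int) or not 1900 <= year <= 2023:
        problems.append("implausible birth_year")
    email = record.get("email", "")
    if email and not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", email):
        problems.append("malformed email")
    return problems

# Only records that pass every check enter the training store, reducing
# the risk of inaccurate profiling downstream.
records = [{"national_id": "A123", "birth_year": 1985},
           {"birth_year": 3021, "email": "not-an-email"}]
clean = [r for r in records if not validate_record(r)]
print(len(clean))  # 1
```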

In conclusion, successful AI implementation in the public sector requires government institutions to familiarise themselves with the advantages of AI and generative AI. Workforce transformation, citizen engagement, and government platform modernisation are crucial areas. Strong data protection rules and ethical considerations are essential. The quality of AI systems relies on the quality of the data they are fed. Proper data management, collection, and validation policies are necessary. Addressing these aspects allows government institutions to harness the full potential of AI, modernise their services, and improve citizens’ lives.

Michael

The analysis examines the issue of harmonised standards in the context of AI and highlights potential shortcomings. It is argued that these standards might fail to consider the specific needs of diverse populations and the local contexts in which AI systems are implemented. This is concerning as it could result in AI systems that do not effectively address the challenges and requirements of different communities.

One of the reasons for this oversight is that the individuals involved in developing these standards primarily come from wealthier parts of the world. As a result, their perspectives may not adequately reflect the experiences and concerns of marginalised communities who are most impacted by AI technologies.

While some proponents argue that harmonised standards can be beneficial and efficient, it is stressed that they should not disregard the individual needs and concerns of diverse populations. Balancing the efficiency and standardisation of AI systems with the consideration of local contexts and marginalised populations’ needs is paramount.

The tension between the value of harmonised AI standards and the disregard for local contexts is noted. It is suggested that the development of these standards may further entrench global inequities by perpetuating existing power imbalances and neglecting the specific challenges faced by different communities.

In conclusion, the analysis cautions against the potential pitfalls of harmonised AI standards that do not take into account diverse populations and local contexts. While harmonisation can be beneficial, it should not be at the expense of addressing the specific needs and concerns of marginalised communities. By striking a balance between efficiency and inclusivity, AI standards can better serve the needs of all communities and avoid perpetuating global inequities.

Kazim Rizvi

In his paper, Kazim Rizvi delved into the important topic of mapping and operationalising trustworthy AI principles in specific sectors, focusing on finance and healthcare. He discussed the need for responsible implementation and ethical direction in the field of AI, highlighting the potential synergies and conflicts that may arise when applying these principles in these sectors. To address this, Rizvi proposed a two-layer approach to AI, dividing it into non-technical and technical aspects.

The non-technical layer examines strategies for responsible implementation and ethical direction. This involves exploring various approaches to ensure that AI technologies are developed and deployed in a manner that upholds ethical standards and benefits society as a whole. Rizvi emphasised the importance of involving multiple stakeholders from industry, civil society, academia, and government in this process. By collaborating and sharing insights, these diverse stakeholders can contribute to the effective implementation of AI principles in their respective domains.

In addition to the non-technical layer, the technical layer focuses on different implementation strategies for AI. This encompasses the technical aspects of AI development, such as algorithms and models. Rizvi emphasised the need for careful consideration and evaluation of these strategies to align them with trustworthy AI principles.

Moreover, Rizvi highlighted the significance of a multi-stakeholder approach for mapping and operationalising AI principles. By involving various stakeholders, including those from industry, civil society, academia, and government, a more comprehensive understanding of the challenges and opportunities associated with AI can be gained. This approach fosters partnerships and collaborations that can lead to effective implementation of AI principles in relevant domains.

Rizvi also discussed the need for coordination of domestic laws and international regulations for AI. He pointed out that currently there is no specific legal framework governing AI in India, which underscores the importance of harmonising laws in the context of AI. This coordination should take into account existing internet laws and any upcoming legislation to ensure a comprehensive and effective regulatory framework for AI.

Furthermore, Rizvi explored alternative regulatory approaches for AI, such as market mechanisms, public-private partnerships, and consumer protection for developers. While not providing specific supporting facts for these approaches, Rizvi acknowledged their potential in enhancing the regulation of AI and ensuring ethical practices and responsible innovation.

In conclusion, Kazim Rizvi’s paper presented an in-depth analysis of the mapping and operationalisation of trustworthy AI principles in the finance and healthcare sectors. He highlighted the need for a multi-stakeholder approach, coordination of domestic laws and international regulations, as well as alternative regulatory approaches for AI. By addressing these issues, Rizvi argued for the responsible and ethical implementation of AI, ultimately promoting the well-being of society and the achievement of sustainable development goals.

Wei Wang

The discussion centres around the regulation of Artificial Intelligence (AI) across different jurisdictions, with a particular focus on Asia, the US, and China. Overall, there is a cautious approach to regulating AI, with an emphasis on implementing ethical frameworks and taking small, precise regulatory steps. Singapore, for instance, recognises the importance of adopting existing global frameworks to guide their AI regulation efforts.

In terms of specific regulatory models, there is an evolution happening, with a greater emphasis on legal accountability and consumer protection. The US has proposed a bipartisan framework for AI regulation, while China has introduced a model law that includes the principle of accountability. Both of these frameworks aim to ensure that AI systems and their designers are responsible and held accountable for any negative consequences that may arise.

However, one lingering challenge in AI regulation is finding the right balance between adaptability and regulatory predictability. It is vital to strike a balance that allows for innovation and growth while still providing effective governance and oversight. Achieving this equilibrium is essential to ensure that AI technologies and applications are developed and used responsibly.

Effective governance and regulation of AI also require a durable balance. AI is a rapidly evolving field, and regulations must be flexible enough to keep up with advancements and emerging challenges. At the same time, regulatory predictability is needed to provide stability and ensure that ethical and responsible AI practices are followed consistently.

In conclusion, the conversation highlights the cautious yet evolving approach to AI regulation in various jurisdictions. The focus is on implementing ethical frameworks, legal accountability, and consumer protection. Striking a balance between adaptability and regulatory predictability is essential for effective governance of AI. Ongoing efforts are required to develop robust and flexible regulatory frameworks that can keep pace with the rapid advancements in AI technology and applications.

Smriti Parsheera

Transparency in AI is essential, and it should apply throughout the entire life cycle of a project. This includes policy transparency, which involves making the rules and guidelines governing AI systems clear and accessible. Technical transparency ensures that the inner workings of AI algorithms and models are transparent, enabling better understanding and scrutiny. Operational and organizational transparency ensures that the processes and decisions made during the project are open to scrutiny and accountability. These three layers of transparency work together to promote trust and accountability in AI systems.
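One way to operationalise the three layers is to publish a structured disclosure alongside each deployed system. The sketch below is illustrative only: the field names are hypothetical and do not correspond to any established schema or to a scheme proposed in the session.

```python
from dataclasses import dataclass

@dataclass
class TransparencyRecord:
    # policy transparency: the rules governing the system
    governing_policy_url: str
    legal_basis: str
    # technical transparency: how the system works
    model_description: str
    training_data_sources: list
    known_limitations: list
    # operational/organisational transparency: how it is run
    operator: str
    human_review_process: str
    complaint_channel: str

record = TransparencyRecord(
    governing_policy_url="https://example.org/policy",
    legal_basis="statutory authorisation (illustrative)",
    model_description="face-matching model (unspecified)",
    training_data_sources=["enrolment photos (illustrative)"],
    known_limitations=["accuracy varies across demographic groups"],
    operator="airport authority (illustrative)",
    human_review_process="manual check on match failure",
    complaint_channel="ombudsman@example.org",
)
print(record.operator)  # each layer stays inspectable and accountable
```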

Another crucial aspect where transparency is needed is in publicly facing facial recognition systems. These systems, particularly those used in locations such as airports, demand even greater transparency. This goes beyond simply providing information and requires a more deliberate approach to transparency. A case study of a facial recognition system for airport entry highlights the importance of transparency in establishing public trust and understanding of the technology.

Transparency is not limited to the private sector. Entities outside of the private sector, such as philanthropies, think tanks, and consultants, also need to uphold transparency. It is crucial for these organizations to be transparent about their operations, relationships with the government, and the influence they wield. Applying the right to information laws to these entities ensures that transparency is maintained and that they are held accountable for their actions.

In conclusion, transparency is a key factor in various aspects of AI and the organizations involved in its development and implementation. It encompasses policy, technical, and operational transparency, which ensure a clear understanding of AI systems. Publicly facing facial recognition systems require even higher levels of transparency to earn public trust. Additionally, entities outside of the private sector need to be transparent and subject to right to information laws to maintain accountability. By promoting transparency, we can foster trust, accountability, and responsible development of AI systems.

Gbenga Sesan

The analysis highlights the necessity of reviewing data protection policies to adequately address the extensive data collection activities of AI. It points out that although data protection regimes exist in many countries, they may not have considered the scope of AI’s data needs. The delayed ratification of the Malabo Convention further underscores the urgency to review these policies.

Another key argument presented in the analysis is the centrality of people in AI discourse and practice. It asserts that people, as data owners, are fundamental to the functioning of AI. AI systems should be modelled to encompass diversity, not just for tokenism, but to ensure a comprehensive understanding of context and to prevent harm. By doing so, we can work towards achieving reduced inequalities and gender equality.

The analysis also underscores the need for practical support for individuals when AI makes mistakes or causes problems. It raises pertinent questions about the necessary steps to be taken and the appropriate entities to engage with in order to address such issues. It suggests that independent Data Protection Commissions could provide the requisite support to individuals affected by AI-related concerns.

Additionally, the analysis voices criticism regarding AI’s opacity and the challenges faced in obtaining redress when errors occur. The negative sentiment is supported by a personal experience where an AI system wrongly attributed information about the speaker’s academic achievements and professional appointments. This highlights the imperative of transparency and accountability in AI systems.

Overall, the analysis emphasises the need to review data protection policies, foreground people in AI discourse, provide practical support, and address concerns regarding AI’s opacity. It underscores the significance of transparency and accountability in ensuring responsible development and deployment of AI technologies. These insights align with the goals of advancing industry, innovation, and infrastructure, as well as promoting peace, justice, and strong institutions.

Melody Musoni

The analysis explores the development of AI in South Africa as a means to address African problems. It points to policy frameworks and computing infrastructures at the African Union level, which reinforce the message that AI can be used to tackle challenges that are unique to Africa. The availability of reliable computing infrastructures is deemed crucial for the advancement of AI technology.

Furthermore, the analysis delves into South Africa’s efforts to improve its computational capacity and data centres. It mentions that South Africa aspires to be a hub for hosting data for other African countries. To achieve this goal, the government is collaborating with private companies such as Microsoft and Amazon to establish data centres. This highlights South Africa’s commitment to bolstering its technological infrastructure and harnessing the potential of AI.

The discussion also highlights South Africa’s dedication to AI skills development, with a particular focus on STEM and AI-related subjects in primary schools through to university levels. This commitment emphasises the need to provide quality education and equip the younger generation with the necessary skills to drive innovation and keep up with global advancements in AI technology.

However, it is also stressed that careful consideration must be given to data protection before implementing AI policies. The analysis asserts that existing legal frameworks surrounding data protection should be assessed before rushing into the establishment of AI policies or laws. This demonstrates the importance of safeguarding personal information and ensuring that data processing and profiling adhere to the principles of transparency, data minimisation, data subject rights, and purpose limitation.
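To see why generative AI strains these principles in practice, consider a toy purpose-limitation and data-minimisation check: a system whose purposes were never specified at collection time fails it by construction. The purposes and fields below are invented for illustration.

```python
# Purposes declared at collection time, with the minimum fields each needs.
DECLARED = {
    "fraud_detection": {"transaction_id", "amount", "timestamp"},
}

def check_processing(purpose: str, requested_fields: set) -> bool:
    allowed = DECLARED.get(purpose)
    if allowed is None:
        return False  # undeclared purpose: purpose limitation fails
    # data minimisation: only the declared minimum may be processed
    return requested_fields <= allowed

print(check_processing("fraud_detection", {"transaction_id", "amount"}))  # True
print(check_processing("ad_targeting", {"browsing_history"}))             # False
```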

Moreover, the analysis sheds light on the challenges faced by South Africa in its AI development journey. These challenges include power outages that are expected to persist for a two-year period, a significant portion of the population lacking access to reliable connectivity, and the absence of a specific cybersecurity strategy. This underscores the importance of addressing these issues to create an environment conducive to AI development and implementation.

Additionally, the analysis points out that while data protection principles theoretically apply to generative AI, in practice, they are difficult to implement. This highlights the need for data regulators to acquire more technical knowledge on AI to effectively regulate and protect data in the context of AI technology.

In conclusion, the analysis provides insights into the various facets of AI development in South Africa. It emphasises the significance of policy frameworks, computing infrastructures, and AI skills development. It also highlights the need for prioritising data protection, addressing challenges related to power outages and connectivity, and enhancing regulatory knowledge on AI. These findings contribute to a better understanding of the current landscape and the potential for AI to solve African problems in South Africa.

Liisa Janssens

Liisa Janssens, a scientist working at the Dutch Applied Sciences Institute, believes that the combination of law, philosophy, and technology can enhance the application of good governance in artificial intelligence (AI). She views the rule of law as an essential aspect of good governance and applies this concept to AI. Liisa’s interdisciplinary approach has led to successful collaborations through scenario planning in military operations. By using scenarios as a problem focus for disciplines such as law, philosophy, and technology, Liisa has achieved commendable results during her seven-year tenure at the institute.

In addition, there is a suggestion to test new technical requirements for AI governance in real operational settings. These settings can include projects undertaken by NATO that utilize Digital Twins or actual real-world environments. Testing and assessing technical requirements in these contexts are crucial for understanding how AI can be effectively governed.

In summary, Liisa Janssens emphasizes the importance of combining law, philosophy, and technology to establish good governance in AI, and advocates for applying the rule of law to AI. Her successful use of scenario planning highlights its effectiveness in fostering collaboration between different disciplines. The suggestion to test new technical requirements for AI governance in real operational environments provides opportunities for developing effective governance frameworks. Liisa’s insights and approaches contribute to advancing the understanding and application of good governance principles in AI.

Camila Leite Contri

AI technology has the potential to revolutionise various sectors, including finance, mobility, and healthcare, offering numerous opportunities for advancement. However, the rapid progress of innovation in AI often outpaces the speed at which regulation can be implemented, leading to challenges in adequately protecting consumer rights. The Consumer Law Initiative (CLI), a consumer organisation, aims to safeguard the rights of consumers against potential AI misuse.

In the AI market, there are concerns about the concentration of power and control in the hands of big tech companies and foreign entities. These companies dominate the market, resulting in inequality in AI technology access. Developing countries, particularly those in the global south, heavily rely on foreign technologies, exacerbating this issue.

To ensure the proper functioning of the AI ecosystem, it is crucial to uphold not only data protection laws but also consumer and competition laws. Compliance with these regulations helps ensure transparency, fair competition, and protection of consumer rights in AI development and deployment.

A specific case highlighting the need for data protection is the alleged infringement of data protection rights in Brazil in relation to ChatGPT. Concerns have been raised regarding issues such as access to personal data, clarity, and the identity of data controllers. The Brazilian Data Protection Authority has yet to make progress in addressing these concerns, emphasising the importance of robust data protection measures within the AI industry.

In conclusion, while AI presents significant opportunities for advancement, it also poses challenges that require attention. Regulation needs to catch up with the pace of innovation to adequately protect consumer rights. Additionally, addressing the concentration of power in big tech companies and foreign entities is crucial for creating a fair and inclusive AI market. Upholding data protection, consumer rights, and competition laws is vital for maintaining transparency, accountability, and safeguarding the interests of consumers and society as a whole.

Session transcript

Moderator – Luca Belli:
All right, we are almost ready to go. It’s almost five past five. Should I give you a heads up to start? We can start. We are already online. OK, fantastic. Good afternoon to everyone. My name is Luca Belli. I’m a professor at FGV Law School, where I direct the Center for Technology and Society. And together with a group of friends, many of whom are here with us today, we have decided to create this group, this coalition within the IGF, called the Data and AI Governance Coalition, where, as you might imagine, we are already discussing data and AI governance issues, with a particular focus on global south perspectives. So the idea to create this group was born some months ago during a capacity building program that we have at FGV Law School. It’s called the Data Governance School LATAM, which is itself the sort of academic spin-off of a conference we host, CPDP LATAM. You might know the European one. There is also a Latin American one that we host in Rio every July. And so after these three days of intense discussions on data governance and AI in March, actually in April, at the end of April, we figured out that it was good to keep on maintaining this very good interaction we had, and even try to expand it to bring in new voices. Because one of the main, let’s say, critiques that emerged is that frequently these discussions about data governance and AI have an over-representation of global north, if we can say so, ideas and solutions, and a severe under-representation of global south ideas and concerns, and even solutions sometimes. So the idea was precisely to start to discuss how to solve this. And as many of us have a research background or are interested in doing research, we decided to draft this book that we managed to organize and print in record time. But I also have to disclaim that this is a preliminary version. So if you want actually to give us feedback on how to improve it, or in case anyone is interested in proposing some additional very relevant perspective we might have missed. For instance, we know that the only region that is still a little bit poor in the book is Africa. The others are very well covered. And we actually created a form. If you type in your browser bits.ly slash DAIG, like Data and AI Governance, DAIG23 in capital letters, you will arrive directly on the form where you can also download this book for free. If you are allergic to Google Forms, which is something that may absolutely happen, you can even use another mini URL, bits.ly slash DAIG2023, where there is the direct downloading option from the IGF website without having to fill in any form. But if you want to provide us with comments, we are actually here to hear them. The book deals with three main issues: AI sovereignty, AI transparency, and AI accountability. I’m not going to delve into the transparency and accountability part, because we have a very large set of very good speakers that will explore the various details of these topics from very different perspectives. I’m just going to say two words on the first topic, AI sovereignty, which is actually an application, an implementation of what I have been working on with some colleagues from another project, the Cyber BRICS project, with regard to digital sovereignty over the past years. And the fundamental teachings of the past years have been of two types. First, there are a lot of different perspectives on digital sovereignty. A lot of people see this as authoritarian control or protectionism.
But also, there are a lot of other perspectives, including those based on self-determination and the fact that states or local communities or individuals have the right to understand how technology works, develop it, and regulate it. And there is nothing authoritarian in all this. And actually, it’s a right of all peoples in the world, according to Article 1 of not only the Charter of the United Nations, as we are in the United Nations context, but also the International Covenant on Civil and Political Rights, and the International Covenant on Economic, Social, and Cultural Rights. So it’s a fundamental right of everyone here to be the master of your own destiny, if you want, in terms both of social rights and governance, but also technology. And so the fundamental reflection of the first part of this book is about this. How do you achieve this? And in the chapter I’ve authored, I identify what I call the key AI sovereignty enablers, which are eight key elements that form a stack, an AI sovereignty stack. They go from data. So you have to understand how data are produced, harvested, how to regulate them. So data, you have algorithms, you have compute, you have connectivity, you have cybersecurity, you have electrical power. Because something that many people don’t understand is that if you don’t have power, you cannot have AI at all. You have to have capacity building, which is sort of transversal. And last, but of course not least, you have to have an AI governance framework based on risks, which is the main thing that we are actually trying to regulate. But I think that if we only regulate AI through risks, we only look at the tree and we miss the forest. Because there are a lot of other elements that interact and they are interrelated. So that is, in a nutshell, the first chapter. I was very honored to have Melody and her co-author, Ceaseless Nile, who was one of the former directors of the South African regulator, draft a reply on this framework with regard to South Africa. There is another one with regard to India. And then there are a lot of other very interesting issues analyzed by our distinguished speakers of today. So without wasting any more time, I would like to pass the floor to the first speakers. In this first slot of speakers, we have some more general perspectives and we delve into the generative AI part. And then we zoom out again into other transparency and accountability, more general issues. So I would like to pass the floor to Armando. I’m not going to list all the speakers now. I will present them each one by one because there are a lot. So first we have Armando Manzueta, who is Director of Digital Transformation at the Ministry of Economy of the Dominican Republic. Please Armando, the floor is yours.

Armando José Manzueta-Peña:
Well, thank you, Luca, for the presentation. I’m more than thrilled to be present here and to share with you some important insights regarding AI and how governments, for example, are trying to use AI, specifically Gen AI, to modernize their infrastructure and provision of public services. Well, how to begin with this? Few technologies have taken the world by storm the way AI has over the past few years. That’s something that’s a reality. Not even the blockchain revolution had this much impact on the world as AI has had. And its many use cases have become a topic of public discussion, not just for the technical community or the so-called tech bros; all people have been discussing how to implement AI one way or another. And generative AI in particular has a tremendous potential to transform society as we know it for good, give our economies a much-needed productivity boost, and generate public and private value, potentially in the trillions of US dollars in the coming years. Well, the value of AI is not limited to advances in industry and retail alone. When implemented in a responsible way, where the technology is fully governed, data privacy is protected, and decision-making is transparent and explainable, AI has the power to usher in a new era of public services. Such services can empower citizens and help restore trust in public entities by improving workforce efficiency and reducing operational costs in the public sector. On the back end, AI likely has the potential to supercharge digital modernization by, for example, automating the migration of legacy software to more flexible cloud-based applications or accelerating mainframe application modernization, which is one of the main issues most governments have. Despite the many potential advantages, many governments are still grappling with how to implement AI, and gen AI in particular. In many cases, public institutions around the globe face a choice. They can either embrace AI and its advantages, tapping into the technology’s potential to help improve the lives of the citizens they serve, or they can stay on the sidelines and risk missing out on AI’s ability to help agencies more effectively meet their objectives. Government institutions are already developing solutions leveraging AI and automation based on concrete insights into the technology’s public sector benefits, whether modernizing the tax collection system to avoid fraud and predict trends, or using automation to greatly improve the efficiency of the food supply and production chain, or to better detect diseases before they occur and prevent major outbreaks, such as the pandemic that we had before. Other successful AI deployments reach citizens directly, including virtual assistants and chatbots to provide information to citizens across many government websites, apps, and messaging tools. Getting there, however, requires a whole-of-government approach focused on three key main areas. The first one is workforce transformation, or digital labor. At all levels of government, from national entities to local governments themselves, public employees must be ready for this new AI era. While that can mean hiring new talent like data scientists and developers, it should also mean providing existing workers with the training they need to manage AI-related projects. The goal is to free up time for public employees to engage in high-value meetings, creative thinking, and meaningful work. The second major focus must be citizen engagement.
For AI to truly benefit society, the public sector needs to put people front and center when creating new services and modernizing the existing ones. There is potential for a variety of uses in the future, whether it’s providing information in real time, personalizing services based on the particular needs of the population, or hastening processes that have a reputation for being slow. For example, has anyone here ever had to fill in paperwork, or had to suffer through impossible lines or queues just to receive documentation, or had to repeat the same steps at several institutions just to receive the same service that they need? And the thing is, most of the governments, for example, don’t have interoperability or any sort of services just to exchange information freely. And it’s something that, with AI and other related infrastructures, we could be solving very quickly. The third one is government platform modernization. Governments are regularly held back from true transformation by legacy or ancient systems that are tightly coupled with workload rules that require substantial effort and cost to modernize. For example, public sector agencies can make better use of data by migrating certain technology systems to the cloud and infusing them with AI. Also, AI-powered tools hold the potential to help with pattern detection in large stores of data and to be able to write applications. This way, instead of seeking hard-to-find skills, government institutions or agencies can reduce their skill gap and tap into the evolving talent. Last but not least, no discussion of responsible AI in the public sector is complete without emphasizing the importance of the ethical use of the technology throughout the lifecycle: design, development, use, and maintenance, something which most governments have promoted for years, to put it simply. Along with many organizations that belong to the health care industry or the financial sector, for example, government and public sectors must strive to be seen as the most trusted institutions, because they hold most of the citizens’ data, one way or another. So if the citizens don’t trust the governments, how can they even trust all the institutions that exist in the same nation? That means that humans should be able to continue to be at the heart of the services delivered by government, while monitoring for responsible deployment by relying on these five core aspects for trustworthy AI: explainability, fairness, transparency, robustness, and, last but not least, privacy. When we talk about explainability, it means that an AI system must be able to provide a human-interpretable explanation for predictions and insights to the public in a way that does not hide behind technical jargon. In government, there are many trends and many conversations regarding algorithmic transparency, because that is the major aim: to reveal what’s in the black box, so everyone can see how an AI system works and how it was built, so we understand how it provides its insights, how it is deployed, and how it functions. The second one is fairness: an AI system’s ability to treat individuals or groups equitably depending on the context in which the AI system is used, countering biases and addressing discrimination related to protected characteristics such as gender, race, age, and other status.
Transparency: an AI system’s ability to include and share information on how it was designed and developed and what data from which sources have fed the system, which is something that I previously mentioned with explainability and is closely related to it. Robustness: an AI system must be able to effectively handle exceptional conditions, such as abnormalities in input, to guarantee consistent outputs. And last, privacy is basically the ability to prioritize and safeguard consumers’ privacy and data rights, addressing existing regulations on data collection, storage, access, and disclosure. This is why it’s important that, besides implementing AI, we also should be consistently improving and modernizing the frameworks that encompass everything related to data protection, because if we don’t have those rules in place, there is the possibility that many people, not just in the private sector but also in government, use the data that is stored in government databases to do harm, to use it as a political weapon, and many other things. So it’s important that we have strong data protection rules in place so the data isn’t used against the same citizens that the government is there to protect and to serve. Just to conclude, if AI is implemented in a way… I’m going to ask you to conclude quickly because we have a lot. OK, just a quick conclusion. If we implement AI, including all the traits mentioned above, it can help both governments and citizens alike in new ways. We can generate public value, but in a way that allows all the citizens to benefit from it and to build a future that we all want to live in. Thank you.

Moderator – Luca Belli:
Thank you very much, Armando. And thank you very much for giving us these initial inputs on the ideal that governments should strive for when they have to automate their systems and implement AI. And now I would like to give the floor to Gbenga, who might have a more critical and less idealistic perspective. And it’s very good to have both these perspectives to try to synthesize our own opinion. Please, Gbenga.

Gbenga Sesan:
It’s like you framed my conversation already. I’m glad we’re having a lot of conversations around AI. This is my second panel on AI today. Thankfully, this is more focused on generative AI and data protection. But I think one of the advantages of having such conversations over and over is that you get to tease out all the points and ask the questions. And what I want to do, so that you don’t have to say I should conclude, is to speak very quickly to three things. One is in terms of policy. The other is in terms of people. And if I have more time of my six minutes, I’ll conclude on practice. And by policy, I mean that we already, in many cases, have data protection regimes in many countries. There are countries that still don’t have data protection regulation. Of course, this presents an opportunity for them to have this conversation within the context of massive data collection and processing for AI. But for those who have, it means that this is also a chance to have a review. And I say this as an African who is excited that now, finally, the Malabo Convention has been ratified by as many countries, so it’s in force. But I’m also concerned that it happened so late that the text of the Malabo Convention is, to say the least, outdated. And of course, there have been calls for reviews. There are countries that are literally just ignoring the fact that they have more recent policies on the subject. So I think in terms of policy, we need to have a conversation about how to make sure that existing data protection policies are useful as we have this conversation about massive data collection and processing. People are putting in their data, and it’s being processed. And that takes me to my second point, of people. I work in civil society, and that means that much of my work is centered on people. And it means that when we have all these conversations over the last year, I mean, November 30, oh, actually, it’s just a month away. So November 30 is the birthday of ChatGPT, as everyone knows. So it’s been one year. And there’s been a lot that’s happened since then. But at the center of all this is people, the data owners themselves. I’ll give a very simple example. When ChatGPT came, a lot of people were just typing and typing, because, don’t forget, many times the reason why people engage with either social media or new platforms or new technology the way we do is that, for many people, it’s literally magic. You know, you put in where you’re going, and then the map tells you how to get there. And it tells you there’s going to be traffic. And it’s almost like magic. But the problem is that many times people don’t understand that when they put in their data, that’s the input that is being processed. The output is what you get. But the input is also important. So I think in terms of people, we need to have a conversation around demystifying AI, which is one of the reasons I’m glad we’re having all these conversations over the last two or three days, for people to understand: when I put in data, I’m training the system. When I ask questions, the response I’m getting is based on what input has already been given. Of course, that goes to the need, and we talked about that a bit earlier today, to make sure that in modeling AI there is diversity. This is not about tokenism. This is real diversity. Otherwise, we’re going to build systems that don’t understand context and are going to cause more problems than they solve.
And finally, it’s on practice. And I think this is where the data protection commissions come in. Hopefully, data protection commissions that are independent already understand the need to have conversations with various stakeholders. And the practice is: what happens if something goes wrong when I’m using any platform or system that is powered by artificial intelligence? You know, someone shared an article with me a few days ago. It was supposed to be an article about myself, but I read the article and I was confused. Because at the beginning, it was accurate, and then it gave me a master’s degree that I don’t have from a school I haven’t attended. And then it said I was on the UN high-level panel on digital cooperation, which is very close, because, you know, I’m on the IGF Leadership Panel, but not the one on digital cooperation. And this is quite tricky. And this, by the way, is one area of criticism from me, to say: what happens when I use this and something goes wrong? Who do I talk to? And I think this is one place where the institutions that already answer questions on data protection can come in. So I’ll close it here and say that it’s really important that we center this on people. But apart from saying that, there’s a need to review policy when necessary. People are at the center of this. And when it comes to practice, what do I do when something goes wrong? Who do I talk to? We need to demystify this black box. Fantastic, Gbenga.

Moderator – Luca Belli:
I really like this trilogy of policy, people, and practice. Actually, while you were speaking, I was thinking that, in the best-case scenario, in most countries we have some sort of policy, but the people part is almost non-existent. Even in the countries that have had data protection for 50 years, like in Europe, most people would not be aware of their rights, let alone in the developing world. And the practice part is still something pretty much non-existent everywhere. All right, on this initial energy and optimism, let’s get to the third speaker of this first round, Melody Musoni. Please, Melody, the floor is yours.

Melody Musoni:
Good afternoon, everyone. Thank you, Luca. I’m happy that you are bringing up these issues around data protection and how laws can help with regulating AI. And I’ve been following a couple of discussions around AI policy and regulation. And I keep on wondering, what exactly do we want to regulate here? Because when we look at law, it is quite vast. There are different areas of law. Are we looking at it from a liability perspective, delictual liability, criminal liability? Are we looking at intellectual property issues, data protection? There is a myriad of issues that I think, when we have these discussions around AI policy and regulation, we need to keep at the back of our minds: what exactly do we want to regulate? Are we regulating the industries? Are we regulating the types of partnerships that we may end up having? Or is it just going to be specifically data protection? And I’m sure some of our speakers will speak on the limitations that we have with data protection laws. And coming to my section of the chapter we wrote on South Africa, what we did was we looked at the framework that Luca spoke about earlier, looking at how these key AI enablers can actually apply within the South African framework, and hopefully that can also be replicated across other African countries. And I’m just going to touch on four important key findings from the research that we have already conducted for South Africa. And the message that we are getting throughout is that there is the need for AI made in Africa to solve African problems. So when you go through some of the policy frameworks at the African Union level, for example the digital transformation strategy or the data policy framework, that is the message we keep getting across: that there is that urgency for Africa to start looking into AI and innovation to actually develop African solutions or homegrown solutions to deal with African problems. And then the second key point I want to emphasize in looking at South Africa is the issue of computational capacity and data centers and building the data and cloud market in Africa. So you understand, of course, that AI development would depend more on the availability of computing infrastructures to host, to process, and to use data. And with South Africa, what we have noticed is that there are efforts to actually improve on its computational capacity. There have been discussions about having as many data centers within the country as possible, and the private sector, the likes of Microsoft and Amazon, have actually been working closely together with government to make sure that there are data centers on the continent, in South Africa. So the vision for the country is not just to have data centers in South Africa to cater for businesses and government in South Africa, but also to become a host, or to attract other African countries to actually host their data within South Africa. And there was a draft policy that was published sometime in 2020 called the National Data and Cloud Policy, and that policy seemed to actually point towards a direction where South Africa wants to make sure that locally owned entities are active in the data market, promoting local processing and local storage of certain types of data. And as you can imagine, like with data localization, it’s something that is not so popular. So there has been pushback from different stakeholders. And now, as I understand, there has been an update on that draft policy.
The updated version is yet to be finalized and released. But what we anticipate is that this revised data and cloud policy will focus more on better regulation of foreign-owned infrastructure instead of indigenizing all existing infrastructures, while also promoting public-private partnerships. And the third point I also want to speak on, which also supports this notion of AI sovereignty for Africa, and for South Africa in particular, is the commitment towards AI skills development. So again, what we are getting from going through the fragmented policies is that South Africa is hoping to build its own pool of AI experts to research and develop AI-driven solutions to address some of the problems that it has. And there are different programs, starting from basic primary education level all the way through to university levels, which are focusing on STEM subjects as well as AI-related subjects. Of course, the question would be, how long are these initiatives going to take to be actually implemented? Most of them are still strategies and plans that are yet to be actually implemented. So it’s still a long process. And the last point I want to make is the need to have an AI strategy. The country doesn’t have a clear AI strategy or an AI policy, but I would like to say, or to think, that it’s important for countries to first prioritize, like Gbenga said, data protection issues before you rush to have an AI strategy or an AI policy or law in place. So starting from the low-hanging fruits: we have data protection laws; are they adequate enough to address some of the data processing activities? Do we have cybersecurity and cybercrime laws? To what extent do they cover issues like deepfakes, if someone is going to commit a crime using AI technologies? To what extent are the existing legal frameworks that we have adequate? Are these legal frameworks addressing some of these issues? And of course, just to finalize, there are challenges that the country and other African countries are facing, and are likely to face, in the development of AI systems and even with data processing. The issue of power outages and unreliable power supply in South Africa is now a very big problem. Almost every day there are electrical outages and load shedding, and it’s been said that it’s going to run for a period of two years. So imagine you rely on electricity, and already the amount of time you spend online is going to be cut short because there is no electricity. So that’s also a challenge that the country is facing. The second challenge, and I think it applies to all other digital projects, is the issue of meaningful connectivity. Yes, there has been massive deployment of different digital infrastructures, and now we are moving to 4G and 5G, but still about 16 million people are unconnected in the country. And then there is also the need for stronger cybersecurity. So there are laws on cybercrime, there are laws on protection of critical infrastructure, but there is still no strategy specifically to deal with cybersecurity. And the last point, on implementation of the laws that we have, especially data protection laws: there’s always going to be that challenge that our data regulators will not have the capacity, and even the expertise, to understand some of the AI tools that are in place, to be in a position to actually assist with implementation and enforcement of the laws. So those are my thoughts.

Moderator – Luca Belli:
Thank you very much, Melody, and also for stressing how these issues are interconnected, and how many of the most relevant ones are infrastructural issues. In particular, I would like to stress something that you mentioned about compute and cloud computing: there are actually three main corporations that hold almost 70% of the cloud computing market, Google Cloud, AWS, and Microsoft Azure, plus a small share for Chinese corporations, a little Huawei and a little Alibaba. So basically the entire world relies on five corporations to do AI and generative AI. That is a huge challenge, because even if you want to find an alternative, it is an investment that takes ages: a ten-year investment, in the best-case scenario, to have something minimally reliable, and no government is in charge for ten years or has the vision to plan something over ten years. So it is really something worth thinking about. All right, this is now the moment for the first break for questions, so we can take two questions and then get into the second segment of the session. If you have questions, you can line up; yes, you can raise your hand, and there is a mic there for questions. We can take two, have a quick round of replies, then get into the second segment, and we will take more questions at the end. All right, we have one there, and I see two hands there. If you could use that mic, introduce yourself and mention who you are.

Audience:
Thank you very much. Hi, my name is Shuchi. I work for Nationality for All, which is basically an organization that deals with nationality rights. My background is not really in AI, which is why I was so interested in this conversation, because I really wanted to understand the question that this panel proposes: whether generative AI can be compatible with data protection. I understand the challenges we have all been speaking about, and those have been deeply insightful, but for the second phase of this panel I would be super interested to know whether there are frameworks, or any ways in which this has actually worked in particular regions. Again, my background isn’t in AI, so I was really curious to know, because it is very much in line with statelessness and nationality.

Moderator – Luca Belli:
Yes, rest assured that in the second segment we will speak about this. That was the quick reply to your question. So maybe we can have another one, an extra one, if there is; that was a very fast reply. Another one, yes.

Audience:
Hi, my name is Pranav. I’m a technology law and policy professional, and I also had the opportunity of contributing to this report with a paper on generative AI, thinking about privacy principles. The speaker also mentioned why there is a need to ensure data protection within Gen-AI platforms. My question for everyone on the panel and in the room is: what are some of the key privacy principles, at a normative level, that should be ensured so that these Gen-AI platforms can comply? I have teased out this question by identifying 17 of them in my paper, and this is just the first step, to seek inputs at this global forum. I would then like to test those principles by deploying them on around 50 use cases and make them better. So if, at a normative level, you have any ideas about key principles that should definitely be there, that level of consensus building would be really helpful. Thank you.

Moderator – Luca Belli:
Fantastic. And yes, let me also mention that we have 24 chapters here, with almost 30 authors. Given the time constraints, and also space constraints, we were not able to have everyone; we plan to have webinars where everyone can present and get feedback. And anyone else who wants to comment, or who even has an answer: we want to have a conversation here in this segment, so if anyone from the audience wants to give a reply, you are very welcome to do so. Then we will have feedback from the panel.

Audience:
Thanks a lot, Luca, for giving me the floor, and thanks to the previous speakers. First, I would like to thank you: it is very good to hear the voice of the southern countries, and that is very important. As regards the problem of AI and data protection, that is a very big question, and I have worked hard on that problem. It is quite clear that AI puts into question a certain number of data protection principles, and I would like to have your feeling about that. First, the question of finality, the question of purpose: normally you must have a determined purpose, and with generative AI systems you no longer have the possibility of a specified purpose. The second problem is the question of minimization. It is quite clear that it is totally contrary to how AI functions: AI works on big data, and you do not know, a priori, which kind of data will be interesting and pertinent for achieving your purpose. Another problem, and you have mentioned that, is the problem of explainability. It is very difficult to make an AI system explainable, because there is no logic; as Vint Cerf said, you are working on correlation and not on a certain logic, so you have no logic at all. I have other problems, but we might come back to this issue. Beyond the problem of personal data, it is quite clear that AI works more and more on non-personal data, and uses that for profiling people. So it would be absolutely necessary for data protection legislation to enlarge its scope.
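[Not part of the session: a minimal sketch to make the speaker’s first two points concrete. It shows the kind of purpose-limitation and data-minimization gate a conventional compliance pipeline can apply, and why a general-purpose model, which has no single declared purpose and no a-priori set of "necessary" fields, gives these checks nothing to test against. All names and purposes here are hypothetical.]

```python
from dataclasses import dataclass

@dataclass
class Record:
    fields: dict           # attribute name -> value
    declared_purpose: str  # purpose stated at collection time

# Hypothetical mapping: fields deemed necessary for each declared purpose.
NECESSARY_FIELDS = {
    "billing": {"name", "address", "invoice_id"},
    "support": {"name", "ticket_id"},
}

def admit(record, processing_purpose):
    """Admit a record only if purposes match, then strip unnecessary fields."""
    if record.declared_purpose != processing_purpose:
        return None  # purpose limitation: reuse for another purpose is refused
    allowed = NECESSARY_FIELDS[processing_purpose]
    kept = {k: v for k, v in record.fields.items() if k in allowed}
    return Record(kept, record.declared_purpose)  # data minimization applied

r = Record({"name": "Ana", "address": "X", "hobby": "chess"}, "billing")
print(admit(r, "billing"))   # minimized record survives
print(admit(r, "training"))  # None: a general "training" purpose matches nothing
```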

Moderator – Luca Belli:
All right, these are very good questions. Do we have initial replies from the panel? Melody, yes, you can go first.

Melody Musoni:
I agree with you: we have more questions than we have answers. Looking at the Protection of Personal Information Act of South Africa, it provides a framework that says, when it comes to automated decision-making processes and profiling, these are the conditions that have to be met, and it also sets out the basic data protection principles on transparency, data minimization, data subject rights, and purpose limitation. The principles are there, but I think application is where the problem is. It is much easier to say: in this context, this is the principle on processing of personal data; you need to know the purpose; you need to be very transparent, especially with facial recognition technologies; can data subjects exercise their rights? So in principle, in theory, the principles apply, but when it comes to practice, especially with generative AI, I think we have more questions. That is why I was saying that even with our data regulators, there needs to be a level of expertise, someone with more technical knowledge of the technical side of AI, so that it can be translated into the legal side. In my opinion, there are more questions than answers.

Moderator – Luca Belli:
Armando has an answer.

Armando José Manzueta-Peña:
Actually, like you said, there are many questions still to be solved, to be answered, regarding AI uses, but I think this applies to most systems and the use of data. For any system, on any platform or any technology, its quality will depend on the quality of the data the system has been fed. And if we don’t have the proper protections in place, and we don’t have data that is properly collected and properly minimized, then the system will of course profile the person, the company, or the subject itself in a way that doesn’t necessarily translate into reality or provide a solution to a certain problem. So in that case, besides having strong data protection rules, there should also be strong data collection and data validation regarding the quality of the data itself, in order for AI, or any system, to provide a proper solution or be of any help at all. And that is the main challenge that we as governments have, especially in developing nations, because having data of good quality, good administrative registers, is the main issue we are facing right now, just to give this any use.
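[Not part of the session: a minimal sketch of the data-validation gate Armando describes, run before records from administrative registers reach any AI system. The field names, sources, and thresholds are hypothetical.]

```python
def validate(record):
    """Return a list of quality problems; an empty list means the record passes."""
    problems = []
    if not record.get("national_id"):
        problems.append("missing identifier")
    age = record.get("age")
    if age is None or not 0 <= age <= 120:
        problems.append("implausible or missing age")
    if record.get("source") not in {"civil_registry", "census"}:
        problems.append("unverified administrative source")
    return problems

records = [
    {"national_id": "A1", "age": 34, "source": "civil_registry"},
    {"national_id": "", "age": 230, "source": "web_scrape"},
]
clean = [r for r in records if not validate(r)]
print(f"{len(clean)} of {len(records)} records pass validation")
```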

Moderator – Luca Belli:
Okay. This provides us a very good segue to the second segment of the session. So let me give the floor to the regulator. We have Jonathan Mendoza, who is Secretary for Data Protection at the National Institute for Transparency, Access to Information, and Protection of Personal Data of Mexico. Please, Jonathan, the floor is yours.

Jonathan Mendoza Iserte:
Thank you, Luca. Good afternoon. How are you? I want to thank the organizers for bringing this topic to the table, especially Luca Belli, a leader in the Latin American region. Data governance and trust have become a crucial topic, and we find ourselves at a critical juncture in the history of technological advancement. Artificial intelligence is rapidly evolving, offering boundless potential for innovation, growth, and improvement in our daily lives. But in the same way, we must also recognize the challenges it poses for its regulation and ethical use, and the importance of promoting AI transparency and accountability. In the Latin American region, steps have been taken toward regulating artificial intelligence. However, we must remember that the region is very diverse and has technological deficiencies that allow only some sectors and groups of the population access to technology; therefore, closing the digital divide is a primary task. Even though there are some exercises that form part of the efforts to regulate artificial intelligence, there still needs to be a full instrument dedicated entirely to it. In 2019, the member authorities of the Ibero-American Data Protection Network issued general recommendations for processing personal data in artificial intelligence. Also in the region, there seems to be a trend towards the ethical use of technology, but how can we ensure that algorithms are fair if they are not accessible to public scrutiny? How can we balance the ethical design and implementation of AI? Artificial intelligence can contribute enormously to the transformation of development models in Latin America and the Caribbean, to make them more productive, inclusive, and sustainable. But to take advantage of its opportunities and minimize its potential threats, reflection, strategic vision, and regional and multilateral regulation and coordination are required. According to the first Latin American Artificial Intelligence Index in 2023, Argentina, Brazil, and Mexico are regional leaders in participation in international spaces to influence the global discussion on AI. In the global context, according to the McKinsey Global Institute, the use and development of AI across multiple industries will bring mixed economic and labor results. Estimates for 2030 put the impact of AI on the global economy at around $13 trillion, with a contribution of about 1.2% to annual global GDP, up to $15.7 trillion in additional income added to global GDP, and 45% of the benefits of AI going to finance, healthcare, and the automotive sector. As Chris Newman, Oracle’s principal engineer, said: as it becomes more difficult for humans to understand how AI tech works, it will become harder to resolve inevitable problems. In our interconnected world, multilateralism plays a key role, because AI knows no borders, and international cooperation is not just beneficial but imperative. We must ensure that AI respects fundamental rights, with a human-centric approach, avoiding biases. The paper I co-authored with my colleagues Nadia Garbacio and Jesús Sanchez is a proposal to start a debate on AI in the Latin American region. We propose the creation of a dedicated mechanism that contributes to AI-related matters. Cooperation and strategic alliances with the Organization of American States will help us achieve this goal.
To facilitate the implementation of this proposal, it is suitable to create a committee of experts that analyzes and agrees on the importance and urgent need to contribute, through non-binding mechanisms, to the situation regarding the use and implementation of existing and yet-to-be-developed disruptive technologies, given the risks they could imply for the private life of users. The objective of this committee of experts must be built on goodwill and on the exchange of knowledge and good practices that promote international cooperation based on multilateralism and the opportunities it offers us to strengthen the protection of human rights, joining efforts with other international organizations that have also spoken out on the matter, as well as with groups of economic powers that have shown their concern about this panorama of the new digital age. The work of this committee will be based on a mechanism that will seek to analyze specific cases, issue recommendations, provide follow-up, and develop cooperation tools. Let’s be part of the conversation to maximize the benefits of AI for our societies while minimizing its potential risks. We must remain committed to fostering international cooperation, as well as strengthening these efforts, to ensure that AI serves humanity’s best interests.

Moderator – Luca Belli:
Thank you very much, Jonathan, and let me also stress that INAI has been doing a lot of excellent work, both in terms of policy experimentation and international cooperation, in trying to put forward recommendations on how to work with and regulate generative AI. Staying in the Latin American region, I would like to ask Camila, who has also been one of the minds behind the construction of this group since April, to provide us a quick overview of what’s happening in Brazil.

Camila Leite Contri:
Perfect. Thank you so much, Luca, for the invitation, for the creation of the group, and for all the amazing work that you do at FGV; it is also a pleasure to be here with you. Considering that I’m from Brazil, and also from a consumer organization in Brazil, I would like to focus on that. We are talking about data privacy, but as Melody mentioned, we are not only talking about data privacy; there are several other rights that we have to consider. So I’m going to talk first about the general risks we face with the challenges of generative AI; second, about the laws that might interconnect here, focusing on data protection but also on consumer protection; and also a little about the Brazilian context in terms of legislation and ways ahead. AI has lots of possibilities. IDEC, for example, works on financial services, mobility services, and health, and all these areas can benefit from AI and generative AI. But as we can see, there are two sides: we have both an opportunity and a challenge in dealing with this, especially because innovation moves at a speed that regulation does not follow. That is why it is also important to think about current legislation that has to be applied when we face these issues. There are some general risks that you are tired of hearing about: issues related to power, wrong outputs, the use of this technology to manipulate people, bias, discrimination, privacy vulnerabilities. And we also have a challenge here, coming from a Global South country, and this is a Global South table: dependence. We are talking about how to protect people, yet we rely on other countries and other technologies. How can we build sufficient power here? It is a great challenge that, obviously, I don’t have an answer to, but I hope we can build on it. Another important thing is the techno-solutionism that this kind of technology brings, because when we adopt it, we disregard the context, and that is the reason I want to talk more about Brazil. But before talking about Brazil and the different laws, I would also like to raise the issue of concentration of power. When we talk about generative AI, of course we think about ChatGPT, but we are not only talking about ChatGPT. In the Global South we depend not only on foreign companies but on big techs, and we know that these big techs can bring lots of solutions, but also lots of abuse, considering that they dominate the market. That is why it is important to consider not only data protection law, which of course is extremely necessary, but also consumer law, to protect people in the end; we are putting people in the center, and these people are also consumers. We are all consumers. And competition law as well, to face that. So the first law that we have to comply with, which already exists and which we have to enforce, is competition law. The second one is data protection, as we have been mentioning, and to develop that I will talk about a case in Brazil that was brought by a very well-known person in Brazil, Luca Belli. And third, consumer rights also have to be respected: we are talking about transparency and access to information, which are basically traditional consumer rights. Beyond that, we also have IP law, of course, copyright, but I’m not going to focus on that. Okay, talking about Brazil.
Brazil is a huge market, not only in general terms but also for AI: Brazil is the fourth country in the use of ChatGPT. So it is a concern that we have to consider, and since it is a concern, I’m going to spend a little more time talking about the petition that was presented to the data protection authority in Brazil by Luca about ChatGPT not complying with the Brazilian data protection law. I’m going to focus on the rights that were requested in this petition. The first is to know the identity of the controller of the data; this is a minimum thing to know. The second is to access all the personal data that relate to the affected person; this is about self-determination, and as Luca mentioned, this is not only a data protection right but a human right in the end. The third is the right to have access to clear and adequate information on the criteria and procedures used in the formulation of automated responses. Luca brought these three topics, but everyone is affected by them, not only in Brazil but in other countries too. This kind of complaint could also have been brought by the consumer authority, because in the end we are talking about access to information. So this is a provocation for you as well: we have to think about how we can advance on this, not only in Brazil but in other countries. Unfortunately, I have some bad news: the data protection authority didn’t go forward with this process, which I think is not only sad but absurd, and I hope the authority can advance on it, because it is an important issue. Nowadays the data protection authority is running a consultation on an AI sandbox, but when cases like this are brought, when Luca brings a case like this, they don’t advance on it. I don’t know why. The second context, which Jonathan also brought up, and I’m asked to wrap up in one minute, okay, just one minute: the network of authorities in the Ibero-American region is also focusing on ChatGPT, on issues of legal bases, exercise of rights, and transfers of data, which is interesting because the data protection authority in Brazil is also present there. We have to comply with existing laws, but we can also advance on future frameworks, as you were mentioning. In Brazil we also have a bill on this, and we hope to advance on it, but meanwhile we have to comply with existing laws. Thank you, and sorry for running over. Thank you very much.

Moderator – Luca Belli:
Just a very brief comment, because the case she was mentioning concerns me personally. It is also very frustrating to note that even when there are laws and rights in place, every law needs elements of flexibility, so as not to regulate technology too strictly and to allow technological advancement. But when there are flexibility clauses, such as what counts as adequate information about how your data are processed, or adequate information about the criteria according to which your data are used to train models, that is the moment when the regulator has to enter the game. "Adequate" is the favorite word of lawyers, together with "reasonable", because you can charge hefty prices and fees to your clients to debate what is adequate or reasonable. But the role of the regulator is precisely to tell enterprises and people what is adequate and what is reasonable, and it is a little frustrating when regulators do not do it, and when some very curious data scraping practices by some corporations are perhaps considered adequate or reasonable, because those are very hard to believe and to think of as reasonable and adequate practices. Anyway, not to get into very personal matters. I would like to ask if our online panelists are online. Can you hear us? Is Wei Wang connected? Wei, can you hear us? Sure. OK. So actually we have an example where generative AI has already been regulated: China has just issued some specific rules on it. It is quite interesting to understand the situation in China with regard to the regulation of generative AI and data protection. So please, Wei, the floor is yours.

Wei Wang:
Thank you so much, Luca, as always, and thank you for having me today, at least virtually. It is very nice to meet quite a few new and old friends, at least virtually. As per the content of our report, I think I am supposed to share some Asian perspectives on regulating artificial intelligence in the first place. Since I came back from Latin America to Asia, I have attended quite a few events, both online and in person, and I happen to find that quite a few Asian jurisdictions are cautious in regulating AI. They prefer to let ethical frameworks go first rather than making hard law come first, and they also prefer minor steps, what we call precise regulation. For example, in Singapore the governance model prefers a light-touch and voluntary regulatory approach for AI; basically, it aims to use AI as a tool for economic growth and for improving quality of life, but it also acknowledges that Singapore might have to adapt to existing global frameworks instead of creating new regulations in isolation. I distinguish those Asian jurisdictions from others like the EU, Brazil, the UK, and the United States. As all of us know, the EU and Brazil are adopting comprehensive acts or bills, the UK model is based on a pro-innovation idea, so far at least, while the United States seems to stick to the liberal market idea. In contrast, China has a sector-specific approach, for instance in the areas of recommendation algorithms, deep synthesis technology, and generative AI, as Luca has mentioned. As some at the FPF, I mean the Future of Privacy Forum, have argued, data protection authorities are becoming sort of default regulators for AI in this time gap. In the case of China, the PIPL, the Personal Information Protection Law, plays this role as well: articles such as 24, 27, and 55 are clearly relevant to regulating automated decision-making and facial recognition. And the newly established Interim Measures on generative AI basically highlight the importance of ensuring that the data used and the underlying models come from legitimate sources, in compliance with the relevant laws and regulations as regards IP and data protection. But things, it seems, are becoming more interesting, as quite a few jurisdictions are considering a big change in this sort of regulatory model. For example, in both the United States and China, as you may already be aware, the recently proposed bipartisan framework for a US AI act advocates a regulatory focus on legal accountability and consumer protection, proposing a licensing regime administered by an autonomous oversight entity. Similarly, in China, a research group at the Chinese Academy of Social Sciences, of which I am currently an invited member, drafted a model AI law proposing a negative-list-based and risk-based approach to governing AI. There are some similarities with the US framework, but there are also some nuances. Generally, the model law introduces the principle of accountability, cataloguing the entities along the value chain and assigning duties and responsibilities in terms of retention and disclosure, from manual assistance to data disclosure or data sharing, with the institutional intent of fostering a transparent system. That being said, some of the jurisdictional perspectives are reaching a consensus as regards AI governance.
But this also requires continued comparative studies, for example of the models and approaches on both sides. These new developments basically highlight the responses of jurisdictions to the challenges of AI, with a focus on accountability principles, tailored obligations, and proactive technology design. Camila mentioned techno-solutionism, but it is still essential to seek an implementable, operationalizable approach. As I mention in our chapter, the reason lies in the need for a long-standing balance between adaptability and regulatory predictability, to ensure effective end-to-end governance within the dynamic AI landscape. We will definitely keep coming across the question of regulation versus innovation, and I think our DC is a perfect place to achieve this goal. In this regard, I look very much forward to continuing the collaboration within and beyond the group in the near future. I think that’s all from me today. Thank you for having me here virtually. I will hand it back to you, Luca.

Moderator – Luca Belli:
Thank you very much, Wei. And this is a good segue to the last speaker of this segment, Smriti Parsheera from India. Smriti, can you hear us? Are you connected? Yes, I can hear you. So Smriti is going to broaden our perspective a little with some concrete cases from India, and then we can expand on this in the last segment. Please, Smriti, the floor is yours.

Smriti Parsheera:
Thanks so much, Luca, and hello to everyone in the room and online. As Luca mentioned, I’m going to be a little broader than the suggested topic, which is more specific to generative AI. My intervention in this book addresses the question of transparency and the interpretation of what transparency should really mean in the AI context. This is a term which is now well regarded, well accepted in most AI strategies. India also has an AI strategy, and it includes the principle of transparency among others. It is also a principle that is reflected in different ways in data protection law. India very recently adopted its data protection law, and the philosophy of transparency comes through when you think about processes like notice and consent, access to information, correction, and redress facilities. So all of this speaks in some way to transparency, and very often in the AI context, transparency is connected with explainability and accountability. What I do in this intervention is argue that when we think about transparency in the AI context, the tools and even the discussions are very much about the technical side of transparency: algorithmic transparency, transparency of the model itself. The paper argues that we really need to step back and take a broader lens, because we know that a number of actors are typically involved in any AI implementation, and therefore transparency, like every other principle you see in AI principles, should permeate the entire life cycle of the project. In this paper I specifically identify three layers, mostly in the context of large-scale, public-facing applications, and I take the case study of one such application in India: facial recognition systems for entry into airports, something being seen across the world; in many other countries you see similar systems. The argument of the paper is that there are at least three layers of transparency to think about. The first is policy transparency: how did this project come about? Is there a law backing it? Who are the actors involved? Which government departments and ministries took this decision, and through what open and deliberative process? The second is technical transparency, the better-understood questions about transparency of the model: what kind of data was used, who designed the code, what does the code do, how well does it work, et cetera. The third is operational and organizational transparency, which is really about the entity that finally gives effect to this: how does the system work on a day-to-day basis? What kinds of failures are you seeing? What accountability mechanisms exist for this entity, and who exactly is it answerable to? Is it answerable to the parliament, to the public? What are the mechanisms for transparency within this body? I then apply this in the paper. I’m not going to go into great detail on the findings due to paucity of time, but there are three broad observations that I made. One is that transparency in the policy sense cannot just be about imparting information to the public about the existence of such systems; it has to be more deliberative about why we are bringing this in, and whether we should bring it in the first place.
The second point is that there is a culture of third parties working with the government, whether as philanthropies, think tanks, or consultants. There is a need for transparency not just about who developed the code and whether the procurement process was transparent, but even about how these ideas came about; there is a need for transparency at a deeper level. And finally, tools of transparency: very often, if you have entities outside of the public sector, private sector or nonprofit bodies, running these systems, will the tools of transparency, which take the form of right to information laws in India, for instance, apply to these entities? And we see in the case study here that the design does not enable the application of the transparency and public disclosure obligations that a public body would be faced with in this particular structure. So I’ll stop with that. And people in the room, I would love to hear your comments if you have any later. Thank you, Luca.
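[Not part of the session or the paper: a minimal sketch of one way Smriti Parsheera’s three transparency layers could be encoded as a disclosure checklist for a public-facing AI deployment. All field names and values are hypothetical.]

```python
# An audit walks all three layers, not just the model card.
from dataclasses import dataclass, asdict

@dataclass
class PolicyTransparency:
    enabling_law: str | None      # law backing the project, if any
    deciding_bodies: tuple        # departments/ministries that decided
    public_consultation: bool     # open, deliberative process?

@dataclass
class TechnicalTransparency:
    training_data_described: bool
    code_authorship_known: bool
    accuracy_reported: bool

@dataclass
class OperationalTransparency:
    operating_entity: str | None  # who runs the system day to day
    failure_reporting: bool
    rti_applicable: bool          # do right-to-information laws reach it?

def gaps(layer) -> list[str]:
    """List fields that are falsy, i.e. undisclosed or unmet, in one layer."""
    return [k for k, v in asdict(layer).items() if not v]

airport_frs = OperationalTransparency(
    operating_entity=None,        # run by a body outside RTI coverage
    failure_reporting=False,
    rti_applicable=False,
)
print("operational gaps:", gaps(airport_frs))
```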

Moderator – Luca Belli:
Fantastic, Smriti. Now we have to take a series of actions in the next five to ten minutes, because we will give participants the possibility to ask questions. At the same time, the speakers of the initial two rounds will move to the first row of chairs, and the speakers of the last round will move to this part of the table, because for organizational purposes the speakers have to be here. So if you have questions in the room, please, this is the moment to ask them using the mic there. We have questions from, oh, yes, sorry. Let me also thank Shilpa Singh, who is our remote moderator; you can take the mic and ask the question from the remote participants.

Audience:
There’s a question from Mr. Amir Mokaberi. He’s from Iran, and his question is: could shaping a UN convention on artificial intelligence help to manage its risks and misuse at the international level? Do geopolitical conflicts and strategic competition between AI powers allow this? And what is the role of the IGF in this regard?

Moderator – Luca Belli:
That’s a very, very open question. I don’t know if the new set of panelists has any ideas on this. My personal take is that it will take a lot of time before we have international agreement on any international regime on AI, and that is precisely the reason why many tech executives, or at least some of them, may be advocating for an international regime: they know very well it will take between seven and ten years to be developed and maybe start being slightly meaningful. I don’t know if we have other opinions here on the panel about international organization. Actually, I think this is a very good connection with Michael Karanikolas’s paper, because, coincidentally, he is the first speaker of this last slot and he has written an excellent chapter in this book about exactly this topic. So Michael, no one better than you to reply to this and start presenting your paper, please.

Michael:
Thank you. I’ll start by echoing the other panelists in thanking Luca; I’m amazed at how quickly this has come together, and with such a great group of authors. My paper focuses on emerging transnational frameworks for AI that are being developed under the auspices of a handful of powerful regulatory blocs, namely the US, the EU, and China, and examines the implications of this trend for the emerging AI governance landscape. I’m going to have to go through this very quickly, so I won’t go too deeply into the paper. But just in response to the question about a potential UN framework: I discuss the OECD framework as well as these different structures, and I think there is a broader tension between the value, benefits, and efficiencies of harmonization, and the tendency of harmonized standards, whether at the UN level or via the Brussels effect or the California effect or whatever, to trample over important local contexts, not only in terms of the needs of populations being impacted by AI, but also, at a more basic level, in terms of how harms are framed and the assumptions and prioritizations inherent in any legislative framework. I argue in my paper that there is a challenge in trying to develop a harmonized structure: it is going to fail to take into account diverse populations, particularly when the people who tend to have a seat at the table in the early development of these standards tend to be from wealthier parts of the world. So I explore that tension. I’ll caution that it can also be overly reductionist to view this dynamic purely in Global North and Global South terms; there are a lot of different dimensions to this. But ultimately, I say that as frameworks begin to coalesce into transnational standards, it is important to query whether they actually represent the needs and concerns of those on the sharpest edge of technological disruption, and whether the development and harmonization of these standards has the potential to further entrench inequities on a global scale. So that’s a two-minute version of my paper, and I’m happy to chat further if folks have questions.

Moderator – Luca Belli:
Fantastic, Michael, for providing both a reply to the question and the presentation of your paper. I guess you also have a question. Yes, I think we can do this: we can take this question and then go through the presentations, and it will be the first question to be answered at the end of the presentations. OK, yes, please go ahead.

Audience:
I just wanted to build on what Michael just said. My name is Michael Nelson; I work at the Carnegie Endowment for International Peace in Washington. One of my colleagues is Anu Bradford, who wrote the book The Brussels Effect and now has a new book on digital empires that covers some of the same territory. I look forward to spending more than two minutes with your ideas. Anu and I have a friendly debate about whether the Brussels effect sometimes becomes the Brussels defect. One part of it is what you just said: other countries are taking European language designed for a European legal system and putting it in a place where it doesn’t really work. But a more important problem, particularly with the AI Act, is that they’re writing law that is, I think, way too premature. They haven’t even really got a definition of what AI is. I’m a physicist, not a lawyer, but when I was working on Capitol Hill, the first thing we did was get the definitions right: not just defining what you’re regulating so you can have a box, but defining what you’re not going to regulate. So my question, for anybody who wants to take it, is: how do we avoid this problem of imposing these aspirational goals on a vague field of technology that will be totally different in 18 months?

Moderator – Luca Belli:
Thank you very much, Michael, for this excellent comment. As we started with a 10-minute delay, we might have a margin of 10 minutes at the end. So we can move quickly through the last round of very short presentations. The next one will be Kamesh Shekar.

Kamesh Shekar:
Thank you so much, Luca. I guess we have very little time, so I will rush through the paper. Our chapter answers some of the questions that the first panel spoke about as well, so I’ll briefly touch upon the three things we do in the paper and the background to it. As we all know, there is already a lot of buzz around the uncertainty over AI regulation and AI technologies themselves, and in response we still see a lot of frameworks emerging at various levels, strategy documents and legislation cropping up here and there. But one very important question that we try to answer in our chapter is this: if tomorrow we bring in a framework and say that AI developers have to follow a certain set of principles, will everything become fine? That is where our paper comes in and asks: what about AI deployers? What about the impacted populations who also interface with the technology? AI technology used to be B2B, but with generative AI it is now also B2C; we interface with it and use it directly. It is this specific question that we ponder, and we suggest what we call a principle-based framework at the ecosystem level, where responsibilities are divided across the various stakeholders within the ecosystem, so that collectively, or collaboratively, we can make the entire ecosystem of artificial intelligence utilization safer and more responsible. How we went about this: the first thing we did was map impact and harm across the AI lifecycle. Let me give an example that makes it very clear: exclusion. If we take exclusion as the end impact, the adverse implication, it doesn’t happen because one particular aspect has gone wrong; various aspects come together at the different stages of the AI lifecycle, and across this lifecycle we all know there are different players involved. All of these implications come together and make the exclusion happen. So we went about mapping that, and this also resonates with Melody’s point on where liability or responsibility lies: we need to understand who the actors are and what they do. After doing this, the natural progression is to ask what principles everybody has to follow, and this also answers the online question about consensus building on principles. We have a lot of AI principles available out there, but we now need to start having a conversation: you have those principles, and this is the principle I resonate with. I think that is the starting point, and maybe also an answer to the question about the international level: everybody coming together and discussing something collaborative and legitimate internationally. So we map all the principles, and then the third point is operationalization. The specific gap we are trying to fill there is to bring out the differences in what a principle means at the different stages.
To show this, take human in the loop as a principle again; we keep talking about it, but at the operationalization level, at the planning and design stage, human in the loop means something different: it means you have to engage with stakeholders, bring the impacted population into the room, and so on. The same principle means something different at other stages, and that is the differentiation we bring out; a sketch of this mapping follows. Thirdly, the final point before I conclude: after mapping the principles and their operationalization comes implementation, and that ultimately falls to governments. There we try to look at it the way the last speaker mentioned: there is a market in Brazil for generative AI, and that is the case for any developing country. So we need to balance that approach and recognize that regulation does not necessarily have to be compliance-based; it can also be market-based. How can we enable the market? We look at how to operationalize this framework through a market-based mechanism where there is a value proposition that businesses can see. This is what we do in the paper. I can take more questions.
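[Not part of the session or the paper: a minimal sketch of the kind of stage-by-stage mapping Kamesh Shekar describes, showing how one principle, human in the loop, can translate into different duties for different actors across the AI lifecycle. The stage names, actors, and duties are hypothetical.]

```python
LIFECYCLE = ("planning", "design", "development", "deployment", "monitoring")

# principle -> stage -> (responsible actor, concrete duty)
OPERATIONALIZATION = {
    "human_in_the_loop": {
        "planning":    ("developer", "consult stakeholders and impacted groups"),
        "design":      ("developer", "bring the impacted population into the room"),
        "development": ("developer", "human review of training data and outputs"),
        "deployment":  ("deployer",  "human override for consequential decisions"),
        "monitoring":  ("deployer",  "human audit of logged decisions"),
    },
}

def duties(principle: str) -> None:
    """Print how one principle operationalizes at each lifecycle stage."""
    for stage in LIFECYCLE:
        actor, duty = OPERATIONALIZATION[principle][stage]
        print(f"{stage:>11}: {actor:<9} - {duty}")

duties("human_in_the_loop")
```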

Moderator – Luca Belli:
Fantastic. At this point, let me thank the last set of panelists for being very concise, because I know we have time constraints, and our tech support has been kind enough to give us five or ten extra minutes to finish. So let me give the floor now to Kazim Rizvi for his very short presentation. Thank you.

Kazim Rizvi:
Thank you so much. Just moving on from what Kamesh was talking about: we have two papers as part of this brief, and the second paper looks at the mapping and operationalization of trustworthy AI principles. While the first paper, as Kamesh was saying, sets out the principles, here we look at certain sectors to understand the synergies and conflicts between these AI principles and how they will play out. What we try to do here is look at two areas: one is the finance sector, and the second is the healthcare sector. For these two sectors, we come up with certain principles which we believe are critical for operationalization, to make sure that trustworthy principles are being deployed on the ground. The paper adopts an approach that looks at the technical and non-technical layers of AI. Within the technical layer, it looks at different implementation solutions and how to integrate these solutions with the responsible AI framework we are developing. The non-technical layer explores strategies for responsible implementation, ethical directions, et cetera. All of this has been done through a multi-stakeholder approach. We have advocated for a multi-stakeholder approach to the mapping and operationalization of AI principles, because we believe you need a different set of stakeholders, industry, civil society, academia, government, et cetera, to come together and look at how these principles will be operationalized for these two particular sectors. So we have spoken to experts in these sectors, and we will also be holding discussions to see whether some of these principles can be implemented effectively, while also looking at domestic coordination of regulation. We have identified that there is no specific act or law which governs AI in India. So we have tried to come up with principles for how the privacy law, the IT law which regulates the internet, and the different other laws that are coming up can all work together, and how they can harmonize with each other with respect to the regulation of AI in the future. At one level, we are talking about domestic coordination; we are not saying that you have to regulate very stringently, but asking how existing internet laws can be harmonized. The second level is international coordination, which connects to what Kamesh was saying earlier: at a global level, can we come up with models or frameworks to guide implementation? Then, looking at these two sectors: what is required for healthcare, and which principles are key for healthcare but may not be necessary for the finance sector? That kind of mapping and operationalization is what we are doing right now. We are also looking at alternative regulatory approaches: market mechanisms, public-private partnerships, even consumer protection obligations for developers, and how we ensure safety, et cetera. That is something we have looked at as well.
And the idea is to look at deployment and implementation, testing it in one of these two sectors.

Moderator – Luca Belli:
The technical support is telling me that we have to move fast without breaking things. So let me pass the floor to the last two speakers, who will very quickly present their brilliant papers. Claudio Cicu, you have a presentation. We have one last presentation, and then one more after that. Can we have the presentation online? Yes, we have the presentation. Excellent.

Giuseppe Claudio Cicu:
So, konnichiwa to everyone, and thank you, Professor Belli, for the introduction. I will very quickly dive into the relationship between artificial intelligence and corporate governance, because, as we can all see, artificial intelligence is reshaping our social, economic, and political environment, and the corporate governance framework and business processes are also being affected by this technological revolution. Indeed, we are hearing, for example, about the appointment of artificial intelligence systems as directors, which, legally speaking, I really doubt is possible, but it is happening nowadays. So I have the feeling that we are moving toward a new form of corporate governance that I have labeled the computational corporate governance model, where artificial intelligence is an auxiliary instrument, or may even substitute for directors in the main functions of corporate governance bodies, such as strategy setting, decision making, monitoring, and compliance. So I have put a question to myself: are we moving toward a technologization of the human being? I am afraid of it. As we know, this kind of revolution raises many problems; the main ones I work on in my paper are the transparency and accountability problems. For this reason, I have tried to create a framework to allow corporations to implement artificial intelligence ethically in corporate governance and business processes. My proposal, which I have named the AI by Corporate Design Framework, is grounded in business process management, the field of management that allows us to analyze and improve processes within the corporation, and it is juxtaposed with the AI lifecycle. I divided both of them into seven steps and combined them to control artificial intelligence and strengthen the principles of human in the loop and human on the loop. Of course, this model is also grounded in a human rights-based global AI framework, and it is based on the privacy-by-design principle, which states that it is better to prevent than to react. On the corporate governance side, and I will quickly conclude, I propose the creation of a new committee, the Ethical Algorithmic Legal Committee, composed of a mix of professionals, not only directors but also consultants, that can act as a filter between the stakeholders and the output of the artificial intelligence. And I conclude by asking, not only myself but also you, whether it is not time for legislators to start thinking about this technology in its corporate dimension, as happened in Italy, for example, with reference to accountability, organization, and administration. My answer is yes: I think it is time.

Moderator – Luca Belli:
Thank you. Fantastic, and thank you very much for delivering this excellent and detailed presentation in literally three minutes. So we now have the final one, the one by Liisa, last but of course not least. Please, Liisa, the floor is yours.

Liisa Janssens:
Thank you very much. My name is Liisa Janssens, and I will very briefly explain where I am from, because it is connected to the paper I have written. I am a scientist at the Department of Military Operations at the Dutch Applied Sciences Institute. I have a background in law and philosophy, I combine those two disciplines in my projects, and I work together with mathematicians and engineers. I am very proud to say that, because it is actually very difficult to work together in an interdisciplinary way. I have now been at the institute for seven years, and for the past two years this collaboration has really been working, because I found a way to work together: scenario planning. Scenarios, military theater scenarios, can be a platform where you meet each other across different disciplines; you stay within your own discipline, but you meet around one focal point of problems, how to solve problems from the technical point of view, and how to connect the two, for example with rule of law mechanisms. I am trying to seek new requirements from the point of view of rule of law tenets, because we can find agreement within the United Nations, but also in the European Union and in the USA, that the rule of law matters and is very important to adhere to. The rule of law, for me, is about good governance, and if I connect it to AI, it is about good governance of AI. How do we do that? I am looking for new technical requirements informed by multiple disciplines, law, philosophy, and technology, and the way I found to work together is a well-informed operational scenario, in which you can even test the new requirements. That is very ambitious, but we are going to try to do it in a NATO project via digital twins, or maybe even in a real setting, an operational test environment. Thank you.

Moderator – Luca Belli:
Fantastic. Fantastic. And as everyone has been so patient to stay here until the end of the day, it is 6:36, you all deserve a free complimentary copy of the book, and the first one to run up here will get it. The others will have a free-access PDF that you can already download on the page of the Data and AI Governance Coalition. I repeat, you can also use the mini URL bit.ly slash DIG23 or DIG2023; both work. You can use the form to give us feedback, you can speak with us now to give us feedback, and we can have a drink together so that you can give us feedback. All feedback is very welcome. And thank you very much, really, especially to, well, I don’t want to diminish the importance of the first two sets of panelists, but this last one has been fantastic. And thank you a lot to the technical teams: you are excellent and you have done tremendous work. Thank you very much.

| Speaker | Speech speed | Speech length | Speech time |
| --- | --- | --- | --- |
| Armando José Manzueta-Peña | 171 words per minute | 1779 words | 625 secs |
| Audience | 167 words per minute | 1002 words | 361 secs |
| Camila Leite Contri | 182 words per minute | 1243 words | 410 secs |
| Gbenga Sesan | 187 words per minute | 1035 words | 333 secs |
| Giuseppe Claudio Cicu | 133 words per minute | 509 words | 230 secs |
| Jonathan Mendoza Iserte | 134 words per minute | 791 words | 354 secs |
| Kamesh Shekar | 188 words per minute | 956 words | 305 secs |
| Kazim Rizvi | 182 words per minute | 704 words | 232 secs |
| Liisa Janssens | 153 words per minute | 373 words | 146 secs |
| Melody Musoni | 158 words per minute | 1579 words | 600 secs |
| Michael | 167 words per minute | 428 words | 154 secs |
| Moderator – Luca Belli | 168 words per minute | 3484 words | 1244 secs |
| Smriti Parsheera | 224 words per minute | 890 words | 239 secs |
| Wei Wang | 157 words per minute | 839 words | 321 secs |