Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Full session report
Merve Hickok
The analysis conveys a positive view of both regulation and innovation in Artificial Intelligence (AI), emphasising that the two must coexist to ensure better, safer, and more accessible technological advancement. Genuine innovation is viewed favourably because it bolsters human rights, promotes public engagement, and encourages transparency. This viewpoint is grounded in the belief that AI regulatory policy should balance the nurturing of innovation with the implementation of essential protective measures.
The analysis also underscores that standards based on the rule of law must apply universally to both public and private sectors. This conviction draws on the United Nations Guiding Principles on Business and Human Rights, which affirm businesses' obligation to respect human rights and abide by the rule of law. This represents a shift towards heightened accountability in the development and deployment of AI technologies across different sectors of society.
However, there is significant apprehension surrounding the perceived industrial domination in the AI policymaking process. Such dominance is viewed negatively as it could erode democratic values, potentially fostering bias replication, labour displacement, concentration of wealth, and disparity in power. Critics argue this scenario could compromise the public’s interests.
Moreover, the analysis highlights strong advocacy for the integration of democratic values and public participation into the formulation of national AI policies. This stance is complemented by a call for the establishment of robust mechanisms for independent oversight of AI systems, aiming to safeguard citizens’ rights. The necessity to ensure AI technologies align with and uphold democratic principles and norms is thus underscored.
Finally, the analysis records resolute opposition to the use of facial recognition for mass surveillance and to the deployment of lethal autonomous weapons. These technologies are seen as undermining human rights and eroding democratic values, an interpretation echoed in UN negotiations.
In conclusion, although AI offers tremendous potential for societal advancement and business growth, it is critical that its development and application adhere to regulatory frameworks that preserve human rights, promote fairness, ensure transparency, and uphold democratic values. Cultivating a balanced, forward-looking climate for formulating AI policies with public participation can help mitigate and manage the potential risks, ensuring that AI innovation evolves ethically and responsibly.
Björn Berge
Artificial Intelligence (AI) carries the potential to revolutionise various sectors worldwide, due to its capacities for improved efficiency, advanced decision-making, and enhanced services. It can significantly enhance productivity by automating routine and repetitive tasks usually undertaken by humans. Additionally, AI systems can harness big data to make more precise decisions, eliminating human errors and thereby resulting in superior service delivery.
Nevertheless, the growth of AI necessitates a robust regulatory framework. This framework should enshrine human rights as one of its core principles and should advocate a multi-stakeholder approach. It is vital for AI systems to be developed and used in a manner that ensures human rights protection, respects the rule of law, and upholds democratic values.
Aligning with this, the Council of Europe is currently working on a treaty that safeguards these facets whilst harnessing the benefits of AI. This treaty will lay down principles to govern AI systems, with a primary focus on human rights, the rule of law, and democratic values. Notably, the crafting process of this treaty doesn’t exclusively involve governments, but also includes contributions from a wide array of sectors. Civil society participants, academic experts, and industry representatives all play a crucial role in developing an inclusive and protective framework for AI.
The Council of Europe's treaty extends far beyond Europe and has a global scope. Countries from various continents are actively engaged in the negotiation process. Alongside Council of Europe member states and the European Union, countries from North, Central, and South America, as well as Asia, including Canada, the United States, Mexico, Israel, Japan, Argentina, Costa Rica, Peru, and Uruguay, are involved in moulding this international regulatory framework. This global outreach underscores the universal applicability of AI regulation and the importance of international cooperation for the responsible implementation and supervision of AI systems.
Francesca Rossi
Francesca Rossi underscores that AI is not simply a realm of pure science or technology; instead, it should be considered a socio-technical field of study with significant societal impacts. This viewpoint emphasises that the evolution and application of AI are profoundly intertwined with societal dynamics and consequences.
Furthermore, Francesca advocates robustly for targeted regulation in the AI field. She firmly asserts that any necessary regulation should focus on the varied uses and applications of the technology, which carry different levels of risk, rather than merely on the technology itself. This argument stems from the understanding that the same technology can be utilised in countless ways, each with its own implied benefits and potential risks, therefore calling for tailored oversight mechanisms.
Francesca's support for regulatory bodies such as the Council of Europe, the European Commission, and the UN is evident from her active contribution to their AI-related work. She sees these bodies as playing a pivotal role in steering AI in a positive direction, ensuring its development benefits a diverse range of stakeholders.
Drawing from her experience at IBM, she reflects a corporate belief in the crucial importance of human rights in the use of AI technology. Even in the absence of regulation in specific areas, IBM has proactively taken steps to respect and safeguard human rights. This underlines the duty companies have to ensure their AI applications comply with human rights standards.
Building on IBM's commitment to responsible AI, Francesca discusses the company's centralised governance of a company-wide AI ethics framework. This suggests that companies should maintain a single, holistic framework for AI ethics across all their divisions and operations.
Francesca also emphasises the crucial role of research in both augmenting the capabilities of AI technology and in addressing its current limitations. This supports the notion that on-going research and innovation need to remain at the forefront of AI technology development to fully exploit its potential and manage inherent limitations.
Lastly, Francesca highlights the value of establishing partnerships to confidently navigate the crowded AI field. She fervently advocates for inclusive, multi-stakeholder, and worldwide collaborations. The need for such partnerships arises from the shared requirement for protocols and guidelines, to ensure the harmonious handling of AI matters across borders and industries.
In summary, Francesca accentuates the importance of viewing AI within a social context. She brings attention to matters related to regulation, the function of international institutions, and corporate responsibility. Additionally, she illuminates the significance of research and partnerships in overcoming challenges and amplifying the capabilities of AI technologies.
Daniel Castaño Parra
AI's deep integration into the societal fabric underscores the profound importance of its regulation: the technology exhibits transformative potential while simultaneously posing challenges. As it continues to evolve and permeate societies globally, the pressing need for robust, comprehensive regulation to guide its use and mitigate potential risks becomes increasingly evident.
Focusing attention on Latin America, the task of AI regulation emerges as both promising and challenging. Infrastructure discrepancies across the region, variances in technology usage and access, and a complex web of data privacy norms present considerable obstacles. The diversity of the regional AI landscape necessitates a nuanced approach to regulation, considering the unique characteristics and needs of different countries and populations.
In response to these challenges, specific solutions have been proposed. A primary recommendation is the establishment of a dedicated entity responsible for harmonising AI regulations across the region. This specialist body could provide clarity and consistency in the interpretation and application of AI laws. Additionally, advocating for the creation of technology-sharing platforms could help bridge the gap in technology access across varying countries and communities. A third suggestion involves pooling regional resources for constructing a robust digital infrastructure, bolstering AI capacity and capabilities in the region.
The significance of stakeholder involvement in shaping the AI regulatory dialogue is recognised. A diverse array of voices, incorporating those from varying sectors, backgrounds and perspectives, should actively participate in moulding the AI dialogue. This inclusive, participatory approach could help to ensure that the ensuing regulations are equitable, balanced, and responsive to a range of needs and concerns.
Further, the argument highlights the potential of AI to address region-specific challenges in Latin America. The vital role AI can play in delivering healthcare to remote areas, such as Amazonian villages, is stressed, as is its use in predicting and mitigating the impact of natural disasters. This underscores AI's potential contribution to the Sustainable Development Goals concerning health, sustainable cities and communities, and climate action.
In conclusion, while AI regulation presents significant hurdles, particularly in regions like Latin America, it also unveils vast opportunities. Harnessing the promises of AI and grappling with its associated challenges will demand targeted strategies, proactive regulation, wide-ranging stakeholder involvement, and an unwavering commitment to innovation and societal enhancement.
Arisa Ema
Arisa Ema, who holds positions on Japan's AI Strategy Council and within the Japanese government, is an active participant in Japan's initiatives on AI governance. She ardently advocates responsible AI and the interoperability of AI frameworks. Her commitment aligns with SDG 9: Industry, Innovation, and Infrastructure and SDG 17: Partnerships for the Goals, showcasing her belief in the potential of technological advancement to drive industry innovation and foster worldwide partnerships for development.
Moreover, Ema underlines the crucial need for empowering users within the domain of AI, striving for power equilibrium. The current power imbalance between AI users and developers is seen as a substantial challenge. Addressing this links directly with achieving SDG 5: Gender Equality and SDG 10: Reduced Inequalities. According to Ema, a balanced power dynamic can only be achieved when the responsibilities of developers, deployers, and users are equally recognised in AI governance.
Ema also values the Internet Governance Forum (IGF) as an indispensable platform for facilitating dialogue among different AI stakeholders. She strongly supports multi-stakeholder discussions, citing them as vital to AI governance. This endorsement aligns with SDG 17: Partnerships for the Goals, as such discussions aim to bolster cooperation for sustainable development.
Ema introduces a fresh perspective on Artificial Intelligence, arguing that AI should be understood as a system that encompasses human beings and human-machine interaction, rather than as a mere algorithm or model. This nuanced viewpoint can significantly influence the pursuit of SDG 9: Industry, Innovation, and Infrastructure, as it recommends integrating human-machine interaction into how AI is designed and governed.
Furthermore, Ema promotes interdisciplinary discussion of human-AI interaction as a critical requirement for fully understanding the societal impact of AI. She regards dialogue that bridges cultural and disciplinary gaps as essential, given the multi-faceted complexities of AI. Such discussions can help identify biases entrenched in human-machine systems and provide credible strategies for their elimination.
In conclusion, Arisa Ema’s holistic approach to AI governance encapsulates several pivotal areas; user empowerment, balanced power dynamics, multi-stakeholder discussions, and interdisciplinary dialogues on human-AI interaction. Her comprehensive outlook illuminates macro issues of AI while underscoring the integral role these elements play in sculpting AI governance and functionalities.
Liming Zhu
Australia has placed a significant emphasis on operationalising responsible Artificial Intelligence (AI), spearheaded by initiatives from Data61, the country’s leading digital research network. Notably, Data61 has been developing an AI ethics framework since 2019, serving as a groundwork for national AI governance. Furthermore, Data61 has recently established think tanks to assist the Australian industry in responsibly adopting AI, a move that has sparked a positive sentiment within the field.
In tandem, debates around data governance have underscored the necessity of finding a balance between data utility, privacy, and fairness. While these components are integral to robust data governance, they may involve trade-offs. Advances are thus required to enable decision-makers to make pragmatic choices. The issue of preserving privacy could potentially undermine fairness, introducing complex decisions that necessitate comprehensive strategies.
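To see why these trade-offs are more than rhetorical, note that privacy-preserving noise added to published group statistics also perturbs any fairness metric computed from them. The following Python sketch is a toy illustration only: the group counts, the use of the Laplace mechanism, and the epsilon values are all hypothetical assumptions chosen for demonstration, not a method endorsed in the session.

```python
# Toy sketch of the privacy/fairness trade-off: Laplace noise of the kind used in
# differential privacy, applied to group counts, perturbs the demographic-parity
# gap computed from those counts. All counts and epsilon values are hypothetical.
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def parity_gap(pos_a, n_a, pos_b, n_b, epsilon=None):
    """Gap in favourable-outcome rates between two groups; noised if epsilon is set."""
    if epsilon is not None:
        scale = 1.0 / epsilon  # Laplace mechanism with sensitivity 1 per count
        pos_a += laplace_noise(scale)
        pos_b += laplace_noise(scale)
    return pos_a / n_a - pos_b / n_b

# Hypothetical statistics: favourable outcomes out of totals for two groups.
print(f"true gap: {parity_gap(480, 800, 310, 750):.3f}")
for eps in (1.0, 0.1, 0.01):  # smaller epsilon = stronger privacy, more noise
    gaps = [parity_gap(480, 800, 310, 750, epsilon=eps) for _ in range(1000)]
    print(f"epsilon={eps}: measured gap ranges over ~{max(gaps) - min(gaps):.3f}")
```

As the privacy budget tightens, the measured disparity becomes increasingly unreliable, which is precisely the kind of trade-off the debate asks decision-makers to weigh explicitly.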
As part of the quest for responsible AI, Environmental, Social, and Governance (ESG) principles are becoming increasingly prevalent. Efforts are underway to incorporate responsible AI directives with ESG considerations, thereby ensuring that investors can influence the development of more ethical and socially responsible AI systems. This perspective signals a broader understanding of AI’s implications that extend beyond its technical dimensions.
Accountability within supply chain networks is also being highlighted as pivotal in enhancing AI governance. Specifically, advances on AI bills of materials aim to standardise the types of AI used within systems whilst sharing accountability amongst various stakeholders in the supply chain. This marks a recognition of the collective responsibility of stakeholders in AI governance.
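By analogy with software bills of materials, one can sketch what a minimal AI bill-of-materials entry might record. No schema is specified in the discussion, so every field name in the sketch below is a hypothetical assumption for illustration:

```python
# Sketch of a hypothetical AI bill-of-materials (AI BOM) entry, by analogy with
# software BOMs. All field names are illustrative assumptions, not a standard schema.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AIBomEntry:
    component: str                      # AI model or service embedded in the system
    version: str
    provider: str                       # stakeholder accountable for the component
    training_data_sources: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    downstream_deployer: str = ""       # who integrates and operates the component

entry = AIBomEntry(
    component="credit-scoring-model",
    version="2.3.1",
    provider="ExampleModelVendor",
    training_data_sources=["loan-applications-2015-2022"],
    known_limitations=["under-representation of rural applicants"],
    downstream_deployer="ExampleBank",
)
print(json.dumps(asdict(entry), indent=2))
```

Recording the provider, training data sources, and downstream deployer in a single machine-readable record is one plausible way to make accountability across an AI supply chain inspectable.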
In light of the rise of AI in game playing, exemplified by AlphaGo's victory in the game of Go, there is reassurance that rivalry between AI and humans is not necessarily worrying. Rather than eliminating human involvement, these advances have reportedly instigated renewed interest in such games, with the number of top-level players in both Go and chess reaching historical highs.
Highlighting the shared responsibility in addressing potential bias and data unevenness within AI development is vital. The assertion is that decision-making concerning these issues should not be solely the responsibility of developers or AI providers, suggesting that a collective approach may be more beneficial.
In summary, it’s crucial to incorporate democratic decision-making processes into AI operations. This could involve making visible the trade-offs in AI, which would allow for a more informed and inclusive decision-making process. Overall, these discussions shed light on the multifaceted and challenging aspects of responsible AI development and deployment, providing clear evidence of the need for a comprehensive and multifaceted approach to ensure ethical AI governance.
Audience
The discourse covered several vital topics centring on democratic decision-making, human rights, and the long-term survival of the human species, focusing primarily on the speed and agility of decision-making and its potential implications for the democratic process. One proposal advanced was that the conventional democratic process might need to be superseded when deemed necessary for the species' survival. This may even necessitate redefining aspects of human rights to better manage unforeseen future challenges.
The discussion also touched on circumstances where democracy could pose hurdles, while a counter-suggestion held that a more democratic model, built on holistic, global consultation, could be better suited to such decisions, emphasising the inherent value of a democratic ethos in contending with complex problems.
A notable argument for enhanced collaboration was also presented, stressing the adoption of a concerted, joint problem-solving strategy rather than attempting to solve all problems at once. This suggestion promotes setting clear priorities and addressing each issue effectively, thereby producing more synergetic solutions to thematic global issues.
Within the technology paradigm, concerns were raised about who governs the algorithmic decision-making of major US-based tech companies. The argument underscores the non-transparent nature of these algorithms. Concerns related to potential bias in the algorithms were voiced, considering the deep division on various issues within the United States. There were calls for transparent, unbiased algorithm development to reflect neutrality in policy-making and respect user privacy.
In essence, the conversation revolved around balancing quick, efficient decision-making with the democratic process, the re-evaluation of human rights in the face of future challenges, the importance of joint problem-solving in addressing global issues and maintaining transparency and fairness in technological innovations. The discourse sheds light on the intricate interplay of politics, technology, and human rights in shaping the global landscape and fosters a nuanced understanding of these issues in connection with sustainable development goals.
Ivana Bartoletti
Artificial Intelligence (AI) is a potent force brimming with potential for immense innovation and progress. However, it also presents a host of risks, a key one being the perpetuation of existing biases and inequalities. These problems are particularly evident in areas such as credit decisions and lower-paying job advertisements disproportionately served to women, illustrating the tangible impact of our current and prospective uses of AI. There is a worrying possibility that predictive technologies could further magnify these biases, leading to self-fulfilling prophecies.
Importantly, addressing bias in AI isn't merely a technical issue; it is also a social one and hence necessitates robust social responses. Bias can surface at any juncture of the AI lifecycle, as AI blends code, parameters, data, and people, none of which is innately neutral. This complex combination can result in algorithmic discrimination that does not map onto traditional, legally protected categories of discrimination, underlining the need for a multidimensional approach to tackle the challenge.
To effectively manage these issues, a comprehensive strategy that includes legislative updates, mandatory discrimination risk assessments and an increased emphasis on accountability and transparency is required. By imposing legal obligations on the users of AI systems, we can enforce accountability and regulatory standards that could prevent unintentional bias in AI technologies. Implementing measures for positive action, along with these obligations, could provide a robust framework to combat algorithmic discrimination.
In addition, the introduction of certification mechanisms and the use of statistical data can deliver insightful assessments of discriminatory effects, contributing significantly to the fight against bias in AI. Such efforts have the potential not only to minimise the socially harmful impacts of AI, but also to reinforce the tremendous potential for progress and innovation that AI offers.
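As a concrete illustration, the statistical assessment envisaged here can be as simple as comparing favourable-outcome rates across groups. The short Python sketch below computes a disparate impact ratio against a reference group; the group labels, the counts, and the four-fifths threshold are assumptions made for demonstration and are not drawn from the report.

```python
# Minimal sketch: estimating a system's discriminatory effect from outcome statistics.
# Group labels, counts, and the 0.8 ("four-fifths") threshold are illustrative assumptions.

def disparate_impact_ratios(favourable: dict, total: dict, reference: str) -> dict:
    """Selection rate of each group divided by the reference group's rate."""
    rates = {g: favourable[g] / total[g] for g in total}
    return {g: rates[g] / rates[reference] for g in rates}

# Hypothetical published statistics: favourable decisions per group.
favourable = {"group_a": 480, "group_b": 310}
total = {"group_a": 800, "group_b": 750}

for group, ratio in disparate_impact_ratios(favourable, total, "group_a").items():
    flag = "potential adverse impact" if ratio < 0.8 else "within threshold"
    print(f"{group}: ratio={ratio:.2f} ({flag})")
```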
In summary, it’s clear that the expansion of AI brings significant risks of bias and inequality. However, by adopting a broad approach that encapsulates both technical and social responses while emphasising accountability and transparency, we can navigate the intricacies of AI technologies and harness their potential for progress and innovation.
Moderator
Artificial Intelligence (AI) is progressively becoming an influential tool with the ability to transform crucial facets of society, including healthcare diagnostics, financial markets, and supply chain management. Its thorough integration into our societal fabric has been commended for bringing effective solutions to regional issues, such as providing healthcare resources to remote Amazonian villages in Latin America and predicting and mitigating the impact of natural disasters.
Echoing this sentiment, Thomas, who chairs the negotiations for a binding AI treaty at the Council of Europe, asserted that AI systems can serve as invaluable tools if utilised to benefit all individuals without causing harm. The treaty is designed to ensure that AI respects and upholds human rights, democracy, and the rule of law, values shared by all the negotiating nations.
Despite the substantial benefits of AI, the technology is not without its challenges. A significant concern is bias in AI systems, with instances of algorithmic discrimination replicating existing societal inequalities: women being unfairly targeted with job adverts offering lower pay, and families mistakenly identified as potential fraudsters in benefits systems. In response, an urgent call has been made to update non-discrimination laws to account for algorithmic discrimination. These concerns are encapsulated in a detailed study by the Council of Europe, which stresses the urgent need to tackle such bias in AI systems.
In response to these challenges, countries worldwide are developing ethical frameworks to facilitate the responsible use of AI. Australia, for instance, debuted its AI ethics framework in 2019. This comprehensive framework amalgamates traditional quality attributes with unique AI features and emphasises operationalising responsible AI.
The necessity for regulation and accountability in AI, especially in areas like supply chain management, was also discussed. The concept of “AI bills of materials” was proposed as a means to trace AI within systems. Another approach to promoting responsible AI is viewing it through an Environmental, Social, and Governance (ESG) lens, emphasising the importance of considering factors such as AI’s environmental footprint and societal impact. Companies like IBM are advocating for a company-wide system overseeing AI ethics and a centralised board capable of making potentially unpopular decisions.
Despite the notable differences between countries regarding traditions, cultures, and laws governing AI management, focusing on international cooperation remains a priority. Such collaborative endeavours aim to bridge the technological gap through the creation of technology sharing platforms and encouraging a multi-stakeholder approach in treaty development. This cooperation spirit is embodied by the Council of Europe coordinating with diverse organisations like UNESCO, OECD, and OSCE.
In conclusion, while technological advances in AI have led to increased efficiency and progress, the need for robust regulation, international treaties, and data governance is more significant than ever. It is crucial to ensure that the use and benefits of AI align with its potential impact on human rights, preservation of democracy, and promotion of positive innovation.
Session transcript
Moderator:
good morning or good evening for those that are connected online from different time zones. Very happy to be here with you in Kyoto. It’s the first time in Japan so I’m very excited to be in this fantastic place. My name is Thomas. I work for the Swiss government and I happen to currently chair the negotiations on the first binding AI treaty at the Council of Europe. That is a treaty not just for European countries, it is open to all countries that respect and value the same values of human rights, democracy and rule of law. So we do have countries like Canada, the United States, Japan as well participating in the negotiations but also a number of countries from Latin America, from other continents. But I’m not here to talk about the convention right now. You will hear a lot about the convention, maybe also in this session but also in others. But I’m here to basically help you listen to experts from all over the world that will talk about AI and how to ensure, while fostering innovation, how to ensure human rights and democratic values to be respected when AI is used and developed. So as we all know, AI systems are wonderful tools if they are used for the benefit of all people and if they are not used for hurting people, for creating harm. And so this is about how to try and make sure that one thing happens and the other doesn’t. But before we go to the panelists, I have the honor to present to you a very special guest from Strasbourg, Mr. Björn Berge. He’s the Deputy Secretary General of the Council of Europe, which will give you a few remarks from his side. Thank you, Björn. Please go ahead.
Björn Berge:
Thank you very much, Ambassador Schneider, and a very good afternoon to all of you. It's really great to be here in Japan, and at this very important occasion, and of course it's 17 years now, and it's the 18th time that the IGF is meeting, and it has really proven to be both a historic and highly relevant decision to start this process. And technology is, as we know, developing in a way and at a pace that the world has never seen before, which affects all of us, every country, every community around this globe. It therefore makes really perfect sense to keep up the work and do all we can to ensure enhanced digital cooperation and the development of a global information society. Basically, this is about working together to identify and mitigate common risks, so that we can make sure that the benefits that the new technology can bring to our economies and societies are indeed helpful and respect fundamental rights. Today, it is good to see the Internet Governance Forum making substantial progress towards a global digital compact, with human rights established as one of the principles in which digital technology should be rooted, along with the regulation on artificial intelligence, all to the benefit of people throughout the world. The regulation of AI is also something on which the Council of Europe is making good progress, in line with our mandate to protect and promote common legal standards in human rights, democracy, and rule of law. And the work we do is not relevant for Europe alone, but often has a global outreach. So, dear friends, I believe all of us are fully aware of the positive changes that AI can bring. Increased efficiency and productivity with mundane and repeated tasks, moving from humans to machines, better decisions even made on the basis of big data, eliminating the possibility of human error, and improved services based on deep analysis of vast quantities of information, leading to scientific and medical breakthroughs that seemed impossible until very recent times. But with all of this comes significant rights-based concerns. And just as a matter of fact, a few days ago the Council of Europe published a study on tackling bias in AI systems to promote equality. And I'm very happy and pleased that the co-author of this excellent study, Ms Ivana Bartoletti, is here with us today online and she will speak after me, I think. So, there are also other questions related to the availability and use of personal data, on responsibility for the technical failure of AI applications, and on their criminal misuses in attacking election systems, for example. And on access to information, the growth of hate speech, fake news, and disinformation, and how these are managed. The bottom line is that we must find a way to harness the benefits of AI without sacrificing our values. So, how can we do that? Our starting point should be the range of Internet governance tools that we have already agreed upon, some of which have a direct bearing on AI. If I focus on Europe for a moment, this includes the European Convention on Human Rights, which has been ratified by 46 European countries. Also, the European Court of Human Rights, with its important case law. And let me just mention one concrete example now from such a court judgment: a case that clarified that online news portals can be held liable for user-generated comments if they fail to remove clearly unlawful content promptly. This is a good example of the evolution of law in line with the times.
Drawing from this European Convention and the court case law, which of course again builds on the Universal Declaration of Human Rights, we also develop specific legal instruments designed to help member states, but also countries outside Europe, apply our standards and principles with regard to Internet governance. Our Budapest Convention is the first international treaty to combat cybercrime, including offenses related to computer systems, data, and content. And its new Second Additional Protocol is designed to improve cross-border access to electronic evidence, extending thereby the arm of justice further into cyberspace. Our Convention 108 on data protection is similarly a treaty that countries both inside and outside Europe find highly relevant. And this Convention on data protection has also been updated with an amending protocol, widely referred to as Convention 108 plus, which helps ensure that national privacy laws converge. Added to this, over recent years we have, within the Council of Europe, adopted a range of recommendations to all our 46 member states, covering everything from combating hate speech, especially online, to tackling disinformation. And right now we are also working on a set of new guidelines on countering the spread of online mis- and disinformation through fact-checking and platform design solutions. In addition, we are now looking at the impact of digital transformation of the media, and this year we will finalize work on a set of new guidelines for the use of AI in journalism. So all in all, we are indeed involved in a number of areas trying to help and contribute, but we need to go further still on AI specifically. And here we are currently developing a far-reaching and first-of-its-kind international treaty, a framework convention, and the work is led by Ambassador Schneider sitting next to me, that will define a set of fundamental principles to help safeguard human rights, rule of law and democratic values in AI. Experts from all over Europe, as well as civil society and private sector representatives, are leading and contributing to this work. Such a treaty will set out common principles and rules to ensure that the design, development and use of AI systems respect common legal standards, and that they are rights-compliant throughout their lifecycle. Like the Internet Governance Forum, this process has not been limited to the involvement of governments alone, and this is crucially important, because we need to draw upon the unique expertise provided by civil society participants, academics and industry representatives. In other words, we must always seek a multi-stakeholder approach, so as to ensure that what is proposed is relevant, balanced and effective. Such a new treaty, a framework convention, will be followed by a standalone, non-binding methodology for the risk and impact assessment of AI systems, to help national authorities adopt the most effective approach to both regulation and implementation of AI systems. But it's also important to say here today that all of this work is not limited only to the Council of Europe or our member states. The European Union is also engaged with the negotiations, as well as non-European countries such as Canada, the United States, Mexico and Israel. And this week, Argentina, Costa Rica, Peru and Uruguay joined, and of course Japan, a country that has been a Council of Europe observer for more than 25 years.
And that is actively participating in the range of our activities, and there is no doubt that Japan's outstanding expertise and track record of technological development makes it a much-valued participant in our work. And its key role globally, when it comes to AI and internet governance, is only reconfirmed by hosting this important conference here in Kyoto this week. So dear friends, there is still time for other like-minded countries to join this process of negotiating a new international treaty on AI, either taking part in the negotiations or as observers, a role that a number of non-member states have actually requested. And I must say the negotiations are progressing well. A consolidated working draft of the Framework Convention was published this summer, and it will now serve as the basis for further negotiations. And yes, our aim is that we should be able to conclude these negotiations by next year. I hope you agree. Let me also underline that this Framework Convention will be open to signature from countries around the world, so it will have the potential for a truly global reach, creating a legal framework that brings European and non-European states together, opening the door, so to say, to a new era of rights-based AI around the world. So let me just make an appeal to governments represented here today to consider whether this is a process that they might join, and a treaty that they most likely will go on to sign, just as I encourage those who have not yet done so to join the Budapest Convention and the Convention 108 and 108+, as I just mentioned. I believe it makes sense to work closely together on these issues and make progress on the biggest scale possible. Let me lastly on this point just say, more broadly, that on the regulation of artificial intelligence we can learn from each other, benefit from various experiences, and tap into a large pool of knowledge and expertise globally. For us, the Council of Europe, seeking multilateral solutions to multilateral problems is really part of our DNA, and that spirit of cooperation makes it natural for us to work with others with an interest in these issues as well. And I also want to highlight here today that we now work very closely with the Institute of Electrical and Electronics Engineers to elaborate a report on the impact of the metaverse and immersive realities. And we are also looking carefully into whether the current tools are adequate for ensuring human rights, democracy, and rule of law standards in this field. We are also coordinating closely with UNESCO, as well as with the OECD, with the OSCE, the Organization for Security and Cooperation in Europe, and the European Union, of course. And I believe that is also why we are here today, as the Internet Governance Forum: we share that spirit and that ambition of international cooperation. And this is really the only approach for us, and I'm sure its success is a must, both for the development of artificial intelligence and for helping to shape the safe, open, and outward-looking societies that hold and protect fundamental rights and are true to our values. So with this, I thank you very much for your attention.
Moderator:
Thank you very much, Björn. And you said it, the key of us being together here is to learn from each other, which means listening and trying to understand each other's situation, and we're very happy to have quite a range of experts with different expertise here on the panel, but of course also in the room, so I'm looking forward to an interesting exchange. And I will immediately go, you've already named her, to Ivana Bartoletti, and she's connected online, so we have this advantage after COVID that we can connect with people physically here, but also remotely, and Ivana Bartoletti works in a major private company specialized, among other things, in IT consulting. She's also a researcher and teaches cybersecurity, privacy, and bias at Pamplin Business School at Virginia Tech, and Ivana is a co-founder of the Global Coalition for Digital Rights and Women Leading in AI Network. So Ivana, I hope you will appear on our screen soon. Yes, we can already hear you.
Ivana Bartoletti:
Wonderful, thank you so much, and thank you for having me here, and it was great to hear from both yourself in the introduction, and Mr. Björn Berge, the Deputy Secretary General of the Council of Europe, whom I want to thank for the trust in giving to me and to Raffaele the task of putting together this report, which is now available online. I wanted to start by saying that artificial intelligence is bringing, and will bring, enormous innovation and progress if we do it in the right way, and I do firmly believe, as many do, that we are at a watershed moment in the relationship between humanity and technology. This is the time, and the Deputy Secretary General of the Council of Europe, Björn, was really articulating it well. We are at a watershed moment in this relationship between humanity and technology. Over the last few years, we've seen some of the amazing benefits that artificial intelligence and automated decision-making can bring to humanity. On the other hand, we've also seen some quite disturbing effects that these technologies can bring, and bias, the perpetuation, the codification, the coding, and the automation of existing inequality, has been one of them. And I want to make one point as we start, and the point that I want to make is that over the last few weeks and months, we have seen a lot of people coming out with quite alarmist and dramatic appeals on artificial intelligence. And I want to say loud and clear here in this room that this alarmist approach to artificial intelligence has been quite distracting. And the reason for this is that it helps create a mystique around artificial intelligence. Well, we know very, very well right now what the risks are. We've been advocating, and I have to say especially women and human rights activists over the last decade, for measures to tackle these harms. So I want us and everybody to remain focused on artificial intelligence risks and harms, do the nitty-gritty, as the Council of Europe mentioned now, as a lot of work is going, for example, into the European AI Act, into the development of legislation and guidance all across the world, into the work going into the Convention for the Council of Europe, as well as into the work that the United Nations with the Global Digital Compact is putting forward, to really focus on the harms that we know of, on the harms of bias, disinformation, the coding of existing inequalities in automated decisions, making choices about individuals now, but also making predictions about decisions tomorrow. So in these studies, Raffaele and I have focused on bias in automated decision making, and looking at what this bias looks like. There's been a lot of work going into this, and a lot of expertise all around the globe, focusing on bias, and we have seen that this bias has a very real effect. We've seen less credit given to women, because women traditionally make less money than men. We've seen countries and governments grappling with the terrible mistakes of, for example, families wrongfully identified as potential fraudsters in the benefit system, and therefore putting families, parents, and children into poverty. We have seen what it means when job adverts that pay less are served to women, because traditionally women earn less than men.
And we have seen the harms of automated decision making, for example, portraying images of women replicating stereotypes that we have seen for decades in our society. So the harm of automated decision making and the bias is all too real for people. It affects everyday life. And some people would argue, yes, but humans are biased. And I say, yes, they are biased. Obviously they are. But the difference is where that bias gets coded into software, and it becomes more difficult to identify, more difficult to challenge, and then it becomes ingrained even more in our society. And this is particularly complex, in my view, when it comes to predictive technologies, because if we code this bias and these stereotypes into these predictive technologies, what could happen is that we end up in self-fulfilling prophecies. We end up replicating the patterns of yesterday in decisions that shape the world of tomorrow. And this is not something that we want. So what can we do? First of all, we must recognize that bias can be addressed from a technical standpoint, but bias is much deeper than that. It's much more than a technical issue. It's rooted in society because data, as well as parameters, as well as all the humans that go into creating a code, is much more than technology. Ultimately, I like to say that AI is a bundle of code, of parameters, of people, of data, and nothing of that is neutral. Therefore, we must understand that these tools are much more of a socio-technical tool rather than a purely technical one. So it's important to bear in mind that the origin and the cause of bias, which could emerge at any point of the life cycle of AI, is a social, political issue that requires social answers, not purely technical ones. So let's never lose this from our conversation. The second thing that is important to realize, and we found this in the study with Raffaele, is that there is often not an overlap between the discrimination that people experience, which traditionally, especially in non-discrimination law across the world, is based on protected characteristics, and the new sources of discrimination that are often algorithmic. Algorithmic discrimination, which is created by big data sets, so clustering of individuals done in a computational and algorithmic way, and, on the other hand, the more traditional categories of discrimination, do not overlap. And therefore, what is happening is that we must look into existing non-discrimination law and try and understand if the non-discrimination law that we have in place in our countries is fit for purpose to deal with this new form of algorithmic discrimination. Because what happens in algorithmic discrimination is that individuals may be discriminated against, not because of traditional protected characteristics, but because they've been put in a particular cluster, in a particular group, and this happens in a computational and algorithmic way. So the updating of existing legislation is very important. We encourage member states to expand the use of positive action measures to tackle algorithmic discrimination and use the concept of positive obligations, which is, for example, in the European Convention on Human Rights case law, to create an obligation for providers and users to reasonably prevent algorithmic discrimination. This is really, really important, and it is part of our report.
We're looking at and suggesting mandatory discrimination risk and equality impact assessments throughout the lifecycle of algorithmic systems, according to their specific uses. We're looking really to ensure that this equality by design is introduced into the systems. We're suggesting to member states that they consider how certification mechanisms could be used to ensure that this bias has been mitigated. So looking, for example, at how member states could introduce some form of licensing and say, well, actually, due diligence has gone into this system to eliminate bias as far as possible for well-defined uses. We're encouraging member states to investigate the relationship between accountability, transparency, and trade secrets. And finally, my last point, we've been encouraging member states to consider establishing legal obligations for users of AI systems to publish statistical data that can allow parties and researchers to really look at the discriminatory effect that a given system can have in the context of discrimination claims. So I want to close on this. It's a vast report that I would encourage anyone to read. And the bottom line of this report is that discrimination through AI systems is something that brings together social and technical capabilities. It's something that needs to absolutely be at the heart of how we deploy these systems. We must investigate the way that we can use these systems to actually tackle discrimination in the first place, for example, by identifying sources of discrimination that are not visible to human eyes in the first place. So there is a positive side to all this, which we must harness. But to do so, we encourage everyone to really understand how we can get together, bring the greatest expertise that we have in the world and in this room, to really try and understand how we can not just further our knowledge, but also enshrine in legislation the importance of tackling bias in algorithmic systems.
Moderator:
Thank you very much, Ivana. And as you say, new technologies sometimes create new problems, but they can also be part of new solutions, and it's good to highlight both. With this, let me move on immediately, as we are slightly running behind schedule, to Ms. Merve Hickok, who is also connected online. We do, as you see, also have physically present speakers and experts. Merve is a globally renowned expert on AI policy, ethics, and governance, and her research, training, and consulting work focuses on the impact of AI systems on individuals, society, and public and private organizations, with a particular focus on fundamental rights, democratic values, and social justice. She's the president and research director at the Center for AI and Digital Policy, and in that capacity she is also actively engaged as one of the prominent civil society voices in the negotiations on the convention. So Merve, what are some of the main challenges of finding proper regulatory solutions to the challenges posed by AI to human rights and democracy, and what kind of solutions to these challenges do you see? Thank you.
Merve Hickok:
First of all, thank you so much for the invitation, Chair Schneider. Good to see you virtually. And I appreciate the invite and expanding this conversation in this global forum as well. Also, I'm in great company here today and very much looking forward to the conversation. I actually want to answer the question by quoting from a recommendation of the Committee of Ministers of the Council of Europe dating back to 2020, where the ministers recommend that the achievement of socially beneficial innovation and economic development goals must be rooted in the shared values of democratic societies, subject to full democratic participation and oversight, and that the rule of law standards that govern public and private relations, such as legality, transparency, predictability, accountability, and oversight, must also be maintained in the context of algorithmic systems. So this sentence alone, for me, provides a great summary of the challenges, as well as an opportunity and direction towards solutions. First, in terms of challenges, we currently see a tendency to treat innovation and protection as an either-or situation, as a zero-sum game. I cannot tell you the number of times I'm asked: but would regulating AI stifle innovation? And I'm sure those in the room and on the panel have probably lost count of this question. However, they should coexist. They must coexist. Regulation creates clarity. Safeguards make innovations better, safer, more accessible. Genuine innovation promotes human rights. It promotes engagement. It promotes transparency. The second challenge in this field that we are seeing is that the rule of law standards, which govern public actors' use of AI systems, must apply to private actors as well. It feels like every day we see another privately-owned AI product undermining rights or access to resources. Yes, of course there are differences in the nature of duties between private and public actors. However, businesses also have an obligation to respect human rights and the rule of law too. This is reflected in the United Nations Guiding Principles on Business and Human Rights, and reflected in the Hiroshima process for AI now. We cannot overlook how the private sector's use of AI impacts individuals and communities just because we want our domestic companies to be more competitive. Market competition alone will not solve this problem with human rights and democratic values. Unregulated competition might encourage a race to the bottom. And the third challenge, the final challenge, is the CEO and industry dominance in the regulatory conversations we're seeing around the globe today. As the ministers note, innovation must be subject to full democratic participation and oversight. We cannot create regulatory solutions behind closed doors with industry actors deciding whether or how they should be regulated. Of course, the industry must be part of this conversation. However, democracy requires public engagement. Whether it's in the US, UK, or beyond, we're seeing the dominance of industry in the policymaking process undermining democratic values, and it is likely to exacerbate existing concerns about the replication of bias, displacement of labor, concentration of wealth, and power imbalances. And as I mentioned, the ministers' recommendation actually includes the solutions to these challenges: we need to base our solutions in democratic values. In other words, civic engagement in policymaking, in elections, in governance, transparency, and accountability.
I would like to finish very quickly with some recommendations, because core democratic values and human rights are core to the mission of my organization. We saw these challenges several years ago and set ourselves up for a major project to objectively assess AI policies and practices across countries. Our annual flagship report is called the AI and Democratic Values Index. We published a third edition this year, where we assessed 75 countries against 12 objective metrics. Our metrics actually allow us to assess whether and how these countries see the importance of human rights and democracy, and whether they keep themselves accountable for their commitments to these. In other words, do they walk the talk? You would be surprised to see how many commitments in a national strategy do not actually translate into actual practice. So I'm finishing my response by offering recommendations from our annual report over the three years that I hope will be applicable to this conversation. First, establish national policies for AI that implement democratic values. Second, ensure public participation in AI policymaking and create robust mechanisms for independent oversight of AI systems. Third, guarantee fairness, accountability, and transparency in all AI systems, public and private. Fourth, commit to these principles in the development, procurement, and implementation of AI systems for public services, where a lot of the time the middle one, procurement, falls between the cracks. The next recommendation is to implement the UNESCO recommendation on the ethics of AI. And then the final one in terms of implementation is to establish a comprehensive, legally binding convention for AI. And I do appreciate being part of the Council of Europe's work and am looking forward to this convention for AI. And then we have two specific recommendations for specific technologies, because they undermine both human rights and democratic values and civic engagement. One is facial recognition for mass surveillance. The second one is the deployment of lethal autonomous weapons, both items that have also been discussed repeatedly in UN negotiations and conversations. With that, I would like to say thank you again, and I'm looking forward to the rest of the conversation.
Moderator:
Thank you, Merve. That was very interesting, in particular also the push against the notion that you can have either innovation or protection of rights, when in fact both need to go together. With this, let me turn to Francesca Rossi, who is also present online. By the way, have you noticed we have quite a few women here on this panel? So for those who complain that you cannot find any women specialists: actually, sometimes you do. Francesca Rossi is a computer scientist currently working at the IBM T.J. Watson Research Lab in New York. She is an IBM Fellow and IBM AI Ethics Global Leader. She's actively engaged in the AI-related work of bodies like the IEEE, the European Commission High-Level Expert Group, and the Global Partnership on AI. And she will give us a unique perspective as both a computer scientist and a researcher, but also as someone who knows the industry's perspective on the challenges and opportunities created by AI, and especially by generative AI. You have the floor, Francesca. Thank you.
Francesca Rossi:
Thank you very much for this invitation and for the opportunity to participate in this panel. So many of the things that have been said by the previous speakers resonate with me, of course, including everything that Ivana said about the socio-technical aspects of AI: I have been saying for several years now that AI is not a science or a technology only, but really a socio-technical field of study. And that's a very important point to make. I really support the efforts that the Council of Europe and the European Commission are making in terms of regulating AI; both my company and I really feel that regulation is important to have, and it does not stifle innovation, as was said also by the previous speaker. But regulation should be focusing, in my view, on the uses of the technology rather than the technology itself. The same technology can be used in many different ways, in many different application scenarios, some of them very low risk or no risk at all, and some others instead very, very high risk. So we should make sure that we focus where the risk is when we put obligations and compliance and scrutiny and so on. I would like to share with you what happened over the last years in a company like IBM, which is a global company that deploys its technology to many different sectors of our society. We acted inside the company even though there was, and in some regions of the world still is, no AI regulation to be compliant with, because we really feel that regulation is needed but cannot be the only solution, also because technology moves much faster than the legislative process. So companies have to play their role and their part in making sure that the technology they build and deploy to their clients respects human rights, freedom, and human dignity, and addresses bias and many other concerns. The lessons that we have learned in these years are very few, but I think very important. First of all, a company should not have an AI ethics team. This is something that maybe is natural to have at first, but it's not effective in my view, because having a team means that the team usually has to struggle to connect with all the business units of the company. What a company must have instead is a company-wide approach and framework for AI ethics and a centralized governance for that company-wide framework; for example, in our case, in the form of a board with representation from all the business units. Second, this board should not be an advisory board. It should be an entity that can make decisions for the company, even when the decisions are not well received by some of the teams. Because, for example, it says: no, you cannot sign that contract with a client; you have to do some more testing; you have to pass that threshold for bias; you have to put some conditions in the contractual agreements; and so on. The third thing that we learned is that we started, like everybody, with very high-level principles around AI ethics, but then we realized very soon that we needed to go much deeper into concrete actions. Otherwise there was no impact from the principles on what the developers and the consultants were doing. The next lesson is really the socio-technical part. For a technical company it is very natural to think that an issue with the technology can be solved with some more technology.
And of course technical tools are very important, but they are the easy part. The most important and complementary part to the technical tools is education, risk assessment processes, developer guidelines: really changing the culture and frame of mind of everybody in the company around the technology. The next point is the importance of research. AI research can augment the capabilities of AI, but it can also help address some of the issues related to AI's current limitations, so it is very important to support those research efforts. We also have to remember that the technology evolves. Over the years our framework has evolved because of the new and expanded challenges that came with the evolution of the technology: we went from technology that was rule-based, to technology based on machine learning, and now to generative AI, which expands old issues, such as fairness, explainability and robustness, but also creates new ones, as was mentioned: misinformation, fake news, copyright infringement, and so on. And then finally, the value of partnerships that are multi-stakeholder and global. As the Deputy Secretary General mentioned, this is a very important and necessary approach: it has to be inclusive, multi-stakeholder and global. I have been working with the OECD, the World Economic Forum, the Partnership on AI, the Global Partnership on AI; the space is very crowded now. Because of this crowded space, we have to make an effort to find the complementarities and ways to work together. Each initiative tends to try to solve the whole problem, but I think each has its own angle that is very important and complementary to the others. I will stop here by saying that I really welcome what the Council of Europe is doing, also under the leadership of our moderator, and I welcome what the UN is doing and trying to do with the new advisory body being set up, because the UN can also play an important role in making sure that AI is driven in the right direction, guided by the UN Sustainable Development Goals. Thank you.
Moderator:
Thank you, Francesca, for sharing the lessons learned in a company like IBM from an industry perspective, and also for a very important piece of guidance: an appeal to intergovernmental institutions and other processes not to each try to solve all problems at once, but for each process and institution to focus on its specific strengths and to solve the problems jointly. Thank you very much. With this, let us move to our last online speaker before we turn to the speakers physically present here: Professor Daniel Castaño. He comes from the academic world as a professor of law at the Universidad Externado de Colombia, but has a strong background in working with governments: he is a former legal advisor to different ministries in Colombia and is actively engaged in AI-related research and work in Colombia and Latin America in general. He is also an independent consultant on AI and new technologies. So, Daniel, what specific challenges do AI technologies pose for regulators and for the developers of these technologies regionally, in your case in Latin America in particular? Thank you very much, Daniel.
Daniel Castaño Parra:
Well, ladies and gentlemen, distinguished delegates and fellow participants, Deputy Secretary General Björn Berge and Chair of the CAI, Thomas Schneider, thank you very much for this invitation. I think the best way to address this question is to discuss the profound importance of AI regulation. Today, I must make clear that I am speaking in my own voice and that my remarks reflect my personal views on this topic. AI, as we know it, is no longer just a buzzword or a distant concept: from enhancing healthcare diagnosis to making financial markets more efficient, it is deeply embedded in our societal fabric. Yet, like any transformative technology, its immense power brings forth both promises and challenges. But why, you may ask, is AI regulation of paramount importance not only to Europe, but to the world and to our region, Latin America? At its core, it is about upholding the values we hold dear in our societies. Transparency: in an age where algorithms shape many of our daily decisions, understanding their mechanics is not just a technical necessity but a democratic imperative. Accountability: our societies thrive on the principle of responsibility; if an AI errs or discriminates, there must be a framework to address the consequences. Ethics and bias: we are duty-bound to ensure that AI does not perpetuate existing biases but instead aids in creating a fairer society. And as we stand on the brink of a new economic era, we must ponder how to distribute AI's benefits equitably and protect against its potential misuse. Now, casting our gaze towards Latin America, a region of vibrant cultures and emerging economies, the AI landscape is both promising and challenging. In sectors ranging from agriculture to smart cities, AI initiatives are gaining traction in our region. The regulatory terrain, however, is diverse: while some nations are taking proactive measures, others are still finding their footing. The road to a unified regulatory framework faces certain stumbling blocks in our region. For example, fragmentation due to inconsistent inter-country coordination: we lack the coordination and integration that Europe has today. We have deep technological gaps attributable to varied adoption rates and expertise levels. And we have infrastructure challenges that sometimes hamper consistent and widespread AI application. But let us not just dwell on the challenges; let us try to architect some solutions together. First, I would suggest that we require some form of regional coordination: for that purpose, we could establish a dedicated entity to harmonize AI regulation across Latin America, fostering unity in diversity. I would also suggest promoting technology-sharing platforms: collaborative platforms where countries can share AI tools, solutions and expertise, bridging the technological gap. And I would suggest investment in shared infrastructure for our region: consider pooling resources to build regional digital infrastructure, ensuring that even nations with limited resources have access to foundational AI technology. Unique challenges also present themselves: infrastructure discrepancies, variances in technology access, and a mosaic of data privacy norms necessitate a nuanced approach in our region. But herein also lies the opportunity.
AI has the potential to address regional challenges, whether delivering healthcare to remote Amazonian villages or predicting and mitigating the impact of natural disasters. So what path forward do I envision for Latin America and, indeed, the global community? First, regional synergies are key: Latin American countries, by sharing best practices and even setting regional standards, can craft a harmonized AI narrative. Second, I strongly encourage stakeholder involvement: a diverse chorus of voices, from technologists to industry, civil society and ethicists, must actively shape the AI regulatory dialogue in our region. We also need capacity building: we have a huge technological gap in our region, and I think investment in education and research is non-negotiable; preparing our global citizenry for a highly augmented future is a responsibility shared with the world. Finally, I would also encourage strengthening data privacy and protection, and harmonizing the fragmented regulatory scheme we currently have in Latin America, because it could otherwise lead to a balkanization of technology, which would only hamper innovation and set us back many years. In conclusion, as we stand at this confluence of technology, policy and ethics, I urge all stakeholders to approach AI with a balance of enthusiasm and caution. Together we can harness the potential of AI to advance the Latin American agenda. Thank you all for your attention; I very much look forward to our collective deliberations on this pivotal issue. Thank you very much.
Moderator:
Thank you very much, Daniel, for these interesting insights. People coming from Europe like me, although my country is not a formal member of the European Union, benefit from very well-developed cooperation and some harmonization of standards, not just through the Council of Europe when it comes to human rights, democracy and the rule of law, but also of economic standards. It is important to know that this is not necessarily the case on other continents, where there is a much greater diversity of rules and standards, which is of course also a challenge. I think your ideas and your solutions for overcoming these challenges are very valuable. Let me now turn to Professor Liming Zhu, and I hope I pronounce the name correctly. He is a research director at Australia's national science agency and a full professor at the University of New South Wales. So let us continue with the same topic and move to another region, the Asia-Pacific. Liming has been closely involved in developing best practices for AI governance and has worked on the problem of operationalizing responsible AI. The floor is yours.
Liming Zhu:
Yeah, thanks very much for this opportunity; it is a great honor to join this panel. I am from CSIRO, which is Australia's national science agency, and we have a part called Data61. If you are wondering why Data61: 61 is the country code when you call Australia. It is a business unit doing research on AI, digital and data. Just to go back a little on the Australian journey on AI governance and responsible AI: Australia is one of the very few countries that, back in late 2018, started developing an AI ethics framework. Data61 led the industry consultation, and in mid-2019 the Australian AI Ethics Framework came out, consisting of high-level principles. We observe that its principles are similar to many principles elsewhere in the world, but interestingly, it has three distinctive elements. First, it is not just high-level ethical principles: it recognizes human values in the plural, the fact that different parts of the community hold different values, and the importance of trade-offs and robust discussion. Second, it includes many of the traditionally challenging quality attributes, like reliability, safety, security and privacy, recognizing that AI makes those challenges even harder. And third, it includes things quite unique to AI, such as accountability, transparency, explainability and contestability: these matter in any digital software, but AI makes them more difficult. Since then, Australia has been focusing on operationalizing responsible AI, especially led by Data61. In the meantime, other agencies are active as well: the Human Rights Commissioner, Lorraine Finlay, when she heard about this particular topic at this forum, was very excited and forwarded our recent UN submission on AI governance; and you may bump into Australia's eSafety Commissioner at this forum, who looks after the e-safety aspects of AI challenges. About two years ago the government launched the Australian National AI Centre. The National AI Centre, hosted by Data61, is not a research centre but an AI adoption centre, and interestingly, its central theme is responsible AI at scale. It has created a number of think tanks, including on AI inclusion and diversity, responsible AI, and AI at scale, to help Australian industry navigate the challenge of adopting AI responsibly in everything we do. In the meantime, as a science agency, and I am a scientist in AI, we have been working on best practices to bridge the gap Francesca mentioned: how do we get from high-level principles to something on the ground that organizations, developers and AI experts can use? We have developed a pattern-based approach for this. A pattern is just a reusable solution, a reusable best practice. Interestingly, a pattern captures not only the best practice itself, but also the context in which it applies, and its pros and cons. No best practice comes for free, and many best practices need to be connected: there are so many guidelines, some for governance, some for AI engineers, and there is a lot of disconnection between them.
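To make the pattern idea concrete, here is a minimal sketch of how a catalogue entry like the ones described above might be represented. This is a hypothetical illustration only: the schema, field names and example content are invented for this sketch and are not the actual Data61 catalogue format.

```python
# A minimal, hypothetical sketch of a responsible-AI "pattern" record:
# a reusable best practice together with the context in which it applies,
# its pros and cons, and links to related patterns (the connections
# between practices that the speaker emphasises).
from dataclasses import dataclass, field

@dataclass
class Pattern:
    name: str
    context: str                               # when this practice applies
    solution: str                              # the reusable practice itself
    pros: list[str] = field(default_factory=list)
    cons: list[str] = field(default_factory=list)
    related: list[str] = field(default_factory=list)

bias_audit = Pattern(
    name="Pre-deployment bias audit",
    context="A classifier is about to be deployed in a high-stakes pipeline",
    solution="Run disaggregated performance tests across affected groups "
             "before each release and record the results",
    pros=["Surfaces disparate error rates before deployment"],
    cons=["Needs demographic labels, which raises its own privacy questions"],
    related=["Model card", "Continuous monitoring"],
)
print(f"{bias_audit.name} -> related: {', '.join(bias_audit.related)}")
```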
But when you connect those patterns together, you can see how society, technology companies and governing bodies can implement responsible AI more effectively. Another key focus of our approach in Australia is the system level. Much of the AI discussion has been about the AI model: you have this model, you give it more data to train it, to align it, to make it better. But remember, every system we use, including ChatGPT and others, is an overall system that utilizes an AI model somewhere inside. There are a lot of system-level guardrails we need to build in, and those guardrails capture the context of use; without context, many risk controls and responsible-AI practices are not going to be very effective. So a system-level approach, going beyond machine learning models, is another key element of our work. The next key element is, as I mentioned earlier, recognizing the trade-offs we have to make in many of these discussions. People familiar with data governance know there is a trade-off between data utility and privacy: you cannot fully have both. How much utility you need to sacrifice for privacy, and vice versa, how much privacy you can afford to give up to maximize utility, is a question for decision-makers, but science plays two very important roles. One is to push the boundary of the utility-versus-privacy curve, meaning that for the same amount of privacy, new science can extract more utility. In the high-level panel this morning you heard of federated machine learning and other technologies that have been advanced to enable this better trade-off, getting more of both worlds. But importantly, it is not only utility and privacy: there is also fairness. You may have heard that when we try to preserve privacy by not collecting certain data, that can in some cases also harm fairness. So now you have three quality attributes to trade off, utility, privacy and fairness, and there are more. How science can enable decision-makers to make those informed decisions is the key of our work. The next characteristic of our work in Australia is looking at the supply chain. No one builds AI from the ground up: you rely on other vendors' AI models, and you may be using pre-trained models. How can you be sure what AI is in your organization? So, similar to the work on software bills of materials, we have been developing AI bills of materials, so you can be sure what sort of AI is in your system and that accountability is held and shared among the different players in the supply chain. And the final thing we have just embarked on is looking at responsible AI and AI governance through the lens of ESG. ESG stands for environmental, social and governance, and it is very aligned with the UN's Sustainable Development Goals. The environmental element is your AI's environmental footprint; in the social element AI plays a very important role; and the governance of AI is often too much about internal company governance, whereas the societal impact of AI needs to be governed as well. Looking at responsible AI through the lens of ESG also makes sure investors can pull the levers of doing responsible AI better.
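The utility-privacy dial described above can be illustrated with the classic Laplace mechanism from differential privacy. The sketch below is purely illustrative and rests on invented inputs: the synthetic data, the mean query and the epsilon values are made up for the example, and none of this is CSIRO's actual tooling.

```python
# Illustrating the utility-privacy trade-off: smaller epsilon means stronger
# privacy, bought at the price of a noisier (less useful) released statistic.
import numpy as np

rng = np.random.default_rng(seed=0)
incomes = rng.uniform(20_000, 120_000, size=1_000)   # synthetic records
true_mean = incomes.mean()

# Sensitivity of the mean query: one record can shift it by at most range/n.
sensitivity = (120_000 - 20_000) / len(incomes)

for epsilon in (0.01, 0.1, 1.0, 10.0):
    # Laplace mechanism: release true_mean + Laplace(0, sensitivity/epsilon).
    # Drawing many samples shows the typical error of a single release.
    noise = rng.laplace(0.0, sensitivity / epsilon, size=10_000)
    rmse = float(np.sqrt((noise ** 2).mean()))
    print(f"epsilon={epsilon:>5}: typical error of released mean ~ {rmse:,.0f}")
# The error shrinks as epsilon grows: less privacy protection, more utility.
```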
So I will conclude by saying that Australia's approach is really about connecting those practices and enabling stakeholders to make the right choices and trade-offs; and those trade-offs are not for us scientists to make. Thank you very much.
Moderator:
Thank you, Liming. It is interesting to hear you talk about trade-offs and about how we might turn them from perceived trade-offs into perceived opportunities if we get the right combinations. Let me turn to our last but not least expert, somebody who comes from Japan, the country hosting this year's IGF. Professor Ema is an associate professor at the Institute for Future Initiatives of the University of Tokyo, and her primary interest is investigating the benefits and risks of AI in interdisciplinary research groups. She is very active in Japan's initiatives on AI governance, and I would like to give her the floor to talk about how Japanese actors, industry, civil society and regulators see the issue of the regulation and governance of AI. Thank you, Professor.
Arisa Ema:
Thank you very much, Chair Thomas. I am really honored to be here to present and to share, also on behalf of my colleagues, what is being discussed in Japan. As Thomas kindly introduced, I am an academic at the University of Tokyo, but I am also a board member of the Japan Deep Learning Association, which is more of a startup-company community, and a member of the Japanese government's AI Strategic Council. Today, however, I would like to wear my academic hat and talk about what is being discussed in Japan, and so far I have heard many insights that resonate with what has been discussed by the panelists. Let me introduce the status of the discussion in Japan. Back in 2016, when Japan last hosted the G7 before 2023, the G7 meeting in Takamatsu saw the Japanese government release guidelines for AI development, and I believe that was the turning point at which the global, collaborative discussion about AI guidelines actually started. This year, 2023, with the G7 Summit in Hiroshima, there is a very big debate ongoing on generative AI, and currently the G7 and other related countries are discussing how to create rules to govern generative AI, and AI in general. That is called the Hiroshima AI Process, and I believe there will be a discussion about it tomorrow morning. Alongside it, the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry are also creating guidelines on the development of AI and on mitigating its risks. That is what is currently going on in Japan. But before discussing responsible use of AI further, I would like to talk a little about the AI convention that the Council of Europe is currently negotiating. One reason I am here is that I am very interested in this convention: my colleagues and I organized an event in Japan to discuss its impact on Japan and on the rest of the world, and we are preparing policy recommendations for the Japanese government on what should be investigated if Japan is to sign it. I think it is a really important convention for discussing responsible AI. To raise some points for this panel, I would like to mention three points from a document my institution published last month, in September, titled "Toward Responsible AI Deployment: Policy Recommendation for the Hiroshima AI Process". If you are interested, just search for my institution's name and the policy recommendation and you will find it. We created this policy recommendation through multi-stakeholder discussion, including not only academics but also industry, and we also discussed it with government officials. The first thing we think is really important is the interoperability of frameworks. Framework interoperability is one of the key terms discussed at the G7 Summit this year, but I guess many of us have the question: what does interoperability mean?
Our understanding is that we need transparency about each of the regulations and frameworks that discipline AI development and use. In this sense, the AI convention is really important because, as has been explained, it is a framework convention: each country will take its own measures for AI innovation and for risk mitigation. It is really important to respect other countries' cultures and the ways they regulate artificial intelligence, and to work out how their frameworks can connect to each other. It is also really important that each country is clear and accountable about what role each stakeholder has, where responsibility lies, and how to supervise whether those measures are actually working. In that sense, I think framework interoperability and the AI convention have really important views to share. The next point we raised in our policy recommendation is how to think about responsibility. Other panelists discussed this too, but the important thing is to discuss the responsibility of developers, deployers and users. Regulation is especially important with regard to the user side, because there is a power imbalance between users and developers. So what we have to do is not only make rules and regulation; we also need to discuss how to empower citizens and raise their literacy, so that they can judge what kind of AI they are dealing with and how it was developed. And I was really happy to hear the professor raise ESG, and how investors are also very important stakeholders. So it is not only rules or regulation through legal frameworks: there are many other forms of discipline we can use, for example investors' decisions, reputation, literacy, and the technology itself. There are many measures we can take, and with all of them taken together, I think we can create better, more responsible AI systems and an AI-implemented society as a whole. Last but not least, we also emphasize the importance of multi-stakeholder discussion. I believe the IGF is a very good moment to discuss this, because the Hiroshima AI Process is ongoing and many countries are right now working on their own regulatory frameworks; as I said, the Japanese government is also creating, or updating, its guidelines. This is the place where we share what has been discussed and the values we hold in common. The important values the Council of Europe raises, democracy, human rights and the rule of law, are values we share, and with them we can have the transparency and framework interoperability needed to turn principles and policies into practice. With that I will stop here, but I really appreciate being on this panel.
Moderator:
Thank you very much, Arisa. Before I react, I would like to encourage those who want to interact to stand up and go to the microphones. We have a little less time than planned for the interactive discussion, but we do have some. I think what this panel has shown is that although we share the same goals, we have different systems and different traditions and cultures, not just legally but also socially, and that is of course a great challenge. As Arisa said, if we want to develop this common framework with the Council of Europe, not just for European countries but for all countries around the world, one of the biggest challenges is this, and we have heard a few hints at how to address it. Getting governments to commit themselves to follow some rules is the easy part, let us say; in a convention, governments can commit to stick to rules. But since we have such differing systems for making the private sector respect human rights and contribute positively rather than negatively to democracy, how do we deal with these differences? How do we make private-sector actors responsible in a way that lets them be innovative while contributing positively, and not negatively, to our values? This is one of the key challenges for us in working on this convention, because we cannot just rely on one continent's or one country's framework; we have to find the common ground between different frameworks. Having said this, I am happy to see a few people take the floor. Please introduce yourself briefly with your name and then make your comment or ask your question. Thank you.
Audience:
Thank you, sir. My name is Christophe Zeng. I am the founder of the AAA.AI Association, based in Geneva, Switzerland. My question continues the topic we just covered regarding responsibility and trade-offs. I would like to raise what I call a right of humans, in comparison to human rights. Is it not the right of humans to endure the test of time? In order to endure the test of time, is there a right, or a duty, of collective sacrifice? Is there a right, or a duty, to redefine some of our most fundamental beliefs and values? Suppose there existed a solution that could secure that right, but that required us to relent temporarily, or risk relenting permanently, on some of our human rights as defined in the Declaration, because of speed, or because of the need for consultation. For that right to endure the test of time: speed versus rights, consultation versus sovereignty, a right versus rights. If at some point there existed a binary choice, ladies and gentlemen, which right ought we to choose? And my question extends to our colleague at IBM and the broader world community: if we are not to try to solve all problems at the same time, but instead to jointly solve specific questions and tackle the overall question together, are we accepting a sacrifice in the speed of decision-making? Or would we accept that, at some critical point in time, for the endurance of humans as a species, some decisions requiring speed beyond what a fully democratic process can reasonably deliver be made slightly less democratically by those to whom democracy is dearest, and some decisions requiring global consultation be made slightly more democratically by those for whom democracy is a challenge to overcome? Thank you.
Moderator:
Thank you very much. You pose an interesting question. If I may try to summarize it: will we need a right to stay human beings and not turn into machines, because we may have to compete with machines? That is at least one aspect of what I think I heard. Let us take another comment and then see what the reactions are. Yes, please go ahead.
Audience:
Thank you for recognizing me. I am Ken Katayama. Ema-sensei knows me; I work in the manufacturing industry, but I also wear an academic hat at Keio University. My topic relates to some of the American speakers and the concern about bias. Within the Japanese academic world I live in, our concern is that the platformers, Google, Apple, Facebook and Amazon, are basically, quote unquote, American companies. So who decides these algorithms? When we look at the United States, we see extremely deep divisions; on abortion or politics, for example, there are many issues on which the country is very divided. So if the platformers are deciding these issues, then even if it is technically possible to try to avoid bias, who in the end actually decides which answer to go with? Thank you.
Moderator:
Thank you very much. So let me turn to the panel on these two questions. One is how we can stay human, given the growing competition with machines. The other is about bias, where it is not just the AI systems but of course also the data that shapes the bias, and data is not evenly spread across the world: there is more data from some regions and some people than from others. Whoever wants to react; maybe we give precedence to those who are physically present. Thank you.
Liming Zhu:
I think we have a lot of experts online and Professor Ema is here, so just very briefly, and in reverse order. On who makes those decisions: as I alluded to earlier, as a scientist, and I think this goes for all developers and AI providers, these are not our decisions to make. Our job is to expose the trade-offs so that the democratic process can have that debate with data and make informed decisions: there is a dial, so to speak, across privacy, utility and fairness, and society decides where to set it. The technology then assures that the chosen setting is implemented throughout the system, properly monitored, and improved by pushing the scientific boundary. On AI and humans competing: certainly there is a concern. But when AlphaGo beat humans at Go, people worried that humans would stop playing chess and Go; yet at this moment in history, the number of people interested in playing Go and chess is historically high, and the number of grandmasters in Go and chess is historically high. The reason is that humans find meaning in that work and in those games, and they will continue to do so even when AI surpasses them: they learn from AI, they work with AI, and they make a better society. But the speed of change might sometimes be too fast for us to cope with.
Arisa Ema:
Yes. I guess the important thing we should be aware of is that although we talk about artificial intelligence as a technology, it is, as the professor said, a system: not only an AI algorithm or AI model, but AI systems and AI services, and human beings are included within those systems. So we have to discuss human-machine interaction and human-machine collaboration. That perhaps partially responds to both questions: we do not have a clear answer, but we have to discuss human-machine interaction, and human biases are already embedded in these human-machine systems. So, with my academic hat on, I would say we need to focus more on cultural and interdisciplinary discussion of human-machine interaction with artificial intelligence. Thank you very much.
Moderator:
We are approaching the end of this session, and I would like to close with one remark that may show that I am not a lawyer but an economist and a historian. Whenever we talk about crucial moments at which history is about to become completely different from everything before, we should remember that every generation, at every point in history, has thought the same. If we look back 150 or 200 years, to when the combustion engine was spreading across continents, that also had a huge effect. It did not replace cognitive labor, as AI is about to do, but it replaced physical labor with machines, and there you find a lot of comparisons. Engines were used in all kinds of machines, to produce something or to move something or somebody from A to B, and we learned to deal with them. We developed not just one piece of legislation but hundreds of norms, technical, legal and social, for engines used in different contexts. If you take traffic legislation, for instance, we have been able to reduce deaths in car accidents significantly. At the same time, the biggest challenge is reducing the CO2 emissions of engines: after 200 years of using engines, we are still struggling with that problem. There are many more analogies between AI systems and engines, but also differences: AI systems are not physically placed somewhere, and they can be moved and reproduced much more easily than engines. I am very delighted by this discussion, and I hope we will continue it. There are a number of AI-related sessions this week at this IGF, and I am part of a few of them. I hope to see you again, also in the couloirs, because I am really interested in finding, together with you, a solution for how this Council of Europe convention can be a global instrument that will not solve all the problems, but will help us get closer together in successfully using AI for the good and not for the bad. So, thanks a lot for this session, and see you soon. Thank you very much.
Speakers
Arisa Ema
Speech speed
151 words per minute
Speech length
1552 words
Speech time
616 secs
Arguments
Arisa Ema believes in the need for responsible AI and framework interoperability
Supporting facts:
- Arisa Ema serves on the Japanese government's AI Strategic Council.
- She is active in Japan’s initiatives on AI governance.
Topics: AI governance, AI regulation, AI frameworks
Multi-stakeholder discussions are essential in AI governance according to Arisa Ema
Supporting facts:
- The IGF has been appreciated by Ema as a platform connecting various stakeholders and enabling valuable discussions.
Topics: Multi-stakeholder discussions, IGF
Artificial Intelligence should be viewed as a system, not only as an AI algorithm or model.
Supporting facts:
- AI systems include human beings, implying a need for human-machine interaction or collaboration.
- Human biases are embedded in these human-machine systems.
Topics: Artificial Intelligence, Human-machine interaction, AI systems
Report
Arisa Ema, who serves on the Japanese government's AI Strategic Council, is an active participant in Japan's initiatives on AI governance. She ardently advocates for responsible AI and interoperability of AI frameworks. Her commitment aligns with SDG 9: Industry, Innovation, and Infrastructure and SDG 17: Partnerships for the Goals, showcasing her belief in the potential for technological advancement to drive industry innovation and foster worldwide partnerships for development.
Moreover, Ema underlines the crucial need for empowering users within the domain of AI, striving for power equilibrium. The current power imbalance between AI users and developers is seen as a substantial challenge. Addressing this links directly with achieving SDG 5: Gender Equality and SDG 10: Reduced Inequalities.
According to Ema, a balanced power dynamic can only be achieved when the responsibilities of developers, deployers, and users are equally recognised in AI governance. Ema also appreciates the Internet Governance Forum (IGF) as an indispensable platform for facilitating dialogue among different AI stakeholders.
She fiercely supports multi-stakeholder discussions, citing them as vital to AI governance. Her endorsement robustly corresponds with SDG 17: Partnerships for the Goals, as these discussions aim to bolster cooperation for sustainable development. Ema introduces a fresh perspective on Artificial Intelligence, arguing that AI should be perceived as a system, embracing human beings, and necessitating human-machine interaction rather than a mere algorithm or model.
This nuanced viewpoint can significantly impact the pursuit of SDG 9: Industry, Innovation, and Infrastructure, as it recommends the integration of human-machine interaction and AI. Furthermore, Ema promotes interdisciplinary discussions on human-AI interaction as a critical requirement to fully understand the societal impact of AI.
She regards dialogue that bridges cultural and interdisciplinary gaps as essential, given the multifaceted complexities of AI. These discussions will help in identifying biases entrenched in human-machine systems and provide credible strategies for their elimination. In conclusion, Arisa Ema's holistic approach to AI governance encapsulates several pivotal areas: user empowerment, balanced power dynamics, multi-stakeholder discussions, and interdisciplinary dialogues on human-AI interaction.
Her comprehensive outlook illuminates macro issues of AI while underscoring the integral role these elements play in sculpting AI governance and functionalities.
Audience
Speech speed
155 words per minute
Speech length
522 words
Speech time
202 secs
Arguments
It is essential to endure the test of time and maybe redefine some human rights to face future challenges
Supporting facts:
- A potential scenario that requires speed beyond that which a fully democratic process can provide
Topics: Human rights, Future challenges, Decision-making speed, Democratic process
There may be a need for a more democratic approach where democracy poses a challenge to overcome
Supporting facts:
- The call for global consultation for certain decisions that require a more democratic approach
Topics: Democracy, Challenges
Concern over who decides the algorithms of major tech companies
Supporting facts:
- Most of the major tech companies are US based.
Topics: Algorithm Bias, Google, Apple, Facebook, Amazon, Policy Making
Report
This discourse unfolds numerous vital topics centring on democratic decision-making, human rights, and the survival of the human species from a futuristic perspective, primarily focusing on the speed and agility of decision-making and potential implications for the democratic process. A thoughtful proposal was advanced for superseding the conventional democratic process when deemed necessary for the species' survival.
This may even necessitate redefining aspects of human rights to better manage unforeseen future challenges. The discussion also touched on circumstances where democracy could pose certain hurdles, suggesting a more democratic model could be beneficial for overcoming such issues. This proposed approach underlines the idea of a holistic, global consultation for such decision-making scenarios, emphasising the inherent value of democratic ethos in contending with complex problems.
A notable argument for enhanced collaboration was presented, stressing the adoption of a concerted, joint problem-solving strategy rather than attempting to solve all problems at once. This suggestion promotes setting clear priorities and addressing each issue effectively, thereby creating a more synergetic solution to thematic global issues.
Within the technology paradigm, concerns were raised about who governs the algorithmic decision-making of major US-based tech companies. The argument underscores the non-transparent nature of these algorithms. Concerns related to potential bias in the algorithms were voiced, considering the deep division on various issues within the United States.
There were calls for transparent, unbiased algorithm development to reflect neutrality in policy-making and respect user privacy. In essence, the conversation revolved around balancing quick, efficient decision-making with the democratic process, the re-evaluation of human rights in the face of future challenges, the importance of joint problem-solving in addressing global issues and maintaining transparency and fairness in technological innovations.
The discourse sheds light on the intricate interplay of politics, technology, and human rights in shaping the global landscape and fosters a nuanced understanding of these issues in connection with sustainable development goals.
Björn Berge
Speech speed
117 words per minute
Speech length
1838 words
Speech time
943 secs
Arguments
AI systems can bring positive changes such as increased efficiency, better decisions, and improved services
Supporting facts:
- AI can increase productivity with mundane and repeated tasks moving from humans to machines
- AI can make better decisions based on big data, eliminating human error
Topics: Artificial Intelligence, Technology Development
Report
Artificial Intelligence (AI) carries the potential to revolutionise various sectors worldwide, due to its capacities for improved efficiency, advanced decision-making, and enhanced services. It can significantly enhance productivity by automating routine and repetitive tasks usually undertaken by humans. Additionally, AI systems can harness big data to make more precise decisions, eliminating human errors and thereby resulting in superior service delivery.
Nevertheless, the growth of AI necessitates a robust regulatory framework. This framework should enshrine human rights as one of its core principles and should advocate a multi-stakeholder approach. It is vital for AI systems to be developed and used in a manner that ensures human rights protection, respects the rule of law, and upholds democratic values.
Aligning with this, the Council of Europe is currently working on a treaty that safeguards these facets whilst harnessing the benefits of AI. This treaty will lay down principles to govern AI systems, with a primary focus on human rights, the rule of law, and democratic values.
Notably, the crafting process of this treaty doesn’t exclusively involve governments, but also includes contributions from a wide array of sectors. Civil society participants, academic experts, and industry representatives all play a crucial role in developing an inclusive and protective framework for AI.
The Council of Europe’s treaty extends far beyond Europe and has a global scope. Countries from various continents are actively engaged in the negotiation process. Alongside European Union members, countries from North, Central, and South America, as well as Asia, including Canada, the United States, Mexico, Israel, Japan, Argentina, Costa Rica, Peru, and Uruguay, are involved in moulding this international regulatory framework.
This global outreach underscores the importance and universal applicability of AI regulation, emphasising international cooperation for the responsible implementation and supervision of AI systems.
Daniel Castaño Parra
Speech speed
158 words per minute
Speech length
911 words
Speech time
346 secs
Arguments
AI regulation is profoundly important
Supporting facts:
- AI is deeply embedded in our societal fabric
- Like any transformative technology, its immense power brings forth both promises and challenges
Topics: AI, Regulation
Latin America faces challenges in AI regulation
Supporting facts:
- The AI landscape in Latin America is both promising and challenging
- Infrastructure discrepancies, variances in technology access, and a mosaic of data privacy norms necessitate a nuanced approach
Topics: Latin America, AI, Regulation
Specific solutions proposed for Latin America
Supporting facts:
- Establish a dedicated entity to harmonize AI regulation
- Promote the creation of technology sharing platforms
- Consider pooling resources to build regional digital infrastructure
- Strengthen data privacy and protection
Topics: Latin America, AI, Regulation
AI has potential to address regional challenges
Supporting facts:
- AI can deliver healthcare to remote Amazonian villages
- AI can predict and mitigate the impact of natural disasters
Topics: Latin America, AI
Report
AI's deeply integrated role in our societal fabric underscores the profound importance of its regulation: the technology exhibits promising transformative potential while simultaneously posing challenges. As it continues to evolve and permeate various aspects of society globally, the pressing need for robust, comprehensive regulations to guide its usage and mitigate potential risks becomes increasingly evident.
Focusing attention on Latin America, the task of AI regulation emerges as both promising and challenging. Infrastructure discrepancies across the region, variances in technology usage and access, and a complex web of data privacy norms present considerable obstacles. The diversity of the regional AI landscape necessitates a nuanced approach to regulation, considering the unique characteristics and needs of different countries and populations.
In response to these challenges, specific solutions have been proposed. A primary recommendation is the establishment of a dedicated entity responsible for harmonising AI regulations across the region. This specialist body could provide clarity and consistency in the interpretation and application of AI laws.
Additionally, advocating for the creation of technology-sharing platforms could help bridge the gap in technology access across varying countries and communities. A third suggestion involves pooling regional resources for constructing a robust digital infrastructure, bolstering AI capacity and capabilities in the region.
The significance of stakeholder involvement in shaping the AI regulatory dialogue is recognised. A diverse array of voices, incorporating those from varying sectors, backgrounds and perspectives, should actively participate in moulding the AI dialogue. This inclusive, participatory approach could help to ensure that the ensuing regulations are equitable, balanced, and responsive to a range of needs and concerns.
Further, the argument highlights the potential of AI in addressing region-specific challenges in Latin America. The vital role AI can play in delivering healthcare to remote areas, such as Amazonian villages, is stressed, while also being instrumental in predicting and mitigating the impact of natural disasters.
Thus, it strengthens its potential contribution towards achieving the Sustainable Development Goals concerning health, sustainable cities and communities, and climate action. In conclusion, while AI regulation presents significant hurdles, particularly in regions like Latin America, it also unveils vast opportunities.
Harnessing the promises of AI and grappling with its associated challenges will demand targeted strategies, proactive regulation, wide-ranging stakeholder involvement, and an unwavering commitment to innovation and societal enhancement.
Francesca Rossi
Speech speed
156 words per minute
Speech length
1134 words
Speech time
437 secs
Arguments
AI is a socio-technical field of study
Supporting facts:
- Francesca Rossi agrees with Ivana that AI is not just pure science or technology, but it also has social impacts.
Topics: AI, Technology, Social Impact
Regulation is necessary on the uses of technology, not the technology itself
Supporting facts:
- Francesca believes that because the same technology can have various applications with different levels of risk, regulation should focus on the usage rather than the technology itself.
Topics: Regulation, AI, Technology use
Companies need to play their role in ensuring their AI technology respects human rights
Supporting facts:
- Francesca speaks from her own experience in IBM, where despite lack of regulation in some regions, they have taken steps to ensure their AI technology respects human rights.
Topics: AI, Companies, Human rights
Companies should have a company-wide approach and framework for AI ethics
Supporting facts:
- In IBM, Francesca explains they have a centralized governance for their company-wide framework for AI ethics.
Topics: AI, Ethics, Companies, Framework
Importance of research in expanding AI capabilities and addressing its limitations
Supporting facts:
- Francesca emphasizes that research can not only expand the capabilities of AI but also help in overcoming its current limitations.
Topics: AI, Research, Capabilities, Limitations
Value of partnerships that are inclusive, multi-stakeholder and global
Supporting facts:
- Francesca stresses the importance of multi-stakeholder partnerships and calls for a harmonized approach in dealing with the crowded space of AI.
Topics: AI, Partnerships, Inclusivity
Report
Francesca Rossi underscores that AI is not simply a realm of pure science or technology; instead, it should be considered a socio-technical field of study, bearing significant societal impacts. This viewpoint emphasises that the evolution and application of AI are profoundly intertwined with societal dynamics and consequences.
Furthermore, Francesca advocates robustly for targeted regulation in the AI field. She firmly asserts that any necessary regulation should focus on the varied uses and applications of the technology, which carry different levels of risk, rather than merely on the technology itself.
This argument stems from the understanding that the same technology can be utilised in countless ways, each with its own implied benefits and potential risks, therefore calling for tailored oversight mechanisms. Francesca’s support for regulatory bodies such as the Council of Europe, the European Commission, and the UN is evident from her active contribution to their AI-related works.
She perceives these bodies as playing a pivotal role in steering the direction of AI in a positive vein, ensuring its development benefits a diverse range of stakeholders. Drawing from her experience at IBM, she reflects a corporate belief in the crucial importance of human rights within the context of AI technology use.
Despite absent regulations in specific areas, IBM has proactively taken steps to respect and safeguard human rights. This underlines the duty that companies need to uphold, ensuring their AI applications comply with human rights guidelines. Building on IBM’s commitment to responsible AI technology implementation, Francesca discusses the company’s centralised governance for their AI ethics framework.
Applied company-wide, this approach implies that it’s pivotal for companies to maintain a holistic approach and framework for AI ethics across all their divisions and operations. Francesca also emphasises the crucial role of research in both augmenting the capabilities of AI technology and in addressing its current limitations.
This supports the notion that on-going research and innovation need to remain at the forefront of AI technology development to fully exploit its potential and manage inherent limitations. Lastly, Francesca highlights the value of establishing partnerships to confidently navigate the crowded AI field.
She fervently advocates for inclusive, multi-stakeholder, and worldwide collaborations. The need for such partnerships arises from the shared requirement for protocols and guidelines, to ensure the harmonious handling of AI matters across borders and industries. In summary, Francesca accentuates the importance of viewing AI within a social context.
She brings attention to matters related to regulation, the function of international institutions, and corporate responsibility. Additionally, she illuminates the significance of research and partnerships in overcoming challenges and amplifying the capabilities of AI technologies.
Ivana Bartoletti
Speech speed
143 words per minute
Speech length
1667 words
Speech time
701 secs
Arguments
AI brings enormous innovation and progress if done correctly, but also presents risks such as perpetuating existing inequalities and bias.
Supporting facts:
- Society already combating harms like bias and disinformation on AI.
- Seen AI impact in credit given to women and job adverts targeted to women.
- Predictive technologies can lead to self-fulfilling prophecies of those biases.
Topics: Artificial Intelligence, Innovation, Progress, Risk, Inequality, Bias
Updating existing legislation, mandatory discrimination risk and equality impact assessments, and focus on accountability and transparency can help manage bias in AI.
Supporting facts:
- Positive action measures and positive obligations can help tackle algorithmic discrimination.
- Certification mechanisms and statistical data can provide insights into discriminatory effects.
- Consider establishing legal obligations for users of AI systems.
Topics: Legislation Update, Discrimination Risk, Equality Impact Assessments, Accountability, Transparency
Report
Artificial Intelligence (AI) is a potent force brimming with potential for immense innovation and progress. However, it also presents a host of risks, one key issue being the perpetuation of existing biases and inequalities. These problems are particularly evident in areas such as credit provisions and job advertisements aimed at women, illustrating the tangible impact of our current and prospective use of AI.
There's a worrying possibility that predictive technologies could further magnify these biases, leading to self-fulfilling prophecies. Importantly, addressing bias in AI isn't merely a technical issue; it's also a social one and hence necessitates robust social responses. Bias in AI can surface at any juncture of the AI lifecycle, as it blends code, parameters, data and individuals, none of which are innately neutral.
This complex combination can inadvertently result in algorithmic discrimination, which might clash with traditional forms of discrimination, underlining the need for a multidimensional approach to tackle this challenge. To effectively manage these issues, a comprehensive strategy that includes legislative updates, mandatory discrimination risk assessments and an increased emphasis on accountability and transparency is required.
By imposing legal obligations on the users of AI systems, we can enforce accountability and regulatory standards that could prevent unintentional bias in AI technologies. Implementing measures for positive action, along with these obligations, could provide a robust framework to combat algorithmic discrimination.
In addition, the introduction of certification mechanisms and use of statistical data can deliver insightful assessments of discriminatory effects, contributing significantly to the fight against bias in AI. Such efforts have the potential to not only minimise the socially harmful impacts of AI, but also reinforce the tremendous potential for progress and innovation AI offers.
In summary, it’s clear that the expansion of AI brings significant risks of bias and inequality. However, by adopting a broad approach that encapsulates both technical and social responses while emphasising accountability and transparency, we can navigate the intricacies of AI technologies and harness their potential for progress and innovation.
Liming Zhu
Speech speed
181 words per minute
Speech length
1636 words
Speech time
543 secs
Arguments
Australia has been focusing on operationalizing responsible AIs
Supporting facts:
- Australia started developing its AI ethics framework, led by Data61, in late 2018 and 2019.
- Australian National AI Center, hosted by Data61, was launched two years ago primarily focusing on responsible AI at scale.
- Data61 has created think tanks to help Australian industry navigate the challenge of adopting AI responsibly.
Topics: Artificial Intelligence, Ethics, AI Governance
Trade-offs must be made in data utility, privacy, and fairness
Supporting facts:
- There are trade-offs between data utility and privacy.
- Advancements are needed to enable decision makers make informed decisions.
- Preserving privacy might harm fairness in some cases.
Topics: Data Governance, Privacy, Fairness
Responsible AI needs to be viewed through the lens of ESG
Supporting facts:
- Ground work is being done to consider responsible AI through ESG.
- It will ensure investors can drive the levers of doing better responsible AI.
Topics: Environmental, Social, and Governance (ESG), Artificial Intelligence
Supply chain accountability is essential
Supporting facts:
- AI bills of materials are being developed to record what sort of AI is in a system and to ensure accountability is shared among the different players in the supply chain.
Topics: Supply Chain, Accountability, AI Governance
AI and human competition should not be a concern, as humans find meaning in work and games
Supporting facts:
- When AlphaGo beat a human champion at Go, interest in playing Go and chess increased.
- The number of grandmasters in Go and chess is at a historic high.
Topics: Artificial Intelligence, Human-AI interaction, AlphaGo, Chess, Go
Exposing trade-offs in AI and allowing for a democratic decision-making process is crucial
Topics: Artificial Intelligence, Decision Making, Democracy
Report
Australia has placed significant emphasis on operationalising responsible Artificial Intelligence (AI), spearheaded by initiatives from Data61, the country’s leading digital research network. Notably, Data61 has been developing an AI ethics framework since 2019, which serves as groundwork for national AI governance.
Furthermore, Data61 has recently established think tanks to assist the Australian industry in responsibly adopting AI, a move that has sparked a positive sentiment within the field. In tandem, debates around data governance have underscored the necessity of finding a balance between data utility, privacy, and fairness.
While these components are integral to robust data governance, they can involve trade-offs, and advances are required to enable decision-makers to make informed, pragmatic choices. In some cases, preserving privacy can itself undermine fairness, introducing complex decisions that necessitate comprehensive strategies.
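A minimal sketch can make this trade-off concrete, assuming a differential-privacy-style mechanism that adds Laplace noise to counting queries; the mechanism, privacy budget, and group sizes below are illustrative assumptions, not figures given by the speakers.

```python
# Illustrative sketch: the same absolute privacy noise is a much larger
# *relative* error for a small group, which is one concrete way in which
# protecting privacy can erode fairness. All figures here are invented.
import random

random.seed(42)

def laplace_noise(scale: float) -> float:
    # A Laplace variate is the difference of two exponential variates.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

epsilon = 1.0                                   # hypothetical privacy budget
true_counts = {"majority group": 10_000, "minority group": 40}

for group, count in true_counts.items():
    noisy = count + laplace_noise(1 / epsilon)  # sensitivity-1 count query
    print(f"{group}: true = {count}, noisy = {noisy:.1f}, "
          f"relative error = {abs(noisy - count) / count:.2%}")
```

Under the same noise scale, the majority statistic barely moves while the minority statistic can shift by several per cent, so any decision keyed to small groups degrades first; this is precisely the kind of trade-off the speakers argue should be exposed to decision-makers rather than silently absorbed.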
As part of the quest for responsible AI, Environmental, Social, and Governance (ESG) principles are becoming increasingly prevalent. Efforts are underway to incorporate responsible AI directives with ESG considerations, thereby ensuring that investors can influence the development of more ethical and socially responsible AI systems.
This perspective signals a broader understanding of AI’s implications, extending beyond its technical dimensions. Accountability within supply chains is also highlighted as pivotal to AI governance: work on AI bills of materials aims to record what kinds of AI sit within a system and to share accountability amongst the various stakeholders in the supply chain, a recognition of their collective responsibility.
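No concrete AI-BOM schema was cited in the session, but the general idea can be sketched as a simple data structure; every class, field, and value in the following Python sketch is a hypothetical illustration.

```python
# Hypothetical sketch of an AI bill of materials: a record of which AI
# components a system contains and which party is accountable for each.
from dataclasses import dataclass, field

@dataclass
class AIComponent:
    name: str        # a model, dataset, or library inside the system
    kind: str        # e.g. "model", "dataset", "library"
    version: str
    supplier: str    # the party accountable for this component

@dataclass
class AIBillOfMaterials:
    system: str
    components: list = field(default_factory=list)

    def accountable_parties(self) -> set:
        """The parties that share accountability across the supply chain."""
        return {c.supplier for c in self.components}

bom = AIBillOfMaterials(
    system="loan-approval-service",
    components=[
        AIComponent("credit-risk-model", "model", "2.1.0", "Vendor A"),
        AIComponent("applicant-history", "dataset", "2023-09", "Credit bureau B"),
        AIComponent("tabular-ml-lib", "library", "0.9.3", "open-source maintainers"),
    ],
)
print(bom.accountable_parties())
```

The value of such a record lies less in the data structure itself than in the contract it encodes: every component has a named supplier, so responsibility cannot silently disappear between links of the supply chain.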
In light of AI’s successes in strategy games, exemplified by AlphaGo’s victory at Go, there is reassurance that rivalry between AI and humans is not necessarily cause for worry.
Far from eliminating human involvement, these advances have instigated renewed interest in such games, with the number of grandmasters in both Go and chess reaching a historic high. The discussion also highlighted the shared responsibility for addressing potential bias and uneven data within AI development.
The assertion is that decision-making concerning these issues should not be solely the responsibility of developers or AI providers, suggesting that a collective approach may be more beneficial. In summary, it’s crucial to incorporate democratic decision-making processes into AI operations.
This could involve making visible the trade-offs in AI, which would allow for a more informed and inclusive decision-making process. Overall, these discussions shed light on the multifaceted and challenging aspects of responsible AI development and deployment, providing clear evidence of the need for a comprehensive, coordinated approach to ensure ethical AI governance.
Merve Hickok
Speech speed
143 words per minute
Speech length
1025 words
Speech time
429 secs
Arguments
Innovation and protection should coexist in AI regulation
Supporting facts:
- Regulation creates clarity and safeguards, making innovations better, safer, more accessible.
- Genuine innovation promotes human rights, engagement, and transparency.
Topics: AI regulation, Innovation
Rule of law standards must apply to both public and private actors
Supporting facts:
- Businesses also have an obligation to respect human rights and rule of law
- UN’s Guiding Principles for Business reflect this
Topics: AI ethics, Private sector, Public sector
Domination of industry in the policymaking process undermines democratic values
Supporting facts:
- Democracy requires public engagement
- Industry dominance is likely to exacerbate existing concerns about replication of bias, displacement of labor, concentration of wealth, and power imbalances
Topics: AI policy, Democracy
Report
The comprehensive analysis conveys a positive sentiment towards the regulation and innovation in Artificial Intelligence (AI), emphasising their coexistence for ensuring better, safer, and more accessible technological advancements. Notably, genuine innovation is perceived favourably as it bolsters human rights, promotes public engagement, and encourages transparency.
This viewpoint is grounded in the belief that AI regulatory policies should harmonise the nurturing of innovation and the implementation of essential protective measures. The analysis also underscores that standards based on the rule of law must apply universally to both public and private sectors.
This conviction is influenced by the United Nations’ Guiding Principles for Business, which reinforce businesses’ obligation to respect human rights and abide by the rule of law. This represents a paradigm shift towards heightened accountability in the development and deployment of AI technologies across different societal sectors.
However, there is significant apprehension surrounding the perceived industrial domination in the AI policymaking process. Such dominance is viewed negatively as it could erode democratic values, potentially fostering bias replication, labour displacement, concentration of wealth, and disparity in power. Critics argue this scenario could compromise the public’s interests.
Moreover, the analysis highlights strong advocacy for the integration of democratic values and public participation into the formulation of national AI policies. This stance is complemented by a call for the establishment of robust mechanisms for independent oversight of AI systems, aiming to safeguard citizens’ rights.
The necessity to ensure AI technologies align with and uphold democratic principles and norms is thus underscored. Nevertheless, the analysis reveals resolute opposition to the use of facial recognition for mass surveillance and deployment of autonomous weaponry. These technologies are seen as undermining human rights and eroding democratic values—an interpretation echoed in UN negotiations.
In conclusion, despite AI offering tremendous potential for societal advancements and business growth, it’s critical for its advancement and application to adhere to regulatory frameworks preserving human rights, promoting fairness, ensuring transparency, and upholding democratic values. Cultivating an equilibrium and a forward-thinking climate for formulating AI policies involving public participation can assist in mitigating and managing the potential risks.
This approach ensures that AI innovation evolves ethically and responsibly.
Moderator
Speech speed
175 words per minute
Speech length
2563 words
Speech time
877 secs
Arguments
AI systems are wonderful tools if they are used for the benefit of all people and not used to hurt people or cause harm
Supporting facts:
- Thomas chairs the negotiations on the first binding AI treaty at the Council of Europe
- The treaty is open to all countries that respect and value the same values of human rights, democracy and rule of law
Topics: Artificial Intelligence, Human Rights, Democracy
Technological progress affecting every single community globally calls for enhanced digital cooperation
Supporting facts:
- The IGF is meeting for the 18th time
- The need to identify and mitigate common risks and to ensure that the benefits of technology genuinely help people and respect fundamental rights
Topics: Technology, Digital Cooperation, Globalization
Benefits of AI include increased efficiency, productivity, elimination of human error, medical breakthroughs
Supporting facts:
- AI can take over mundane and repeated tasks
- AI can assist in making decisions based on big data
- AI can lead to scientific and medical breakthroughs
Topics: AI, Efficiency, Medicine, Productivity
Requirement of a multi-stakeholder approach to develop an international treaty on AI that respects human rights, rule of law and democratic values
Supporting facts:
- Council of Europe is developing a framework convention on AI led by Ambassador Schneider
- The goal is to design, develop and use AI that respects common legal standards
Topics: AI, International Governance, Human Rights, Multi-Stakeholder Approach
AI is bringing great innovation and progress
Supporting facts:
- There has been notable progress and benefits that artificial intelligence and automated decision-making have brought to humanity.
Topics: Artificial intelligence, Innovation, Progress
Bias in AI can perpetuate and code existing inequalities
Supporting facts:
- There have been instances where women were given less credit because women traditionally earn less than men, cases where job adverts for lower-paid roles were served to women, and instances where families were wrongfully identified as potential fraudsters in the benefits system.
Topics: Artificial intelligence, Bias in AI, Inequality
AI should be viewed as a socio-technical tool rather than purely technical
Supporting facts:
- Data, parameters, and humans all play a role in creating code. Consequently, AI involves code, parameters, people and data, and none of this is neutral.
Topics: Artificial intelligence, Technology, Society
AI is a socio-technical field, not just a science or technology
Supporting facts:
- Francesca Rossi agrees with the previous speakers, stating that AI has socio-technical aspects
Topics: Artificial Intelligence, Technology Study
Regulation of AI is necessary but should focus on its uses, not the technology itself
Supporting facts:
- Francesca Rossi and her company believe that regulation does not stifle innovation, but should focus on application scenarios of the technology
Topics: AI Regulation, Technology Risk Management
Companies have a role in ensuring their technology respects human rights and freedoms
Supporting facts:
- IBM, despite having no legal obligation in certain regions, strives to make their technology respect human rights and freedoms
Topics: Human Rights, Company Responsibility, AI Ethics
Research on AI can help to address its current limitations
Supporting facts:
- Francesca Rossi believes that AI research can expand the capabilities of AI and help resolve its current issues
Topics: AI Research, Technology Development
Partnerships are important in AI field, they should be inclusive, multi-stakeholder and global
Supporting facts:
- Francesca Rossi stresses the complementarity of each initiative in the AI industry, and the importance of finding ways to work together
Topics: AI Partnerships, Global Cooperation
AI is deeply embedded in our societal fabric.
Supporting facts:
- From enhancing healthcare diagnosis to making financial markets more efficient
Topics: AI, society
AI regulations are paramount not only to Europe, but to the world and to Latin America
Supporting facts:
- Transparency is necessary in an age where algorithms shape many of our daily decisions. Accountability is essential for societies to thrive, and addressing any errors or discrimination is crucial. We are duty-bound to ensure that AI doesn’t perpetuate existing biases, but instead aids in creating a fairer society.
- A unified regulatory framework is desirable but challenging due to fragmentation, technological gaps, and infrastructure challenges.
Topics: AI, Regulation, Europe, World, Latin America
Establishing regional coordination for harmonizing AI regulation across Latin America is important.
Supporting facts:
- A unified regulatory framework is desirable but challenging due to fragmentation, technological gaps, and infrastructure challenges.
Topics: AI Regulation, Latin America, Regional Cooperation
AI can be a potential solution to regional challenges in Latin America
Supporting facts:
- AI has the potential to deliver healthcare to remote Amazonian villages and to predict and mitigate the impact of natural disasters
Topics: AI, Problem solving, Regional Challenges, Latin America
AI ethics framework in Australia includes traditional quality attributes along with unique AI features
Supporting facts:
- Australia developed AI ethics framework in 2019
- The ethics framework includes elements of transparency, accountability, and explainability
Topics: AI ethics framework, Australia’s AI approach
Recognizing trade-offs in AI data governance
Supporting facts:
- There are trade-offs between data utility and privacy in AI
- AI can also affect fairness when trying to preserve privacy
Topics: Data Governance, Privacy
The lens of Environmental, Social, and Governance (ESG) for responsible AI
Supporting facts:
- Responsible AI is being viewed through the lens of ESG
- Factors such as AI’s environmental footprint and societal impact are considered
Topics: ESG, Responsible AI
Different countries have different traditions, cultures, laws regarding the implementation and managing of artificial intelligence.
Supporting facts:
- Although we share the same goals, we do have different systems, we do have different traditions
Topics: Artificial intelligence, Regulation, Cultural differences
The biggest challenge in developing an international common framework is dealing with differing systems and ensuring that private sectors respect human rights while contributing positively to democracy.
Supporting facts:
- One of the biggest challenges is how to get governments to commit themselves to follow common rules, and how to make private-sector actors responsible in a way that lets them remain innovative while contributing positively, not negatively, to our values
Topics: international relations, human rights, democracy, private sectors
AI is not just a technology but a system that includes humans
Supporting facts:
- AI includes not just algorithms, but models, services, and human interaction
Topics: Artificial Intelligence, Human-machine interaction
There are many parallels and differences between AI systems and engines
Supporting facts:
- Engines replaced physical labor, just as AI is poised to take on cognitive labor
- We have learnt to deal with engines by developing norms
Topics: AI systems, Historical transitions, Engines
Report
Artificial Intelligence (AI) is progressively becoming an influential tool with the ability to transform crucial facets of society, including healthcare diagnostics, financial markets, and supply chain management. Its thorough integration into our societal fabric has been commended for bringing effective solutions to regional issues, such as providing healthcare resources to remote Amazonian villages in Latin America and predicting and mitigating the impact of natural disasters.
Echoing this sentiment, Thomas, who leads the negotiations for a binding AI treaty at the Council of Europe, asserted that AI systems can serve as invaluable tools if utilised to benefit all individuals without causing harm. This view is reflected by Baltic countries, which are also working on their own convention on AI.
The treaty is designed to ensure that AI respects and upholds human rights, democracy, and the rule of law, forming a shared value across all nations. Despite the substantial benefits of AI, the technology is not without its challenges. A significant concern is the bias in AI systems, with instances of algorithmic discrimination replicating existing societal inequalities.
Examples include women being unfairly targeted with job adverts for lower-paid roles and families mistakenly identified as potential fraudsters in the benefits system. In response, an urgent call has been made to update non-discrimination laws to account for algorithmic discrimination.
These concerns have been encapsulated in a detailed study by the Council of Europe, stressing the urgent need to tackle such bias in AI systems. In response to these challenges, countries worldwide are developing ethical frameworks to facilitate responsible use of AI.
For instance, Australia debuted its AI ethics framework in 2019. This comprehensive framework amalgamates traditional quality attributes with unique AI features and emphasises operationalising responsible AI. The necessity for regulation and accountability in AI, especially in areas like supply chain management, was also discussed.
The concept of “AI bills of materials” was proposed as a means to trace AI within systems. Another approach to promoting responsible AI is viewing it through an Environmental, Social, and Governance (ESG) lens, emphasising the importance of considering factors such as AI’s environmental footprint and societal impact.
Companies like IBM are advocating for a company-wide system overseeing AI ethics and a centralised board capable of making potentially unpopular decisions. Despite the notable differences between countries regarding traditions, cultures, and laws governing AI management, focusing on international cooperation remains a priority.
Such collaborative endeavours aim to bridge the technological gap by creating technology-sharing platforms and encouraging a multi-stakeholder approach to treaty development. This spirit of cooperation is embodied by the Council of Europe’s coordination with organisations such as UNESCO, the OECD, and the OSCE.
In conclusion, while technological advances in AI have led to increased efficiency and progress, the need for robust regulation, international treaties, and data governance is more significant than ever. It is crucial to ensure that the use and benefits of AI align with its potential impact on human rights, preservation of democracy, and promotion of positive innovation.