Upholding Human Rights in the Digital Age: Fostering a Multistakeholder Approach for Safeguarding Human Dignity and Freedom for All
11 Oct 2023 05:00h - 06:30h UTC
Event report
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Full session report
Audience
In a recent discussion on Internet Governance, Peggy Hicks emphasized the importance of diverse participation in conferences to obtain a variety of ideas and insights. She highlighted the need for representation from different regions and backgrounds to ensure a comprehensive approach in decision-making processes. Carolyn Tackett also stressed the significance of providing meaningful access to these spaces for individuals whose voices need to be heard, particularly those from marginalized communities.
However, concerns were raised regarding the upcoming Internet Governance Forum (IGF) meeting in Saudi Arabia. Carolyn Tackett questioned the suitability of Saudi Arabia as the venue for this important forum, citing potential implications for stakeholder involvement and safety. This raised concerns about adherence to principles of global representation and the ability to ensure a safe and inclusive environment for all participants.
The potential impact of artificial intelligence on job loss was also discussed. There is a fear that the increasing use of AI could reduce job opportunities, as tasks that previously required a team of researchers can now be accomplished by a single application. This raises concerns about the future of employment and the need to balance technological advancement with job security.
Cyberbullying, particularly targeting vulnerable groups such as women politicians, was highlighted as a prevalent issue in Kenya. This underscores the urgent need to address this form of harassment, protect individuals’ right to safety online, and implement effective policies and strategies to prevent and combat cyberbullying.
The incident involving Worldcoin collecting data from Kenyan citizens without their awareness underscored the need for robust data protection and privacy regulations. It is essential to ensure that individuals maintain control over their personal data and are aware of how it is being used, particularly by technology platforms and companies.
Furthermore, the lack of international regulation and oversight in areas such as artificial intelligence and data protection was identified as a concerning issue. The Worldcoin incident illustrated the consequences of inadequate regulation, emphasizing the necessity for global standards and cooperation in addressing emerging technologies.
In the context of the Global Digital Compact, gender equality was highlighted as a cross-cutting theme. Efforts are being made to promote gender equality in the digital space, with the Alliance for Universal Digital Rights (AUDRi) championing principles such as gender equality in the Global Digital Compact. AUDRi also holds that human rights should be a fundamental principle of the Compact, emphasizing the need to prioritize and safeguard human rights in the digital sphere.
The exclusion of individuals with lived experience and systemic issues like visa problems at conferences were identified as barriers that need to be addressed. Repeated visa problems highlight disparities in global mobility and the need for accessible processes that ensure equal participation for attendees. Moreover, there was a call for conferences to address these systemic issues and create a more inclusive environment for all participants.
The idea of decolonizing technology and including diverse representation, particularly from the queer community, in IGF debates gained attention. The lack of representation from the queer community, especially transgender individuals, at previous IGFs was criticized. It was highlighted that the queer community faces violence both online and offline, making their representation in these discussions crucial. Additionally, the importance of including more youth perspectives was emphasized to ensure a fresh and inclusive dialogue in internet governance.
In conclusion, the discussions on Internet Governance covered various important topics. The need for diverse and representative participation, addressing concerns about the selection of venues, understanding the implications of artificial intelligence, combatting cyber bullying, protecting data privacy, regulating emerging technologies, promoting gender equality and human rights, addressing systemic barriers, and inclusion of marginalized communities were identified as key areas for further attention and action.
Peggy Hicks
During a discussion on the challenges of the digital age, the importance of a human rights framework was widely acknowledged. This framework serves as a universal agreement that guides ethical decision-making in relation to the internet, digital technology, and artificial intelligence. It was recognised that these advancements present challenging issues that require careful consideration of their impact on individuals and society as a whole.
The human rights framework ensures that all voices, particularly those directly affected by digital technologies, are included and represented. It promotes inclusivity and prevents the domination of certain regions or sectors in discussions about these issues. This is crucial for achieving a balanced and holistic perspective, allowing for a more comprehensive understanding and effective decision-making processes.
The discussion also highlighted the need for meaningful engagement from all communities and adequate resources for researchers. This emphasised the importance of a multi-stakeholder perspective in addressing the challenges posed by the digital age. Including input from various stakeholders, such as individuals, communities, industry experts, and policymakers, ensures a diversity of perspectives, fostering solutions that are informed by the needs and concerns of all stakeholders.
Additionally, the issue of representation at global conferences and the impact of visa issues were discussed. It was noted that there have been persistent issues with global perspective and participation at conferences, as certain stakeholders are absent due to visa limitations. This creates a disadvantage as valuable insights and experiences are missed, hindering the effectiveness of these conferences. The need to address this issue and find ways to facilitate global representation and participation was underlined.
Furthermore, the topic of internet shutdowns was deemed relevant and should continue to be discussed. Internet shutdowns restrict access to information, impede freedom of expression, and have negative implications for individuals and societies. The previous year’s Internet Governance Forum (IGF) also highlighted this issue, further emphasizing the importance of continued attention and action to address this concern.
Data protection, privacy, and transparency were identified as crucial elements in the discussions on artificial intelligence (AI). It was recognised that the challenges related to AI primarily stem from data issues. Protecting personal data, ensuring privacy, and promoting transparency in the use of data are essential for addressing the ethical and societal implications of AI.
In conclusion, the extended summary underscores the significant role of a human rights framework in the digital age. It highlights the importance of inclusion, representation, meaningful engagement, and a multi-stakeholder perspective for addressing challenging issues related to the internet, digital technology, and AI. The impact of visa issues on global representation, the need to continue addressing internet shutdowns, and the focus on data protection and transparency in AI discussions were also noteworthy points raised. These discussions serve as a reminder of the ongoing importance of fostering dialogue and finding ethical and responsible solutions in the rapidly evolving digital landscape.
Eileen Donahoe
The analysis highlights several important points made by the speakers. Firstly, it emphasizes that the governance of AI should be grounded in international human rights law. AI has implications for privacy, equal protection, non-discrimination, and freedom of expression, among other human rights considerations. Therefore, it is crucial to ensure that AI development and implementation adhere to a framework that respects and upholds these fundamental rights.
Additionally, the analysis underscores the significance of digital inclusion. With 2.6 billion people still unconnected globally, the majority of whom are women and girls, there exists a significant digital and gender divide. Bridging these gaps and ensuring equal access and participation for all individuals, regardless of their gender or background, becomes imperative.
The importance of striking a balance between protecting the integrity of information and safeguarding freedom of expression is also highlighted. This requires taking into account the interconnected nature of human rights, human dignity in the digital context, and multi-stakeholder processes. It underscores the need to find a middle ground that respects both information integrity and the fundamental right to freedom of expression.
The analysis further emphasizes the essential role of multi-stakeholder processes in protecting human rights. By ensuring the inclusion of diverse perspectives and interests from various stakeholders, these processes can effectively shape policies and practices that impact human rights. This highlights the importance of inclusive and participatory approaches to governance.
Elevating human rights throughout U.S. cyber and digital policy is identified as a crucial objective. It is important to integrate human rights principles into U.S. policies and practices in the digital sphere to promote peace, justice, and strong institutions, as outlined in the relevant Sustainable Development Goals.
The analysis also highlights the increasing expertise of civil society in tech policy and internet governance. Engaging civil society organizations and individuals in shaping technology-related policies and practices can lead to more inclusive and equitable outcomes, ensuring that human rights considerations are appropriately addressed.
The need to prioritize human rights and inclusion on the agenda of the Internet Governance Forum (IGF), leadership panels, and the Multistakeholder Advisory Group (MAG) is emphasized. Ensuring the accountability of these entities is crucial for incorporating human rights and inclusion considerations into internet governance. This underlines the importance of ongoing dialogue, collaboration, and monitoring to promote responsible and rights-based approaches to technology governance.
While technology inclusion offers numerous benefits, it also carries risks such as surveillance, censorship, and control of information by authoritarian governments. Achieving a balance between the benefits and risks of technology inclusion presents a challenge that requires careful consideration and effective safeguards to protect individuals’ rights and freedoms.
Addressing gender issues in the technology and human rights conversation is also highlighted. Women are disproportionately excluded from connectivity, and technology is often used in ways that specifically impact women and girls. Therefore, gender-sensitive approaches to technology development, deployment, and governance are vital, alongside efforts to address the gendered impacts of technology on individuals and societies.
The analysis also recognizes the need for further research and analysis regarding the impact of AI on labor displacement. This under-explored area warrants attention to understand the potential effects on employment and develop strategies for mitigating any negative impacts. It also emphasizes the importance of considering the societal implications of technological advancements beyond immediate benefits and conveniences.
A better understanding of the decision-making process in technology governance is deemed necessary. Transparent, accountable, and inclusive decision-making processes are advocated to ensure that technology-related decisions are made with democratic principles in mind.
The analysis further highlights the importance of tech regulation consistent with human rights, while also capable of limiting content-related harms. Striking a balance between protecting individual rights and addressing the potential negative consequences of certain forms of online content is a key objective.
Technology is seen as a potential solution to address issues such as violence against women and human rights violations. Utilizing technology as a tool can contribute to creating safer and more inclusive environments where individuals’ rights are respected and protected.
Finally, the analysis emphasizes the need for translation and understanding between the tech community and the norms community. Bridging the gap between these two communities, which often have different perspectives and languages, is crucial for effective collaboration and the development of responsible and rights-based technology policies and practices.
In summary, the analysis highlights the interconnectedness of technology, human rights, and governance. It underscores the need for inclusive and participatory approaches, where diverse perspectives are considered, and the rights and dignity of individuals are protected. The insights gained from the analysis provide valuable considerations for policymakers, advocates, and other stakeholders working in the fields of technology and human rights.
Mallory Knodel
The analysis of the speeches highlights several important points regarding human rights, internet governance, and related issues. One of the main arguments made is that human rights serve as a crucial mechanism for addressing pressing social issues. It is emphasised that human rights are tangible and useful in discussing and addressing the most urgent challenges of today.
Furthermore, the relationship between the technical community and the human rights framework is examined. It is noted that the technical community consists of various stakeholders, including industry and states, and it is important to establish their relationship to the human rights framework. This highlights the need to understand how the technical community can contribute to the promotion and protection of human rights within their respective domains.
Censorship and internet resilience are identified as significant concerns that need to be addressed. The analysis suggests that censorship and internet resilience are recurring issues that are discussed in various forums and technical communities. This highlights the importance of actively engaging in conversations surrounding these topics and finding effective solutions.
The speakers emphasise the importance of placing human rights at the centre of engagement on internet issues. They argue that considering human rights in all fora and in the Internet Governance Forum (IGF) is essential. This includes bringing up human rights issues and teasing out the most important aspects for people worldwide. It is also noted that internet governance meetings have taken place in countries with questionable human rights records. This serves to highlight the need to ensure that human rights remain a significant part of the conversation, regardless of the location or host.
The analysis also explores the issue of cyberbullying, emphasising the need for research, understanding, and a nuanced perspective to address such problems effectively. The report by the Center for Democracy and Technology on the experiences of women of colour in US politics is referenced, along with the requirement for platforms to open up their data to researchers. It is argued that empowering users with more agency and tools to block and report cyberbullying at scale is crucial.
Privacy emerges as a complex issue within the current internet landscape. Despite technical progress, there is an ongoing privacy crisis. The impact of business models on end users and the potential for surveillance by regimes are highlighted. The analysis suggests that reviewing the business model and its impact on the end user is crucial in addressing these privacy concerns.
The internet governance landscape is observed to be increasingly complex, with new issues and dimensions coming into play. The creation of new mechanisms within the United Nations to tackle these issues is proposed, highlighting the need for continuous adaptation and engagement to stay abreast of the evolving internet governance landscape.
Lastly, there is a warning against technocratizing all social issues and placing them solely within the technical bucket. It is argued that not all social issues can be addressed solely through technical means, and a holistic approach is necessary to effectively tackle these challenges.
In conclusion, this comprehensive analysis of the speeches highlights the importance of human rights in addressing pressing social issues, the need to establish the relationship between the technical community and human rights, and the significance of addressing censorship, internet resilience, cyberbullying, and privacy concerns in internet governance discussions. It also underscores the increasing complexity of the internet governance landscape and the importance of avoiding a purely technocratic approach to addressing social issues.
Cameran Ashraf
The concept of human dignity in relation to technology and human rights is a complex and often under-discussed issue. Conceptions of dignity vary by geography and are constantly evolving. However, questions surrounding the dignity of individuals and their place in society with regards to technology have become increasingly salient.
One of the main concerns is how technology, particularly artificial intelligence (AI), can infringe upon human dignity. There are worries about AI tracking individuals without their consent, predictive content that can manipulate or harm individuals, the digital divide which widens inequality, ageism online, internet censorship, surveillance, and the spread of misinformation. These issues raise significant ethical and human rights concerns.
On a positive note, platforms like Wikipedia are built on the principle of human dignity. Wikipedia is a place where everyone’s contribution is valued and not exploited. Volunteers curate the world’s knowledge and make decisions about content in good faith. The Wikimedia Foundation, which supports Wikipedia, also demonstrates a firm commitment to human rights standards. By allowing individuals to freely contribute to the world’s knowledge, Wikipedia’s contribution-based model is seen as a reflection of human dignity. It upholds values of inclusivity, collaboration, and the recognition that every individual has something valuable to offer.
It is argued that there is an urgent need to broaden the scope of human rights teams in tech companies. Currently, these teams primarily focus on state-based violations, privacy, surveillance, and freedom of expression. However, there is a need to also address broader issues such as gender equity and queer representation. By considering a wider range of human rights concerns, tech companies can better promote inclusivity and equality within their platforms and services.
AI, despite generating skepticism, has the potential to be a valuable resource in providing access to information. For example, AI can be beneficial in translating articles across different languages on Wikipedia, which supports over 300 languages. This has the potential to bridge gaps in knowledge and contribute to the democratization of information. By leveraging AI, platforms like Wikipedia can help overcome language barriers and ensure that knowledge is accessible to a broader audience.
While AI may disrupt labor, it is important to build opportunities for people and communities to contribute their knowledge and perspectives. By allowing individuals to contribute in their own language, AI can potentially offset disruptions caused by automation and create a more inclusive and equitable technological landscape.
In order to protect individuals and uphold human dignity, there is a call for the establishment and enforcement of laws, regulations, and social norms around technology, particularly AI. Governments have a responsibility to protect their citizens, which implies recognizing the intrinsic worth of all individuals. Such frameworks should be based on the fundamental concept of human dignity and aim to safeguard individuals from potential harm or exploitation.
In conclusion, the concept of human dignity in relation to technology and human rights is a complex and multifaceted issue. Concerns about how technology affects human dignity, such as AI, surveillance, and internet censorship, have gained prominence. However, platforms like Wikipedia demonstrate a commitment to human dignity through their inclusive and collaborative model. It is important to expand the scope of human rights teams in tech companies to encompass broader issues like gender equity and queer representation. AI has the potential to bridge knowledge gaps and democratize information, while also providing opportunities for individuals and communities to contribute. Ultimately, it is crucial to establish and enforce laws and regulations that uphold human dignity in the face of technological advancements.
Marielza Oliveira
During the discussion on digitalisation, several important points were raised. One of the key concerns was the uneven distribution of the benefits brought about by digitalisation. While digitalisation is advancing rapidly, not everyone is reaping its rewards equally. This inequity highlights the need for regulation in digital spaces to ensure that technology is used in a way that benefits all members of society.
Currently, there is a lack of regulation in digital ecosystems, particularly when it comes to social media and artificial intelligence (AI). The absence of such regulation allows for potential misuse and harm. Recognising this, UNESCO considers regulation and standards crucial for ensuring oversight and protecting the public good. Countries at the forefront of digital transformation have acknowledged the need for the development and implementation of regulations in digital spaces and technologies. However, to effectively address this issue, global standards and guidelines are necessary to facilitate collaboration between governments, the private sector, and civil society.
Another significant aspect of digitalisation is the importance of building and strengthening institutional capacities. Digitalisation requires individuals and organisations to possess the necessary competencies to harness the potential of digital technologies and platforms while also addressing the challenges they bring. Media and information literacy plays a key role in equipping individuals with critical thinking skills, technical expertise, and knowledge to navigate digital information ecosystems and avoid falling victim to misinformation. UNESCO places priority on enhancing the capabilities of decision-makers, educators, judicial operators, and young people, recognising their potential to have the widest and deepest impact.
The discussion also shed light on the prevalence of online violence, particularly against women. Women journalists, in particular, face significant harassment online, with a startling 73% experiencing such abuse. Disturbingly, a high proportion of those who suffer online harassment also suffer offline attacks. The session emphasised that online violence has become a new front line for professionals, and urgent action is needed to address this issue and ensure the safety and well-being of those affected.
Additionally, the negative impact of technology on labour protections was highlighted. With the advent of artificial intelligence, jobs are changing, and there are concerns that labour protections are being stripped away. An example given was the hiring of nurses in a way similar to ride-hailing services, such as Uber. This precarious work arrangement has negative consequences for patient health and underscores the need to address the potential pitfalls of technological advancements to protect workers’ rights.
Lastly, the session stressed the importance of active participation from various stakeholders in meetings of the Internet Governance Forum (IGF). Regulatory authorities, media representatives, and public prosecutors were specifically mentioned as actors who should actively engage in these meetings. This inclusive approach to governance aims to ensure that discussions on digital technologies and their impact are comprehensive, incorporating a range of perspectives and expertise.
In conclusion, the discussion on digitalisation highlighted the need for regulation and standards in digital spaces, the importance of building institutional capacities and promoting media and information literacy, concerns about online violence, the negative impact of technology on labour protections, and the call for active participation in the Internet Governance Forum. Addressing these issues requires collaborative efforts from governments, private entities, civil society, and individuals to create an inclusive, equitable, and safe digital environment for all.
Gbenga Sesan
The first argument asserts that multi-stakeholder conversations should be inclusive and incorporate all relevant stakeholders. It is important to address barriers to entry, such as visa issues, that prevent certain stakeholders from participating. In doing so, diverse perspectives can be represented, leading to more comprehensive and effective discussions.
The second argument highlights the importance of utilizing data and stories from civil society organizations concerning human rights. These organizations have been actively involved in various human rights issues, and their data can provide valuable insights to improve processes and gain a better understanding of the issues at hand. By incorporating their data into policy-making, decision-makers can make more informed decisions and better address human rights concerns.
The third point emphasizes the significance of universal internet accessibility. Presently, around 2.6 billion people worldwide are not connected to the internet, and various factors, including government actions, contribute to these disconnections. It is crucial to address these issues and ensure that everyone has equal access to the internet. Furthermore, the problem of internet shutdowns needs to be addressed, as they impede people’s access to information and communication.
The fourth argument highlights that human rights should be central to global policy processes. It stresses that everyone, including states, civil society, the technical community, and the private sector, has a role to play in promoting and protecting human rights. Moreover, it is affirmed that respecting human rights is not only morally right but also beneficial for business.
The fifth point specifically focuses on the hosting of the Internet Governance Forum (IGF) by countries that respect internet freedom. It is stated that the previous IGF was hosted by a country that had shut down the internet, causing embarrassment. Therefore, it is argued that any country hosting the IGF should understand and uphold the principles of internet freedom.
The sixth argument emphasizes the need to raise concerns and demand guarantees from nations hosting the IGF. As the IGF is a forum for discussing the internet, including human rights principles, it is crucial to ensure that the host country respects these principles. This is vital for maintaining the credibility and effectiveness of the IGF.
The seventh point stresses that the issue at hand is not tokenism but the genuine respect for rights. It is stated that difficult conversations regarding human rights need to take place, regardless of the location. The emphasis is on truly respecting and upholding human rights rather than merely appearing to do so.
The eighth argument highlights the need to hold countries and platforms accountable for their actions. This includes calling out those that speak the language of human rights but do not genuinely respect rights as they should. By doing so, it ensures that human rights are protected and upheld.
The ninth point addresses the issue of representation, particularly for young people and minority groups. It argues that conversations about representation should continue, as concerns about the underrepresentation of these groups should not be ignored.
The tenth argument highlights the roles of different stakeholders in promoting human rights. The state is regarded as having an obligation to uphold human rights, while the private sector needs to understand that respecting human rights is beneficial for business. Additionally, the technical community is urged to incorporate human rights principles into their work.
Lastly, civil society is encouraged not to shy away from speaking the truth, even to allies. It emphasizes that civil society should fearlessly raise concerns and advocate for human rights, regardless of any alliances they may have.
In conclusion, this summary underscores the importance of inclusive multi-stakeholder conversations, the use of data from civil society organizations, internet accessibility for all, human rights in global policy processes and IGF hosting, raising concerns, meaningful representation, stakeholder responsibilities, and the role of civil society in speaking truth. These arguments and observations highlight the need for a comprehensive and inclusive approach to address human rights issues and ensure that rights are respected and upheld.
Frederick Rawski
The Internet Governance Forum (IGF) served as a platform for discussions on a range of topics, including human rights, regulation, stakeholder collaboration, policy evolution, cyberbullying, inclusivity, and systemic problems. Frederick Rawski represented Meta at the IGF and expressed satisfaction with the significant focus on human rights in all panel discussions.
The argument was made for consistent and principle-based regulations rather than individual companies developing their own rules. Meta faced challenges in fulfilling its commitments due to inconsistent legal conditions, and the company expressed support for the leadership of the United Nations (UN) in improving global cooperation.
The integration of human rights into business practices emerged as a key theme. There was a call for human rights to be treated with equal importance as other business risks. It was observed that the roles of different risks were not balanced in decision-making processes. Stakeholders discussed the importance of harmonizing the concept of risk among all parties involved. The discussions also highlighted the significant gap in understanding and applying human rights principles across stakeholders and the challenge of translating human rights language into actionable steps for engineers and software designers.
The need to address cyberbullying and ensure user control was emphasized. Meta showcased its robust policies on bullying and harassment, and adjustments were made to provide additional protection for women public figures. The discussions also highlighted the role of language when discussing content-related issues.
The importance of inclusivity and addressing systemic problems was stressed. Frederick Rawski suggested that more efforts could be made to ensure inclusivity at conferences and tackle the root causes of systemic problems.
Meta expressed its commitment to actively participate in the IGF and support its initiatives in the future. The company sent a high-level delegation to the event to demonstrate its dedication to the conversation and stated its readiness to support the IGF in any way possible.
In summary, the IGF discussions focused centrally on human rights. The challenges encountered in integrating human rights into business practices and the need for consistent regulations were recognized. The importance of addressing cyberbullying, ensuring user control, and promoting inclusivity was emphasized. Meta’s commitment to actively engage in future discussions and support the IGF demonstrates its dedication to building a more inclusive and ethical internet.
Peter Kirchschlager
The need for ethical regulation in AI, based on human rights, is highlighted as a crucial aspect to consider. It is argued that human rights can serve as a foundational framework for various initiatives related to AI and technology. This common framing allows for the promotion of human dignity and ensures that individuals can lead a life that respects their rights. Furthermore, regulations based on human rights can also foster diversity, freedom of expression, and innovation.
There is a growing consensus on the requirement for a regulatory framework that specifically focuses on AI and is rooted in human rights. Various global processes and consultations have indicated a convergence of ideas in this regard. A universal understanding is emerging that an institution or body should be established at the United Nations to enforce and implement this regulatory framework effectively.
Drawing from the success of the International Atomic Energy Agency, which played a crucial role in avoiding the worst-case scenarios in nuclear technology, it is proposed that a similar international agency should be established to handle AI ethics. This agency would identify ethical opportunities and risks associated with AI, enhance international cooperation in addressing these issues, and ensure that AI benefits both humans and the planet.
In the realm of gender issues, it is acknowledged that technology-based solutions can play a vital role in addressing concerns such as gender-based hate speech and cyberbullying. However, there is a lack of sufficient focus from states and the private sector in tackling these problems.
The impact of AI on human labor is another significant concern. It is observed that economic growth has been accompanied by increasing unemployment rates. This phenomenon highlights the need for interdisciplinary debate and evaluation to better understand the effects of AI on work and employment.
The issue of visas and migration is also discussed, suggesting that a more systemic approach is required to address this matter. It is argued that considering migration in a broader context is essential in effectively dealing with visa issues.
The importance of critically evaluating institutions, structures, and representation is emphasized. It is essential to assess these aspects closely to ensure justice, inclusivity, and reduced inequalities.
Finally, the practical application of the ethical discourse surrounding AI is deemed necessary. The legal discussions and ethical considerations need to be translated into practical implementation, ensuring that ethical principles are upheld in AI development and deployment.
In conclusion, this extended summary highlights the importance of ethical regulation in AI based on human rights. It emphasizes the need for a regulatory framework, the establishment of an international agency for AI ethics, technology-based solutions for gender-based issues, and the evaluation of the impact of AI on human labor. It also emphasizes the significance of addressing migration in a systemic manner, critically evaluating institutions and structures, and applying ethical discourse practically.
Session transcript
Peggy Hicks:
rights in the digital age, fostering a multi-stakeholder approach for safeguarding human dignity and freedom for all. My name is Peggy Hicks, I’m Director of Thematic Engagement, Special Procedures, and Right to Development at the UN Human Rights Office in Geneva. We’ve gathered here an amazing panel, so I won’t take too long at the start, but maybe just to make a couple of framing comments about why we thought that this was a really important conversation to be having at IGF this year. It starts from the idea that the human rights framework is not just a legal obligation, which it is, but is also an incredibly useful tool that needs to be brought into conversations at the Internet Governance Forum and everywhere else where the issues around the Internet, digital technology, and artificial intelligence are being discussed. We see this human rights framework for what it is: a universal set of documents that has been agreed across contexts, across continents, and it provides an enormous amount of resource and material that can help guide some of the tough issues that I’ve heard in the many sessions we’ve all been a part of here in Kyoto. So we’re looking for ways to make sure that that is available to ground some of these conversations. It brings in, of course, the ethical conversations; we are often brought back to the ethics and values language, but we think that the human rights framework is a reflection of our ethics and values, and gives us a place where we’re able to work across all the different stakeholders and contexts in an effective way. I also wanted to emphasize at the start how much this is a global conversation, and how difficult it is sometimes to make sure that that’s reflected in reality as well as in sentiment. We still see that discussions on some of these issues tend to be dominated by certain regions and certain sectors, and that we don’t have enough of the voices of those who are going to be directly affected and are being directly affected by digital technology in the room. And the human rights framework, I think, helps us to make sure that we are listening to the voices of those who are most affected by digital technologies. Finally, I wanted to mention that we’ve asked our panelists to give us a sense of what their expectations are for the Internet Governance Forum and for the Global Digital Compact that my boss, the Secretary-General, has been working on, and how to really advance those conversations from a human rights perspective. And so we’re going to be looking at, for example, concepts like how do we develop a better evidence base for the work that we need to do in the digital sector and on artificial intelligence, and the need for us to have better monitoring and observatories and data that will help us look at these issues. And of course, coming back to the framing of this session, the importance of a multi-stakeholder perspective. And the multi-stakeholder perspective, I have to emphasize, is one that provides not just token participation, but actually meaningful engagement from all communities. And one of the things we keep coming back to is that it’s not just enough to open the doors. It’s also important for the resources to be there to allow that to happen. An example of that is the need for some of the researchers that are going to have access to some of the technologies that we want investigated. Will they have the computing capacity to be able to do that work? 
Do they have the resources to be able to do what we need them to do as researchers and academics in the system? So those are some of the questions that will frame the conversation we’re about to have. As I said, we’re very fortunate to have with us an incredible panel today. I’ll introduce each of them as we go forward. We’re going to start with some initial remarks from each of the panelists, and then we’ll move quickly, I hope, after that into a question and answer, with some time at the end for us to come in with some final comments. So with all that in mind, I’m going to turn to our first speaker. In fact, two of our speakers are going to be online, and our first speaker is Dr. Cameran Houshang Ashraf, who is the human rights lead at the Wikimedia Foundation and teaches in the Department of Public Policy at Central European University. So we’re very fortunate to have Cameran with us, and we’ll turn to him online now.
Cameran Ashraf:
Can everybody hear me OK? Yes, we can. Yes, we can. Super. Wonderful. Thank you. I want to thank the conveners and the organizers, the moderators, and the rapporteur, and all the panelists here. I wish I could be with you. And of course, the panelists online, and everybody who is here or is watching online. My name is Cameran Ashraf. As you just heard, I lead the human rights team at the Wikimedia Foundation, which is the global nonprofit that supports Wikipedia and other digital projects for free knowledge. We provide the technical infrastructure and support hundreds of thousands of volunteers around the world who contribute to Wikipedia. I’m also an assistant professor of new media and global communications in the Department of Public Policy at Central European University in Vienna, Austria. And the subject of this panel, broadly speaking, was safeguarding human dignity and freedom for all in the digital age. I’m personally really appreciative of this choice of wording, as I feel that a strong belief in human dignity is why many of us are here at the IGF or why we work in technology. It’s a complex, contested, and comparatively under-discussed topic within the tech and human rights field. And I think part of the problem and part of the challenge to understanding dignity online is that we actually have yet to agree on what human dignity is offline. How a person is treated, how they’re respected, varies wildly by geography. Borders can make a humongous difference, which demonstrates, to me at least, that conceptions of dignity are in flux and perhaps always have been. This question of the dignity of the individual, and of the individual’s place in society with regards to technology, I think is likely to be one of the great salient issues of this decade. We’re already asking these questions when we ponder how AI might infringe upon our dignity, what AI might do. And also when we think about: where is human dignity in internet censorship and surveillance? What about companies who derive their profits from tracking us without our consent, or predictive content based on inferred emotional states? What about the digital divide, people not having access to the internet? How are the elderly treated online? What happens to our collective discourse when it’s poisoned by misinformation? The core, to me, with all these questions and a lot more that we could spend all day on, is a question of dignity, which is something I think few of us can define or even begin to articulate. I think, really, human dignity is something that we just have a sense of, perhaps an intuition, but it’s not something that we can actually just look up on Wikipedia and conclude the discussion. And, you know, while Wikipedia won’t settle this discussion on human dignity, I believe that Wikipedia itself is really premised upon dignity. To me, it’s the idea that everyone everywhere has something to contribute. And importantly, that what they contribute is not for sale, it’s not to be exploited, and that the individuals who create this knowledge are free to develop their own approaches towards managing the knowledge that they are stewards of. In other words, there is no unelected interference. Yes, Wikipedia is an encyclopedia. It’s not a social media platform, it’s not an opinion page, and volunteers collaborate, debate, deliberate, argue, discuss their edits, and curate the world’s knowledge. 
They provide citations and sources, they weigh multiple perspectives so that they can make good-faith decisions about content together. They really do embrace the spirit of collaboration across national borders to provide the most accurate information possible for the world. And I really encourage you, on a personal note, to dive into Wikipedia in any language. It’s a really, I think, very humbling experience to see how much people have created not-for-profit and because they want to, because they care. These volunteers set and enforce rules for what does and doesn’t belong on the projects, guided by a universal code of conduct, which is supported by the foundation’s genuinely firm commitment to human rights standards. And I think across the foundation, all of our staff and our volunteers share a belief in the dignity of the individual to contribute to the world’s knowledge freely. I look forward to this panel and to expanding and discussing this topic today with both the panelists and the audience. Thank you.
Peggy Hicks:
Thanks very much, Cameran. I think that you’ve started us off on a really important note, really grounding the discussion in those concepts of human dignity and raising the issues that we all know need to be part of the discussion with regards to surveillance, the digital divide, and the impact on vulnerable communities. These are all things that we’re looking for to be part of conversations here at the IGF and in the policymaking bodies across the globe. But I guess one of our challenges is how effectively we are bringing that forward. And with that question in mind, I’ll turn to our next speaker, Dr. Eileen Donahoe, who is the Special Envoy and Coordinator for Digital Freedom at the U.S. State Department, formerly known to those of us in Geneva as the U.S. Ambassador to the Human Rights Council there. Really looking forward to hearing your thoughts on this, Eileen, please.
Eileen Donahoe:
Thanks so much. It is so great to be back in the IGF community. I feel like there’s tremendous energy this year. Like many of you, I think of myself as one of those strange multi-stakeholder animals. I’ve been in different sectors working on human rights and technology issues for a long time. I was in civil society, actually with Peggy at Human Rights Watch. I also was recently at Stanford for the past eight years or so, where I ran a center called the Global Digital Policy Incubator, and we really focused on the implications of tech for democracy and human rights. And as Peggy said, I’m now back in the U.S. government as Special Envoy for Digital Freedom. The way I see my mission, the sort of top-line mission, is to elevate human rights throughout U.S. cyber and digital policy, but also to elevate it internationally in all of the technology conversations. I have identified in my very early days three priorities, with tremendous overlap with the agenda here at the IGF. The first is international AI governance, where the goal, as Peggy said, is to solidify the status of international human rights law as the foundation for governance of AI. And I think that human rights framework is peculiarly well-suited to governance of AI, because if you think about all of the risks and implications of AI that people are concerned about, starting with privacy, equal protection, non-discrimination, concerns about ramped-up surveillance, freedom of assembly and association, freedom of movement, freedom of expression, implications for the information realm, all of those are human rights considerations. There’s also the other side of the equation, which is inclusion in the benefits of AI. And then the other part: I have a little bit of a philosophical streak. I feel like AI is raising these existential questions about the centrality of the human person as the focal point in the future of society and in the future of governance. And for all those reasons, I think the international human rights law framework really, really speaks to the challenges. I will also note that, unlike any other normative frameworks that I’m aware of, it does have the status of international law. It is universally applicable. It’s a shared language across the globe. And for all of those reasons, I think it is just very well-suited for international AI governance. The big move I’d like to make there with everybody here is, many of you will recall, in 2012, there was the first UN resolution on internet freedom, and it laid down that foundational idea of human rights being applicable online as offline. 
You know, back in the days we actually saw these as different realms, and now everything has collapsed together. I think the same move has to be made with respect to international AI governance, because we see this proliferation of risk management frameworks and ethical guidelines, and they sometimes use the same language and mean different things; sometimes they use different language. My observation is that many of the people involved in crafting these frameworks are super well-intentioned and very knowledgeable about the technology, but underexposed to the international human rights law framework. And so I think that is really the job of this community: to advocate for this framework and have it be the foundation upon which risk management frameworks can be built. A second big priority is digital inclusion, in the multi-dimensional sense. It obviously starts with basic connectivity for everyone, and I’m sure this community knows well there’s something like 2.6 billion people on the planet who are still unconnected, and that is a priority. But meaningful inclusion is multi-dimensional. We have to stay focused on inclusion in data, which goes directly to equal protection and non-discrimination; inclusion in content creation, like Wikimedia; inclusion in the coding community, in the governance community, and especially maybe inclusion in decisions about application of AI. So for all those reasons, with this multi-dimensional concept of digital inclusion, we have to remember it isn’t only basic connectivity, it’s all those things. The last point is, of those 2.6 billion who are unconnected, women and girls make up the majority, and that is a really underappreciated fact. It was really brought home to me on day minus one here with a really amazing event. We talk about the gender divide and we talk about the digital divide, but they are really ultimately one and the same thing, and we have to elevate that. And I feel like women and girls are also less likely to be included in all those other layers: in the data, in the content creation, in digital literacy programs. So I really think we have to underscore the gender piece. Last is information integrity, which is a really challenging subject, because we always have to take care not to undermine freedom of expression when we are seeking to stop the erosion of integrity in the information realm. I don’t think anybody has quite figured out how to do that yet, but it has to be prioritized, and I think governments around the world, civil society actors, everybody is getting much more engaged on the practical dimensions of how we do that. I will mention we were just involved in the Canadian-Dutch global declaration on information integrity at UNGA, and we were really pleased to support that initiative. I think it’s got great content, and I think it can really serve as a basis for conversation going forward. Last, I just want to say it’s a really significant moment in the IGF life cycle, and I think everybody here is comparatively more sophisticated about multi-stakeholder governance, but we have to all raise the bar even further. And what does that mean in different layers of the stack and in different sectors, etc.? I don’t think that’s fully fleshed out. And the last point is, this panel is framed around two things. One is the substance of human rights, human dignity in the digital context, and the other one is the multi-stakeholder processes and how do we advance them. Those are not really separate. 
I think we have to remember process impacts substance. And multi-stakeholder process is how we protect human rights. So I’ll stop there.
Peggy Hicks:
Great, Eileen. All sorts of wonderful points there that we need to bring in. I really appreciated what you had to say about the multiplicity of frameworks, and the extent to which the human rights framework can ground those other initiatives. It’s not that we have to move away from them, but we need a common framing that pulls them together, so that we’re not spread across too many different places. Your point about gender, I have to say, I have been here a day and a half and I haven’t heard it nearly as often as I would like, so I’m very impressed that you made that. And the points on the way that we move forward on digital inclusion I think are crucial, and I’m hoping we’ll come back to them with some of our other speakers. But I’ll turn now to Professor Peter Kirchschlager, Director of the Institute of Social Ethics at the University of Lucerne. We’ve shared a panel before and we’re back here again, Peter. So please, give us your thoughts.
Peter Kirchschlager:
Well, thank you so much, Peggy. I also wish to thank the organizers for having me on this panel. Being an ethics professor focusing on the ethics of AI, I need to pick up a point Peggy was mentioning beforehand, starting to build a bridge between human rights and ethics. Because I think from an ethics of human rights perspective, it is actually crucial to start with a kind of minimum standard, being human rights, allowing people to survive and allowing people to live a life with human dignity, which is very much informative, I think, for basically the entire value chain of AI. And, interestingly enough, we can also observe a certain converging of ideas when we look at the different processes. I mean, look at the results of the IGF so far, look at the High-Level Advisory Board on Effective Multilateralism. I want to also recognize the consultations in the framework of the Global Digital Compact, but also policy briefs by the Secretary-General, statements by the Secretary-General in the UN Security Council this summer, the latest resolution in the UN Human Rights Council, and also statements by the UN High Commissioner for Human Rights. Converging ideas in the sense that, first, it is clear that we need a regulatory framework; secondly, it is also clear that this regulatory framework needs to be human rights-based; and thirdly, there’s also a converging idea that we need, at the UN, some institution, some body, some entity, taking care of the enforcement, the implementation of this regulatory framework. And I think this reality that we have, these converging ideas coming together on these focal points, is something which we need also kind of to celebrate: that we have already achieved this consensus, with these three characteristics, in the discourse about so-called AI. And I would like to add that, you know, from an ethics of human rights perspective, using a human rights-based approach for the regulatory framework is also good news for technology, because it does not overburden technology with some higher ethos. I mean, human rights is really the minimum standard from an ethical point of view. And secondly, human rights are also able not only to protect, but also to foster and promote diversity, and by that also to promote and foster innovation, by allowing people to think out of the box, by allowing people to freely express their opinion, by allowing people to have access to all the information which is there, which of course is crucial for being innovative. And regarding, you know, the agency: there is a need for a UN body taking care of this existential issue of AI and how we use AI on the global level, including, you know, the huge positive potential AI can have for us as humanity and for the planet, but also looking more precisely at the ethical risks it poses. One proposal for this agency, and that’s, you know, based on my research, a multi-year research project I started at Yale University and finalized at the University of Lucerne, could be to follow the model of the International Atomic Energy Agency. Because I would argue, and that’s as a proposal, as a suggestion for further thought, that, you know, both nuclear technologies and AI share a dual nature, both of them having an ethically positive potential, but also an ethically negative potential. 
And in the field of nuclear technology, and I'm simplifying very much, we basically did the research first, then we created the atomic bomb, unfortunately we used the bomb, and then we realized, as humanity, that we needed to do something about it. So we created the International Atomic Energy Agency at the UN, basically taking care that we can avoid the worst. Even as an ethicist, I'm not so naïve as to not acknowledge that, of course, this is not perfect and has its geopolitical implications, but still we need to acknowledge that the International Atomic Energy Agency was able to avoid the worst. And I would suggest following the model of the International Atomic Energy Agency in the field of AI as well, creating an international data-based systems agency, IDA, at the UN: first, focusing on identifying the ethically positive opportunities that AI is offering to us; secondly, identifying the ethical risks; and thirdly, enhancing international cooperation, collaboration, and technical and technological exchange, and we see here at the IGF how fruitful that can be. It would also benefit from all the different initiatives in this field, bringing them together and combining a multi-stakeholder approach with a multilateral approach, because both have their advantages and disadvantages from an ethical point of view. So, bringing all that together to make sure that, at the end of the day, all humans can benefit from AI and the planet can benefit from AI. Thank you so much.
Peggy Hicks:
Thanks very much, Peter. I think you did something that's not that easy: you gave us an optimistic perspective on where we are, that we have consensus around the needs, in a broad sense. The pathway is there, but how we get there is the critical question. And I think your comments, as well as Eileen's on information integrity, point to the fact that we need to take that conversation to the next level. We've identified these complex issues. We've recognized there's no silver bullet, no switch that we can flip that's going to solve these issues, but we need to look deeper at the different needs. For example, on the broader AI question, there's the compelling idea that we can't expect any one institution to accomplish everything; but the idea, as you put forward, of an observatory, of really having an authoritative body that can at least give us some of the evidence base and the monitoring that we need, is a step in that direction. I'd like to now turn to Mallory Knodel, the Chief Technology Officer at the Center for Democracy and Technology, a key partner of ours on these issues with enormous expertise. So we're really looking forward to hearing Mallory's thoughts, please.
Mallory Knodel:
Thanks, thanks very much to the organizers and for inviting me to talk about this. In fact, I do want to touch on some of the work that we've done with the OHCHR at some point. My response to this is a little bit broader than just what the IGF community does. I think of the whole constellation, and I'm actually looking at the ceiling here: there's a whole constellation of fora out there that govern the internet, that do different things. We're now including AI, not just the internet anymore. And they are all meant to come together here at the IGF, where we present, we share, we analyze all these different issues across fora. So when I think about these issues, I think about that broader landscape, although they all end up landing here, don't they? I want to reaffirm what others have already said, which is that this isn't a panel about ethics; this is about human rights. It's the most tangible, useful mechanism we have to talk about the most pressing social issues of our day. One of my former colleagues, Vidushi Marda, always said human rights is governance with teeth. She talked about that in her paper on AI, but I think it applies across the board when we're talking about governing technology. And because this is a really complicated landscape that's only getting more complicated, I think it can be useful to take a moment to appreciate that, and also to analyze what everyone is currently doing, what their relationships are to human rights, and what their roles and responsibilities are. So we know that states have the obligation. We know that companies have a responsibility to human rights. I wonder, though, sometimes how we conceive of all the other stakeholders. For me, having worked in this field for my entire career as a civil society representative, it is really the vision and the mission of civil society; in essence, it's why we're here. Then we have academia, which I think plays a really important role. But where I work most consistently is in the technical community. And I don't know that we've firmly established what the technical community's relationship to the human rights framework is. I think we're working on that now. The OHCHR report that is soon to come out touches on ways of mechanizing human rights discussions within the technical community and supporting those discussions. But we haven't yet had the philosophical conversation: what is the technical community for, or to, the human rights framework? Now, to reflect for a moment: the technical community, admittedly, is made up of other stakeholders. It's really industry and states. You have increasingly civil society members there, but not that many. So then there's a question: if you have states that are obliged to the human rights framework, and companies that are responsible for it, why isn't the technical community already talking about human rights? Why isn't it already baked in? I think it's comfortable, maybe, to talk more about the technology than to talk about the hard problems, or to talk about the hard problems only in technocratic terms. Technocratic terms are a comfort zone; they're also the language of products and of producing. So there's just a different way of talking about it. When I very first started engaging in the technical community, I found myself always having to say human rights things, but in a totally different way. I had to create these value tradeoffs.
So there was one design possibility over here, but then there's another design possibility over there. Let's just list out the requirements and talk about the tradeoffs, rather than saying this one is better because it's better for freedom of expression, because that wouldn't have gone anywhere. That wouldn't have helped. I think we've made progress. Now that there are more and more human rights advocates in the technical community, it's becoming easier to say this is the end goal, because human rights would be better under this design, but it still very much has to be reconstituted into its constituent parts and then put back together. And that's really important work that we need to keep doing and do more of, not because we should turn technical discussions into political theater, but because, as my colleagues have been saying, these are actually the hard problems. When you characterize the hard problems of technology at their roots, in their real essence of being about people, you'll actually get to the answer quicker and better, and that's all we're really trying to do. And while that can be challenging for people who have only been formally trained in the sciences, it isn't impossible. It wasn't impossible for physics and chemistry; it's not impossible for computer science and engineering either. I should probably be ending soon with my time, but I wanted to quickly say that, substantively, one issue that I've seen travel throughout all the fora, across all the stars in the internet governance constellation, is censorship and internet resilience. And I see that as one of our first starting points: a real end-user, people-centric issue that all fora can engage on, where every technical community conversation, every standard, has a role to play in figuring out how we ensure the internet stays on and that there's meaningful access everywhere. So I'd invite folks to really consider that as a centerpiece for cross-fora engagement on internet issues and human rights. And I would just hope that this conversation continues to talk about human rights, not just at the IGF but in all fora, and at the IGF in perpetuity. It's not that human rights is coming in or out of fashion. It's a constant vessel for all the things that should matter to us, and we need to make sure that no matter what forum it is, no matter where that forum is hosted or what venue we're in, we're able to bring these issues up and tease out the really important things for people everywhere. That's it. Thanks.
Peggy Hicks:
Great. Thanks so much, Mallory. I think you made some fabulous points there, especially this emphasis on the bridges to the communities that have to be part of the conversation. And what I liked about what you said is that it gives us the idea that people out there are having human rights conversations all the time; they're just not aware of it or framing it as a human rights conversation. Those tradeoffs and those discussions around the impacts of technology in different ways are conversations that the technology experts I've engaged with really want to have. And I do think making that bridge, to help with the framework that can underlie those conversations more substantively, will be really helpful. Your points about censorship, about where we need to go on internet shutdowns, and about the need for an approach that puts people at the center are crucial. We're going to turn now to our second online participant. We're very fortunate to have with us Dr. Marielza Oliveira, the Director for Digital Inclusion, Policies and Transformation in UNESCO's Communication and Information Sector. UNESCO is as good at long titles as my office is. Over to you, Marielza.
Marielza Oliveira:
Well, hello, Peggy. Thank you, and hello, everyone. It's a great pleasure to be here with you, and I'm sorry I cannot be in Kyoto in person. We've attended all IGFs since the very beginning, but this year we have a smaller team present. Well, UNESCO is at the front line of seeing the opportunities, but also the barriers, that digital spaces can create, because our mandate is exactly the free flow of ideas by word and image. We've centered our work on fostering the kind of ethical and human rights-based framework that can enable digitalization to bring forward human rights and human dignity, because while digitalization is advancing at a fast pace, it's not really benefiting everyone equally, and it's actually creating quite a lot of harms. Our digital era is a troubled one, and we really need to reboot our digital spaces, particularly by re-grounding them in trust and facts. Regaining ground on our goal of vigorous and healthy public debate requires that we really protect information as a public good and defend freedom of expression and access to information everywhere, but particularly online, because of its scale and reach. Individuals and societies need to relearn the value of facts and knowledge, and we need to support fact-based, evidence-generating bodies such as academia and science institutions, as well as public interest, independent media. And the IGF, as a multi-stakeholder mechanism, actually brings quite a lot to this conversation. We play a leading role in facilitating international cooperation and shaping a human rights-based digital future, because we work on strengthening the human rights-based standards and regulatory frameworks under which digital ecosystems evolve. Last year, for example, we held the Internet for Trust conference, exactly to look at the regulation of internet platforms. And at the end of this month, we're launching our guidelines that center this type of regulation on accountability and responsibility, which is what's missing in digital ecosystems. Regulation and standards are how we ensure oversight, protect the public good, and also encourage investment, because they create level and stable playing fields for innovation to flourish. We see that all the countries at the forefront of digital transformation are engaging in the development and implementation of regulation of digital spaces and technologies, particularly social media and artificial intelligence, because they really see the need for it. But these efforts need to be complemented by global standards and guidelines that facilitate collaboration between actors in government, the private sector, and civil society: this multi-stakeholder approach. And so we have been setting standards in the area of digital transformation. Our recommendations and guidance, for example on transparency of digital platforms, open science and open data, the ethics of artificial intelligence, and ICT competencies for teachers and others, are instruments that foster innovation while protecting human rights and promoting accountability. That's how we center our work. A second line of action that we always bring is to strengthen the institutions and systems that can enable cooperation on digitalization issues.
It's really important that we develop human and institutional capacities to harness the potential and address the challenges of digital technologies and platforms, and we prioritize the capacities of the groups whose decisions and actions have the widest and deepest impact. For example, policymakers and civil servants, particularly judicial operators, since they have a special role in shaping the environment in which our digital ecosystems develop; educators, who are responsible for imparting knowledge in line with 21st-century requirements; and young people, the digital natives who lead this process globally. We value very much the networking, collaboration, and knowledge sharing among stakeholders, and we foster engagement on best practices, on regulation, and on the conversation around human rights. But we must also raise the skills and competencies of the users of digital technologies and platforms, with media and information literacy being essential to build the critical thinking, technical and other skills, knowledge, and attitudes that allow us to derive value from digital information ecosystems and avoid the traps set by misinformation, conspiracy theories, disinformation, hate speech, online incitement to violence, and others. And the stakes are really high. So we really need to bring this conversation forward in a bigger way around the IGF and other digital ecosystems, particularly in this year when we have so many governance changes: the WSIS plus 20 process taking shape, the Global Digital Compact coming up, and other mechanisms such as the AI body that is being thought out, and so on. This is the occasion in which we have the chance to do this big reboot. Thank you.
Peggy Hicks:
Thanks very much. A very helpful perspective there, really linking back to where Cameron started us on the importance of information and data, two points that are crucial to the conversation. But I also liked the points Marielza made around human capacity, institutions and structures, and the need for us to build up the ability to tackle these issues in a human rights-compliant way, and how we get there. I'm sure our other panelists may have some thoughts on that front. There are still two more panelists to go; I appreciate everybody's patience before we get to the questions and answers. But I'm going to turn now to Frederick Rawski, who's the head of human rights policy for Asia-Pacific at META. And my notes say, Frederick, you're also a composer of electronic music. So I don't know if you'll bring that in, but over to you.
Frederick Rawski:
Thank you. How did that fact get in there? I didn't offer that bit of my biography, but I'm happy to discuss it offline. Look, it's hard to be sixth or seventh in line in a conversation. It has benefits and downsides. On the positive side, I have a better sense of where the conversation is going. But I also have so many notes on my remarks now that I'm not sure they're valuable to me anymore. I'd like to thank the IGF, OHCHR, and everyone else for giving us the opportunity, and apologies for my voice; I've lost my voice. I'm personally excited about being part of this panel. I joined the human rights policy team at META last July, after several decades of work in the international civil society space and with the UN, including with the Office of the High Commissioner for Human Rights, and I am committed in my current role as Head of Human Rights Policy for the Asia-Pacific region, part of a larger global human rights policy team, to engaging in all of our work in a multi-stakeholder and consultative manner. With that background, I came to this space with the perspective of a critical outsider, and with a fair amount of skepticism about how successful we could be in building a human rights framing and a human rights approach to the business. I have to say that I've come to appreciate how successful META has been in doing this. There's a long way to go, but a lot of progress has been made, particularly since we adopted our corporate human rights policy in March of 2021. We try to show this commitment rather than just talk about it, but just a few things that we have pushed forward with in the last couple of years: building the human rights team itself, which is relatively new; adopting the human rights policy; launching the Oversight Board, which has adopted human rights as a principal basis for its work; creating a human rights defenders fund; and committing to protecting expression and privacy against overbroad government demands, a commitment we made when we joined the Global Network Initiative in 2013, which I think is another institution worth talking about. We've published two annual human rights reports. This is the most recent one, and what makes it interesting is that it includes a summary of our enterprise-wide salient risk assessment, something we committed to do and are now beginning to publish some of the outcomes of, and it looks across human rights risks throughout the company, up and down the value chain. We continue to publish other forms of due diligence at the country level; we've done one recently on Israel-Palestine, one on end-to-end encryption, and a number of other issues. And over the last two to three years, we've strengthened our engagement with UN actors and agencies, including the UN Global Compact, our commitments to the UNGPs, and engagement with the Secretary-General's office, OHCHR, UNICEF, UNHCR, the Office of the Special Adviser on the Prevention of Genocide, and many special rapporteurs and country teams. And we have a 20-person delegation, I think, here at the IGF, which I'm very proud of, including our President of Global Policy, to represent the commitment the company has to this kind of engagement. I can already hear the groans from some of my civil society colleagues in the audience: OK, yeah, they're listing off all the great things they've done again. So I do want to acknowledge up front that this is not easy.
There are many challenges to integrating human rights standards into company policies and making them more than a fig leaf, making them actually influential on important decisions, even decisive. I work in the Asia-Pacific region, which has an extraordinary amount of linguistic and cultural diversity, a fractured regulatory space, vibrant and growing economies, and governments with, let's say, an inconsistent commitment to democracy and human rights. And as I list those off, I realize that I'm describing the world, not just the Asia-Pacific. So it does sometimes feel difficult, if not impossible, to live up to the dual commitments the company makes: to comply with local law in these many different jurisdictions and, at the same time, to abide by and promote international human rights standards. That's all to say that we seek expert guidance, and we give it where we can. We want consistent and principle-based frameworks, which we can't and shouldn't be developing ourselves. And for these reasons, we strongly support the leadership of the UN in facilitating the global process and improving global cooperation through the GDC, the IGF, and other fora. I'll end by mentioning that we recently made a submission of inputs to the GDC and highlighted a number of actions, some of which we would take collectively in a consultative manner: urging governments to resist policies that enable the misuse of personal data or impose overbroad restrictions on protected speech; supporting the use of end-to-end encryption against overreaching surveillance and encroachments on freedom of expression; and offering support to inclusive capacity-building initiatives for public and private sector actors to prevent and react to harmful, malicious, and hostile behavior online. And this is where my notes get so confused that I can't follow them anymore. But on that last point around multi-stakeholder engagement, a couple of thoughts came to mind as I was listening to others speak. One is, in my role at META, and coming from a civil society and international organization perspective, I do see that there is still a significant gap in understanding across stakeholders. As has been mentioned, conversations are happening all the time about human rights, but they're not being conducted in human rights terms and human rights language, and there's still a long way to go to socialize and communicate that human rights framework. The other challenge in that gap, now that I'm living among engineers and software designers and salespeople, is translating those principles into action. There are many examples. Take the Rabat Plan of Action: how do you turn that into language that can be implemented as policy? How do you then take that language and turn it into something that can be applied at scale, that can be coded, that can be understood by engineers and by people who are promoting the business and other aspects of business policy? The second thing that came to mind is risk, the concept of risk. I'm constantly talking about risk with people, and everybody understands this. There's legal risk, there's business risk, there's policy risk, and there's human rights risk. And I think we're often talking about similar things, but we give them different weight.
They play a different role in the balance of decision-making among different parts of companies and different parts of governments. And I think there's progress that can be made in developing a shared vision of what we mean by human rights risks and impacts, and especially of how they interrelate with these other frameworks for assessing, understanding, and mitigating risk that are much more common and, frankly, more widely understood among many people in the technical and business communities. So I'll leave it at that, and thanks again so much for giving us the opportunity to speak.
Peggy Hicks:
Thanks, Frederick. We really appreciate the company perspective and your willingness to come and be part of a forum like this, and to discuss what META is doing and in what areas there is still room for improvement. I think your point about risk assessment frameworks is a really interesting one that comes up quite a bit for us as well, and an area where we might be able to bring things together a bit better, taking Mallory's point about putting people at the center and the human rights analysis that will help us do that. But fortunately, we have with us one final panelist who has been patient, and we're very much looking forward to his insights. Gbenga Sesan is the Executive Director of the Paradigm Initiative and a member of the IGF Leadership Panel. Over to you, Gbenga.
Gbenga Sesan:
Thank you, Peggy. We're talking about human rights and multi-stakeholder processes, and I think it's a good time to state clearly that we can't have multi-stakeholder conversations if certain stakeholders can't be at the table. It's hypocrisy at best to say that we're having human rights conversations, and that they're multi-stakeholder, when there are stakeholders that can't be at the table. We're at IGF 2023, and I keep hearing stories of people who wanted to be here but couldn't make it because of visas, and it's not just this IGF; it's the IGFs before now and many global processes. There are barriers to entry, and I think we have to address this. You have no idea how dehumanizing it is to stand in front of a visa officer to defend your existence and expertise. It shouldn't even be a thing, because you're coming to contribute to conversations, not trying to do something else. So I think it's important for us to set that as a conversation to continue: if you're hosting the IGF, or hosting anything you call global, then it has to be global. And if it's global, it means you have to open your doors to relevant stakeholders. I know this is part of a bigger migration debate, but we can't pretend this is not happening while we're talking about human rights. And speaking of which, when we talk about global processes, including the GDC conversation, the IGF, and all of the conversations we will have, one of the important opportunities we also have is that we have data, and we have stories on human rights, either human rights violations or human rights defense, from civil society organizations that have been working on these issues for a very long time. And I think it's very important for us to take positive advantage of this information, this data, to improve processes, because when we have these conversations, one of the things we must realize is that there are people with lived experiences that we can't ignore, and these lived experiences will help us understand what the issues are. We don't need to commission a study, for example, to understand some of the violations that happen in countries across the world and how to respond to those challenges. Some of you may be aware that the IGF Leadership Panel presented a paper a few days ago, well, not a few days ago, it feels like a long week already, I think just two days ago actually: the Internet We Want paper. The whole idea of that paper is to ask what internet we have right now, what internet we want, what the gap is, and how we close it. There are five overarching areas: the internet should be whole and open; it should be universal and inclusive; it should be free-flowing and trustworthy; it should be safe and secure; but I want to emphasize the fifth point, which is that it must also be rights-respecting. On rights-respecting, I want to focus on just one tiny area. We talk a lot about the 2.6 billion people who are not connected, and I want them to be connected. I say to people that my life story, my career journey, was made possible because of one email; that is the power of the internet.
So there are 2.6 billion people who are not connected, but don't forget, there are also people who are disconnected, and I want to emphasize that, because we're talking about human rights. There are people whose governments, or certain activities or situations, have rendered them disconnected, and because they are disconnected, we count them as part of the connected, because we're focusing on the unconnected, the 2.6 billion. I think that's really important. I'm glad that the Freedom Online Coalition released a statement this week on internet shutdowns. It is not a conversation we should be having in 2023, but again, it is what it is: the world as it is, and where we need to go. And finally, global policy processes are not for aliens; they're for humans, and so at the center of the conversation should be humans. It should be human dignity, it should be human rights, and everyone has a role to play. States already have the obligation to make sure they implement human rights principles. Civil society does advocacy. Like Mallory said, the technical community needs to bake it in. And for the private sector, it is very clear, at least between the COVID problems we had in 2020 and now, it is very clear for many businesses that human rights is good for business. When people trust your platform, they are very likely to use it and become advocates for it. So I really look forward to the comments and questions we'll have, and the conversations that will continue on this topic of human rights and multistakeholderism, and to making sure that everyone that needs to be at the table doesn't face barriers to entry.
Peggy Hicks:
Well, that was well worth waiting for, Gbenga. I think the points that you make are so crucial to the conversation. I have to pick up the first point, about who's in the room, because this has been a persistent issue, not just at this conference but at many conferences we attend, where we say we want a global perspective but we're not necessarily able to achieve it. And I do think there's a fundamental question there about what we are going to put into that, and what all of those in the room want to say to the governments they're engaged with about what is needed for this forum to be successful. Because it's actually a disservice to all of us: that idea of participation is not about opening the doors as a favor to those who want to participate, as you said. It is a necessity for us to be able to have the experience and knowledge that will allow us to arrive at the right approaches and ideas and insights that we need in a discussion like this. I've gone on too long; it's a point I'm passionate about. But I really appreciated what you had to say as well on the internet shutdown point, a recurrent one from last year's IGF, I'll point out. But now we are finished with the statements from our panelists and really looking forward to seeing whether there are questions from the audience that we can bring back to the panel. If you want to come forward to the microphones, please identify yourself, and try to keep your comment or question short so that we have a chance to bring in as many people as possible. Don't all line up at once, I can't see people. Okay, good. We're getting some engagement here in Kyoto. Please.
Audience:
Hi everyone. Yes, here we go. My name is Carolyn Tackett, I'm with Access Now, a global organisation working to extend and defend the digital rights of people and communities at risk around the world. Thank you so much to the panel for all of your comments, and especially for the reflections about the importance of meaningful access to these spaces, and making sure that the people whose voices most need to be heard when we're having these conversations can actually safely engage in these spaces. I don't think any of these reflections on multi-stakeholderism can really arrive at the stated goal if those people aren't able to engage. So I just want to present a question back to the panel, maybe to you, Gbenga, first, to build on what you've already shared, but Eileen and Frederick, it would be great to hear from you as well: to what extent is the news that we're hearing about the next location for the IGF, in Saudi Arabia, in any way compatible with what you've outlined here? Especially understanding where we are in this cycle for the IGF, coming up at the end of the WSIS plus 20 process, and trying to understand what the future of the multi-stakeholder model looks like, what does a move like this mean for our ability to actually bring people into all these spaces safely and meaningfully? I'd just like to hear what you all have to say. Thank you.
Peggy Hicks:
Great, thanks, Carolyn. We're going to take a couple of questions just to make sure we get things in, and then come back to the panel. I think there is somebody over here, please.
Audience:
Thank you so much. First, I want to appreciate the conversation on the issue of human rights and human dignity. My name is Michi Jumamboko. I'm a member of parliament from Kenya, and I represent the Parliamentary Service Commission on issues of information and public communication. I just want to ask, from the human rights perspective: how are you going to address the fear that artificial intelligence can create job losses, vis-a-vis the jobs it's going to create in terms of research? I'm looking at a scenario whereby a job which could be done by 10 researchers can be implemented by just one application. That is a fear which is shared across developing countries. Number two, there's the issue of cyberbullying, which is rampant, especially in my country, and mostly targeting vulnerable groups, like women politicians. We have that challenge, it is very rampant, and I don't know how we're going to address it. My last question is on the issue of protection of privacy and personal data. Recently in our country, there were some people who came from America calling themselves Worldcoin. Under that banner, they were collecting personal data from Kenyan citizens. It was a big debate, because even the authorities, the Kenyan government, were not aware of what was happening with regard to Worldcoin. So it's like there are no international regulations; anybody can just pop into a certain country and try to collect data. This is a scenario where more fear is being spread among the citizens of certain countries. That is why we really need to have outreach programs across the entire globe, so that at least people understand what artificial intelligence entails, because people even think it can be a threat to internal security. So there is a need, not only to have these big international forums, but to disseminate the same information in our countries, all the way back to our rural areas, where up to now connectivity is still very, very low. People don't understand what is happening. So when you're talking about artificial intelligence, some people think: who's this monster now coming? Is it the same people like Worldcoin taking our data, or is this going to be a threat to our internal security? So there is a lot to talk about, and a lot to discuss and engage people on globally. I thank you.
Peggy Hicks:
Thank you very much for that perspective. Three really important points. I’m going to take one more question before we go back to the panel, please.
Audience:
Hello, I'm Emma Gibson from the Alliance for Universal Digital Rights, or AUDRI for short. And thanks, Eileen, for mentioning the event on day zero minus one that we co-organized, which was around the Global Digital Compact and our launch of ten feminist principles for a Global Digital Compact; you can ask me for a copy and I'll give you one afterwards. It was great to hear some talk about gender. Human rights is the first of our principles, that the GDC should be based on human rights law. But I'd love to hear a little bit more from people about the importance of gender specifically in the Global Digital Compact, as a cross-cutting theme. Thank you.
Peggy Hicks:
Thanks very much for those three sets of questions; we actually ended up with five questions. So I'll come back to the panel. Maybe, Gbenga, since you went last before, you can go first now, please.
Gbenga Sesan:
Thanks for asking that question. First of all, I don't know how many people were in Addis last year for the IGF, where I said a few things. One of the things I said was that it was pretty embarrassing that a country that shut down the internet was hosting the Internet Governance Forum. That may not have been a diplomatic thing to say, but it's the truth. Everyone, including Saudi Arabia or anyone else who hosts the IGF, needs to understand what the IGF means. It means that it is a forum for conversation around the internet, including principles of human rights. I believe that, apart from speaking to state obligations and civil society advocacy, we should not be worried about surfacing the concerns we have and asking questions to get guarantees from anyone who has stepped forward to offer to host the IGF.
Peggy Hicks:
Thank you, Gbenga. We also had questions relating to gender, to cyberbullying, to the protection of privacy and data, and to the impact of AI on the world of work. Who wants to jump in? Mallory, you look ready.
Mallory Knodel:
I can jump in with answers to two of the questions. The first one, specifically on cyberbullying: I just want to highlight a really instructive and informative report that the Center for Democracy and Technology put out about women of color who are politicians in the United States and the experiences they have online. The intersectionality there is a real mess for those folks. And what that research really highlights is a set of elements where all stakeholders have a role to play. I'm not going to outline all the recommendations, but you can go and look. The process by which you actually research the problem and understand, from a nuanced perspective, what's actually going on is the kind of thing you have to conduct every time there is a problem like this. It requires the platforms to open up their data to researchers, and it requires a fine-tooth comb when you're going through what the experiences are. At the high level, the recommendations are things like giving users more agency: they need the ability to block and report, and they need the ability to do that at scale, because the attacks against them are often carried out at scale. Things like that. So I feel like that's a really, really important question, particularly important for those most affected by gender discrimination, and I think it's a great example of the kinds of things we need to pay attention to when we're talking about real-world harms and human rights. But there are, of course, many, many others. The second question I wanted to respond to was about next year's forum, just because I alluded to this at the end of my remarks. I think that irrespective of the actual location or the host, human rights has to be a huge part of the conversation. In fact, sometimes I feel like this conference should really just be a human rights and sustainable development conference in which we talk about the internet and AI sometimes; that would sometimes be more useful and beneficial than making it about the technology. And I want to say that we've had this happen before. We've had internet governance meetings happen in places with questionable human rights records. It happened for the IGF in particular in Turkey in 2014, just after Gezi Park. Singapore hosted the IETF one year, and folks were trying to boycott it because if you were LGBTQ, you were technically not legally allowed to go to Singapore at the time; a real problem. Things like that have not deterred the conversations from happening. In fact, I think we have to lean into it and be louder about it. It's an opportunity to talk about these things in a different way, and so I'd challenge all of us to make sure that we do that.
Peggy Hicks:
Thanks, Mallory. Eileen, you want to come in?
Eileen Donahoe:
It's difficult. So many good questions, and so many layers to them. I will start with the two points from Access Now. On the first one, I have something positive to say. You talked about the necessity of having civil society in the room for these tech policy and internet governance conversations. My observation, particularly at this IGF, is that the expertise in the civil society community has skyrocketed, certainly as compared to governments in understanding how the technology works, and in relation to, let's say, the private sector and technologists in terms of exposure to the relevance of the international human rights law framework. So we talk a lot about capacity building to get people in the room; Gbenga, you said that. I think we also need to think about capacity building for the technology community and for governments. That's just a different angle on the same issue. On Saudi Arabia, I would say it is the responsibility of the IGF community, the Leadership Panel, and the MAG to, as Mallory said, make sure human rights is squarely on the agenda and emphasized, not hidden, and certainly to make sure, on the inclusion piece, that people are paying attention to who's not being allowed in or included, because it might be different. I'm going to take the last question, about gender, and join it with the one from our colleague from Kenya. This whole conversation is about the tension between, on one side, the human rights consequences of exclusion from the enjoyment and benefits of the technology and from the processes of governing it, of being in the room, and, on the other side, the risks of inclusion, whether those are inherent risks of the technology that were not thought through before deployment, or malign applications of the technology by authoritarian governments for surveillance, censorship, and control of the information realm. Those two things are in tension. That event on day minus one really made a giant impression on me about how the gender piece is at the heart of that, both because women are the most excluded, from connectivity itself but also from all the other dimensions of meaningful inclusion, the participation and the benefit, and because women and girls are at the heart of some of the risks of how the technology is used in ways that are peculiar to them. There's a gender dimension to it. And that's why we have to elevate the gender conversation: if you want to understand the dynamics and the tension between both of those sides, which we have to address at the same time, you have to solve for the gender piece. On the other point from our Kenyan colleague, labor displacement, the consequences of AI for labor: that is so under-theorized and so under-appreciated, and I think it's ultimately going to hit us all, in every society. Societies where AI is more embedded are more at risk on the front end of those consequences. There's a whole community of people thinking about that, but they tend to be economists who otherwise work on labor issues. I don't think many in the technology community, or even in the human rights community, are yet focused on it. So I appreciate that question.
Peggy Hicks:
Thanks, Eileen. Just a small comment linking what Mallory said, that what we really need is a human rights conference that brings in the internet, with your comments about the gender dimension. One of the things I'm often struck by is the extent to which we try to separate or solve problems online when, in fact, the online world is a reflection of the world that we live in, and that distinction or separation is never going to be truly successful. Peter, would you like to come in?
Peter Kirchschlager:
Yeah, I would like to pick up on this point, because I see a huge opportunity in that we can actually find technology-based solutions to the gender issues that were raised. It's not rocket science to find ways to identify gender-based hate speech; it's not rocket science to find technology-based solutions to identify cyberbullying. What we are lacking is really the will, be it from states or from the private sector, to make that a main focus for the next year rather than striving for more efficiency, to put it very simply. And regarding the impact on human labor, I couldn't agree more that we are really not paying enough attention to the question of what impact the use of so-called AI has on human labor. We seem to pretend that this is not really happening, and that we still have a capitalist free market striving for full employment, while it's actually going in the other direction. We have now seen years where we had economic growth with unemployment rates also increasing, which is a new phenomenon, also from an economic point of view. And I think we have to initiate an interdisciplinary debate on what we should strive for. Ethics can contribute, for example on the question of what it means for a human to work, or not to find a paid professional task, and of course other disciplines can contribute to trying to find a solution as well.
Peggy Hicks:
Thanks very much. I think there's real agreement on the need for the world of work issue to be looked at more thoroughly. Frederick, would you like to come in?
Frederick Rawski:
Well, thank you. I'll be brief. I agree with everything everyone has said, a hundred times over. What struck me, particularly in the comment around cyberbullying and the comment on the centrality of gender, or its lack of centrality where it should be central, is part of the framing and translation problem I was thinking about in my first comment. These fora are amazing: they're great for talking about the principles, and they're great for bringing stakeholders together. But I find myself, as a human rights lawyer thrust into the center of a giant tech company, always needing to make all of that actionable, to turn it into things we can do, processes, some of them technical, some not, some policy, some messaging appropriately to leadership. Take the cyberbullying example. Human rights is really where we need to start with that question: from the principles, and from the common and shared global vision that we have for them. But very quickly, you have to get to policy. We have a very robust policy on bullying and harassment at META, and yet it needs to be constantly iterated and evolved to take account of particular country contexts, cultural contexts, language contexts. And from there you quickly go to finding mitigations: how we land upon them, how we design them, how we implement them. For instance, we've made adjustments to our policies on women public figures and the kinds of vulnerabilities that they have, and we've made adjustments to bullying and harassment policies that add protections. There's the issue of user control: enhancing user control and transparency so that people have all the tools they need to protect themselves. And then there are many other complicated issues, like language. Every single issue often comes down to language when you're talking about content, and bullying and harassment in particular is a space where we constantly need to evolve those policies. That cannot be done without engagement with communities, without understanding that we don't have ourselves, understanding that is specific to the cultures and communities where the platform operates. So, again, thinking forward to the next IGF and other contexts, it's that next step, from the amazing conversations we've had to finding ways to collectively arrive at specific solutions in some of these contexts. And obviously AI adds a whole other layer to that.
Peggy Hicks:
Thanks very much, Frederick. We have time for a few more questions if there are people who would like to come to the mics. I notice that we didn't thoroughly tackle one of the points raised by our Kenyan parliamentary guest, who asked as well about the privacy side and the data side and how we see those issues. I was in a conversation directly before coming here that really stressed that the theme of this IGF is around AI, but that we need to start seeing the AI challenge as a data issue. At a minimum, one of the key elements here is transparency, and that goes back to what Frederick said as well. But it's a whole other topic, the way data protection is a crucial piece of the AI equation. I see a question over here, please.
Audience:
Sorry, I'm a bit short, obviously. I wanted to follow up on a question that was raised, because I felt like the response wasn't necessarily sufficient, and I think it actually speaks to wider systemic issues. When we're talking about where we're hosting, whether it's the IGF or other forums, in certain contexts, the responses, at least from what I heard, thank you, were more about agenda items, making sure human rights is included on the agenda. But it feels like the people who have the lived experience, who are the most affected, are being excluded by design from these spaces, where we have well-documented evidence from credible sources of the use of technology itself to surveil marginalized and vulnerable people. I think we need to talk about that. And then there's the other piece, about people who are excluded because of visa issues. This is a repeated problem in a lot of different forums and contexts; whether it's the IGF, or RightsCon, where we had this happen as well. And the conversation then just becomes about how we guarantee access in this one context, allowing people to come to a conference, without speaking to the wider issues of the bordering of the world that is being enhanced by technology, where certain groups of people are allowed to move freely while others are restricted. A lot of that is the vestiges of colonialism, and we're not talking about making decolonizing technology a key part of our agenda at the IGF as well, because technology is just reinforcing some of these existing systems that are quite problematic. We see, for example, within the African context, that people have been prevented from moving within their own continent, whereas we already have mechanisms and systems in place, for example the EU model of being able to travel freely within a continent. We know we can do this; the models exist, but they're only afforded to some populations. So I think we need to be speaking about the systemic issues, and they're often ignored. I wanted to hear some of that: what are the actual tangible, concrete actions that are being taken to address this, instead of repeatedly having these come up as talking points in different conferences, including the IGF? Thank you.
Peggy Hicks:
Thank you very much. We have another question over here, I think.
Audience:
Hello, everybody. I am Arnaldo from Brazil, representing the youth of Brazil. And I would like to echo everything that was said before, because I was thinking exactly along those lines. We speak from a Southern perspective, from a youth perspective. And I feel that in the last IGFs we really had insufficient representation of our queer community: we have just a few people who are transgender, and we have sessions of debate that do not represent our perspectives, because we are facing daily violence against our communities, not only on the internet but in person as well. I think we have to face these debates and push these forums to take in our perspective too, and try to bring more youth and queer perspectives to the debates, not only Northern ones. I just wanted to add this commentary and bring this way of thinking, so that in the next ones we can build something more queer. Thank you.
Peggy Hicks:
Thanks very much. I don't see any other questions, and I told the panelists that we'd come back to them for a final comment, so what I think we'll do is respond to the two final questions or comments here as part of your final remarks. I didn't give anybody an order, but I'm thinking about going in reverse order, if that's fair, again. Gbenga, do you mind going first?
Gbenga Sesan:
I don't mind. I think it's important to reemphasize that the conversation is not about getting a human rights panel onto the agenda in Saudi Arabia. That would be tokenism, and we're not talking about tokenism. We're talking about respect for rights, and being seen to respect rights. And I think this is really important, because when I spoke about barriers earlier, this is lived experience; this is not theory for people. There are people who have had experiences that are not just dehumanizing but have also affected the spaces they can go to, the opportunities they can get, and the things they can do. So this is not just about a panel, or about getting certain colors of faces on panels. It's about making sure that when we have to have the difficult conversations, we have those difficult conversations, regardless of where the forum is held. And by the way, this is not just about the next IGF; this is about continuing IGFs and continuing global forums. There are times when we even need to call out countries and platforms that speak the language but do not respect the rights as they should. In terms of representation, this is a conversation that continues, and I'm glad to know that there are tracks and panels and all that. But as long as we keep getting those questions, that young people are not represented, that minority groups are not represented, these are not things we should ignore. We should pay attention to them and make sure that things as simple as the guidelines for how to organize workshops are literally implemented, so that they become an opportunity. I'm grateful that we've had this conversation today, and I hope it's not just going to end with this panel. I hope the conversations will continue in the hallways and beyond, about what we must do. States have an obligation. Private sector: human rights is good for your business; it's no longer about trying to emotionally blackmail you. Technical community: you have to build it in. And civil society: we must not shy away from speaking truth to anyone, including even our allies.
Peggy Hicks:
I'm kind of wishing I'd left you for last, because that was a great closing statement. Thank you very much, Gbenga. Over to you, Frederick.
Frederick Rawski:
Thank you. This is my very first IGF, and it's my very first time representing META at such a forum, so I'll forego my critique. But I would just say that I've been very excited to be here. I've been very, very happy, and the company has been very, very happy, with the framing, the centrality of human rights in almost all the conversations that we've been in. It was very hard to figure out where I should be, because almost every single conversation, every single panel we've engaged in, has touched upon human rights, usually explicitly, and if not, implicitly. So I do think it's worth thinking about doubling down on that approach, as a few people have suggested, and from our side, we'd be very pleased with that kind of framing. At the same time, look, I've been to a lot of conferences in my life, on the civil society side and in many other capacities. A lot more can be done to make this a more inclusive process. A lot more can be done to ensure that the framing of the issues, and the issues we deal with, get to the heart of the problem, particularly the more systemic problems. So I think there's work to be done there as well, but I'm very excited to be part of this. I just want to end by saying that we get a lot of criticism, rightly so in many cases, and I'm very happy to spend time talking about the issues that are specific to us. But part of why META brought so many people here and decided to have such a high-level delegation to the IGF is to message to everybody, across all of the stakeholder groups, that we're committed to this conversation and are ready to move forward and support it in every way we can in the future. Thanks.
Peggy Hicks:
Thanks very much, Frederick. We wanted to turn to Marielza, if you’re online, for a concluding comment.
Marielza Oliveira:
Thank you very much, Peggy. I just wanted to make two comments regarding previous questions. In terms of the issue of online violence, that has really become a new front line for journalists, for educators, for cultural workers, for scientists, and particularly for women in these professions. This is an escalating freedom of expression and access to information crisis, because it is driving away the professionals who actually bring truth and facts to digital ecosystems. And this kind of harassment and abuse is a combination of not only threats and misogynistic comments, but also digital privacy and security breaches that expose identifying information and exacerbate the offline safety threats they face. 73% of women journalists are harassed online, and a high proportion of those also suffer attacks offline. So this is something we really need to bring into the discussion. I wanted also to comment on the labor question that was asked before. We know that jobs will certainly change through artificial intelligence, and we really don’t know yet what the end effects will be in terms of employment numbers. But what we see, and what really worries me more, is how technology is stripping away some of the labor protections that took decades to construct. Gig work is really precarious work, and it does not enable people to realize their right to earn a decent living. It may also impact consumers negatively. I saw an article the other day about nurses being hired like Ubers, and this has terrible consequences for the health of their patients, and for the nurses themselves, for their own mental health. We need to bring these issues into our conversations. And so, for that, I would like to close by offering the IGF one suggestion. The IGF is a great place for bringing together multiple stakeholders, but we’re still missing some that are critical to our dialogue on human rights and technology. So I’d like to remind us that the regulatory authorities that create the norms under which digital technologies operate should be brought in, like information commissioners, data protection authorities, and human rights commissioners. They should be regular participants in IGF meetings. Media are also not present enough, and they are the ones who create awareness among the general public about digital issues and bring their own experiences of the opportunities and harms that digital technologies create. And policymakers who lead digital transformation, who can help us create this pipeline from knowledge to policy on a human rights-based approach to digital development. And finally, judges and public prosecutors, who are the ones who bring human rights violations to justice, really need to be part of our conversation. So I’ll close here, and thank you very much for the opportunity.
Peggy Hicks:
Thanks, Marielza. That call for an even broader sense of inclusion is really wonderful. Really appreciate that. I’m going in reverse order, so I’m going to go next to Mallory, please.
Mallory Knodel:
My concluding remarks will also touch on a couple of the questions we didn’t get to. I wanted to start by talking a little bit about privacy, because it didn’t come up, and I think it is an interesting omission on this panel. I think that’s because, and it’s hard to overstate this, I keep saying it, we have a really complex landscape, and privacy is a good example of that. We’ve been doing nothing but talking about privacy for the last 10 years or more, right? It’s 2023; 2013 was the Snowden revelations. We were talking about it before that. The Snowden revelations were very helpful to our argument. They really highlighted and demonstrated what was wrong. And we did a great deal, especially at the technical level, to fix that by rolling out transport encryption everywhere, securing people’s connections, and now putting their DNS lookups behind encryption as well. But we’ve never had a bigger privacy crisis, and it’s worth introspecting on that, right? Is it the business model? Is it that you can still be targeted, not just by someone who wants to surveil you, but by a regime that wants to surveil you and also do you harm? I mean, we have to do both. We have to look at the big picture and re-architect the way the internet works, and we also have to zoom into the details and pay attention to how end users are being affected. And that’s just one issue, and everyone is bringing in this incredibly important element of representation and of participation, and so we need more, not less. We have more issues to talk about, more dimensions to those issues, in more places. One of the last things I wanted to conclude on is something that was mentioned only briefly but that we haven’t really confronted in this panel yet, which is the possible creation of new mechanisms within the UN to talk about the internet and AI and other things, with the Global Digital Compact and so on. I feel like we never actually replace anything; we only add to the space. That is not necessarily a negative thing in and of itself, but it is something we have to reflect upon. We are increasing the complexity of what it is we’re trying to govern, of what we’re talking about when we govern it, and then of the processes and how we actually do it. So I would caution us to really think about the opportunity there, but also the risk. The opportunity is what the colleague over here from Brazil said: we can bring in new and better and more interesting and fun issues from the next generation, real things that are happening in places we haven’t had enough representation of yet, deal with those, and expand what we’re able to address. But I would just be careful that we do not take all kinds of social issues and put them into the technical bucket, that we don’t technocratize everything. I think that’s one of the risks I see in expanding this community, rather than thinking about how we take our technical expertise and our technical conversations out into the world, where the real end issue is being discussed and put forward. I think that’s another model where, yes, it still contributes to the complexity, but it’s coming at the issues from a different angle. Thanks.
Peggy Hicks:
Very interesting and thoughtful comment there, Mallory. Let’s go over to Peter, please.
Peter Kirchschlager:
Well, thank you so much. I wish first to thank the colleagues who asked these questions, and I actually want to dedicate my final statement to reiterating what they were saying, or at least what I understood, and I hope I paraphrase it correctly: basically saying, listen, we cannot deal with the visa issue if we’re not talking about migration in a more systemic way. I think that’s something where maybe we can be even more self-critical and continuously ask ourselves: when we have a concrete question, we need to look at it also from a systemic point of view. We need to look at institutions, we need to look at structures, which may be structures of injustice, and we have to address them. And the same goes for the strong statement on representation. At least I hear a strong call to every one of us to keep being continuously self-critical: do we really live what we are talking about? Are there things we are willingly or unwillingly not respecting in our practice? But also, are there perhaps some blind spots we need to address? Because I think, and this is my last sentence, the field of, let’s say, legal discussion about so-called artificial intelligence, but also the ethical discourse about it, at least runs the risk of being very good at preaching but not so good at acting so far. So I think we can take a huge step forward if we start really taking action on what we have been writing in recommendations, guidelines, etc. Thank you so much.
Peggy Hicks:
Very good point. I see Gbenga smiling, and I expect many in the room are as well. The practical side of what this all means on the ground is crucial. I’d like to turn first to Cameran online; we saw you pop in for a moment, Cameran, and I hope you’re still there to give us some thoughts from your perspective. Then we’ll close with Eileen. Wonderful, thank you.
Cameran Ashraf:
Yes, so first I wanted to briefly address a couple of the questions that were asked earlier. One of them was about gender, and another was about queer representation. I think this gets, to build on what Frederick said earlier, to a point about how we structure what we consider human rights teams at the big platforms and tech companies. Oftentimes they focus on state-based violations, the traditional human rights narratives. But where, for example, does gender equity sit in an organization when it’s global? I think we need to start to think about repurposing human rights teams. What are we considering human rights? How are we defining human rights beyond just privacy, surveillance, and freedom of expression? As I often would tell my students, there’s more to being a human being online than what I say and who’s listening. So I think it’s important for us, from the platform perspective, to take some actionable steps toward understanding what human rights teams do and how they move forward. With regard to the question about AI and labor displacement, I think there’s a really important component there around labor and access to factual and accurate information. I’m a skeptic of a lot of AI; my academic work has criticized it from a human rights perspective. But Wikipedia, for example, is a vital resource for a lot of people; they’re able to get information they might not otherwise have gotten. And even though I’m skeptical of AI in general, here I think there is a wonderful potential application of it, because it can be used, for example, to translate articles across languages. There can be massive gaps between different language wikis, and there are over 300 languages on the platform. It can also help people for whom the language they’re writing in isn’t their first language. There are opportunities, too, for individual communities and language groups around the world to build out the knowledge that’s available to people, which might help offset, unfortunately, some of the disruptions that will be happening, and this is not meant from a sort of late-stage capitalist perspective. Briefly, to my final point: some members of the Wikimedia team are in the audience, and if you’d like to speak with them, they’re happy to chat with you about human rights, AI, or anything else. I started by discussing dignity, and I will close with that. Governments around the world have a basic charge to protect their citizens, and to me that means even the most repressive regime has some basic conception that people have a worth, and that that worth merits protection. So we have a baseline there. And I think going forward, it’s really important that laws and regulations and social norms around technology, around the internet, around artificial intelligence, continue to build upon that idea that we all have something worthwhile, worth sharing, worth contributing, and I would say especially when we disagree with each other, given everything that’s happening, and especially when facts are inconvenient. So I really do hope that we move forward working on that baseline, and remember that we are here to protect human beings online and offline, and that includes digital activists in prison around the world and all of those who are advocating for free and open knowledge. Thank you.
Peggy Hicks:
We’re running out of time, so quickly over to Eileen for the final word.
Eileen Donahoe:
On that last question over here: Gbenga, you said everything that needs to be said, I think, about inclusion in process and risks for people in the real world. The question that comes up for me is that I am not really aware of how the decision was made. Decisions have been made in the past, you know, Turkey, Ethiopia, Azerbaijan, so I don’t know how those decisions get made, but perhaps, as we think about the next phase of the IGF, those decisions could be made elsewhere, with different people at the table. That’s one idea. I’m just going back to a couple of things I heard from colleagues. Peter, or rather the person online, our colleague from UNESCO, talked about the need for regulatory authority in terms of all the online content-related harms. Peter, at the very beginning, you said, well, we can’t forget the other side of the equation: tech regulation itself has to be consistent with human rights, and that, too, is a very significant problem around the world. You also talked not just about the impact of technology in facilitating violence against women or violations of human rights; you hit the other side, which is that technology should be applied as the solution as well. There’s always going to be this game of cat and mouse, but more has to be done there. And then, last, Frederick and Mallory, you both emphasized this need for translation between the tech community and the norms community, and I think that is a really exciting area with a lot of potential and a lot of growth in that space. We’ve been talking about it for a few years at a very abstract level, but I think people are starting to figure out what it looks like in practice: if you’re talking about human rights and AI, how do we do those assessments? Last one: DPI, digital public infrastructure, was brought up a lot outside this room, and I see it as an area where you hit the inclusion problem, inclusion in the technology, tech for the SDGs, but you also connect it with human rights by design. So you’re basically bringing economic, social, and cultural rights and civil and political rights together. That’s another area to be mined.
Peggy Hicks:
Great, thanks, Eileen. A lot of content there. We’re at the end of our time. I think it’s been a really rich conversation. I hope it leaves all of you, as it does me, with not just some insights, but also some work to do in terms of what we can all do to pick up on the themes that have been brought out in this session, and how we can both improve the rest of this forum and build towards bringing these human rights issues and the human rights framework into the conversations that we want to have here, in other forums, and in the next IGF as well. Thank you all so much for your participation. I realize there were some questions online that we couldn’t get to. I apologize for that, and I really look forward to further conversations on this topic throughout the rest of the IGF. Thank you.
Speakers
Audience
Speech speed
181 words per minute
Speech length
1538 words
Speech time
509 secs
Arguments
Importance of having a globally representative participation in Internet Governance forums
Supporting facts:
- Peggy Hicks stressed the need for diverse participation in such conferences to get the right ideas and insights
- An audience member, Carolyn Tackett, reinforced the importance of meaningful access to these spaces for people whose voices need to be heard
Topics: Internet Governance Forum (IGF), global perspective, digital rights
Fear that artificial intelligence can lead to job loss
Supporting facts:
- A scenario whereby a job that previously required 10 researchers can now be done by a single application
Topics: Artificial Intelligence, Job Loss
Issue of cyber bullying, particularly targeting vulnerable groups such as women politicians
Supporting facts:
- Cyber bullying is rampant in Kenya
Topics: Cyber bullying
Need for protection to privacy and personal data
Supporting facts:
- Incident of Worldcoin collecting data from Kenyan citizens without their awareness
Topics: Data Protection, Privacy, Worldcoin
Need for awareness and information dissemination about Artificial Intelligence
Supporting facts:
- Low connectivity and understanding of AI in rural areas
Topics: Artificial Intelligence, Public awareness, Communication
The importance of gender as a cross-cutting theme in the Global Digital Compact
Supporting facts:
- The speaker is associated with Alliance for Universal Digital Rights, AUDRI, that promotes digital rights such as gender equality in the digital space
- AUDRI has co-organized an event around the Global Digital Compact which includes ten feminist principles for the GDC
Topics: gender, Global Digital Compact, digital rights
Lived experience individuals are excluded from important forums such as IGF
Supporting facts:
- The use of technology to surveil marginalized and vulnerable people
- Exclusion due to visa issues
Topics: IGF, Human rights, Technology
Decolonizing technology should be part of IGF agenda
Supporting facts:
- Technology reinforces existing systems that are problematic
- Preventing people from moving within their own continent
Topics: IGF, Decolonizing technology, Colonialism
Insufficient representation of the queer community in IGFs
Supporting facts:
- In past IGFs, representation from the queer community, particularly transgender individuals, was deemed insufficient.
- Violence faced daily by the queer community both on the internet and on-site.
Topics: IGFs, Queer representation, Transgender representation
Report
In a recent discussion on Internet Governance, Peggy Hicks emphasized the importance of diverse participation in conferences to obtain a variety of ideas and insights. She highlighted the need for representation from different regions and backgrounds to ensure a comprehensive approach in decision-making processes.
Carolyn Tackett also stressed the significance of providing meaningful access to these spaces for individuals whose voices need to be heard, particularly those from marginalized communities. However, concerns were raised regarding the upcoming Internet Governance Forum (IGF) meeting in Saudi Arabia.
Carolyn Tackett raised questions about the compatibility of Saudi Arabia as the venue for this important forum, expressing concerns about potential implications for stakeholder involvement and safety. This raises questions about the adherence to principles of global representation and the ability to ensure a safe and inclusive environment for all participants.
The potential impact of artificial intelligence on job loss was also discussed. There is a fear that the increasing use of AI could lead to a decrease in job opportunities, as tasks that previously required a team of multiple researchers can now be accomplished by a single application.
This raises concerns about the future of employment and the need to ensure a balance between technological advancements and job security. Cyber bullying, particularly targeting vulnerable groups such as women politicians, was highlighted as a prevalent issue in Kenya. This highlights the urgent need to address this form of harassment, protect individuals’ rights to safety online, and implement effective policies and strategies to prevent and combat cyber bullying.
The incident involving Worldcoin collecting data from Kenyan citizens without their awareness underscored the need for robust data protection and privacy regulations. It is essential to ensure that individuals maintain control over their personal data and are aware of how it is being used, particularly by technology platforms or companies.
Furthermore, the lack of international regulation and oversight in areas such as artificial intelligence and data protection was identified as a concerning issue. The incident with Worldcoin highlighted the consequences of inadequate regulation, emphasizing the necessity for global standards and cooperation in addressing emerging technologies.
In the context of the Global Digital Compact, gender equality was highlighted as a cross-cutting theme. Efforts are being made to promote gender equality in the digital space, with the Alliance for Universal Digital Rights (AUDRI) championing principles such as gender equality in the Global Digital Compact.
Additionally, human rights should be a fundamental principle of the Compact, as outlined by AUDRI. This emphasizes the need to prioritize and safeguard human rights in the digital sphere. The exclusion of lived experience individuals and systemic issues like visa problems at conferences were identified as barriers that need to be addressed.
The repeated issues with visa problems highlight the disparities in global mobility and the need for accessible processes that ensure equal participation for attendees. Moreover, there was a call for conferences to address systemic issues and create a more inclusive environment for all participants.
The idea of decolonizing technology and including diverse representation, particularly from the queer community, in IGF debates gained attention. The lack of representation from the queer community, especially transgender individuals, at previous IGFs was criticized. It was highlighted that the queer community faces violence both online and offline, making their representation in these discussions crucial.
Additionally, the importance of including more youth perspectives was emphasized to ensure a fresh and inclusive dialogue in internet governance. In conclusion, the discussions on Internet Governance covered various important topics. The need for diverse and representative participation, addressing concerns about the selection of venues, understanding the implications of artificial intelligence, combatting cyber bullying, protecting data privacy, regulating emerging technologies, promoting gender equality and human rights, addressing systemic barriers, and inclusion of marginalized communities were identified as key areas for further attention and action.
Cameran Ashraf
Speech speed
200 words per minute
Speech length
1555 words
Speech time
465 secs
Arguments
Human dignity is a complex and under-discussed concept within the tech and human rights field
Supporting facts:
- Conceptions of dignity vary by geography and are always in flux.
- Questions around dignity of individuals and their place in society with regards to technology is becoming a salient issue
Topics: Human Dignity, Technology, Human Rights
Wikipedia is premised upon the concept of human dignity
Supporting facts:
- Wikipedia is a place where everyone’s contribution is valued and not exploited.
- Volunteers at Wikipedia curate the world’s knowledge and make decisions about content in good faith.
- The Wikimedia Foundation shows a firm commitment to human rights standards
Topics: Wikipedia, Human Dignity
It’s vital to reconsider the structure, scope and definition of human rights teams in tech companies, to include aspects beyond just state-based violations, privacy, surveillance, and freedom of expression
Supporting facts:
- Often, human rights teams in tech companies focus primarily on state-based violations, privacy, surveillance, and freedom of expression, ignoring broader issues like gender equity and queer representation.
Topics: Human Rights, Tech Companies, Gender Equity, Queer Representation
AI has the potential to be a valuable resource in providing access to information, particularly with platforms like Wikipedia, which can help bridge gaps in knowledge across different languages and contribute to the democratization of information
Supporting facts:
- Despite skepticism about AI, it is viewed as potentially beneficial in translating articles across different languages on Wikipedia, which supports over 300 languages.
Topics: AI, Information Access, Knowledge Democratization, Language
Report
The concept of human dignity in relation to technology and human rights is a complex and often under-discussed issue. Conceptions of dignity vary by geography and are constantly evolving. However, questions surrounding the dignity of individuals and their place in society with regards to technology have become increasingly salient.
One of the main concerns is how technology, particularly artificial intelligence (AI), can infringe upon human dignity. There are worries about AI tracking individuals without their consent, predictive content that can manipulate or harm individuals, the digital divide which widens inequality, ageism online, internet censorship, surveillance, and the spread of misinformation.
These issues raise significant ethical and human rights concerns. On a positive note, platforms like Wikipedia are built on the principle of human dignity. Wikipedia is a place where everyone’s contribution is valued and not exploited. Volunteers curate the world’s knowledge and make decisions about content in good faith.
The Wikimedia Foundation, which supports Wikipedia, also demonstrates firm commitment to human rights standards. By allowing individuals to freely contribute to the world’s knowledge, Wikipedia’s contribution-based model is seen as a reflection of human dignity. It upholds values of inclusivity, collaboration, and the recognition that every individual has something valuable to offer.
It is argued that there is an urgent need to broaden the scope of human rights teams in tech companies. Currently, these teams primarily focus on state-based violations, privacy, surveillance, and freedom of expression. However, there is a need to also address broader issues such as gender equity and queer representation.
By considering a wider range of human rights concerns, tech companies can better promote inclusivity and equality within their platforms and services. AI, despite generating skepticism, has the potential to be a valuable resource in providing access to information. For example, AI can be beneficial in translating articles across different languages on Wikipedia, which supports over 300 languages.
This has the potential to bridge gaps in knowledge and contribute to the democratization of information. By leveraging AI, platforms like Wikipedia can help overcome language barriers and ensure that knowledge is accessible to a broader audience. While AI may disrupt labor, it is important to build opportunities for people and communities to contribute their knowledge and perspectives.
By allowing individuals to contribute in their own language, AI can potentially offset disruptions caused by automation and create a more inclusive and equitable technological landscape. In order to protect individuals and uphold human dignity, there is a call for the establishment and enforcement of laws, regulations, and social norms around technology, particularly AI.
Governments have a responsibility to protect their citizens, which implies recognizing the intrinsic worth of all individuals. Such frameworks should be based on the fundamental concept of human dignity and aim to safeguard individuals from potential harm or exploitation. In conclusion, the concept of human dignity in relation to technology and human rights is a complex and multifaceted issue.
Concerns about how technology affects human dignity, such as AI, surveillance, and internet censorship, have gained prominence. However, platforms like Wikipedia demonstrate a commitment to human dignity through their inclusive and collaborative model. It is important to expand the scope of human rights teams in tech companies to encompass broader issues like gender equity and queer representation.
AI has the potential to bridge knowledge gaps and democratize information, while also providing opportunities for individuals and communities to contribute. Ultimately, it is crucial to establish and enforce laws and regulations that uphold human dignity in the face of technological advancements.
Eileen Donahoe
Speech speed
149 words per minute
Speech length
2226 words
Speech time
898 secs
Arguments
International human rights law is the foundation for governance of AI.
Supporting facts:
- AI implications include privacy, equal protection, non-discrimination, freedom of assembly and association, freedom of expression which are all human rights considerations.
Topics: AI Governance, Human rights
Digital inclusion is multi-dimensional and a top priority.
Supporting facts:
- 2.6 billion people on the planet are still unconnected; women and girls make up the majority.
Topics: Digital Divide, Inclusion
There is a significant overlap between the digital divide and gender divide.
Supporting facts:
- Women and girls are less likely to be included in all layers of digital inclusion.
Topics: Gender Divide, Digital Divide
The integrity of information needs to be protected without undermining freedom of expression.
Supporting facts:
- The substance of human rights, human dignity in the digital context, and multi-stakeholder processes are interconnected. The latter helps in protecting the former.
Topics: Information Integrity, Freedom of Expression
Civil society expertise in tech policy and internet governance have dramatically increased
Supporting facts:
- Observation at the IGF meeting
- Comparison between the understanding of civil society, governments and private sector regarding technology
Topics: Civil society, Tech policy, Internet governance
There is a tension between the benefits and the risks of technology inclusion
Supporting facts:
- Risks of the technology include surveillance, censorship, control of information by authoritarian governments
- Event on day minus one
Topics: Technology inclusion, Human rights, Surveillance, Censorship
Gender issue need to be elevated in the technology and human rights conversation
Supporting facts:
- Women are most excluded from connectivity
- Technology is used in ways that are peculiar to women and girls
Topics: Gender issue, Technology, Human rights
The impact of AI on labor displacement is undertheorised and underappreciated
Topics: AI, Labor displacement
There needs to be a clearer understanding of the decision-making process in technology governance.
Supporting facts:
- Mentioned Turkey, Ethiopia, and Azerbaijan as examples of past decisions in technology governance.
Topics: Tech Decision Making, Digital Governance
Tech regulation should be consistent with human rights and at the same time be capable of limiting content-related harms.
Supporting facts:
- Referenced Peter’s remarks on the equation involving tech regulation and human rights.
- Mentioned the UNESCO representative’s point on the need for a regulatory authority for online content
Topics: Tech Regulation, Human Rights
Technology could be a solution to issues like violence against women or violations of human rights.
Supporting facts:
- Referenced Peter’s point on utilizing technology as a tool to solve problems.
Topics: Tech Solutions, Human Rights
There’s a need for translation between the tech community and the norms community.
Supporting facts:
- Her agreement on Frederic and Mallory’s emphasis on the need for this translation.
Topics: Tech Community, Norms Community
Digital public infrastructure (DPI) is an important aspect in issues of inclusion and human rights considerations in technology.
Supporting facts:
- Mentioned DPI as a topic brought up outside the room.
- Connected DPI with inclusion and human rights by design, bringing economic, social, cultural rights and civil political rights together.
Topics: Digital Public Infrastructure, Inclusion, Human Rights
Report
The analysis highlights several important points made by the speakers. Firstly, it emphasizes that the governance of AI is grounded in international human rights law. AI has implications for privacy, equal protection, non-discrimination, and freedom of expression, among other human rights considerations.
Therefore, it is crucial to ensure that AI development and implementation adhere to a framework that respects and upholds these fundamental rights. Additionally, the analysis underscores the significance of digital inclusion. With 2.6 billion people still unconnected globally, the majority of whom are women and girls, there exists a significant digital and gender divide.
Bridging these gaps and ensuring equal access and participation for all individuals, regardless of their gender or background, becomes imperative. The importance of striking a balance between protecting the integrity of information and safeguarding freedom of expression is also highlighted.
This requires taking into account the interconnected nature of human rights, human dignity in the digital context, and multi-stakeholder processes. It underscores the need to find a middle ground that respects both information integrity and the fundamental right to freedom of expression.
Additionally, the analysis emphasizes the essential role of multi-stakeholder processes in protecting human rights. By ensuring the inclusion of diverse perspectives and interests from various stakeholders, these processes can effectively shape policies and practices that impact human rights. This highlights the importance of inclusive and participatory approaches to governance.
Elevating human rights throughout U.S. cyber and digital policy is identified as a crucial objective. It is important to integrate human rights principles into U.S. policies and practices in the digital sphere to promote peace, justice, and strong institutions, as outlined in the relevant Sustainable Development Goals.
The analysis also highlights the increasing expertise of civil society in tech policy and internet governance. Engaging civil society organizations and individuals in shaping technology-related policies and practices can lead to more inclusive and equitable outcomes, ensuring that human rights considerations are appropriately addressed.
The need to prioritize human rights and inclusion on the agenda of the Internet Governance Forum (IGF), leadership panels, and the Multistakeholder Advisory Group (MAG) is emphasized. Ensuring the accountability of these entities is crucial for incorporating human rights and inclusion considerations into internet governance.
This underlines the importance of ongoing dialogue, collaboration, and monitoring to promote responsible and rights-based approaches to technology governance. While technology inclusion offers numerous benefits, it also carries risks such as surveillance, censorship, and control of information by authoritarian governments.
Achieving a balance between the benefits and risks of technology inclusion presents a challenge that requires careful consideration and effective safeguards to protect individuals’ rights and freedoms. Addressing gender issues in the technology and human rights conversation is also highlighted.
Women are most excluded from connectivity, and technology is often used in ways that specifically impact women and girls. Therefore, gender-sensitive approaches to technology development, deployment, and governance are vital, alongside efforts to address the gendered impacts of technology on individuals and societies.
The analysis also recognizes the need for further research and analysis regarding the impact of AI on labor displacement. This under-explored area warrants attention to understand the potential effects on employment and develop strategies for mitigating any negative impacts. It also emphasizes the importance of considering the societal implications of technological advancements beyond immediate benefits and conveniences.
A better understanding of the decision-making process in technology governance is deemed necessary. Transparent, accountable, and inclusive decision-making processes are advocated to ensure that technology-related decisions are made with democratic principles in mind. The analysis further highlights the importance of tech regulation consistent with human rights, while also capable of limiting content-related harms.
Striking a balance between protecting individual rights and addressing the potential negative consequences of certain forms of online content is a key objective. Technology is seen as a potential solution to address issues such as violence against women and human rights violations.
Utilizing technology as a tool can contribute to creating safer and more inclusive environments where individuals’ rights are respected and protected. Finally, the analysis emphasizes the need for translation and understanding between the tech community and the norms community. Bridging the gap between these two communities, which often have different perspectives and languages, is crucial for effective collaboration and the development of responsible and rights-based technology policies and practices.
In summary, the analysis highlights the interconnectedness of technology, human rights, and governance. It underscores the need for inclusive and participatory approaches, where diverse perspectives are considered, and the rights and dignity of individuals are protected. The insights gained from the analysis provide valuable considerations for policymakers, advocates, and other stakeholders working in the fields of technology and human rights.
Frederick Rawski
Speech speed
181 words per minute
Speech length
2240 words
Speech time
743 secs
Arguments
META has been successful in integrating human rights into business
Supporting facts:
- Frederick joined the Human Rights Policy Team at META in July last year
- META has a human rights team, human rights policy and an Oversight Board
- META has been committed to protecting expression and privacy against overbroad government demands since 2013
- META has published two annual human rights reports
Topics: Human Rights, Business, Tech industry
There is a significant gap in understanding and applying human rights across stakeholders
Supporting facts:
- Conversations on human rights are not always conducted using human rights language
- Challenges in translating human rights principles into comprehensible action for engineers and software designers
Topics: Human Rights, Stakeholders, Tech industry
Emphasizes the need for actionable policies and implementations regarding cyberbullying and user control
Supporting facts:
- META has already got a robust policy on bullying and harassment
- META has made adjustments to its policies on women public figures and added protections
- The issue comes down to language when talking about content
Topics: Policies, Cyberbullying, User control, Technology
Frederick Rawski was glad about the human rights centric discussions in every panel at IGF.
Supporting facts:
- This is Rawski’s first time representing META at a forum
- Almost every single panel has touched upon human rights, either explicitly or implicitly
Topics: META, IGF, Human Rights
Rawski suggests that more efforts can be made to ensure inclusivity and tackle systemic problems in the issues discussed.
Supporting facts:
- He has been to many conferences in his life and thinks there is room for improvement
- Feels that the issues and framing discussed can get to the heart of more systemic problems
Topics: Inclusivity, Systemic Problems, Conferences
Report
The Internet Governance Forum (IGF) served as a platform for discussions on a range of topics, including human rights, regulation, stakeholder collaboration, policy evolution, cyberbullying, inclusivity, and systemic problems. Frederick Rawski represented META at the IGF and expressed satisfaction with the significant focus on human rights in all panel discussions.
The argument was made for the need for consistent and principle-based regulations rather than individual companies developing their own rules. META faced challenges in fulfilling its commitments due to inconsistent legal conditions. The company expressed support for the leadership of the United Nations (UN) in improving global cooperation.
The integration of human rights into business practices emerged as a key theme. There was a call for human rights to be treated with equal importance as other business risks. It was observed that the roles of different risks were not balanced in decision-making processes.
Stakeholders discussed the importance of harmonizing the concept of risk among all parties involved. The discussions also highlighted the significant gap in understanding and applying human rights principles across stakeholders and the challenge of translating human rights language into actionable steps for engineers and software designers.
The need to address cyberbullying and ensure user control was emphasized. META showcased its robust policies on bullying and harassment, and adjustments were made to provide additional protection for women public figures. The discussions also highlighted the role of language when discussing content-related issues.
The importance of inclusivity and addressing systemic problems was stressed. Frederick Rawski suggested that more efforts could be made to ensure inclusivity at conferences and tackle the root causes of systemic problems. META expressed its commitment to actively participate in the IGF and support its initiatives in the future.
The company had a high-level delegation at the event to demonstrate its dedication to the conversation. META expressed readiness to move forward and support the IGF in any way possible. In summary, the IGF discussions focused centrally on human rights.
The challenges encountered in integrating human rights into business practices and the need for consistent regulations were recognized. The importance of addressing cyberbullying, ensuring user control, and promoting inclusivity was emphasized. META’s commitment to actively engage in future discussions and support the IGF demonstrates its dedication to building a more inclusive and ethical internet.
Gbenga Sesan
Speech speed
182 words per minute
Speech length
1581 words
Speech time
522 secs
Arguments
The multi stakeholder conversations should include all relevant stakeholders
Supporting facts:
- There are barriers to entry which need to be addressed, especially visa issues, that prevent certain stakeholders from participating in these conversations
Topics: Multi-stakeholder Conversations, Human Rights
Importance of internet accessibility for all and the issues regarding internet shutdowns
Supporting facts:
- There are 2.6 billion people who are not connected and people who are disconnected due to various reasons including government actions.
Topics: Internet Accessibility, Internet Shutdowns
The Internet Governance Forum (IGF) should be hosted by countries respecting internet freedom.
Supporting facts:
- Previous IGF was hosted by a country that had shut down the internet, which was embarrassing.
- Any country, including Saudi Arabia, hosting the IGF should understand what the IGF stands for.
Topics: Internet Governance, Internet Freedom
We’re not talking about tokenism. We’re talking about the respect of rights and to be seen as respecting rights.
Supporting facts:
- This is not just about a panel or about getting certain colors of faces on panels or something. It’s about making sure that when we have to have the difficult conversations, we have these difficult conversations, regardless of where this is held.
Topics: Human Rights, Respect
Issues of representation, especially of young people and minority groups, is a conversation that should continue.
Supporting facts:
- As long as we continue to get those questions, that young people are not represented, that minority groups are not represented, it means they are not things we should ignore.
Topics: Representation, Minority groups, Young people
Civil society should not shy away from speaking truth to anyone, including even our allies.
Topics: Civil Society, Truth
Report
The first argument asserts that multi-stakeholder conversations should be inclusive and incorporate all relevant stakeholders. It is important to address barriers to entry, such as visa issues, that prevent certain stakeholders from participating. In doing so, diverse perspectives can be represented, leading to more comprehensive and effective discussions.
The second argument highlights the importance of utilizing data and stories from civil society organizations concerning human rights. These organizations have been actively involved in various human rights issues, and their data can provide valuable insights to improve processes and gain a better understanding of the issues at hand.
By incorporating their data into policy-making, decision-makers can make more informed decisions and better address human rights concerns. The third point emphasizes the significance of universal internet accessibility. Presently, around 2.6 billion people worldwide are not connected to the internet, and various factors, including government actions, contribute to these disconnections.
It is crucial to address these issues and ensure that everyone has equal access to the internet. Furthermore, the problem of internet shutdowns needs to be addressed, as they impede people’s access to information and communication. The fourth argument highlights that human rights should be central to global policy processes.
It stresses that everyone, including states, civil society, the technical community, and the private sector, has a role to play in promoting and protecting human rights. Moreover, it is affirmed that respecting human rights is not only morally right but also beneficial for business.
The fifth point specifically focuses on the hosting of the Internet Governance Forum (IGF) by countries that respect internet freedom. It is stated that the previous IGF was hosted by a country that had shut down the internet, causing embarrassment.
Therefore, it is argued that any country hosting the IGF should understand and uphold the principles of internet freedom. The sixth argument emphasizes the need to raise concerns and demand guarantees from nations hosting the IGF. As the IGF is a forum for discussing the internet, including human rights principles, it is crucial to ensure that the host country respects these principles.
This is vital for maintaining the credibility and effectiveness of the IGF. The seventh point stresses that the issue at hand is not tokenism but the genuine respect for rights. It is stated that difficult conversations regarding human rights need to take place, regardless of the location.
The emphasis is on truly respecting and upholding human rights rather than merely appearing to do so. The eighth argument highlights the need to hold countries and platforms accountable for their actions. This includes calling out those that speak the language of human rights but do not genuinely respect rights as they should.
By doing so, it ensures that human rights are protected and upheld. The ninth point addresses the issue of representation, particularly for young people and minority groups. It argues that conversations regarding representation should continue as the concerns raised regarding the underrepresentation of these groups should not be ignored.
The tenth argument highlights the roles of different stakeholders in promoting human rights. The state is regarded as having an obligation to uphold human rights, while the private sector needs to understand that respecting human rights is beneficial for business.
Additionally, the technical community is urged to incorporate human rights principles into their work. Lastly, civil society is encouraged not to shy away from speaking the truth, even to allies. It emphasizes that civil society should fearlessly raise concerns and advocate for human rights, regardless of any alliances they may have.
In conclusion, this summary underscores the importance of inclusive multi-stakeholder conversations, the use of data from civil society organizations, internet accessibility for all, human rights in global policy processes and IGF hosting, raising concerns, meaningful representation, stakeholder responsibilities, and the role of civil society in speaking truth.
These arguments and observations highlight the need for a comprehensive and inclusive approach to address human rights issues and ensure that rights are respected and upheld.
Mallory Knodel
Speech speed
188 words per minute
Speech length
2349 words
Speech time
750 secs
Arguments
Human rights is governance with teeth
Supporting facts:
- Human rights is the most tangible, useful mechanism we have to talk about the most pressing social issues of our day.
Topics: Human Rights, Governance
The technical community’s relationship to the human rights framework
Supporting facts:
- The technical community is made up of other stakeholders including industry and states. It is important to establish what is their relationship to the human rights framework
Topics: Human Rights, Technical Community
Research, understanding, and nuanced perspective are needed with problems like cyberbullying.
Supporting facts:
- Center for Democracy and Technology’s report on experiences of women of color who are politicians in the US
- Requirement of platforms to open up their data to researchers
- Need to enable users with more agency and tools to block and report at scale
Topics: cyberbullying, online harrassment, gender discrimination
Human rights should be a significant part of the conversations at the forum, irrespective of the location or host.
Supporting facts:
- Internet governance meetings have occurred in places with questionable human rights records
- Example of IGF in Turkey 2014, IETF in Singapore
Topics: human rights, internet governance, artificial intelligence
Privacy is a complex issue in the current internet landscape
Supporting facts:
- Privacy has been a prominent topic of discussion for the past decade
- Despite technical progress, there is an ongoing privacy crisis
Topics: privacy, Snowden revelations, internet governance
Reviewing the business model and the impact on the end user is crucial
Supporting facts:
- Business models could be contributing to the privacy crisis
- End users can be targeted and surveilled by regimes
Topics: business model, end user, surveillance
The internet governance landscape is becoming increasingly complex
Supporting facts:
- New issues and dimensions are coming into play
- The creation of new mechanisms within the UN to tackle these issues is proposed
Topics: internet governance, representation, participation
There is a risk of technocratizing all social issues
Supporting facts:
- There is a warning against putting all kinds of social issues into the technical bucket
Topics: technocratization, social issues, internet governance
Report
The analysis of the speeches highlights several important points regarding human rights, internet governance, and related issues. One of the main arguments made is that human rights serve as a crucial mechanism for addressing pressing social issues. It is emphasised that human rights are tangible and useful in discussing and addressing the most urgent challenges of today.
Furthermore, the relationship between the technical community and the human rights framework is examined. It is noted that the technical community consists of various stakeholders, including industry and states, and it is important to establish their relationship to the human rights framework.
This highlights the need to understand how the technical community can contribute to the promotion and protection of human rights within their respective domains. Censorship and internet resilience are identified as significant concerns that need to be addressed. The analysis suggests that censorship and internet resilience are recurring issues that are discussed in various forums and technical communities.
This highlights the importance of actively engaging in conversations surrounding these topics and finding effective solutions. The speakers emphasise the importance of placing human rights at the centre of engagement on internet issues. They argue that considering human rights in all fora and in the Internet Governance Forum (IGF) is essential.
This includes bringing up human rights issues and teasing out the most important aspects for people worldwide. It is also noted that internet governance meetings have taken place in countries with questionable human rights records. This serves to highlight the need to ensure that human rights remain a significant part of the conversation, regardless of the location or host.
The analysis also explores the issue of cyberbullying, emphasising the need for research, understanding, and a nuanced perspective to address such problems effectively. The report by the Center for Democracy and Technology on the experiences of women of colour in US politics is referenced, along with the requirement for platforms to open up their data to researchers.
It is argued that empowering users with more agency and tools to block and report cyberbullying at scale is crucial. Privacy emerges as a complex issue within the current internet landscape. Despite technical progress, there is an ongoing privacy crisis.
The impact of business models on end users and the potential for surveillance by regimes are highlighted. The analysis suggests that reviewing the business model and its impact on the end user is crucial in addressing these privacy concerns. The internet governance landscape is observed to be increasingly complex, with new issues and dimensions coming into play.
The creation of new mechanisms within the United Nations to tackle these issues is proposed, highlighting the need for continuous adaptation and engagement to stay abreast of the evolving internet governance landscape. Lastly, there is a warning against technocratizing all social issues and placing them solely within the technical bucket.
It is argued that not all social issues can be addressed solely through technical means, and a holistic approach is necessary to effectively tackle these challenges. In conclusion, this comprehensive analysis of the speeches highlights the importance of human rights in addressing pressing social issues, the need to establish the relationship between the technical community and human rights, and the significance of addressing censorship, internet resilience, cyberbullying, and privacy concerns in internet governance discussions.
It also underscores the increasing complexity of the internet governance landscape and the importance of avoiding a purely technocratic approach to addressing social issues.
Marielza Oliveira
Speech speed
156 words per minute
Speech length
1447 words
Speech time
555 secs
Arguments
Rebooting digital spaces by re-grounding on trust and facts.
Supporting facts:
- UNESCO’s mandate is the free flow of ideas by word and image.
- Digitalization is advancing at a fast pace but it’s not benefitting everyone equally.
- Regulation of digital spaces and technologies, particularly social media and AI, is missing in digital ecosystems.
Topics: Digitalization, Human Rights, Trust, Facts
Building and strengthening institutional capacities with digital technologies and platforms and media and information literacy are essential.
Supporting facts:
- Digitalization requires competencies for harnessing potentials while also addressing challenges of digital technologies and platforms.
- Media and information literacy is crucial for critical thinking, technical and other skills, knowledge, and attitudes to harness value from digital information ecosystems and avoid misinformation.
- UNESCO prioritizes the capabilities of groups whose decisions and actions have the widest and deepest impact such as policymakers, judicial operators, educators, and young people.
Topics: Digital Transformation, Institutional Capacities, Media Literacy, Information Literacy
Online violence has become a new front line for professionals, specifically for women
Supporting facts:
- 73% of women journalists are harassed online
- A high proportion of those harassed online suffer offline attacks
Topics: Online violence, Privacy and security breaches, Misogynistic comments
Various stakeholders including regulatory authorities, media and public prosecutors should actively participate in IGF meetings
Topics: IGF, Digital technologies, Human rights violations
Report
During the discussion on digitalisation, several important points were raised. One of the key concerns was the uneven distribution of the benefits brought about by digitalisation. While digitalisation is advancing rapidly, not everyone is reaping its rewards equally. This inequity highlights the need for regulation in digital spaces to ensure that technology is used in a way that benefits all members of society.
Currently, there is a lack of regulation in digital ecosystems, particularly when it comes to social media and artificial intelligence (AI). The absence of such regulation allows for potential misuse and harm. Recognising this, UNESCO considers regulation and standards crucial for ensuring oversight and protecting the public good.
Countries at the forefront of digital transformation have acknowledged the need for the development and implementation of regulations in digital spaces and technologies. However, to effectively address this issue, global standards and guidelines are necessary to facilitate collaboration between governments, the private sector, and civil society.
Another significant aspect of digitalisation is the importance of building and strengthening institutional capacities. Digitalisation requires individuals and organisations to possess the necessary competencies to harness the potential of digital technologies and platforms while also addressing the challenges they bring.
Media and information literacy plays a key role in equipping individuals with critical thinking skills, technical expertise, and knowledge to navigate digital information ecosystems and avoid falling victim to misinformation. UNESCO places priority on enhancing the capabilities of decision-makers, educators, judicial operators, and young people, recognising their potential to have the widest and deepest impact.
The discussion also shed light on the prevalence of online violence, particularly against women. Women journalists, in particular, face significant harassment online, with a startling 73% experiencing such abuse. Disturbingly, a high proportion of those who suffer online harassment also suffer offline attacks.
The session emphasised that online violence has become a new front line for professionals, and urgent action is needed to address this issue and ensure the safety and well-being of those affected. Additionally, the negative impact of technology on labour protections was highlighted.
With the advent of artificial intelligence, jobs are changing, and there are concerns that labour protections are being stripped away. An example given was nurses being hired on demand, much like drivers for ride-hailing services such as Uber. Such precarious work arrangements have negative consequences for patient health and underscore the need to address the potential pitfalls of technological advancement in order to protect workers’ rights.
Lastly, the session stressed the importance of active participation from various stakeholders in meetings of the Internet Governance Forum (IGF). Regulatory authorities, media representatives, and public prosecutors were specifically mentioned as actors who should actively engage in these meetings. This inclusive approach to governance aims to ensure that discussions on digital technologies and their impact are comprehensive, incorporating a range of perspectives and expertise.
In conclusion, the discussion on digitalisation highlighted the need for regulation and standards in digital spaces, the importance of building institutional capacities and promoting media and information literacy, concerns about online violence, the negative impact of technology on labour protections, and the call for active participation in the Internet Governance Forum.
Addressing these issues requires collaborative efforts from governments, private entities, civil society, and individuals to create an inclusive, equitable, and safe digital environment for all.
Peggy Hicks
Speech speed
196 words per minute
Speech length
3288 words
Speech time
1009 secs
Arguments
Importance of human rights framework in the digital age
Supporting facts:
- Human rights framework is a universal agreement across contexts and continents
- It can help guide some of the tough issues around the internet, digital technology, and artificial intelligence
Topics: Artificial Intelligence, Internet Governance, Digital Technology
The importance of a multi-stakeholder perspective
Supporting facts:
- Need for meaningful engagement from all communities
- Need for resources to facilitate meaningful participation
- Researchers should have adequate resources to investigate the technologies
Topics: Digital Compact, Internet Governance, Participation
The issue of representation at global conferences needs to be addressed
Supporting facts:
- There have been persistent issues with global perspective and participation at conferences
- The absence of certain stakeholders due to visa issues is detrimental to the conferences, which miss out on valuable insights and experiences
Topics: Human rights, Global conferences, Visa issues, Participation
The AI challenge is essentially a data issue
Supporting facts:
- Data protection and privacy are crucial to AI challenges
- Transparency is a key element in dealing with data and AI issues
Topics: Artificial Intelligence, Data Protection, Transparency
Report
During a discussion on the challenges of the digital age, the importance of a human rights framework was widely acknowledged. This framework serves as a universal agreement that guides ethical decision-making in relation to the internet, digital technology, and artificial intelligence.
It was recognised that these advancements present challenging issues that require careful consideration of their impact on individuals and society as a whole. The human rights framework ensures that all voices, particularly those directly affected by digital technologies, are included and represented.
It promotes inclusivity and prevents the domination of certain regions or sectors in discussions about these issues. This is crucial for achieving a balanced and holistic perspective, allowing for a more comprehensive understanding and effective decision-making processes. The discussion also highlighted the need for meaningful engagement from all communities and adequate resources for researchers.
This emphasised the importance of a multi-stakeholder perspective in addressing the challenges posed by the digital age. Including input from various stakeholders, such as individuals, communities, industry experts, and policymakers, ensures a diversity of perspectives, fostering solutions that are informed by the needs and concerns of all stakeholders.
Additionally, the issue of representation at global conferences and the impact of visa issues were discussed. It was noted that there have been persistent issues with global perspective and participation at conferences, as certain stakeholders are absent due to visa limitations.
This creates a disadvantage as valuable insights and experiences are missed, hindering the effectiveness of these conferences. The need to address this issue and find ways to facilitate global representation and participation was underlined. Furthermore, the topic of internet shutdowns was deemed relevant and should continue to be discussed.
Internet shutdowns restrict access to information, impede freedom of expression, and have negative implications for individuals and societies. The previous year’s Internet Governance Forum (IGF) also highlighted this issue, further emphasising the importance of continued attention and action to address this concern.
Data protection, privacy, and transparency were identified as crucial elements in the discussions on artificial intelligence (AI). It was recognised that the challenges related to AI primarily stem from data issues. Protecting personal data, ensuring privacy, and promoting transparency in the use of data are essential for addressing the ethical and societal implications of AI.
In conclusion, the extended summary underscores the significant role of a human rights framework in the digital age. It highlights the importance of inclusion, representation, meaningful engagement, and a multi-stakeholder perspective for addressing challenging issues related to the internet, digital technology, and AI.
The impact of visa issues on global representation, the need to continue addressing internet shutdowns, and the focus on data protection and transparency in AI discussions were also noteworthy points raised. These discussions serve as a reminder of the ongoing importance of fostering dialogue and finding ethical and responsible solutions in the rapidly evolving digital landscape.
Peter Kirchschlager
Speech speed
161 words per minute
Speech length
1552 words
Speech time
578 secs
Arguments
AI needs a minimum standard of ethical regulation based on human rights
Supporting facts:
- Human rights can ground the various initiatives around AI and technology and provide a common framing.
- An ethics-of-human-rights perspective allows people to live a life of human dignity.
- Human rights-based regulations can also foster diversity, freedom of expression, and innovation.
Topics: AI Ethics, Human Rights
There is a need for a regulatory framework for AI that is human rights-based.
Supporting facts:
- Different processes and consultations at the global level have shown convergence on the need for a regulatory framework for AI based on human rights.
- There is a general consensus on the need for a UN institution or body that can enforce and implement this regulatory framework.
Topics: AI Regulation, Human Rights
An international agency, similar to the International Atomic Energy Agency, could be established to handle AI ethics.
Supporting facts:
- AI and nuclear technologies share a dual nature, having both positive and negative ethical potential.
- The International Atomic Energy Agency was successful in avoiding the worst-case scenarios in nuclear technology.
- This agency could identify ethical opportunities and risks, enhance international cooperation, and ensure that AI benefits both humans and the planet.
Topics: AI Ethics, International Regulations, AI Standardization
We can find technology-based solutions to gender issues including gender-based hate speech and cyberbullying
Supporting facts:
- It’s not rocket science to identify gender-based hate speech and cyberbullying
Topics: gender issues, hate speech, cyberbullying
We are not paying enough attention to the impact of AI on human labor
Supporting facts:
- We have seen years in which economic growth coincided with rising unemployment rates
Topics: AI, impact on human labor
We cannot deal with the visa issue without talking about migration in a more systemic way
Topics: visa issue, systemic migration
Look at institutions and structures, including structures of injustice.
Topics: institutions, structures, injustice
Representation is important and should be evaluated critically.
Topics: representation
Legal and ethical discourse about artificial intelligence needs to be applied in a practical way.
Topics: Artificial intelligence, ethical discourse
Report
The need for ethical regulation in AI, based on human rights, is highlighted as a crucial aspect to consider. It is argued that human rights can serve as a foundational framework for various initiatives related to AI and technology. This common framing allows for the promotion of human dignity and ensures that individuals can lead a life that respects their rights.
Furthermore, regulations based on human rights can also foster diversity, freedom of expression, and innovation. There is a growing consensus on the requirement for a regulatory framework that specifically focuses on AI and is rooted in human rights. Various global processes and consultations have indicated a convergence of ideas in this regard.
A shared understanding is emerging that a United Nations institution or body should be established to enforce and implement this regulatory framework effectively. Drawing on the success of the International Atomic Energy Agency, which played a crucial role in avoiding the worst-case scenarios of nuclear technology, it is proposed that a similar international agency be established to handle AI ethics.
This agency would identify ethical opportunities and risks associated with AI, enhance international cooperation in addressing these issues, and ensure that AI benefits both humans and the planet. In the realm of gender issues, it is acknowledged that technology-based solutions can play a vital role in addressing concerns such as gender-based hate speech and cyberbullying.
However, there is a lack of sufficient focus from states and the private sector in tackling these problems. The impact of AI on human labor is another significant concern. It is observed that economic growth has been accompanied by increasing unemployment rates.
This phenomenon highlights the need for interdisciplinary debate and evaluation to better understand the effects of AI on work and employment. The issue of visas and migration is also discussed, suggesting that a more systemic approach is required to address this matter.
It is argued that considering migration in a broader context is essential to dealing effectively with visa issues. The importance of critically evaluating institutions, structures, and representation is emphasized; these aspects must be assessed closely to ensure justice, inclusivity, and reduced inequality.
Finally, the practical application of the ethical discourse surrounding AI is deemed necessary: legal discussions and ethical considerations must be translated into implementation, ensuring that ethical principles are upheld in AI development and deployment. In conclusion, this extended summary highlights the importance of ethical regulation of AI based on human rights.
It emphasizes the need for a regulatory framework, the establishment of an international agency for AI ethics, technology-based solutions to gender-based issues, and closer evaluation of the impact of AI on human labor. It also stresses the significance of addressing migration in a systemic manner, critically evaluating institutions and structures, and applying ethical discourse in practice.