New Technologies and the Impact on Human Rights
24 Jun 2025 15:30h - 16:00h
Session at a glance
Summary
This discussion focused on the intersection of emerging technologies and human rights, examining how to balance innovation with rights protection in an increasingly digital world. The panel, moderated by Pablo Hinojosa and Allison Gilwald, brought together representatives from technology companies, civil society, government, and international organizations to address three key questions about preventing human rights violations in digital environments.
The conversation began by establishing that technology and rights should work together rather than in opposition, with participants emphasizing that approximately 2.6 to 4 billion people remain offline and are often excluded from digital rights discussions. Peggy Hicks from the UN Office of the High Commissioner for Human Rights argued that getting human rights right is not a hurdle to innovation but an asset, advocating for transparency, participation, and accountability as key principles. She highlighted that the human rights framework provides a universal touchstone that all UN member states have agreed upon.
Pierre Bonis from AFNIC emphasized the importance of internet infrastructure as an enabler of human rights, noting that access to the internet itself is one of the most violated rights globally. He stressed the need for neutrality in internet protocols while acknowledging the tension between neutrality and human rights protection. The discussion revealed significant concerns about digital inequality, with speakers noting how lack of representation in datasets and concentrated data infrastructure in certain regions amplifies existing exclusions.
Anita Gurumurthy from IT for Change challenged the current balance between innovation and inclusion, arguing that it creates a “human rights free zone for business” and that social harms are often baked into innovation pathways. She called for moving beyond individual rights to consider collective and economic rights, emphasizing that digital access inequalities are manufactured by extractivist economics rather than inherent deficiencies in developing countries. Alexandra Walden from Google outlined the company’s commitment to human rights through the UN Guiding Principles, describing efforts to address risks like deepfakes while advocating for proportionate, risk-based regulation.
The panel addressed practical challenges including supply chain ethics, particularly regarding cobalt mining in the Democratic Republic of Congo for renewable technologies, and the need for precautionary principles in AI development. Rodrigo Goni, a Uruguayan parliamentarian, emphasized that parliaments must shift from reactive to proactive approaches, adopting multistakeholder models and creating regulatory sandboxes to keep pace with technological change. The discussion concluded with recognition that self-regulatory models have failed and that global cooperation is essential, requiring genuine multistakeholder participation that includes voices from the Global South to address the concentrated power of major technology companies.
Key points
## Major Discussion Points:
– **Balancing Innovation and Human Rights Protection**: The panel extensively debated how to balance technological innovation with human rights safeguards, moving beyond the traditional view that regulation stifles innovation toward recognizing that proper governance can create conditions for more equitable innovation.
– **Digital Divide and Structural Inequalities**: Significant focus on how billions remain offline and how this exclusion is amplified by emerging technologies like AI, with emphasis on moving beyond individual rights to collective and economic rights that address systemic inequalities.
– **Need for Proactive vs. Reactive Governance**: Discussion of shifting from reactive policy-making (addressing problems after they emerge) to anticipatory, proactive frameworks that can keep pace with rapidly evolving technologies, particularly regarding parliamentary and regulatory approaches.
– **Global Cooperation vs. Local Implementation**: Tension between the need for global standards and cooperation (given the concentration of tech companies) while respecting local contexts, cultural diversity, and ensuring meaningful participation from the Global South in governance frameworks.
– **Expanding Human Rights Frameworks for Digital Age**: Moving beyond traditional first-generation rights (privacy, freedom of expression) to include second and third-generation rights (economic, environmental) and addressing the full value chain of technology from mineral extraction to data processing.
## Overall Purpose:
The discussion aimed to explore how emerging technologies and human rights can work together rather than in opposition, focusing on creating inclusive, people-oriented digital environments. The session sought to identify practical mechanisms for preventing human rights violations in digital spaces while enabling innovation that benefits all populations, particularly those currently excluded from digital participation.
## Overall Tone:
The discussion maintained a collaborative and constructive tone throughout, despite addressing complex and sometimes contentious issues. While there were moments of challenge and critique (particularly regarding corporate responsibility and systemic inequalities), the overall atmosphere was one of shared problem-solving rather than adversarial debate. The tone became increasingly urgent toward the end, with participants emphasizing the need for immediate action and systemic change, but remained respectful and focused on finding common ground among diverse stakeholders.
Speakers
**Speakers from the provided list:**
– **Pablo Hinojosa** – Independent GRULAC representative, Marconi Society (internet resilience), Co-moderator representing “team technology”
– **Julian Theseira** – Centre for AI and Digital Policy, AI governance and policy expert (joining from Prague)
– **Peggy Hicks** – Director at the UN Office of the High Commissioner for Human Rights (OHCHR), human rights expertise
– **Rodrigo Goni** – Member of the Uruguayan Parliament, Head of the Committee of the Future, President of the Commission of the Future of Latin America
– **Anita Gurumurthy** – Executive Director of IT for Change, digital rights and economic justice expert
– **Allison Gilwald** – Research ICT Africa, Civil Society Africa, Co-moderator representing “team rights”
– **Alexandra Walden** – Google, Global Policy Director
– **Pierre Bonis** – CEO of AFNIC (the .fr ccTLD registry), internet infrastructure expert
– **Audience** – Various audience members asking questions (including individuals from Benin IGF Remote Hub, Democratic Republic of Congo, and Senegal)
**Additional speakers:**
– **Timothy Holborn** – Online participant asking questions about international human rights law integration
– **Elaine Ford** – Online participant commenting on international cooperation and funding cuts
– **Agustina** – Audience member asking about regulations and human rights policies
– **Christian Fazili Meigo** – Audience member from Democratic Republic of Congo asking about ethical supply chains and cobalt mining
– **Mallory** – Audience member with experience in Internet governance and standards bodies
Full session report
# Comprehensive Report: Emerging Technologies and Human Rights – Balancing Innovation with Rights Protection
## Executive Summary
This panel discussion at the Internet Governance Forum examined the critical intersection of emerging technologies and human rights, bringing together diverse stakeholders to address how innovation and rights protection can work together rather than in opposition. The session was moderated by Pablo Hinojosa (Independent GRULAC representative, Marconi Society) representing “team technology” and Allison Gilwald (Research ICT Africa, Civil Society Africa) representing “team rights,” with participants from technology companies, civil society organisations, government institutions, and international bodies.
Hinojosa opened by identifying himself as from “generation WSIS” and noted the session’s structure around three fundamental questions about preventing human rights violations in digital environments whilst enabling inclusive innovation. A key reframing emerged when Gilwald introduced “team people,” highlighting that approximately 2.6 to 4 billion people remain offline and are often excluded from digital rights discussions entirely.
## Opening Framework and Key Principles
### Establishing Common Ground
Peggy Hicks, Director at the UN Office of the High Commissioner for Human Rights, established a foundational principle that getting human rights right is not a hurdle to innovation but rather an asset. She emphasised that the human rights framework provides a universal touchstone that all UN member states have agreed upon, offering transparency, participation, and accountability as key ingredients for effective protection.
Hicks outlined how human rights violations in the digital space occur at multiple levels, from biased datasets that exclude certain populations to the unequal global distribution of data infrastructure. She specifically highlighted internet shutdowns as a major challenge, noting that these violations extend beyond traditional first-generation rights to encompass broader systemic inequalities that affect people’s ability to participate meaningfully in digital environments.
### Infrastructure as Rights Enabler and Technical Neutrality Challenges
Pierre Bonis, CEO of AFNIC (.fr registry), brought the perspective of internet infrastructure as a fundamental enabler of human rights. He noted that access to the internet itself represents one of the most violated rights globally, with billions remaining offline. However, Bonis also highlighted a critical tension between maintaining internet infrastructure neutrality and protecting human rights.
“For us in the technical community, it is not up to us to determine how to best protect human rights in standards,” Bonis explained. This perspective challenged popular notions of “rights by design” and technical solutions to human rights problems, emphasising that technical neutrality creates inherent tensions with human rights protection and that broader social and political engagement is necessary.
## Challenging Current Innovation Models
### Critique of Existing Balance
Anita Gurumurthy, Executive Director of IT for Change, provided one of the most provocative critiques of current approaches to balancing innovation and human rights. She argued that the present balance creates “a human rights free zone for business” through approaches that establish minimum thresholds rather than positive obligations.
Gurumurthy stated that “the social harms and injustices are somewhat baked into the pathways of innovation” and are “never fully acknowledged.” She challenged the fundamental assumption that innovation and rights can be easily balanced through minor adjustments, instead arguing for a more transformative approach that addresses structural inequalities.
Particularly striking was her argument that constraints in developing countries are “manufactured by extractivist economics” rather than representing inherent deficiencies. She noted that 45 countries face debt burdens that directly impact their ability to participate meaningfully in digital governance, directly challenging common framings of capacity building and development assistance.
### Corporate Responsibility and Due Diligence
Alexandra Walden, Global Policy Director at Google, outlined the company’s commitment to human rights through the UN Guiding Principles on Business and Human Rights. She described efforts to address emerging risks such as deepfakes, mentioning Google’s SynthID watermarking technology, whilst advocating for proportionate, risk-based regulation that preserves innovation without compromising rights standards.
Walden emphasised the importance of engaging with all stakeholders to ensure a balanced approach, noting that companies should commit to the UN Guiding Principles as a baseline for responsible AI development. She advocated for a hub-and-spoke regulatory model where all sector regulators address AI within their domains rather than creating a single AI regulator.
However, this corporate perspective faced significant challenge from civil society representatives who argued that voluntary commitments have proven insufficient. An audience member, Agustina, specifically questioned why companies need special human rights policies, asking whether this suggests they would otherwise violate rights.
## Governance Transformation and Parliamentary Response
### Proactive Versus Reactive Paradigms
Rodrigo Goni, Member of the Uruguayan Parliament and Head of the Committee of the Future, provided a remarkably candid assessment of parliamentary limitations in addressing rapid technological change. He argued that parliaments must fundamentally shift from reactive to proactive paradigms to effectively govern emerging technologies.
“The paradigm for parliaments will have to change towards being proactive from being reactive,” Goni explained. “We have no other possibility but to sit down on an equal level with industry, academia and civil society.” This acknowledgement of parliamentary limitations and the need for multi-stakeholder approaches represented a significant departure from traditional notions of legislative authority.
Goni advocated for regulatory sandboxes that allow testing of governance frameworks on larger scales whilst maintaining protections. He emphasised the need for capacity building programmes for civil servants in AI literacy and the adoption of anticipatory governance models that can keep pace with technological development.
### International Frameworks and Implementation
Julian Theseira from the Centre for AI and Digital Policy highlighted that existing international frameworks already provide adequate foundations for governance. He pointed to the UNESCO Recommendation on the Ethics of AI and noted that the challenge lies not in creating new frameworks but in implementing existing ones effectively.
Theseira advocated for practical tools such as human rights impact assessments across the AI lifecycle. He referenced examples from various jurisdictions, including the EU AI Act’s requirements for high-risk systems and existing practices in countries like the Netherlands. He also mentioned AI literacy training initiatives, referencing work in Bangkok, and emphasised that AI governance cannot be divorced from broader global economic inequalities and debt justice issues.
## Supply Chain Ethics and Global Value Chains
### Addressing Hidden Costs of Digital Transformation
A powerful intervention came from Christian Fazili Meigo, an audience member from the Democratic Republic of Congo, who raised pointed questions about supply chain ethics, particularly regarding cobalt mining and child labour in the DRC for renewable technologies. He asked how the precautionary principle should mandate ethical supply chain audits before deploying AI models.
This intervention highlighted how digital rights extend beyond users to include those affected by the entire technology value chain. It connected digital transformation to environmental justice, labour rights, and global supply chains, revealing hidden costs often excluded from traditional rights frameworks.
The discussion revealed that addressing human rights in emerging technologies requires consideration of the full lifecycle of technology, from mineral extraction to data processing, and must account for impacts on both users and non-users of technology.
## Regional Perspectives and Contextual Challenges
### African Context and Access Limitations
Participants from African countries raised specific challenges related to internet access and neutrality in contexts where basic connectivity remains limited. A participant from Senegal highlighted how traditional concepts of internet neutrality may not apply during electoral campaigns when access itself is already restricted or limited.
The IGF Remote Hub in Benin raised questions about the biggest challenges in ensuring security of fundamental rights for technology users in rural areas, highlighting how geographic and infrastructure limitations create unique vulnerabilities for rights protection.
These interventions emphasised that global frameworks must account for diverse local contexts and that solutions developed for well-connected regions may not translate effectively to areas with limited infrastructure or different political contexts.
## Technical Standards and Multi-Stakeholder Governance
### Limitations of Technical Solutions
The discussion revealed significant tension around the role of technical standards in protecting human rights. Bonis argued that technical neutrality creates inherent tensions with human rights protection and that technical solutions alone cannot solve human rights problems without broader political engagement.
Mallory Knodel, an audience member with experience in internet governance and standards bodies, raised questions about how technical standards development in an open multi-stakeholder way could find alignment with regulation to enforce adoption of good standards in service of human rights.
This exchange highlighted the complexity of integrating rights protection into technical infrastructure whilst maintaining the openness and interoperability that characterise internet governance. It also revealed disagreements about the appropriate roles of different stakeholders in determining rights protections.
Timothy Holborn raised complex questions about international human rights law integration, though the full details of his intervention were difficult to capture due to audio quality issues mentioned during the session.
## Areas of Consensus and Key Tensions
### Strong Consensus Areas
The discussion revealed significant consensus on several key issues:
**Failure of Self-Regulation**: Speakers across sectors agreed that voluntary self-regulatory approaches by companies have proven insufficient and that mandatory accountability mechanisms are needed.
**Digital Access as Fundamental Right**: There was broad agreement that internet access is essential for exercising rights in the modern world and that billions of people are excluded from digital participation.
**Need for Global Cooperation**: Speakers recognised that the global and concentrated nature of major technology companies requires international cooperation, particularly to address inequalities affecting developing countries.
**Implementation Over New Frameworks**: There was consensus that established international frameworks provide an adequate foundation, but that the challenge lies in implementation rather than in creating new frameworks.
### Key Areas of Disagreement
**Technical Standards Role**: Disagreement emerged over whether technical standards should incorporate human rights protections directly or whether such protections require separate regulatory frameworks.
**Innovation-Rights Balance**: Tension persisted between those advocating for proportionate, risk-based approaches that preserve innovation incentives and those calling for more transformative approaches that fundamentally restructure innovation pathways.
**Scope of Rights Frameworks**: Whilst there was agreement on expanding beyond individual rights, disagreement remained over whether existing human rights frameworks are sufficient or whether new collective and economic rights frameworks are needed.
## Unresolved Challenges and Future Directions
### Implementation Gaps
Despite consensus on many principles, significant challenges remain in translating agreements into effective action. Key unresolved issues include:
– Mechanisms for ensuring meaningful participation of marginalised communities and Global South voices in global AI governance
– Specific enforcement mechanisms for international cooperation given fragmented regulatory landscapes
– Practical implementation of precautionary principles whilst maintaining innovation capacity
– Effective mechanisms for addressing harms to non-users of technology including environmental and labour impacts
### Systemic Reform Requirements
The discussion highlighted that addressing human rights in emerging technologies requires systemic reforms that extend beyond the technology sector itself. This includes addressing debt burdens, extractive economic models, and global power imbalances that create “manufactured constraints” in developing countries.
### Anticipatory Governance Models
The need for anticipatory rather than reactive governance emerged as a critical challenge requiring new institutional models. This includes developing regulatory sandboxes, multi-stakeholder governance mechanisms, and capacity building programmes that can keep pace with rapid technological change whilst maintaining democratic legitimacy and human rights protection.
## Conclusion
This IGF session demonstrated a maturing of debates around technology and human rights, with stakeholders from different sectors recognising that incremental approaches are insufficient and that more fundamental reforms to governance structures are needed. The conversation moved beyond technical fixes to address systemic issues of power, inequality, and global economic structures.
The panel’s most significant contribution was its expansion of digital rights discussions beyond traditional user-focused frameworks to encompass the full value chain of technology and its impacts on both users and non-users. This broader framing connects digital governance to environmental justice, labour rights, and global economic equity in ways that demand more comprehensive and transformative approaches.
The session’s informal tone, with Pablo’s acknowledgment of being intimidated by the camera and the mix of in-person and online participants, reflected the collaborative spirit needed for addressing these complex challenges. The discussion revealed encouraging consensus on the need for stronger accountability mechanisms, the importance of global cooperation with meaningful Global South participation, and the recognition that human rights protection can be an asset rather than a barrier to innovation.
The path forward requires continued collaboration across sectors, genuine multi-stakeholder participation, and commitment to addressing the structural inequalities that underpin many digital rights violations. As the session concluded, it was clear that the concentrated power of major technology companies and the global nature of digital systems require coordinated international responses that go beyond voluntary commitments to include mandatory accountability mechanisms and systemic reforms that address the root causes of digital inequality and exclusion.
Session transcript
Pablo Hinojosa: Please welcome to the stage the moderators Allison Gilwald, Research ICT Africa, Civil Society Africa, and Pablo Hinojosa, Independent GRULAC. Hello, welcome. It’s not a big crowd, but it’s a good crowd. So that’s important. And we also have, beyond those present here, people online, and hello to all of them. And this is also for preservation, because we will be recorded, and I think it will be an important dialogue. So I want to welcome our fellow panelists and start a very good main session on emerging technologies and human rights. Okay, thank you. Let’s call them in. Well, this is it, showtime. I think I’m mostly intimidated by the resolution of that camera. That’s really hi-fi. My name is Pablo Hinojosa. I’m here for the Marconi Society, waving the flag of internet resilience. I consider myself a product of generation WSIS. And for the purpose of this session, I’ll be wearing the badge, now a digital badge, of team technology. Hi, Julian, it’s good to see you online. So I’m on the team technology side. And I have the absolute pleasure of co-moderating this session with Allison Gilwald from Research ICT Africa, who brings the wisdom and depth of team rights. So, team technology, team rights. But for the record, we’re not on opposite sides at all. I think the very point of this session is that technology and rights must be in the same group, working together to shape a future that is inclusive and people-oriented. I think we all agreed with that in the preparatory process, at least. But hopefully we won’t agree on everything, just on some things, and make this a good session. So we will have the difficult task of moderating this group, an incredible panel of speakers. For the session structure, we have 90 minutes and we’re going to split it in three parts. I’m not sure how many of you have read the brief of the session. It consists of three questions about emerging technologies and human rights, and we will rotate speakers across those blocks. Allison will be moderating two of the blocks, the first and the second one, and I will moderate the third. We encourage audience input in each of those blocks. My understanding from the producers is that there are some crosses in the hallways where you can pick up a microphone and you will be on. But please do not derail the discussion. Bring and be part of the dialogue and be very fast. We appreciate that very much. We will be closing with reflections and forward-looking messages. And that’s the logistical side. On the panelist side, I am really honoured to be working with all of you. To my left, we have Pierre Bonis, CEO of AFNIC, and he’s also team technology, I think. I don’t want to categorise everyone; I think we all should merge and converge. For those that don’t know, AFNIC is the .fr registry, the French ccTLD registry, and he has brought a wealth of really good points into the session. Next, we have Rodrigo. Rodrigo Goni is a member of the Uruguayan Parliament, a Latin American friend. He heads the Committee of the Future and is President of the Commission of the Future of Latin America, and he participates here as a parliamentarian, which is a really important contribution. Next to Rodrigo is Anita Gurumurthy, the Executive Director of IT for Change, a very well-known entity in the IGF ecosphere, and I think she’s on team rights. Working together, it’s an honour. Peggy, your presence here is super important; thank you for this.
She’s a Director at the UN Office of the High Commissioner for Human Rights, and your expertise will be very welcome to help us frame and give adequate language to these discussions. Alex, from Google, will be bringing perspectives from the global policy side, and we’re also very thankful. You’re probably in a difficult spot, and that’s part of the point of this, but we’re really welcoming the conversation. Julian, I met him online in the preparatory process. It’s really an honour to have you here. You helped a lot with the framing. You were the one who said these are not opposing teams, so it’s credit to Julian for that. He’s in Prague. Are you in Prague? Yes, I am in Prague. Fabulous. So, Julian is at the Centre for AI and Digital Policy, and I can only speak to the quality of your research. It’s impressive. Please look for him online. So, that’s the panel. We don’t need to introduce the panel each time we go ahead, so I think I got rid of all the logistics, and let’s get on to the substance. What is this about, Allison? Let’s frame the conversation.
Allison Gilwald: Thank you so much, and thanks to this great panel joining us today and for all the work that’s been put in so far. So, perhaps just to start by saying I’m from team people, so I’m not from team technology or team rights. And I think a number of people, some of them on the panel but also in the room, have been working for some time on trying to extend the more techno-legal understandings of rights and digital rights to their socio-economic dimensions, and really talking about rights in a very people-centred way. And that’s largely because, conservatively, 2 billion, 2.6 billion, but more accurately about 4 billion people still remain offline, and often these discussions around digital rights exclude significant parts of the world’s population. And even in those jurisdictions where there is some resort to some kind of rights framework, people very often don’t have the capabilities to exercise those rights, even if they exist on paper. How do we realise those? How do we make them more practical? A number of global forums and alliances, the Global Justice Forum that we work with Anita on, and a number of other international forums such as the Global Partnership on Artificial Intelligence, have been dealing with extending notions of data governance and data justice, and asking the kind of question: can you have rights-preserving, ethical governance frameworks that still produce unjust outcomes? And I think we only have to look across the globe to know that’s the case. We’ve got the Universal Declaration of Human Rights, we’ve got these frameworks within the Human Rights Council that we normatively appeal to, but in practice these rights are not exercised by a large majority of people. So how can we do that? And I think part of this critique has been, of course, emphasising the importance of first-generation rights, of fundamental rights, of rights of privacy, freedom of expression, access to information, which are absolutely critical. But those aren’t necessarily sufficient for the kinds of redress that we need around digital inequality and rights inequality more fundamentally. For many of us, digital access is an enabler of exercising one’s rights in a contemporary world. So many of us have been working practically, of course, feeding into the Global Digital Compact and into the WSIS Plus 20 review, considering, after 20 years, after the COVID crisis, how far we have come and how big this gap is. So one of the things is moving beyond those first-generation rights, which have been practised in a very individualised way, so the individual rights have taken preference over maybe collective rights or public interest rights. I think we saw this particularly around COVID. And then also the more systemic inequalities and injustices and rights abuses that one sees because of structural inequalities. So it’s not that those individualised rights aren’t important, but we need to extend those rights to look at some of their collective implications. And then, as I said, if one’s really concerned about redressing the current inequalities we see from a rights perspective, shifting that to second- and third-generation rights.
That is, to economic and environmental rights, so that we have not only a compliance framework for rights but actually the kind of enabling framework you would need for the governance of technologies: ensuring that people can exercise those rights and enjoy the economic opportunities offered by a digital economy and data society. And I think that is where the challenges in regulation are coming now. Really, as I said, not abandoning the personal-protection kind of rights frameworks, digital rights frameworks, but extending them to the non-personal data and the power asymmetries we see in the data economy and data society.
Pablo Hinojosa: That’s the framework. And with that, I think the scene is very well set to start the conversation. Allison, we have the first block, the first policy question: how can existing international human rights standards and instruments be used and improved by different stakeholders to prevent violations and abuses, and how can we make a digital environment that is more inclusive, well-designed, and people-oriented? That’s the first block, and I will leave you to moderate it.
Allison Gilwald: Thank you so much. So in this first section, we have Peggy, Pierre, and Julian online to discuss this question that Pablo has articulated for us, focusing on these mechanisms of preventing and addressing human rights violations and abuses in the digital environment, but also how we can, I guess, maybe proactively or precautionarily, as has been mentioned in some of the notices here, anticipate some of these violations and how we can mitigate those effects. So, Peggy, if you’d start us off, you’re obviously in a very strong position to speak about some of the human rights impacts and precautionary tools that exist. Please do share those with us.
Peggy Hicks: Great. I’m really happy to be with you today and with those gathered in the room. I think it’s fair to say, in a big room like this, we talk about sometimes elephants being in the room, but there’s an elephant in the room as well, which is, you know, why are you all here? Why are we still talking about human rights? And I think Allison has put it on the table quite well, that there’s a real tension around what human rights mean in the digital space, and I think those of you that have been following it have recognized that the conversation’s changed quite a bit in the last year, year and a half, and we really need to take that on in this panel, I think. We have to talk about why human rights is relevant. What does it mean in the technology space? And most importantly, how does it relate to the need for innovation and succeeding in a competitive world around development of digital technology? And I think the most important thing I want to bring to that conversation is the idea that really getting human rights right is not a hurdle to innovation and development of digital technology, but it’s actually an asset. It’s ultimately the companies and businesses and governments that figure out how to do digital in a rights-respecting way that will see the greatest advances that benefit people, and this goes to Allison’s point as well about how we ensure that the real benefits of artificial intelligence and digital technology are there for everyone across the full set of rights. It’s interesting you talk about generations when, of course, economic and social rights were there at the creation. We have a whole covenant on them, which is just as old as the covenant on civil and political rights, but I think you’re right in terms of asking whether we have really fulfilled it in the way that we need to. And why I think the human rights framework is so relevant is that, as I said, we live in a very fragmented and polarized world, and the human rights framework is still there as a resource that is universal. It has been agreed by all the member states of the UN, and they have all committed to basic principles that guide how we develop and deploy and use technology in a broad sense, and relying on it as a touchstone that we can all agree on and bringing it into the conversation is, I think, very important. So human rights principles can help us define what’s at stake: what are the things we need to preserve when we’re doing this, why are we doing it, what’s AI for, what do we really need it for? Do we need it to better achieve health goals? Do we need it to better make sure education is available? Do we need it so we can have better translation within our panel and put some better spin on how we are all talking to each other? But also, it can help us navigate the solutions, because we know that things do come into conflict in this space, and we see it all the time in terms of what it means to develop technology that can serve people but can also ultimately undermine some critical rights. And the human rights framework has been worked on and developed across a range of different issues in a way that allows us to traverse that. It’s not that one right trumps another, but that we do have to figure out how we can achieve the full set of rights in an effective way. And we’re working on a variety of things that I can come back to later that will help us to traverse that space, including how we do human rights due diligence around digital technology.
But the three key ingredients I’ll leave you with, Allison, that we need to focus on are these. First, transparency: we need to know what’s happening, and we need to understand and have good reporting on how technology is being rolled out and used and how human rights is being evaluated in that context. Second, we need participation. That means that all the stakeholder voices need to be at the table. I’m really glad to be here at IGF where that’s the case, but we need to make sure it’s happening in policy conversations around digital tech and AI everywhere. Not just because it’s the right thing to do, but because if we don’t bring in civil society, if we don’t have that expertise in the room, we’re going to miss a lot of the things we need to know to deliver human rights and technology in the way that we want. And finally, we need oversight and accountability. It’s not enough to roll technology out; we need to know how it works for people, and there need to be mechanisms that ensure we’re following it, learning from what doesn’t work, and trying to improve and deliver better results as we go forward.
Allison Gilwald: Thank you so much, Peggy. Just as a quick follow-up: you’ve spoken about accountability, transparency, and participation, critically, in this forum, but could you speak a little bit about some of the institutional challenges that are there? The rather siloed attention these rights frameworks receive within a kind of legal framework needs to be extended to the much more dynamic, agile institutions that we need for the environment we’re in.
Peggy Hicks: I mean, the greatest challenge we face right now is inequality and discrimination, I think, within this space. And I was talking to someone earlier about how it occurs at so many different levels. In the tools that we’re building, we’re often building in bias, because we’re using data sets that have bias within them. And that’s sort of the starting point for the conversation. We’re also super concerned about whether or not investment in AI for good is what it needs to be, and whether or not we’re too driven by what can sell in a marketplace, as opposed to what will deliver the most value to the people that need it most. A third level of inequality is around how we make sure that the benefits of digital technology reach the people who need them most: the vulnerable people, the marginalized people, who really are probably going to be the last to get on board the digital tech and AI bandwagon, but may actually be the people who could benefit the most, and probably the most cheaply at the very outset. And then finally, of course, we’re seeing a huge growth in inequalities among states. You have to look at where the data centers are, where the compute is, what languages all this is being done in, to realize that if it’s not managed in a better way, we will see an increase in global inequality in a variety of ways. So that’s one of the big challenges I think we need to tackle.
Pablo Hinojosa: Great. Thank you so much. Pierre, I think that provides you with a useful rights framework. Could you talk to us a little bit, on the more technical side, about some of the infrastructure and access aspects of technology as an enabler of human rights?
Pierre Bonis: Yeah, thank you very much. Je vais parler en français. I am going to speak in French. So first, my warm salutations to the interpreters. I hope they won’t be replaced by AI, at least not for now. And in human rights, we also have the possibility to speak out and to listen to people in their own language. I would like to speak here not about an emerging technology, but about the underlying infrastructure of the Internet that helps these emerging technologies to emerge. And that’s exactly what’s happening: we have this dialogue between human rights on one side, the underlying technology on the other, and the new technologies on top of that. And it’s always good, I believe, to remind everyone that from the very first rollout of the Internet, its own internal character that Peggy mentioned, the transparency of the protocols, the multistakeholder governance, these qualities, through access for populations, help people to exercise their rights. It’s not a solution for everything, that is true. But access to knowledge, to education, to the law, to what is written in the law, the very word of the legislation, the ability to upskill and gain more skills: everything is potentially reinforced by the Internet. I am not saying that the Internet is going to solve all of our problems; it’s not just a question of being connected and all your problems disappear. But it is quite clear that for the last 30 years, the strengthening and the effectiveness of these rights have been supported by the Internet. A point of clarification: one of the rights which is probably most violated today is the right of access to the Internet. Several billions of people do not have access to the Internet. And since the Internet has the ability to make rights effective, it means that some of the world’s population is in a very complicated situation. We’ve spoken about these people for many years, and the longer we speak of it, the more violent it is for these people who are not connected; they’re becoming more and more excluded as time passes. That’s just one point I wanted to make. Another point, very briefly. The underlying technology of the Internet also has a quality: its neutrality. And in our debate between technology and rights, I think we need to remember that the actors of these technologies, that is, the ones operating them, the critical infrastructure operators, the ones who are managing these technologies, need to stay neutral. And there is a tension between this requirement of neutrality and respecting human rights. Between these two, we need to find a path and navigate it. If I am completely neutral, well, human rights are not neutral. They are very important, fundamental, but not neutral. So we need to find this balance between protocol and pipeline neutrality and how to take into account these rights and their materiality. Thank you.
Allison Gilwald: Thank you so much, Pierre. You spoke about the underlying infrastructure, and that’s so important: the Internet infrastructure that enables the additional layers of all these technologies. And I think you’ve made the point very strongly how lack of access really prevents significant numbers of people, we can debate the numbers, from exercising these human rights, these normative frameworks that have informed the Internet’s rollout in this kind of neutral way. But just speaking about how this affects the more advanced technologies: the lack of representation in the big social networking data sets of people who’ve never come online, the inability to financially transact, et cetera, all affect the big data sets that are being used for these advanced technologies. So you actually have an amplification of these exclusions and rights inequalities. And I know you’ve been thinking about this even beyond AI, in terms of quantum and various other things. But perhaps just to make those linkages, because I think people often speak about Internet connectivity as a kind of black and white thing, but digital inequality is no longer just a divide around connectivity. It’s about what you can do, what you have the resources to do once you have that connectivity. How you can exercise your rights and use ChatGPT, or produce an alternative technology. Can you just talk to us a little bit about that?
Pierre Bonis: Yes, indeed, it’s very important. And this was touched on earlier by Peggy when she was talking about data localization. The digital divide and the access divide are also seen on the ground in different regions and countries. The data produced there, which will be used to access these rights and to inform people, are not kept locally. They go somewhere else. And then you can only hope that when these data are given back to you, it will be done in a neutral way, in an efficient way. And indeed, potentially, you can be dispossessed of your cultural heritage, of your economy, because this infrastructure is not just pipelines. It’s also data centers, as we said. And this infrastructure needs to be localized everywhere in the world and not concentrated in specific countries. So the topic of AI is not such a recent topic. We talked a few years ago about big data at the IGF. We were not talking about AI, but it was the same thing, basically. For the last 10 years, we’ve been trying to hire data analysts. And if you are in a country where you have no data, there’s no point hiring data analysts. So it is a vicious circle. It is still a problem, a problem that has been rampant for the last 25 years. And things are progressing, that is true. But it’s not something that is behind us. And the more the Internet and new technologies such as AI bring good things, the more scandalous this divide becomes.
Allison Gilwald: Thank you so much for that. And I think that takes us to Julian. Julian, you were also going to talk to us a little bit about human rights impacts, particularly of AI systems, so a good segue from Pierre’s input, but also about bridging the gaps in standards and implementation that affect the ability to safeguard rights. So if you could just come in on that, please.
Julian Theseira: Thank you, Allison, and thank you very much to the organizers for having me on this panel. So at the Center for AI and Digital Policy, we’re a big supporter of international frameworks on AI governance and AI policy that also support fundamental rights. And one thing we often highlight is that there already are such frameworks out there, notably, for example, the UNESCO Recommendation on the Ethics of AI, which has already been adopted by 194 UNESCO member states. So practically all countries in the world have agreed to it. And UNESCO has also developed certain tools to help countries implement the recommendation: notably the readiness assessment methodology, to assess countries’ national frameworks and ecosystems and whether or not they are ready to apply the UNESCO recommendation. And then there’s also the UNESCO ethical impact assessment tool that can be used to assess the impact on rights across the AI lifecycle. At the Center for AI and Digital Policy, we’re a big proponent as well of human rights impact assessments across the AI lifecycle. And we see as well that certain jurisdictions, such as the EU, through the EU AI Act, also mandate rights impact assessments for high-risk AI systems. And then there are countries like the Netherlands, where rights impact assessments for AI systems and algorithmic systems are already mandated by law. And then in terms of how things can be improved and building capacity, again, there are existing efforts ongoing; for example, at the same time as this Internet Governance Forum, there’s also the UNESCO Forum on the Ethics of AI currently taking place in Thailand, which is also supporting the implementation of the UNESCO recommendation. And our CAIDP president, Merve Hickok, is currently in Bangkok conducting an AI literacy training of trainers for civil servants and experts, who will then go on to train more civil servants in different countries in AI literacy, which will also help them to better understand the ethical implications of AI and how they can implement the UNESCO recommendation in their own countries.
Allison Gilwald: Julian, thanks very much. We are very keen to follow up on the readiness assessments and the impact assessments, because for many developing countries, that’s been a significant challenge. We all know what’s got to be done, but we often don’t have the institutional capacity. And with these new technologies, we’re facing some of the same institutional capability challenges. So it’s interesting to hear about the examples you’ve given. But I guess the question is, institutionally and at scale, how do we address this?
Julian Theseira: I think UNESCO is making a lot of effort to support various countries. And actually, the readiness and impact assessments have been conducted in countries that would be classified as maybe emerging economies or developing countries, and there are different terms, countries such as Brazil or Kenya, for example. So countries that may not be top of mind when one thinks of, let’s say, cutting-edge AI per se, but they are interested in understanding what AI could mean for their countries and their societies, and interested in working with international stakeholders to better prepare and ensure their own domestic frameworks and legal systems can ensure that AI is used ethically. And right now at the UNESCO Forum, for example, as I mentioned, there is a training of trainers ongoing, led by our CAIDP president, where we are training some experts who will then go on to train more civil servants in different countries around the world over the next 12 months, spreading and sharing their knowledge. So I think this potentially could be one way in which capacity building could be spread out and made more sustainable and also more equitable. Thank you.
Allison Gilwald: Thank you so much. There’s obviously so much to discuss there. I think we can take one question from the audience before we go to the next panel.
Pablo Hinojosa: OK, I have them here, because the IGF Remote Hub in Benin, hello, thank you for following, is asking: what are the biggest challenges in ensuring the security of fundamental rights of technology users in rural areas? And I will also mention the question from Timothy Holborn. How are you, Tim? How can international human rights law be effectively integrated into internet governance to ensure humanitarian ICT principles, such as protecting essential services from being turned off during conflicts, supporting digital agency for natural persons with verifiable credentials and agreements, addressing challenges of pervasive surveillance, digital consent, and access to lawful remedies via courts, while countering adversarial efforts to undermine human-centric digital transformation? It’s a mouthful of difficult questions, but those are from the online world. Is it OK if we throw them to the panel? Yes, let’s. Yeah, OK. Who wants to take them? Peggy?
Peggy Hicks: Sure. No, happy to jump in. I think the question about the challenges in rural areas has been partially answered, in that connectivity is obviously the first one, right? There are so many people whose needs aren’t being met and who aren’t able to access the benefits of digital technology. But I’d also throw in an accompanying problem that doesn’t get as much focus: the impact of internet shutdowns. There are people across the globe who don’t have access to the internet, not because it hasn’t reached them yet, but because there’s been a conscious choice to shut it down. And there are some good proposals being put forward around the WSIS plus 20 process about the need to do better on reporting and monitoring on internet shutdowns. But I would start with the issue of connectivity as the greatest challenge in rural areas. And then obviously, it’s also adapting to make sure that once you have it, we’re using it in the right way to address the needs that people in rural communities have. So I think making sure that we’re not building systems that are designed with only certain audiences in mind is a real challenge. Because the people that are sitting in the rooms and doing the development, and it’s also a gendered problem, are sometimes people that don’t really think through what the needs and challenges are in rural communities, or in marginalized communities, or for women as opposed to men. So in the development community, the ability to bring in the expertise that people have about their own needs and how best to meet them is an important point, I think.
Allison Gilwald: Thank you so much, Peggy. I think both of these questions really relate to all the panels and questions. So perhaps we’ll move on to the next panel; I think that would be really good. Yes, I think we can go into the second question, and when we get some more questions from the floor, the first panel can also answer some of those. Sounds good. So we sort of rushed you through those. But the second block of this discussion is really focused on balancing innovation, access, inclusion, and rights, very tied, of course, to the first one and to the third one as well. Here we’ve been a bit more concerned with: as the adoption of these new technologies increases, how should different stakeholders balance questions of innovation, access, and inclusion? These have often been presented as tensions, regulation versus innovation. And I think there’s an increasing realisation that under certain conditions, you need to govern in order to create the conditions, equal conditions, for innovation. We only have to look at the concentration in markets around innovation that we currently see. So what are the rights implications of that, of enabling equal participation in innovation? And some practical examples, we hope, will come up in the session. Anita, let’s start with you. A lot of your work is focused on some of the structural aspects of digital inequality, and on trying to surface issues of economic justice and the right to development, which builds very well on where Peggy ended off.
Anita Gurumurthy: Thank you very much. During the preparatory meetings that we’ve had, Pablo has given me the bottom line. He has insisted that we should make this a spirited discussion. So that’s what I’m going to do, and hopefully, you know, pick up from the very valuable comments so far. Broadly, I think the balance today between innovation and inclusion is achieved through an approach that carves out a human rights free zone for business, so that some minimum threshold is met for the so-called innovation economy to grow unhindered. And this is not incidental; we have research after research that points to how sustained lobbying by the big actors actually achieves this. So the balance is really not a balance in the sense that Allison pointed to, that we are all on the side of the people, right? The balance is not really coming from there. The first point I want to make is that technology foresight in the current environment becomes an exercise to enable ease of business with minimum negative externalities. So the framing is one of negative externalities, and the priority is not society first, it is the economy first. Just as an example, and this will be true for other regions as well, EU scholars have shown how the GDPR’s balancing act between privacy and the data market has ended up leaving serious gaps in relation to algorithmic profiling, for instance, and what it does to user autonomy, which is now being patched through the DSA, the Digital Services Act, and the DSA is struggling to fill those gaps. So it’s not as if the problem was unknown to us. We always knew the costs of surveillance advertising in terms of what it does to decisional autonomy, but we still ignored the risks of this model. So the social harms and injustices are somewhat baked into the pathways of innovation; they are never fully acknowledged. And as civil society, as concerned bureaucracies, in fact competition commissions in many countries, as ethical technologists, we have painstakingly put together evidence to mobilize opinion around it, but policy ends up going from one fix to the other in some minimalist way without really transforming things. So you’re just tweaking things at the edges. To do things differently, we need to envision the macroeconomic context very, very differently, and we need to answer the question: how can data rights bring new capabilities, new opportunities for social innovation? And the second point, which is a very brief point I want to make, is that the impulse for digital innovation today is built almost exclusively for market capture. So the incentive is market capture. This means business entities resort to aggressive and preemptive patenting, use and abuse trade secrets to lock up data, and exploit loopholes in trade rules to avoid fiscal responsibility and algorithmic transparency. There are several examples of this that we see in the world. So the paradox is that we can harmonize data standards all we want across the globe, like my colleagues said, and we can push for interoperable data standards, but the open innovation culture our digital economy is built on unfortunately promotes the cannibalization of smaller businesses and not the diversification of innovation. I’ll stop.
Allison Gilwald: Thank you so much, Anita. Alex, perhaps you can pick up that challenge there. Tell us a little bit about what efforts you have undertaken to promote human rights in that innovation context and, of course, in that open innovation culture.
Alexandra Walden: Sure. Thanks for the question, and great to be included in this dialogue today. I think where I’ll start is just to say I’m proud to be at Google, a company that is committed to human rights. That is not to say that we’ve got everything perfect, but it is an important step for any company to take as a baseline to say that you have a commitment to human rights, and that commitment really should be grounded in the UN Guiding Principles on Business and Human Rights; if it’s not grounded there, then I think the company has a long way to go. So that’s the baseline for how we think about these issues. And specifically when it comes to AI, across the company we are really focused on and hopeful about the benefits that AI can bring to everyone: individuals, consumers, communities, businesses, governments, services. We see that potential, and that is what we are innovating towards. Obviously, there may be risks, and we are very focused on that, so internally our leadership talks about this to everyone who works at Google, and they say this externally as well: we’re focused on being bold and responsible when we innovate. So when we innovate, we are looking to push the envelope, to develop new things, to take on new challenges, and in doing so, making sure that we have an approach that uses standards to address the risks and makes human rights an important part of how we think about that. Another piece of this is that there is a role for individual companies to think about how human rights play a part in their own work, then for how we work together as an industry on that, and then for how we do so in concert with our partners at international organizations and civil society, and with governments as well. There is a role for everyone in this. For our own part, we are focused on ensuring that we have policies for how we do training, policies for what outputs are allowable and preferable, and policies for how our products are going to work. All of those are ways that we need to take responsibility for the things that we are developing or deploying through our products. We also have frameworks for how we integrate human rights across our product suite, and a set of AI principles that, again, guide how we think about our own work. Beyond what any individual company should be doing, though, there is an important role for regulation: AI is too important not to regulate, but that needs to be done well and smartly. To that end, we are engaging with civil society, international organizations, and governments about how that should happen. The pre-existing standards that we already have, from the UN Guiding Principles to work happening at the OECD and various other places, are standards that we should be using as a baseline and that should be informing regulation. There’s a lot of good stuff out there, and everything is moving fast, but we are in early days, and I think ensuring that we’re not reinventing the wheel, but taking the frameworks that we already have and integrating those, is the most important way that we can proceed.
Again, I think creating a level playing field for the industry is really important, and human rights must be the baseline. A company that is not explicitly committed to the UN Guiding Principles cannot say that it is responsibly developing AI.
Allison Gilwald: Thank you so much, Alex. If I could ask a follow-up question right after that: you’ve spoken about Google being very aware of the risks associated with this, the human rights risks, and about the policies, training and frameworks you’ve got in place. Could you give us a practical example? What do you see as the biggest risk, and how are you mitigating it with these frameworks? If you could identify one, I know there are multiple. And then, you said regulation of the industry is important; what do you see as the biggest issue that requires regulation?
Alexandra Walden: Sure. So one example I’ll share on the risk side is around things like deepfakes, or synthetic content that is manipulative. That’s something that we are very concerned about, and we know many governments and members of civil society and others are concerned about it too. So for us, we do a few things. One is we make sure that when we’re thinking about how technology gets integrated into our products, we’re thinking about how to flag things for users so that they can identify this type of content. So on YouTube, content creators add indicators to content to show where AI manipulation or the use of AI has played a part, to help people understand what it is that they’re looking at. There are obviously many challenges with that, but it’s a way that we’re thinking about how we can be helpful to people as they’re engaging with content. Another example is that through Google Cloud we created something called SynthID, which is a way to watermark content to identify, essentially, its provenance, so you know where it comes from. That’s another way we’re thinking about how people can figure out whether content is authentic, or where it comes from. That’s what we’ve done on our own and are putting out there in our own products. We also do this work in concert with colleagues from industry and civil society through the Partnership on AI and C2PA, where we’re thinking about protocols around this. So that’s one area where we see challenges related to AI, and where we’re working in our own products, and certainly with others across industry and other sectors, to hopefully improve and address the problem. When it comes to regulation, there are a variety of things we think about in terms of what could be useful, both preserving the ability to innovate and making sure that we are maintaining standards related to human rights. In particular, that means a proportionate, risk-based framework focused on actual, likely harms, so that we can have tailored regulation. We can always improve and iterate later, but broad, vague regulation at the outset is both harmful to innovation in this area and doesn’t give us a narrowly tailored solution to the problem. Regulation should also differentiate between types of actors: it should be distinct about the responsibilities of a developer, a deployer, and an end-user. We are also very focused on a hub-and-spoke model: there shouldn’t be a single AI regulator; rather, every agency over time is going to need to be a regulator of AI in some way. If you regulate the financial sector, you will be dealing with AI. If you regulate education, you’ll be dealing with AI. So there needs to be capacity building among all types of regulators, not a single focus. And then maybe the last thing I’ll flag is interoperability: individual countries are regulating, but multinationals are operating everywhere, so we want to make sure that there is alignment and interoperability between the different regulations, that they’re taking advantage of the standards that already exist, many of which I already mentioned, and again using the UN Guiding Principles as the baseline when talking about human rights.
I think sometimes we see things that pick and choose aspects or concepts from the UNGPs but then muddle them a little, and I think that is confusing for those of us at companies who have human rights expertise, and certainly for all of our stakeholders who are human rights experts trying to operationalize those things.
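The provenance idea behind SynthID and the C2PA protocols that Walden mentions can be illustrated with a minimal sketch. Real systems embed imperceptible watermarks or attach standardized, public-key-signed manifests; the toy Python below is not any vendor’s implementation, and its field names, shared demo key, and function names are invented for illustration. It only shows the core mechanic: binding signed metadata, including an AI-use disclosure, to a content hash so that tampering or a missing disclosure can be detected.

```python
import hashlib
import hmac
import json

# Toy illustration of a signed provenance record, loosely in the spirit of
# C2PA-style manifests. Field names and the shared key are invented; real
# systems use public-key signatures and standardized manifest formats.
SIGNING_KEY = b"demo-key-not-for-production"

def attach_provenance(content: bytes, generator: str, ai_assisted: bool) -> dict:
    """Bind metadata (including an AI-use disclosure) to a hash of the content."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,      # which tool or model produced the content
        "ai_assisted": ai_assisted,  # the disclosure a platform could surface to users
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the record is authentic and still matches the content bytes."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())

if __name__ == "__main__":
    image = b"...raw media bytes..."
    record = attach_provenance(image, generator="example-model", ai_assisted=True)
    print(verify_provenance(image, record))         # True: record matches the content
    print(verify_provenance(image + b"x", record))  # False: the content was altered
```

A production design would use asymmetric signatures so that anyone can verify a record without holding the signing key, which is one of the problems the C2PA specification is meant to address.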
Allison Gilwald: Thank you so much, Alex. Julian, perhaps you can come in from that perspective. Alex has been speaking about some of the company and industry measures, and importantly also indicating what can be done at the production level, the kind of technology-by-design level, but that doesn’t necessarily deal with some of the unintended consequences, outcomes and negative multipliers that Anita has referred to. What do you see more broadly as some of the challenges, particularly in the work that you’ve been doing around internet governance as a kind of underpinning, if you get that right, for AI?
Julian Theseira: Thanks, Allison, for the questions. So, thinking again more broadly about this theme of innovation and human rights, and coming back to the UNESCO recommendations: something that CIDP strongly supports is the idea of red lines against certain types of practices or AI systems that could result in gross violations of human rights or fundamental rights. And these are not just ideas from civil society. For example, the UNESCO recommendation already recommends prohibitions on AI for social scoring and AI for mass surveillance, and it also recommends the consideration of environmental impacts, because AI has a material dimension; it requires resources of various kinds. These are recommendations that countries have already agreed to, so the next step is to implement them. We’ve seen, for example, in the EU the AI Act that prohibits certain high-risk AI practices, and we encourage more countries to take that next step of implementation and ensure that certain high-risk practices are prohibited. And then, linking back to the question of internet governance more broadly: I think some of the issues related to internet governance are still very much relevant for AI governance and policy. One of the building blocks of AI is data; models need to be trained on data. And in many countries around the world we already have data protection authorities and various types of privacy and data protection frameworks. So we need to ensure that these frameworks continue to be applied and implemented, and if necessary updated to take into account advances in AI and other emerging technologies.
Allison Gilwald: Thanks so much, Julian. And please be aware that we can take a round of online questions, or maybe from the audience; Pablo, you’ll decide. But before we do, Anita, can you just respond to some of the inputs that have been made, and maybe return to your original statement.
Anita Gurumurthy: Thank you very much. I just wanted to respond to my colleagues here. One is that tremendous work has happened in the different UN agencies, the action line holders, after the WSIS. But I do still think that more work needs to be done. We somehow seem to have acquired a certain consensus that all human rights offline must be protected online. We see this come up even in the WSIS Plus 20 Elements paper, but this at best is a partial view, because it’s not sufficient to deal with the complexities of networked existence. For instance, take digital ID programs, the deployment of facial recognition technologies in crime control, or social credit scoring for pensions. All of these examples show how our personhood is redefined by the manner in which tech renders who we are. Another example is the harm that data value chains do to the livelihoods of those who may not be users. We need to account for non-users: people who mine rare earth minerals, those whose lands are given away to data centers, who are dispossessed. In fact, I really wanted to give an example from research by the ETC Group on hedge funds and big food companies that operate in futures markets. Because they control the data of farmlands and have access to climate data, they are able to project the price of, say, wheat in the global markets, and they are able to manipulate the pricing and bet on it. And take a small farmer: unbeknownst to that farmer, prices are changing in futures markets because of AI. So you’re actually talking about the human rights not just of users, but of non-users. We need to see these new affordances and program for these rights, and these are guarantees that we need in the social reorganization brought by technology. Which is why the ILO, for instance, is looking at upping its standards on decent work: we need to look at algorithmic management as an essential core of decent work standards. The second point is really about the product life cycle, again made by my colleague on the panel. I think that looking at the product life cycle is not adequate, because we need a wider societal view of innovation, a kind of Karl Polanyi view: we need to look at the AI economy as always embedded, meaning an economy that’s embedded in society and social choices, and not the other way around, where we look at society as embedded in the economy and just try to tweak. There’s also, in paragraph 22 of the GDC, a call to the private sector, which really is about voluntary responsibility. But in the global political economy of things, if we want to move from data extractivism to equitable knowledge economies and pluralistic knowledge societies, we may need other, more compelling tools as a global community to bring greater accountability to private actors than soft exhortations. Lastly, I just wanted to say that the WSIS Plus 20 Elements paper, in para 78, talks about how many developing countries continue to face significant barriers in harnessing digital technologies due to limited technical expertise, weak institutional frameworks, and constrained fiscal space. For a long while now, I have been very, very exasperated with this kind of characterization of developing countries. It’s not as if these constraints exist because we are inherently lacking.
These constraints emerge because of the context, and they are not ahistorical; they’re not because of any inherent deficiencies in our societies. They are manufactured by an extractivist economics. Look at the debt burdens on about 45 countries. This kind of extractivist economics is historical, and it’s also baked into the digital paradigm. So the bottom line is whether there is international courage, not just political will, to remove these constraints even as we enable capacity building. And I guess all governments need capacity building. So I would really want the Elements paper to acknowledge that these constraints are not owing to the developing countries themselves; these constraints are manufactured. That’s something that we really need to remember.
Pablo Hinojosa: So, there are two microphones, and I can barely see you, so please stand up if you have questions.
Audience: Okay, thank you. My name is Agustina, and I will be speaking in Spanish. My question is on regulation and the tension between innovation and respect for human rights. One thing that I found interesting is that companies have to have a special policy on human rights. I mean, that’s strange; it’s self-evident, isn’t it? You have to respect them, and that’s just it. Now, we have a parliamentarian here. I’d like to know: what is the vision? How do you see things in Latin America, for example?
My name is Christian Fazili Meigo. I’m from the Democratic Republic of Congo, the DRC. My question relates to renewable tech, which, as you know, relies on Congolese cobalt that is mined by children. So, first: how can the precautionary principle demand ethical supply chains before tech is labeled as green? And second: how should the precautionary principle mandate ethical supply chain audits for AI developers before models are deployed, and how do we enforce this? Thank you very much.
Hi everybody, Mallory Knodel. I wanted to reflect on my many years of experience in Internet governance, particularly in standards bodies, because what I’ve recently realized is the role that regulation plays in helping enforce the use of human rights tools, and that standards also play a role there. So I would love to hear folks’ reflections on what I feel is, not so much an emerging idea anymore, because I’ve been doing this work for ten-plus years, but definitely a strengthening of the idea that Internet governance, in particular the development of technical specifications in an open, multistakeholder way, has found alignment with regulation that can really enforce the adoption of good standards, hopefully in service of human rights. The work that you are all doing, and anyone who has reflections on that, I think it would be really nice to hear.
Pablo Hinojosa: Thank you very much, Mallory. So, Agustina, we will wait for your answer to come in the third block, when Rodrigo will speak. Thank you for your question. Christian, thank you; I would like to ask if someone from here would like to take that one, and Julian as well, of course. And Mallory, a very important one, because there is this element of protocols and discussions about thinking about it from the start, and I think that has also been part of the conversation. And we have Tim as well. Tim, thank you, because I know that he’s in Australia, following at a very odd hour, so that’s very nice. I will not be able to read all of what you’ve written, but please join online to see very important comments from Tim. Let’s go ahead. Maybe, Anita, you can start us off.
Anita Gurumurthy: Thank you very much. I just wanted to address two points. The first one is about the precautionary principle; there are people on the panel who may know more than me, but I would certainly agree with you and say that the larger implications of an intergenerational ethics, implicating planetary well-being as well, should be programmed into the way we look at the AI economy, into the way we look at AI ecosystems. We are looking at value chains today in a very, very limited manner. You have these due diligence rules from the EU, from the OECD, but all of them reflect a very limited idea and understanding of harm. I think that we should move beyond the product to understanding people, understanding intergenerational justice, and understanding planetary boundaries. The second one is for Mallory. If I understand your comment correctly, I think it’s indeed really important to bring progressive values from the technical community, around openness and interoperability, into regulation in an appropriate way. But I just want to caution here that we should neither over-valorize interoperability nor demonize its implementation, because I’m reminded of President Trump’s administration and the big beautiful bill, where every data set is thought to be integrated and made interoperable. In applying these very important technical values, as my friend here said, to our future regulation, we must do so in a very context-specific way. Contextual integrity therefore matters very much when we speak about technical values, because there are also larger political ideals and values, constitutional principles, that we have to abide by. Thank you.
Peggy Hicks: Thank you. I just want to add in on the point about supply chains raised by the gentleman from the DRC. I absolutely value Anita’s comments about the broader framing that needs to be there, but I also want to emphasize that there are tools that currently exist that could have a significant impact if they were actually used in the way that they ought to be. The reality is that we do need to create the right types of both incentives and disincentives for companies to actually do the risk assessment that needs to happen to ensure that those supply chains are not using exploitative labor in the way that you’ve talked about. Right now, there are some movements towards mandatory human rights due diligence, the CSDDD in the EU is an example, but there’s also a movement to water that down somewhat, and to make sure that we’re only looking at the supply-chain side and not at the overall human rights impacts of the technology that’s being used, in the broader sense that I think Anita is talking about as well. And the reality is that if we do leave it up to each of the individual companies and actors, and I think this is somewhat what Alex was saying too, we’re creating an environment that is ripe for the type of exploitation that’s been talked about, because the incentives are for everybody to just deliver what they can, and some companies will commit in some ways and be at a competitive disadvantage because they do, compared to others that don’t. So we need governments to step in, and we do have the legislative tools to do it, and the basis upon which to do it is the UN Guiding Principles that have been mentioned. So we need more of that going forward, I think.
Pierre Bonis: Yes, to pick up on the integration within standards development of issues linked to human rights. On that part I might be a little bit provoking, but at AFNIC we are within the technical community, and we do not like technocracy. For us, as the technical community, it is not up to us to determine how best to protect human rights in standards. It’s not our work, it’s not our role, and we do not have that competence. We’ve seen several examples of this at the IETF; we’ve seen several works on the RFCs, the Requests for Comments, and we’ve seen biases that were absolutely unexplainable. People had goodwill, indeed. They said the more intermediaries we have, the more dangerous it is. And nobody knows why having middlemen is dangerous. I do not know who said that. Maybe somebody wrote it, and maybe somebody wrote it at a time when they were having problems with middlemen. And at the end, it’s a standard. So I think that’s a dangerous path. I’m not saying that technology is so neutral that it should not address fundamental aspects of human rights. But to imagine for a moment that we could guarantee these rights through standards and protocols, I think, is wishful thinking. And it avoids the difficult conversations, which are the assessments of technologies: what kind of harmful impact can they have, and how can we go back and try to limit these harmful effects? We can’t, by design, from the start, solve all problems. That’s techno-solutionism, which does not work. We’ve seen it throughout history; it has never worked.
Pablo Hinojosa: I’m going to speak in Spanish. I think we should move forward, and we have Agustina’s question, which has to do with the regulatory part and the policy issues. In that sense, it would be appropriate to give the floor to Rodrigo. I’d like to hear your view on the legislative work: is it preliminary, or is it post factum? What comes first, the chicken or the egg? Which way should it go?
Rodrigo Goni: Well, that’s a good question, because before the last technological revolution, what parliaments used to do was act only after all the problems were on the table and had consolidated. Normally, parliaments would wait until the behaviors and the problems were well defined; only then would they look at the problems and try to correct them. But what happens now, in the current context, is that there is permanent change, constant disruption. We can’t behave that way anymore; we can’t attack the problems that way anymore, if parliaments want to play a proactive role in the defense of human rights going forward. And there are two faces to that, aren’t there? You have protection, which is a duty we cannot avoid as parliamentarians. That is not Google’s concern; Google’s function is to develop technologies. We all have our different fields. The role that parliaments cannot escape is to protect human rights. Human rights means protecting people against harm, but also cultivating the conditions for people to exercise those rights. We talk about access, for example; we talked about companies being able to develop. Well, that’s where the role of parliaments is not only legislative, but also to handle the resources of the country. So if we really want to expand technological capacity so that everyone can really develop this technology, then that’s a first push for many parliaments to act. What I’d like to say is the following: parliaments can also be substituted. We have discussions in parliaments that are quite heated; we are also threatened with being substituted. If we had AI, it would do it much better than us. We are MPs; we discuss all day, whole days, whole weeks. AI would do that much more quickly. But where I’m heading is that if we as parliaments really want to fulfill this basic, fundamental role that we have, we have to change the paradigm. The paradigm for parliaments will have to change from being reactive to being proactive. It doesn’t make any sense to run after technology and its changes; we’ll always arrive late, and at the wrong door. And because we are in a hurry, we’ll leave human rights on the other side of the door. We have to change the paradigm in the following way, for example. We’ve all discussed, today and yesterday, the local versus the global, rights and laws as opposites; to have a correct approach, we have to move beyond that. Another thing that we have to take on, for example, is this multistakeholder model. We’ve seen that it has functioned, we all agree, in internet governance. Now, if parliaments want to arrange things or play a part, they have to enter into this multistakeholder perspective. We talk about legitimate citizen representation in parliaments, yes, but in this context we have no other possibility but to sit down. We’re doing that in some parliaments: sitting down on an equal level with industry, academia and civil society. Why do we do that, on a level playing field? It’s not because we’re convinced; it’s because we have to. Legislators have no other possibility, at least most of us. We can’t really follow the events, the technological development; it’s impossible for us to be on top of that. If you spend 24 hours a day on that, you will not be reelected; you don’t have enough hours. So you have to change the paradigm.
You have to do this in parliaments quickly, because we all agree that we need regulation, except maybe some; we know which voices we’re talking about, the exceptions that confirm the rule. Everybody else agrees that we need not just local regulation; we’re quite clear we need global regulation, which, of course, will have to take into account this multistakeholder perspective, and it will have to involve local parliaments, because citizens will not allow parliaments to stay out of this. I’m about to finish, Pablo. We have to do this with an anticipatory focus. We have to be flexible, with an approach that allows us to adapt. We have to try to establish a framework where we can develop these dialogues. We have to have a regulatory sandbox that is much wider, much bigger than we’ve had so far, so we can test things out, but on a much bigger scale. We parliaments have to do this; otherwise human rights will be breached, not only second-generation rights, but also first-generation ones, and the risks are very grave. If we don’t do it now, in the way I’ve described, we’ll probably arrive late to the party, and the problems will be of such a magnitude that we will face very abrupt shifts that are much more harmful.
Pablo Hinojosa: We need to start wrapping up. I really liked, I mean, there is a lot of reframing here. I’m so sorry, just one second, please. I hope I can give you a chance, but we need to start closing.
Audience: Hello, I would like to make a contribution about the neutrality of the Internet, because I see it in a different way. In Africa, the question of Internet neutrality is a different one. I come from Senegal, and during a pre-electoral campaign there was a lot of Internet campaigning; where is neutrality in this case, when we did not always have access to the Internet?
Pablo Hinojosa: Yes, and I also have Christian’s question in the air. I think we need to start wrapping up. I would like to start with Alex, followed by Julian, followed by Anita, and hopefully we can give the others a chance to complement with the key takeaways of the panel. Alex, I’m putting you on the spot.
Alexandra Walden: You are. The key takeaways, I mean, I think for me, it reinforces all of the ways in which we at companies need to be engaging with everyone in civil society, international orgs, legislators at the national and local level to ensure that we’re getting this right, and not that it’s a tension, but that there is a lot of work to be done to figure out just how we get this balance right.
Pablo Hinojosa: Thank you, Alex. Fantastic. Julian, would you like to address some of the questions? I’m thinking of Christian’s and that of our last speaker, from Senegal.
Julian Theseira: Thank you. In terms of the questions from the floor and the last question: here I’ll speak in my personal capacity, not for CIDP. Personally, I’m sympathetic to what Anita raised around some of the broader systemic concerns. I do think we need to think about them. I don’t think these questions about digital rights, internet rights and AI governance can be divorced from broader concerns around justice, or the inequalities and inequities in the global economic system that result in, for example, inequitable internet access in countries like Senegal, as the lady just mentioned. And there are ongoing conversations this year, for example, around debt forgiveness and debt justice. These are things worth thinking about, that the international community should think about. There are also new challenges emerging, such as the fact that many emerging countries’ debts are now increasingly held by private stakeholders rather than, as traditionally, by other countries or multilateral development banks. So one broader takeaway I would offer is that AI policy and internet policy are interconnected with other policy domains and with systemic concerns, and we need to think about that as well. Thank you.
Pablo Hinojosa: Thank you, Julian. Just to add to the complexity: Elaine Ford is online, and she’s talking about international cooperation, which has been drastically reduced. I think that’s another part that we need to understand, along with the significant cuts in funding and grants that have traditionally been provided. That’s another situation that affects, in particular, the Global South. That’s from Elaine Ford; if you can, thread that into your comments. Anita.
Anita Gurumurthy: Thank you very much. I wanted to make four points very quickly. We now know one thing very clearly: we don’t need models to be universal. We know that universal models can be totalizing and detrimental to diversity. The application of AI can enhance development autonomy if local communities are put at the center; currently, the local is hollowed out, and local value is simply transferred to the circuits of global finance capital, economically and culturally. So, four things for public interest governance. The first takeaway is that public interest governance is not just about harms mitigation; it’s also about responding positively to societal needs, the new paradigm you spoke about, for intergenerational justice. Secondly, public interest governance must respect what Pablo just mentioned, the spirit of international solidarity, extending accepted values that some of my friends in the audience spoke about, borrowing from environmental law principles such as the precautionary principle. And not only that: there are other environmental law principles, like the concept of polluter pays. If you cause misinformation, you had better pay. And there are ideas such as common but differentiated responsibilities, which come from environmental law. The third point is that we really need public interest governance in AI to recognize, as I said, that technical norms cannot become automatic stand-ins for political norms. Take the instance of openness: ecological activists have spoken about how open genomic information databases have simply become sources of biopiracy. So you really need that caution. My last point would therefore be that we should not have antagonisms between public governance and commons-based people’s alternatives. We need public infrastructure, and we need commons-based alternatives from the people. Thank you very much.
Pablo Hinojosa: Thank you. I see a red mark saying time is up, but I understand that this runs until five, so I’ll take that. Allison, I will leave you to wrap this up. We covered a lot of ground, I think. If we didn’t resolve all the questions that were asked at the beginning, I think we’re left with more questions, and that’s a good thing.
Allison Gilwald: Thank you, Pablo. I think that’s a big ask, and I’m certainly not going to try to summarize everything that’s been said. What’s been interesting is how the debates have been developed by everybody, actually, in different ways. A number of the critical ones are around this precautionary principle, which has been evident in technology debates over many, many years: how soon do you step in to mitigate harms without preventing the sort of innovation that might happen? Even five years ago at an IGF, this conversation would still have been pushing very much for the prevention of regulation in order to ensure innovation. But things have moved so fast, and the potential harms and the growing inequalities have become so great, that, just from the different stakeholders on this panel, there’s an acceptance that something has to be done; that people have to be heard and protected by parliaments; that we need to look beyond the very siloed digital rights and digital harms; that we need to look at the whole value chain, from the extractive mineral base, through the extractive and exploitative labour base with labellers and that sort of thing, and so the whole ILO work on decent work and the de-traumatising of work that’s underway at the moment, through to the more extractive data debates that we’ve traditionally been having. But important points were also made here about enabling equal participation: it’s not only about harms mitigation, it’s about ensuring access to what is actually public data that might be held proprietarily, in a way that protects individual rights and guards against the harms that can arise from simply open systems. And I think there’s an acknowledgement that the self-regulatory models that we’ve had up to a certain point have failed; the wave of European legislation has already moved us in that direction. But what is also very interesting is the acceptance that there’s so much we can do nationally and regionally, but ultimately, because these are highly concentrated, big global companies that are dominant in this area, it’s going to require high levels of global cooperation if we’re going to resolve this, and that needs to be done in a way that really reflects voices, visions and views from the Global South and from different stakeholders. Some of that work has been happening at the global level, but it has tended to be either multilateral, high-level government work without a lot of multistakeholder participation, or multistakeholder participation in more non-decision-making forums. So connecting those also came out very strongly in the different views on the panel. There was so much more, and such interesting comments, but I think we have to leave it there.
Pablo Hinojosa: Yes, it has been an absolute pleasure. Thank you to those who endured the 90 minutes, and we will be online for the rest of the next decade or so. Thank you.
Peggy Hicks
Speech speed
192 words per minute
Speech length
1755 words
Speech time
547 seconds
Human rights principles provide universal touchstone for technology development and deployment
Explanation
Human rights framework serves as a universal resource agreed upon by all UN member states, providing basic principles to guide technology development and deployment. This framework helps define what’s at stake and navigate solutions when different needs come into conflict in the technology space.
Evidence
All UN member states have committed to basic principles, and the framework has been developed across different issues to help traverse complex spaces
Major discussion point
Universal framework for technology governance
Topics
Human rights | Legal and regulatory
Agreed with
– Julian Theseira
– Alexandra Walden
Agreed on
Existing frameworks need implementation rather than new creation
Disagreed with
– Allison Gilwald
Disagreed on
Scope of human rights frameworks needed
Human rights violations occur at multiple levels including biased datasets and unequal global distribution
Explanation
Violations happen at various levels: bias built into AI tools through biased datasets, insufficient investment in AI for good versus marketplace-driven development, unequal access to benefits for vulnerable populations, and growing inequalities among states. The challenge spans from technical implementation to global resource distribution.
Evidence
Examples include biased data sets, concentration of data centers and compute power in certain regions, language limitations in AI development, and unequal distribution of digital infrastructure
Major discussion point
Systemic nature of digital rights violations
Topics
Human rights | Development | Economic
Internet shutdowns represent conscious denial of access beyond connectivity gaps
Explanation
Beyond the challenge of reaching unconnected populations, there are people who lose internet access due to deliberate government shutdowns. This represents a conscious policy choice to deny access rather than infrastructure limitations.
Evidence
Proposals being put forward in WSIS plus 20 process about need for better reporting and monitoring on internet shutdowns
Major discussion point
Government interference with digital access
Topics
Human rights | Infrastructure | Legal and regulatory
Agreed with
– Allison Gilwald
– Pierre Bonis
Agreed on
Digital access as fundamental enabler of human rights
Mandatory human rights due diligence needed to address supply chain exploitation
Explanation
Current voluntary approaches are insufficient to prevent exploitative labor practices in technology supply chains. Mandatory human rights due diligence requirements are needed to create proper incentives and disincentives for companies to assess and address supply chain risks.
Evidence
Reference to the CSDDD in the EU as an example, and concerns about watering down such requirements to focus only on the supply-chain side rather than overall human rights impacts
Major discussion point
Corporate accountability in supply chains
Topics
Human rights | Legal and regulatory | Economic
Agreed with
– Allison Gilwald
– Anita Gurumurthy
Agreed on
Need for stronger accountability mechanisms beyond self-regulation
Allison Gilwald
Speech speed
151 words per minute
Speech length
2618 words
Speech time
1034 seconds
Need to extend beyond first-generation individual rights to collective and economic rights
Explanation
Traditional digital rights frameworks have focused on individual rights like privacy and freedom of expression, but this approach is insufficient for addressing digital inequality and systemic injustices. There’s a need to extend to second and third-generation rights including economic and environmental rights that enable people to actually exercise their rights.
Evidence
Examples of how individual rights have taken preference over collective rights, particularly visible during COVID, and how structural inequalities create rights abuses
Major discussion point
Expanding conception of digital rights
Topics
Human rights | Development | Economic
Disagreed with
– Peggy Hicks
Disagreed on
Scope of human rights frameworks needed
Digital access is fundamental enabler for exercising rights in contemporary world
Explanation
Digital access serves as an enabler that allows people to exercise their rights in today’s world. Without this access, people are excluded from participating in contemporary society and exercising basic rights.
Evidence
Reference to 2.6 billion to 4 billion people remaining offline, and how digital rights discussions often exclude significant parts of world’s population
Major discussion point
Digital access as prerequisite for rights
Topics
Human rights | Development | Infrastructure
Agreed with
– Pierre Bonis
– Peggy Hicks
Agreed on
Digital access as fundamental enabler of human rights
Self-regulatory models have failed and stronger accountability mechanisms are needed
Explanation
The conversation has moved beyond preventing regulation to preserve innovation, as potential harms and growing inequalities have become so significant that stronger intervention is required. There’s now acceptance across stakeholders that action must be taken.
Evidence
Reference to how the conversation would have been different even five years ago at IGF, and how European legislation has already moved toward stronger regulation
Major discussion point
Failure of self-regulation
Topics
Legal and regulatory | Human rights
Agreed with
– Peggy Hicks
– Anita Gurumurthy
Agreed on
Need for stronger accountability mechanisms beyond self-regulation
Global cooperation required due to concentrated nature of major technology companies
Explanation
While much can be done nationally and regionally, the highly concentrated nature of big global technology companies means that high levels of global cooperation are ultimately required to address these issues effectively.
Evidence
Reference to how some global work has been happening but tends to be either multilateral government-level without multi-stakeholder participation, or multi-stakeholder but in non-decision-making forums
Major discussion point
Need for global governance coordination
Topics
Legal and regulatory | Economic
Agreed with
– Julian Theseira
– Pablo Hinojosa
Agreed on
Need for global cooperation due to concentrated nature of technology companies
Julian Theseira
Speech speed
149 words per minute
Speech length
883 words
Speech time
354 seconds
Existing international frameworks like UNESCO AI ethics recommendations already exist but need implementation
Explanation
There are already established international frameworks such as the UNESCO recommendation on AI ethics that have been adopted by 194 member states. The challenge is not creating new frameworks but implementing existing ones through tools like readiness assessments and impact assessments.
Evidence
UNESCO readiness assessment methodology, UNESCO ethical impact assessment tool, EU AI Act mandating rights impact assessments for high-risk systems, Netherlands requiring rights impact assessments for AI systems
Major discussion point
Implementation of existing frameworks
Topics
Legal and regulatory | Human rights
Agreed with
– Peggy Hicks
– Alexandra Walden
Agreed on
Existing frameworks need implementation rather than new creation
Red lines against gross human rights violations in AI systems should be implemented
Explanation
There should be clear prohibitions against certain AI practices that could result in gross violations of human rights or fundamental rights. These are not just civil society ideas but recommendations that countries have already agreed to implement.
Evidence
UNESCO recommendations against AI for social scoring and mass surveillance, consideration of environmental impacts, EU AI Act prohibiting certain high-risk AI practices
Major discussion point
Prohibited AI applications
Topics
Human rights | Legal and regulatory
AI governance cannot be divorced from broader global economic inequalities and debt justice
Explanation
Digital rights, internet rights, and AI governance are interconnected with broader concerns around justice and inequalities in the global economic system. These systemic issues affect countries’ ability to participate equitably in digital development.
Evidence
Reference to ongoing conversations about debt forgiveness and debt justice, and how emerging countries’ debts are increasingly held by private stakeholders rather than traditional multilateral institutions
Major discussion point
Systemic economic inequalities
Topics
Economic | Development | Human rights
Agreed with
– Allison Gilwald
– Pablo Hinojosa
Agreed on
Need for global cooperation due to concentrated nature of technology companies
Pierre Bonis
Speech speed
111 words per minute
Speech length
1029 words
Speech time
551 seconds
Internet infrastructure neutrality creates tension with human rights protection requirements
Explanation
There is a fundamental tension between the requirement for neutrality in operating critical internet infrastructure and the need to respect human rights, which are inherently not neutral. Infrastructure operators need to find a balance between maintaining protocol neutrality and taking human rights into account.
Major discussion point
Neutrality versus rights protection
Topics
Infrastructure | Human rights | Legal and regulatory
Billions remain offline, creating fundamental exclusion from digital rights
Explanation
Several billion people lack internet access, and since the internet enables the exercise of rights, these populations face increasing exclusion as time passes. The longer this digital divide persists, the more violent the exclusion becomes for unconnected populations.
Evidence
Reference to several billion people without internet access and how they become more excluded over time
Major discussion point
Digital divide and exclusion
Topics
Development | Infrastructure | Human rights
Agreed with
– Allison Gilwald
– Peggy Hicks
Agreed on
Digital access as fundamental enabler of human rights
Data localization and infrastructure concentration in specific countries creates dependencies
Explanation
The concentration of data centers and digital infrastructure in specific countries creates problematic dependencies where local data is processed elsewhere and returned, potentially in ways that are not neutral or efficient. This can lead to dispossession of cultural heritage and economic value.
Evidence
Reference to how produced data used to inform people is not maintained locally and goes elsewhere, creating risks of cultural and economic dispossession
Major discussion point
Infrastructure sovereignty
Topics
Infrastructure | Economic | Legal and regulatory
Technical community should not determine human rights protections in standards development
Explanation
The technical community lacks the competence and role to determine how to best protect human rights in standards development. Attempting to guarantee rights through standards and protocols alone is wishful thinking and avoids necessary difficult conversations about technology assessment and harm limitation.
Evidence
Examples of biases in RFC (Request for Comments) processes where unexplainable decisions were made about intermediaries being dangerous without clear justification
Major discussion point
Limits of technical solutions
Topics
Infrastructure | Human rights | Legal and regulatory
Disagreed with
– Mallory (Audience)
Disagreed on
Role of technical community in human rights protection through standards
Anita Gurumurthy
Speech speed
143 words per minute
Speech length
1852 words
Speech time
773 seconds
Current balance creates human rights-free zones for business through minimum thresholds
Explanation
The current approach to balancing innovation and inclusion creates spaces where businesses can operate with minimal human rights constraints, as long as they meet some basic threshold. This is achieved through sustained lobbying by major actors and treats social harms as negative externalities rather than core concerns.
Evidence
Research showing sustained lobbying by big actors, EU scholars’ analysis of GDPR’s gaps regarding algorithmic profiling and user autonomy, ongoing struggles with Digital Services Act
Major discussion point
Business influence on rights frameworks
Topics
Human rights | Economic | Legal and regulatory
Agreed with
– Allison Gilwald
– Peggy Hicks
Agreed on
Need for stronger accountability mechanisms beyond self-regulation
Disagreed with
– Alexandra Walden
Disagreed on
Approach to balancing innovation and human rights
Technology foresight prioritizes ease of business over societal considerations
Explanation
Current technology foresight exercises are designed primarily to enable ease of business with minimal negative externalities, putting economy first rather than society. Social harms and injustices are built into innovation pathways but never fully acknowledged, leading to policy that tweaks problems at the edges without transformation.
Evidence
Examples of how surveillance advertising costs to decisional autonomy were known but ignored, and how policy moves from one minimal fix to another without addressing root causes
Major discussion point
Economy-first approach to innovation
Topics
Economic | Human rights | Legal and regulatory
Constraints in developing countries are manufactured by extractivist economics, not inherent deficiencies
Explanation
The characterization of developing countries as facing barriers due to limited expertise, weak institutions, and constrained fiscal space is problematic because these constraints are manufactured by extractivist economics rather than inherent deficiencies. This includes historical and ongoing debt burdens and extractivist digital paradigms.
Evidence
Reference to debt burdens on about 45 countries and how extractivist economics is both historical and built into the digital paradigm
Major discussion point
Structural causes of digital inequality
Topics
Development | Economic | Human rights
Digital inequality extends beyond connectivity to capabilities and resources for meaningful participation
Explanation
The digital divide is no longer just about connectivity but about what people can do and what resources they have once connected. This includes the ability to exercise rights, use advanced technologies like ChatGPT, or develop alternative technologies.
Evidence
Examples of lack of representation in big social networking datasets by people who’ve never come online, inability to financially transact, and amplification of exclusions in advanced technologies
Major discussion point
Multidimensional nature of digital inequality
Topics
Development | Human rights | Economic
Public interest governance must respond positively to societal needs, not just mitigate harms
Explanation
Public interest governance should go beyond harm mitigation to actively respond to societal needs and support intergenerational justice. This requires recognizing that technical norms cannot automatically substitute for political norms and that we need both public infrastructure and commons-based alternatives.
Evidence
Examples from environmental law like precautionary principle, polluter pays, and common but differentiated responsibilities; ecological activists’ concerns about open genomic databases becoming sources of biopiracy
Major discussion point
Proactive versus reactive governance
Topics
Legal and regulatory | Human rights | Development
Alexandra Walden
Speech speed
171 words per minute
Speech length
1387 words
Speech time
484 seconds
Companies must commit to UN Guiding Principles as baseline for responsible AI development
Explanation
Any company serious about human rights must have a commitment grounded in the UN Guiding Principles on Business and Human Rights as a baseline. Companies without this explicit commitment cannot claim to be responsibly developing AI, and this should be a fundamental requirement across the industry.
Evidence
Google’s commitment to human rights and UN Guiding Principles, company policies for training and product development, AI principles framework
Major discussion point
Corporate human rights commitments
Topics
Human rights | Legal and regulatory | Economic
Agreed with
– Julian Theseira
– Peggy Hicks
Agreed on
Existing frameworks need implementation rather than new creation
Need proportionate risk-based regulatory frameworks that preserve innovation while maintaining rights standards
Explanation
Regulation should be proportionate and risk-based, focused on actual likely harms rather than broad vague requirements that could harm innovation without effectively addressing problems. This includes differentiating between developers, deployers, and end-users, and ensuring interoperability between different national regulations.
Evidence
Examples of content flagging on YouTube for AI-manipulated content, SynthID watermarking technology through Google Cloud, work with the Partnership on AI and C2PA on content provenance protocols
Major discussion point
Balanced regulatory approach
Topics
Legal and regulatory | Human rights
Disagreed with
– Anita Gurumurthy
Disagreed on
Approach to balancing innovation and human rights
Companies should engage with all stakeholders to ensure balanced approach to innovation and rights
Explanation
Individual companies have a role in thinking about human rights in their work, but this must be done in concert with industry partners, international organizations, civil society, and governments. No single company can address these challenges alone, requiring collaborative approaches across sectors.
Evidence
Google’s engagement with civil society, international organizations, and governments; work with industry groups on standards and protocols
Major discussion point
Multi-stakeholder collaboration
Topics
Human rights | Legal and regulatory
Rodrigo Goni
Speech speed
113 words per minute
Speech length
827 words
Speech time
438 seconds
Parliaments must shift from reactive to proactive paradigm to address technological disruption
Explanation
Traditionally, parliaments waited for problems to consolidate before acting, but constant technological disruption requires a fundamental paradigm shift to proactive engagement. If parliaments want to fulfill their role in protecting human rights, they cannot continue running after technology and arriving late to address problems.
Evidence
Historical pattern of parliaments waiting for problems to be well-defined before acting, current context of permanent change and constant disruption
Major discussion point
Parliamentary adaptation to technological change
Topics
Legal and regulatory | Human rights
Multi-stakeholder approach necessary as legislators cannot keep pace with technological development alone
Explanation
Parliaments must adopt multi-stakeholder perspectives and sit on equal footing with industry, academia, and civil society because legislators cannot follow technological developments alone. This is not by choice but by necessity, as parliamentarians lack sufficient time and expertise to stay current with rapid technological changes.
Evidence
Recognition that legislators cannot be available 24 hours for technology issues while maintaining electability, need for regulatory sandboxes on much bigger scale for testing
Major discussion point
Limitations of traditional legislative processes
Topics
Legal and regulatory | Human rights
Pablo Hinojosa
Speech speed
117 words per minute
Speech length
1620 words
Speech time
828 seconds
Technology and rights must work together as unified team rather than opposing forces
Explanation
Rather than being cast as opposing sides, technology and human rights should form a single team working together to shape an inclusive, people-oriented future. The session aims to demonstrate this collaborative approach while acknowledging that complete agreement isn't necessary for productive dialogue.
Evidence
Reference to preparatory process where participants agreed on this collaborative framework, and the session structure designed to facilitate dialogue between different perspectives
Major discussion point
Collaborative approach to technology and rights
Topics
Human rights | Legal and regulatory
International cooperation and funding cuts significantly impact Global South participation in digital development
Explanation
There has been a drastic reduction in international cooperation along with significant cuts in funding and grants traditionally provided to developing countries. This creates additional barriers for Global South participation in digital governance and development beyond existing structural challenges.
Evidence
Reference to Elaine Ford’s online comment about reduced international cooperation and funding cuts affecting traditional grant provision
Major discussion point
International cooperation challenges
Topics
Development | Economic
Agreed with
– Allison Gilwald
– Julian Theseira
Agreed on
Need for global cooperation due to concentrated nature of technology companies
Audience
Speech speed
124 words per minute
Speech length
413 words
Speech time
199 seconds
Internet neutrality operates differently in African contexts during political periods
Explanation
The concept of internet neutrality takes on different meanings in African countries, particularly during electoral campaigns where internet access may be restricted or manipulated. This challenges traditional notions of neutrality when access itself becomes politically controlled.
Evidence
Example from Senegal during pre-electoral campaign period where internet access was affected, questioning where neutrality exists when access is not guaranteed
Major discussion point
Contextual nature of internet neutrality
Topics
Infrastructure | Human rights | Legal and regulatory
Precautionary principle should mandate ethical supply chain audits for AI development
Explanation
The precautionary principle should require comprehensive ethical supply chain audits before AI models are deployed, particularly addressing issues like child labor in mineral extraction for renewable technologies. This includes enforcement mechanisms to ensure compliance with ethical sourcing standards.
Evidence
Specific example of Congolese cobalt mining using child labor for renewable technology components, highlighting the disconnect between ‘green’ technology labels and unethical supply chains
Major discussion point
Ethical supply chains in technology
Topics
Human rights | Economic | Legal and regulatory
Technical standards development should align with regulation to enforce human rights protections
Explanation
There is growing alignment between internet governance, particularly technical specifications developed through open multi-stakeholder processes, and regulation that can enforce adoption of good standards in service of human rights. This represents a strengthening trend in governance approaches.
Evidence
Reference to 10+ years of experience in standards bodies and observations about strengthening alignment between technical specifications and regulatory enforcement
Major discussion point
Standards and regulatory alignment
Topics
Infrastructure | Legal and regulatory | Human rights
Disagreed with
– Pierre Bonis
Disagreed on
Role of technical community in human rights protection through standards
The need for special corporate human rights policies indicates a systemic problem with basic respect for rights
Explanation
The fact that companies need special policies on human rights is itself problematic: respecting human rights should be self-evident and fundamental, not something requiring special attention. This points to a deeper issue with how businesses approach their basic rights obligations.
Major discussion point
Corporate human rights obligations
Topics
Human rights | Legal and regulatory | Economic
Agreements
Agreement points
Need for stronger accountability mechanisms beyond self-regulation
Speakers
– Allison Gilwald
– Peggy Hicks
– Anita Gurumurthy
Arguments
Self-regulatory models have failed and stronger accountability mechanisms are needed
Mandatory human rights due diligence needed to address supply chain exploitation
Current balance creates human rights-free zones for business through minimum thresholds
Summary
All three speakers agree that voluntary self-regulatory approaches by companies have proven insufficient and that mandatory accountability mechanisms are needed to address human rights violations in the digital space
Topics
Legal and regulatory | Human rights
Digital access as fundamental enabler of human rights
Speakers
– Allison Gilwald
– Pierre Bonis
– Peggy Hicks
Arguments
Digital access is fundamental enabler for exercising rights in contemporary world
Billions remain offline, creating fundamental exclusion from digital rights
Internet shutdowns represent conscious denial of access beyond connectivity gaps
Summary
Speakers agree that internet access is essential for exercising rights in the modern world, and that billions of people are excluded from digital participation either through lack of infrastructure or deliberate shutdowns
Topics
Human rights | Development | Infrastructure
Need for global cooperation due to concentrated nature of technology companies
Speakers
– Allison Gilwald
– Julian Theseira
– Pablo Hinojosa
Arguments
Global cooperation required due to concentrated nature of major technology companies
AI governance cannot be divorced from broader global economic inequalities and debt justice
International cooperation and funding cuts significantly impact Global South participation in digital development
Summary
Speakers recognize that the global and concentrated nature of major technology companies requires international cooperation and coordination, particularly to address inequalities affecting developing countries
Topics
Legal and regulatory | Economic | Development
Existing frameworks need implementation rather than new creation
Speakers
– Julian Theseira
– Peggy Hicks
– Alexandra Walden
Arguments
International frameworks like the UNESCO AI ethics recommendations already exist but need implementation
Human rights principles provide universal touchstone for technology development and deployment
Companies must commit to UN Guiding Principles as baseline for responsible AI development
Summary
Speakers agree that there are already established international frameworks like UNESCO recommendations and UN Guiding Principles that provide adequate foundation, but the challenge is implementation rather than creating new frameworks
Topics
Legal and regulatory | Human rights
Similar viewpoints
Both speakers emphasize that challenges faced by developing countries in digital participation are not due to inherent limitations but result from systemic global economic inequalities and historical extractive practices
Speakers
– Anita Gurumurthy
– Julian Theseira
Arguments
Constraints in developing countries are manufactured by extractivist economics, not inherent deficiencies
AI governance cannot be divorced from broader global economic inequalities and debt justice
Topics
Development | Economic | Human rights
Both speakers recognize that human rights violations in technology are systemic and occur at multiple levels, from technical implementation to global resource distribution, often prioritizing business interests over societal needs
Speakers
– Peggy Hicks
– Anita Gurumurthy
Arguments
Human rights violations occur at multiple levels including biased datasets and unequal global distribution
Technology foresight prioritizes ease of business over societal considerations
Topics
Human rights | Economic
Both speakers advocate for fundamental paradigm shifts in governance approaches – from reactive to proactive for parliaments, and from individual to collective rights frameworks
Speakers
– Rodrigo Goni
– Allison Gilwald
Arguments
Parliaments must shift from reactive to proactive paradigm to address technological disruption
Need to extend beyond first-generation individual rights to collective and economic rights
Topics
Legal and regulatory | Human rights
Unexpected consensus
Limitations of technical solutions for human rights protection
Speakers
– Pierre Bonis
– Anita Gurumurthy
Arguments
Technical community should not determine human rights protections in standards development
Public interest governance must respond positively to societal needs, not just mitigate harms
Explanation
Despite Pierre representing the technical infrastructure side and Anita representing civil society advocacy, both agree that technical solutions alone cannot solve human rights challenges and that broader societal and political considerations are essential
Topics
Infrastructure | Human rights | Legal and regulatory
Need for multi-stakeholder collaboration despite different roles
Speakers
– Alexandra Walden
– Rodrigo Goni
– Pablo Hinojosa
Arguments
Companies should engage with all stakeholders to ensure balanced approach to innovation and rights
Multi-stakeholder approach necessary as legislators cannot keep pace with technological development alone
Technology and rights must work together as unified team rather than opposing forces
Explanation
Representatives from corporate, legislative, and civil society sectors all acknowledge the necessity of multi-stakeholder collaboration, recognizing that no single sector can address these challenges alone
Topics
Human rights | Legal and regulatory
Overall assessment
Summary
The discussion revealed significant consensus on key structural issues: the failure of self-regulatory approaches, the need for stronger accountability mechanisms, the importance of digital access as a human rights enabler, and the requirement for global cooperation due to the concentrated nature of technology companies. There was also agreement on implementing existing frameworks rather than creating new ones.
Consensus level
High level of consensus on fundamental challenges and directional solutions, with speakers from different sectors (corporate, government, civil society, technical) agreeing on the need for systemic changes. This suggests a maturation of the debate where stakeholders recognize that incremental approaches are insufficient and that more fundamental reforms to governance structures are needed to address human rights in the digital age.
Differences
Different viewpoints
Role of technical community in human rights protection through standards
Speakers
– Pierre Bonis
– Mallory (Audience)
Arguments
Technical community should not determine human rights protections in standards development
Technical standards development should align with regulation to enforce human rights protections
Summary
Pierre argues that the technical community lacks competence to determine human rights protections in standards and that attempting to guarantee rights through technical standards alone is wishful thinking. Mallory advocates for alignment between technical specifications and regulatory enforcement of human rights protections.
Topics
Infrastructure | Human rights | Legal and regulatory
Approach to balancing innovation and human rights
Speakers
– Anita Gurumurthy
– Alexandra Walden
Arguments
Current balance creates human rights-free zones for business through minimum thresholds
Need proportionate risk-based regulatory frameworks that preserve innovation while maintaining rights standards
Summary
Anita argues that current approaches create spaces where businesses operate with minimal human rights constraints, prioritizing economy over society. Alexandra advocates for proportionate, risk-based regulation that maintains innovation while protecting rights.
Topics
Human rights | Economic | Legal and regulatory
Scope of human rights frameworks needed
Speakers
– Allison Gilwald
– Peggy Hicks
Arguments
Need to extend beyond first-generation individual rights to collective and economic rights
Human rights principles provide universal touchstone for technology development and deployment
Summary
Allison argues for extending beyond traditional individual rights to collective and economic rights, while Peggy emphasizes the existing universal human rights framework as sufficient foundation for technology governance.
Topics
Human rights | Development | Economic
Unexpected differences
Neutrality versus human rights protection in infrastructure
Speakers
– Pierre Bonis
– Audience (Senegal)
Arguments
Internet infrastructure neutrality creates tension with human rights protection requirements
Internet neutrality operates differently in African contexts during political periods
Explanation
This disagreement reveals unexpected complexity in applying neutrality principles across different contexts – Pierre sees neutrality as creating tension with rights protection, while the Senegal audience member highlights how neutrality becomes meaningless when access itself is politically controlled.
Topics
Infrastructure | Human rights | Legal and regulatory
Characterization of developing country constraints
Speakers
– Anita Gurumurthy
– Julian Theseira
Arguments
Constraints in developing countries are manufactured by extractivist economics, not inherent deficiencies
AI governance cannot be divorced from broader global economic inequalities and debt justice
Explanation
While both acknowledge global inequalities, Anita strongly rejects characterizations of developing countries as inherently lacking, while Julian focuses more on systemic interconnections without challenging the framing as directly.
Topics
Development | Economic | Human rights
Overall assessment
Summary
The discussion revealed moderate disagreements primarily around approaches to governance rather than fundamental goals. Key areas of disagreement included the role of technical standards in rights protection, the balance between innovation and regulation, and the scope of human rights frameworks needed.
Disagreement level
Moderate disagreement with significant implications – while speakers generally agreed on the importance of protecting human rights in digital contexts, their different approaches to achieving this goal reflect deeper tensions between technical, regulatory, and economic perspectives that could impact policy development and implementation strategies.
Takeaways
Key takeaways
Human rights frameworks must be extended beyond individual first-generation rights to include collective, economic, and environmental rights to address digital inequalities
Self-regulatory models by technology companies have proven insufficient and stronger accountability mechanisms including mandatory human rights due diligence are needed
Parliaments must shift from reactive to proactive governance paradigms to effectively address rapid technological disruption while protecting human rights
Digital access is a fundamental enabler for exercising rights in the contemporary world, with billions still excluded from basic connectivity
Global cooperation is essential due to the concentrated nature of major technology companies, but must include meaningful multi-stakeholder participation from the Global South
AI governance cannot be divorced from broader systemic inequalities including debt justice and extractive economic models that create manufactured constraints in developing countries
Technical neutrality creates tensions with human rights protection, and technical solutions alone cannot solve human rights problems without broader political engagement
Existing international frameworks like UNESCO AI ethics recommendations provide foundations but require implementation through national and regional action
Public interest governance must move beyond harm mitigation to positively respond to societal needs and enable equal participation in innovation
Resolutions and action items
Companies should commit to UN Guiding Principles on Business and Human Rights as baseline for responsible AI development
Implementation of human rights impact assessments across AI lifecycle should be adopted more widely
Red lines against AI systems that could result in gross human rights violations should be established and enforced
Parliaments should adopt multi-stakeholder approaches and regulatory sandboxes to test governance frameworks
Hub-and-spoke regulatory model should be implemented where all sector regulators address AI within their domains
Capacity building programs for civil servants in AI literacy should be expanded globally
Supply chain audits for ethical sourcing should be mandated before deploying AI models
Unresolved issues
How to effectively balance innovation incentives with human rights protection without creating competitive disadvantages
Mechanisms for ensuring meaningful participation of marginalized communities and Global South voices in global AI governance
Specific enforcement mechanisms for international cooperation on AI governance given fragmented regulatory landscape
How to address the fundamental tension between technical neutrality and human rights protection in internet infrastructure
Practical implementation of precautionary principles in AI development while maintaining innovation capacity
How to address systemic inequalities and debt burdens that create manufactured constraints in developing countries
Effective mechanisms for addressing harms to non-users of technology including environmental and labor impacts
How to prevent universal AI models from becoming totalizing and detrimental to diversity
Suggested compromises
Proportionate risk-based regulatory frameworks that preserve innovation while maintaining human rights standards
Multi-stakeholder governance models that bring together parliaments, industry, academia, and civil society on equal footing
Regulatory sandboxes that allow testing of governance frameworks on larger scales while maintaining protections
Hub-and-spoke regulatory approach rather than single AI regulator to distribute expertise across sectors
Interoperable international standards that allow multinational companies to operate while respecting local contexts
Extending technical values like openness and interoperability into regulation while respecting constitutional principles and contextual integrity
Combining public infrastructure development with commons-based people’s alternatives rather than treating them as antagonistic
Thought provoking comments
I’m from team people, so I’m not from team technology or rights… conservatively, sort of 2 billion, 2.6 billion, but more accurately about 4 billion people still remain offline, and often these discussions around digital rights exclude significant parts of the world’s population.
Speaker
Allison Gilwald
Reason
This reframing fundamentally challenged the binary framing of the session by introducing a third perspective – ‘team people’ – and highlighting how traditional digital rights discussions exclude the majority of the world’s population who remain offline. It shifted focus from abstract rights frameworks to concrete exclusion.
Impact
This comment set the tone for the entire discussion by establishing that digital rights conversations must account for those without access. It influenced subsequent speakers to address structural inequalities and moved the conversation beyond technical solutions to systemic issues of inclusion and access.
The balance today between innovation and inclusion is achieved through an approach that carves out a human rights free zone for business… The social harms and injustices are somewhat baked into the pathways of innovation, they are never fully acknowledged.
Speaker
Anita Gurumurthy
Reason
This comment provided a sharp critique of current innovation models, arguing that human rights violations aren’t accidental but structurally embedded in how innovation is pursued. It challenged the assumption that innovation and rights can be easily balanced through minor adjustments.
Impact
This provocative statement forced other panelists, particularly from industry, to defend their approaches more substantively. It elevated the discussion from technical fixes to fundamental questions about economic models and power structures, leading to more nuanced responses about regulation and corporate responsibility.
We have to change the paradigm as a following way… The paradigm for parliaments will have to change towards being proactive from being reactive… We have no other possibility but to sit down… on an equal level with industry, academia and civil society.
Speaker
Rodrigo Goni
Reason
This comment acknowledged a fundamental shift in how democratic institutions must operate in the face of rapid technological change. It was remarkably honest about parliamentary limitations and the need for new governance models, challenging traditional notions of legislative authority.
Impact
This comment introduced the concept of anticipatory governance and legitimized multi-stakeholder approaches at the parliamentary level. It shifted the discussion from whether regulation should happen to how democratic institutions must transform themselves to remain relevant in governing emerging technologies.
For us Technical community it is not up to us to determine how to best protect human rights in standards… imagine that we could guarantee these rights through standards and protocols, I think is wishful thinking.
Speaker
Pierre Bonis
Reason
This comment challenged the popular notion of ‘rights by design’ or technical solutions to human rights problems. It was a rare admission from the technical community about the limits of technical approaches to social problems, pushing back against techno-solutionism.
Impact
This comment forced the discussion to confront the limitations of technical approaches and emphasized the need for broader social and political solutions. It led to more nuanced discussions about the appropriate roles of different stakeholders and the dangers of over-relying on technical fixes.
We need to see these new affordances and program for these rights… People who mine rare earth minerals, those whose lands are given away to data centers, who are dispossessed… you’re actually talking about human rights, not just of users, but of non-users.
Speaker
Anita Gurumurthy
Reason
This comment expanded the scope of digital rights beyond users to include those affected by the entire technology value chain. It connected digital rights to environmental justice, labor rights, and global supply chains, revealing hidden costs of digital transformation.
Impact
This broadened the conversation significantly, forcing participants to consider the full lifecycle impacts of technology. It influenced subsequent discussions about supply chain responsibility and connected digital rights to broader questions of global justice and environmental sustainability.
These constraints emerge because of the context, and they are not ahistorical, they’re not because of any inherent deficiencies in our societies. They are manufactured by an extractivist economics.
Speaker
Anita Gurumurthy
Reason
This comment challenged the common framing of developing countries as inherently lacking capacity, instead arguing that constraints are products of historical and ongoing extractive economic relationships. It reframed capacity building from a deficit model to a justice model.
Impact
This comment shifted the discussion from technical capacity building to structural reform. It influenced how other participants discussed international cooperation and development, moving away from paternalistic framings toward recognition of systemic inequalities that need to be addressed.
Overall assessment
These key comments fundamentally transformed what could have been a technical discussion about balancing innovation and rights into a deeper examination of power structures, global inequalities, and the need for systemic change. Gilwald’s reframing around ‘team people’ and offline populations set the stage for more inclusive thinking. Gurumurthy’s critiques of extractivist economics and embedded injustices challenged comfortable assumptions about innovation models. Goni’s honest assessment of parliamentary limitations introduced new thinking about anticipatory governance. Bonis’s pushback against techno-solutionism grounded the discussion in realistic assessments of what technology can and cannot do. Together, these comments elevated the conversation from procedural questions about implementation to fundamental questions about justice, democracy, and global economic structures, creating a more sophisticated and critical dialogue about the relationship between emerging technologies and human rights.
Follow-up questions
How can international human rights law be effectively integrated into internet governance to uphold humanitarian ICT principles, such as protecting essential services from being switched off during conflicts, supporting digital agency for natural persons, and addressing the challenges of pervasive surveillance, digital consent, and access to lawful remedies via the courts, while countering adversarial efforts to undermine human-centric digital transformation?
Speaker
Timothy Holborn (online participant)
Explanation
This comprehensive question addresses multiple critical aspects of human rights in digital governance that require systematic integration and enforcement mechanisms
What are the biggest challenges in ensuring the security of fundamental rights of technology users in rural areas?
Speaker
IGF Remote Hub in Benin
Explanation
Rural areas face unique challenges in digital rights protection due to connectivity issues, infrastructure gaps, and limited access to remedies
How can the precautionary principle demand ethical supply chains before labeling tech as green, particularly regarding Congolese cobalt mined by children?
Speaker
Christian Fazili Meigo (Democratic Republic of Congo)
Explanation
This highlights the need to address human rights violations in the supply chain of technologies, especially those marketed as environmentally friendly
How should the precautionary principle mandate ethical supply chain audits for AI developers before deploying models, and how do we enforce this?
Speaker
Christian Fazili Meigo (Democratic Republic of Congo)
Explanation
This addresses the need for mandatory due diligence processes in AI development to prevent human rights violations throughout the supply chain
How can technical standards development in an open multi-stakeholder way find alignment with regulation to enforce adoption of good standards in service of human rights?
Speaker
Mallory Knodel
Explanation
This explores the intersection between technical standard-setting processes and regulatory frameworks to ensure human rights protection
How do we address internet neutrality in contexts where access itself is limited, such as during electoral campaigns in Africa?
Speaker
Participant from Senegal
Explanation
This highlights how traditional concepts of internet neutrality may not apply in contexts where basic access is already restricted or limited
How can we move beyond first-generation individual rights to collective rights and public interest rights in digital governance?
Speaker
Allison Gilwald
Explanation
This addresses the need to expand digital rights frameworks beyond individual privacy and expression rights to include collective and economic rights
How can we ensure that AI governance frameworks account for impacts on non-users, such as those affected by supply chains, environmental impacts, and economic disruptions?
Speaker
Anita Gurumurthy
Explanation
This highlights the need to consider broader societal impacts of AI systems beyond direct users, including those affected indirectly through economic and environmental channels
How can parliaments shift from reactive to proactive approaches in technology governance while maintaining democratic legitimacy?
Speaker
Rodrigo Goni
Explanation
This addresses the challenge of legislative bodies keeping pace with rapid technological change while ensuring proper democratic oversight and human rights protection
How can we address the reduction in international cooperation and funding that affects Global South participation in digital governance?
Speaker
Elaine Ford (online participant)
Explanation
This highlights structural barriers that prevent equitable participation in global digital governance discussions and implementation
How can we ensure that data rights bring new capabilities and opportunities for social innovation rather than just market capture?
Speaker
Anita Gurumurthy
Explanation
This addresses the need to reframe data governance to enable social innovation and community benefit rather than primarily serving commercial interests
How can we implement UNESCO’s readiness assessment methodology and ethical impact assessment tools at scale, particularly in developing countries?
Speaker
Julian Theseira
Explanation
This addresses the practical challenge of implementing existing international frameworks for AI governance, particularly in countries with limited institutional capacity
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.