Internet Human Rights: Mapping the UDHR to Cyberspace | IGF 2023 WS #85

11 Oct 2023 01:15h - 01:45h UTC

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Michael Kelly

The analysis explores two main topics: the importance of defining digital human rights and the roles of big tech companies ahead of the AI revolution, and the preference for a multistakeholder approach to internet governance over a multilateral approach.

Regarding the first topic, it is argued that as human rights transition from physical to digital spaces, regulation is needed to protect and promote these rights. The AI revolution necessitates a paradigm shift towards creativity-based AI platform regulation, and defining digital human rights and tech companies’ responsibilities is crucial in this evolving landscape.

The analysis emphasises proactively defining digital human rights and the roles of big tech companies in order to establish clear regulations governing the interaction between technology and human rights. This approach is essential to ensuring the responsible and ethical use of evolving technologies.

Regarding the second topic, the analysis supports a multistakeholder approach to internet governance, which brings various stakeholders, including governments, tech companies, civil society, and individuals, into decision-making processes. It aims to ensure diverse perspectives and interests are considered for balanced and inclusive governance.

Concerns are raised about a multilateral approach that may exclude big tech companies and civil society from decision-making processes, hindering effective internet governance. The analysis also identifies a draft cybercrime treaty proposed by Russia as a potential threat to digital human rights that could limit freedom of expression and privacy online.

In conclusion, the analysis highlights the importance of defining digital human rights and the roles of big tech companies in the AI revolution. It emphasises proactive regulation and creativity-based AI platform regulation. It supports a multistakeholder approach to internet governance and raises concerns about exclusions and threats to digital human rights. This comprehensive analysis provides valuable insights into the challenges and considerations at the intersection of technology, human rights, and internet governance.

Peggy Hicks

The discussion centres around the relevance of human rights in the digital space and the potential impact of government regulations on online activities. It is acknowledged that the human rights that apply offline also extend to the online realm. However, there is ongoing deliberation regarding their practical implementation.

The significance of the human rights framework in the digital space is highlighted due to its universal applicability and legally binding nature. This framework encompasses obligations that the majority of states have committed to. Additionally, a multistakeholder and multilateral approach plays a key role in addressing human rights in the digital realm.

There are concerns about potential government overreach and its negative impact on free speech. Many legislations globally are viewed as hindering human rights rather than protecting them, raising apprehensions about government interference and censorship.

The responsibilities of companies in respecting human rights, particularly within their supply chains, are recognised. Companies are urged to understand and mitigate risks associated with human rights violations in their operations. The UN Guiding Principles on Business and Human Rights outline the role of states in regulating the impact of companies on human rights and establishing accountability and remedy mechanisms.

However, there are also concerns about legislation on content moderation, which is seen as often leading to the suppression of free speech. Legislation that pushes companies to take down too much content can result in the repression of opposition or dissent. The Cybercrime Convention is highlighted as an area where potential overreach is observed, which can curtail rights.

The implications of legislative models, such as the German NetzDG statute, in different global contexts are discussed. It is noted that exporting these models without considering the varying contexts can lead to problems and conflicts with human rights principles.

Furthermore, worries are expressed about regulatory approaches in liberal democracies that could potentially compromise human rights and data encryption. Measures such as client-side scanning or undermining encryption are viewed as problematic, as they could have adverse global impacts.

The breadth and severity of punitive measures under the Cybercrime Convention also raise concerns. Instances where individuals have been imprisoned for a single tweet for three to four years prompt questions about the proportionality and fairness of these measures.

While negotiation processes are still ongoing, there is a recognised need for continued dialogue to address concerns and improve the Cybercrime Convention. Multiple states share the concerns expressed by the Office of the United Nations High Commissioner for Human Rights (OHCHR).

In conclusion, the discussion highlights the importance of upholding human rights in the digital space and cautions against excessive government regulation that can impede these rights. The responsibilities of companies in respecting human rights are emphasised, along with concerns about the negative effects of content moderation legislation. The need for careful consideration of context when enacting legislative models and the challenges posed by regulatory approaches in liberal democracies are also brought to light. Ultimately, ongoing negotiations are required to address concerns and enhance the Cybercrime Convention.

David Satola

The analysis explores the importance of upholding equal rights in the digital space, irrespective of an individual’s identity. It stresses the need to establish virtual identity rights prior to the impending AI revolution. The fast-paced progress in AI technology adds a time constraint to defining these rights, making it crucial to formulate and establish them promptly.

One of the key arguments in the analysis emphasizes that while everyone theoretically enjoys the same rights in physical spaces regardless of their identity, the emergence of a new front in the digital space necessitates extending principles of equality and non-discrimination to the virtual realm.

Another aspect highlighted in the analysis concerns the rights of avatars and posthumous social media accounts, raising questions about the legal framework and rights that should govern these virtual identities, particularly in the context of the AI revolution. Addressing these issues in advance becomes essential to safeguard individuals’ virtual identities within a legal framework that ensures equal rights and protections as in the physical world.

Furthermore, the analysis underscores the potential challenges to the universality of rights brought about by the migration of our daily lives into cyberspace. As our activities and interactions increasingly occur online, it becomes crucial to ensure the preservation of fundamental human rights in this digital domain as well.

Additionally, the incorporation of national or regional laws without adequate context may pose a threat to online rights. This observation underscores the importance of crafting carefully designed and globally aligned legal frameworks governing the digital space, to prevent discrepancies and inconsistencies that could undermine the universality of rights.

In conclusion, the analysis emphasizes the need to guarantee equal rights in the digital space, highlighting the significance of defining virtual identity rights in anticipation of the AI revolution. It also discusses the challenges posed by the migration to cyberspace and the potential threats to online rights in the absence of cohesive global legal frameworks. Given the rapid advancements in AI, it is essential to act swiftly in establishing these rights to pave the way for a fair and inclusive digital future.

Joyce Hakmeh

Joyce Hakmeh, Deputy Director of the International Security Programme at Chatham House, moderated a session presenting the findings of a report by the American Bar Association’s Internet Governance Task Force. The task force was co-chaired by Michael Kelly, a professor of law at Creighton University specialising in public international law, and David Satola, Lead Counsel for Innovation and Technology at the US Department of Homeland Security.

In the session, the speakers discussed the complexities of internet governance, stressing the need to find the right balance of responsibilities. They highlighted concerning practices of some autocratic countries that suppress dissent and violate human rights. They also drew attention to regulatory approaches proposed by liberal democracies that raise human rights concerns, such as breaking encryption for ostensibly legitimate purposes.

Peggy Hicks, a director at the Office of the UN High Commissioner for Human Rights (OHCHR), participated in the session as a discussant. She responded to questions about the responsiveness of countries, at both national and global levels, to the concerns raised by the speakers, covering issues related to autocratic countries and the potential human rights implications of regulatory measures proposed by liberal democracies.

The session also touched upon the Cybercrime Convention, with Peggy Hicks noting that the OHCHR has been actively engaged in publishing commentary and providing observations on the content and progress of the convention. Although specific details of the convention’s progress were not covered in depth, the speakers discussed its complexity and potential for abuse, particularly regarding procedural powers and broad criminalization.

In conclusion, the session emphasized the importance of raising awareness about the complexities of internet governance and the potential for human rights abuses. The discussion shed light on various perspectives and challenges related to this issue, contributing to a better understanding of the topic.

Session transcript

Joyce Hakmeh:
I’m going to turn it over to Joyce Hakmeh, who is the Deputy Director of the International Security Program at Chatham House. Good morning, everyone. May I please ask you to take your seats? We’re about to begin. So good morning again. My name is Joyce Hakmeh. I’m the Deputy Director of the International Security Program at Chatham House, and I have the pleasure of moderating this short but very important session looking at the Internet Governance Task Force. And this is the result of a report done by the American Bar Association’s Internet Governance Task Force that is co-chaired by Michael Kelly, who’s sitting on my left, and David Satola, who is joining us online. So Michael is Professor of Law at Creighton University in the U.S., where he specializes in public international law, and David is Lead Counsel for Innovation and Technology at the U.S. Department of Homeland Security. And David is the Director of the International Security Program at Chatham House, where he specializes in connectivity and cybercrime prevention strategies. In addition to the two speakers who will be presenting the findings from their research, we also have a discussant with us today, Peggy Hicks, who is the Director of the Office of the U.N. High Commissioner for Refugees. So welcome to all of you. So we have half an hour together. So the way we will do this is we will hear first from Michael and David about the research, which has been just published in Volume 26 of the University of Pennsylvania Journal of Law and Social Change. And then we will hear some reactions from Peggy and perhaps a question to the speakers, and then we’ll end the session. So without further ado, I will now turn to Michael. And just a quick reminder that this session is being recorded and can be downloaded from the IGF website. So over to you, Mike.

Michael Kelly:
Okay. Thank you, Joyce. And if we could bring David Satola up online. He is also presenting with us. Christina, please advance the slide. We want to start with a New Yorker cartoon because they can mean anything. In 1993, you see the famous cartoon of the dog saying to the other dog: on the Internet, nobody knows you’re a dog. That was 1993. Today, in 2023, this cartoon was updated: remember when on the Internet no one knew who you were. That’s a paradigm shift. And we’re going to talk about another paradigm shift in the field of digital human rights that’s coming up with the advent of AI and the revolution that is on the verge of happening. Christina, please advance. Why are we interested in which human rights are manifesting online? Well, because that’s where we spend most of our time. This Pew Research Center poll from 2019 demonstrates that over 80% of us are online daily, almost all the time. This used to be a generational phenomenon. But as the generations go forward, we see that that is compressing at the far end. You can just look around the room and see who’s on devices of one type or another. So you live your daily life in physical space, but you also live your daily life in digital space. And that’s not always or even mostly work space. Human rights manifest on both sides of this equation. The question is, which ones follow us from physical space into digital space? How do they manifest? How are they regulated? How are they defined? And then, of course, the other end of that is how are they enforced? Christina, next slide, please. The Universal Declaration of Human Rights, as you all know, recently celebrated its 75th birthday, which is a huge passage to mark. It’s made up of both freedoms and rights. And these come about in multiple contexts. Freedoms you’re familiar with: speech, movement, assembly, religion, freedom from discrimination. Rights you’re also familiar with: equality, privacy, security, work, liberty, democracy, education, property, fair trial, and national security. But in digital space, these rights really are rendered meaningless or less useful if you don’t have core rights that exist to actually animate them. And by core rights, we talk about connectivity and net neutrality. What good are digital human rights to you if you’re not online? Not much. What good are digital human rights to you if you’re not online meaningfully? And that, of course, is the net neutrality discussion. Again, not as much. And that certainly implicates the equality prong right out of the box. The other thing that we look at from a framework perspective is whether the normative equivalency paradigm is the right paradigm to think about the transference of human rights from physical space to digital space. The normative equivalency paradigm is basically moving the rights into a digital format without really altering them much. Other paradigms have been proposed out there. Probably the one that has gained the most attention is actually according human rights to digital entities themselves. But you get into all kinds of definitional issues in that regard. And I’m not sure that we’re there yet. I don’t know that we’re going to be there soon, but it could be on the horizon. Nevertheless, we don’t take a stand in our research on which paradigm is the appropriate one, because our research basically creates a matrix. And so it is a mapping exercise that hopefully will be useful to policymakers, human rights advocates, and jurists as well. Christina, next slide, please.
Exhibit A is the right to be forgotten. This in our physical space is the right to privacy. And it’s confirmed by the European Court of Justice to exist in digital space at a higher level than it does in physical space, at least since 2015, to Google’s consternation. This was a case that was brought by an individual in Spain who wanted some content delisted from search results about him, because he had already served a criminal sentence for fraud. Spain, of course, has a very forward-looking social justice mechanism for rehabilitation, and people are supposed to get a fresh start after they emerge from the criminal system. But people kept looking up the one article about this individual that tainted his ability to do that. This was litigated all the way up to the European Court of Justice. The ECJ said yes: Google, you are required to effectuate and moderate this human right on your platform and delist material that is irrelevant, no longer useful, or mistaken. Google’s argument, of course, was, well, this is censorship, and shouldn’t that be the job of a government, not a corporation? The ECJ confirmed, no, actually, Google, it’s your job, because we’re telling you it’s your job. And so Google found itself not only in a moderation role, but an enforcement role. Whether that role extends throughout the EU or throughout Google’s global reach was the subject of later litigation that I don’t have time to go into today. But the internal corporate process that Google had to set up to actually have a company moderating human rights in digital space was one where they had to figure out what is the interplay between humans and algorithms. And we haven’t even inserted AI into the process at this point. But review committees for each EU member state were set up. Now there are over 5 million web page delisting requests since the advent of this process. And the vast majority of them implicate content on social media, specifically YouTube, Facebook, and Twitter. So now you’re in a situation where you’ve got a company not only defining, moderating, and enforcing a digital human right on a space it owns in cyberspace. It’s actually moderating other companies’ content, right? Because when Google takes down, delists an item on Twitter, Twitter’s affected. So now you’ve got cross-pollination happening. And is conversation happening across those platforms and across those corporations? Not at the level that it should be. So this raises, obviously, a larger question on the propriety of corporate enforcement, which, of course, is by terms of service. You agree, when you read every line of those terms of service before you click accept, which I know everyone in this room does, that you will comply with what the corporation thinks about your content that you’re uploading onto its platform. Christina, next slide please. We selected here a half dozen articles from the UDHR. You can look at the University of Pennsylvania Journal of Law and Social Change article for the complete matrix of all 30 articles, just to give you a bit of a comparative perspective. Article 1, freedom and equality, manifests usually as connectivity and net neutrality. Codification is in progress in some states, not in others. Regulation is in progress in some states, not in others. In the United States, you see this going back and forth in a bit of a ping-pong ball fashion between administrations. The Obama administration moved forward on this. The Trump administration moved backward on net neutrality.
The Biden administration is now moving forward again, not unlike some other areas of law. Article 12, which I just covered, the right to privacy, manifests as the right to be forgotten. It’s codified as an EU regulation. EU member states enforce it per the European Court of Justice, but Google is the actual arm. It’s under court order, though, to do so, so there’s an interplay between the state and the company. Freedom of movement we’ll come back to. There’s an asterisk there. Article 17, the right to property, digitally manifests as property in lots of different ways online. If it happens to be intellectual property, well, there’s a treaty framework for that through TRIPS, and so this is regulated by states and enforced by states. But if you look at speech and assembly, Articles 19 and 20: with speech, it’s access to social media platforms, and the regulation is via the tech corps and your terms of service, and it matters whether or not it’s a public corporation or a private corporation. If it’s a public corporation, there’s likely to be a process. If it’s a private corporation, well, Elon Musk decides whether or not you get your speech rights on his platform. With assembly, it’s access to groups, again, via terms of service. The reason we marked Article 13, and there are a couple of other articles, is that there are no positive regulations in this area yet. There’s no positive digital manifestation of this as a right or a freedom yet, frankly, because it’s assumed you have freedom of movement across the Internet. Well, that assumption is incorrect, and what it does is it leaves a gap. My British colleagues are familiar with the term mind the gap. Yeah, mind the gap, because in the absence of this, that leaves room for negative regulation, and authoritarian regimes can wall you off from certain areas of the Internet and restrict your freedom of movement in digital space. So we have to look at these gaps as well as where regulations are positively manifesting. Next slide, please, Christina. Okay, here are your corporate protectors of Internet human rights, and I’m just going to kind of pause here for a minute for you to take a look at these guys. Google, of course, we saw resisted its role as an enforcer and definer of digital human rights, but it is doing so, and I think it’s doing so effectively, under court order. But then there’s the Microsoft approach, where the company actually embraces its human rights role. You all remember a few years ago Brad Smith calling for a digital Geneva Convention. That voluntary embrace of their new role, policing cyberspace, I think is where we need to go if we’re going to get effectively at the 20 to 25% of human rights listed in the UDHR that have corporate fingerprints on them. Next slide, please. Maybe we trust those guys more than we trust this guy. The broader context, if we back up a few paces, is the back and forth between multistakeholderism versus multilateralism as the effective paradigm for Internet governance. We’re all here in a multistakeholder environment, and authoritarian regimes want that replaced with the multilateral approach, where only states are sitting at the table, not companies, not civil society. I’m civil society. I’m with the American Bar Association. Although I don’t represent their views at this conference, I would not have a seat at this table if the authoritarian multilateralists had their way. Why should big tech care? Because they will lose their seat at the table.
The conversion of them from objects to subjects of international cyber law will have a profound impact on them and on their bottom line. Russia’s draft cybercrime treaty, which some of you in the room prior to this listened to a discussion of for an hour and a half, was criticized when it was first introduced as the beginning of the end for multistakeholderism. It wasn’t really so much about cybercrime as about possibly repressing human rights. Whether it undermines the Budapest Convention, and whether or not it could suppress digital human rights, are questions the valiant people working on this through the UN process are discussing in New York City and Vienna every few months, and although the prior panel struck an optimistic note on that, I’m not sure I completely share it. And so this opens up all kinds of other issues, the broader issues, and that’s why now is the time to crystallize what these digital human rights are, and secondly, what big tech’s role is in defining, regulating, and enforcing them ahead of the coming AI revolution, because that will change everything. There will be a paradigm shift when AI actually matures to the point that creativity-based AI platform regulation replaces logic-based algorithmic platform regulation. Let me say that again. When creativity-based AI platform regulation replaces logic-based algorithmic platform regulation, that’s the sea change, and we have to get ahead of that. We have to get ahead of that for defining digital human rights, and we have to get ahead of that for defining the roles of companies in this process and convince them that it’s in their interest to do so. Just as an example, a policing example: AI will be a more effective cop for companies policing their platforms, because it’s much more difficult to get around. You can get around a logic-based algorithm. But the bad side of that, if you just flip it around, is that it also can be a more effective tool for authoritarian regimes to repress your digital human rights. Just like everything else, this is a double-edged sword. I should pause and let David chime in, Joyce, I think, if there’s … Yeah.

David Satola:
Thank you, Mike. Thank you, Joyce, and thank you, Peggy. I think in the interest of time, we should probably move ahead to the commentary and hopefully leave some time for questions at the end. The only remark I would underscore that Mike already made is the multi-stakeholderization of the enforcement of human rights that we’ve seen. It’s on our slide seven, where we see that actually the human rights are being examined and enforced by private actors. This was something that I don’t think anyone anticipated back when the Internet Governance Forum started. With that, I’ll turn it back to you in Kyoto.

Joyce Hakmeh:
Thank you. Thank you very much, David and Michael. Now, we turn to you, Peggy, since I messed up your introduction. Why don’t you introduce yourself and share your views on what’s been said? Thank you.

Peggy Hicks:
No problem. Thanks so much. Yes, I’m Peggy Hicks. I work at the UN Human Rights Office in Geneva, where we’re focusing on many of these issues, and I’m very grateful to Michael and David for taking this look at the digitization of human rights in their scholarly work. It’s a theme that we talk about quite a bit in Geneva. It’s been many years now since the Human Rights Council first said that the human rights that apply offline apply online. What that means in practice, of course, has yet to be worked out. It’s really interesting to look at this mapping approach that goes through the different articles and really looks at what are some manifestations of how that is developed in real terms. I want to emphasize part of the reason we talk about the human rights framework as being so relevant in the digital space, because you focused, Michael, on the battles between a multistakeholder and multilateral approach. Part of what we think is crucial about the framework of human rights is its universality and the fact that it involves legally binding obligations that the vast majority of states have ascribed to already. We avoid using it at our peril. It’s part of what can help us work through some of these challenges that are presented by the analysis that we’ve heard. It also already includes accountability mechanisms. One piece of it I really want to emphasize, which I think is quite relevant to the research here, is, for example, frameworks like the UN Guiding Principles on Business and Human Rights, which really link up the company responsibilities to the legal obligations that states have. Under the guiding principles, you have three pillars. One looks at how states have a responsibility to regulate how companies impact on human rights in their actions, and then, of course, there is the chapter that’s best known, which goes through what companies need to do to better respect human rights, including understanding and mitigating risks that are within their supply chains in different ways. And then the third pillar relates to accountability and remedy on those sides. And one of the things we’ve been really working on within our office is this: there’s been a lot of work done on how those principles apply in industries like the extractive industries or the apparel industry. What does it mean in the context of the digital space, software applications that are mass marketed and used by millions of people globally? What does a tech company have to do with how that software might be misused at some point in time? So we’ve been working with a community of practice of a number of the largest tech companies to really work through some of those issues and figure out how we can better have them take on some of these responsibilities that are outlined in this report more effectively. But I think it also goes to this tension that the mapping shows of, you know, how much responsibility do we want at the corporate level, and what do we want states to do to better tell companies how they ought to handle things? So a good example is the terms of service that you referred to. It is the case that companies set those terms of service, but there are things that they are legally required to do within them in terms of unlawful content that might be on their platforms. So, you know, how far we take those relationships, and what we are looking for from governments in terms of content regulation, is I think a big question.
Before I close, I have to say that one of our big concerns is that governments go too far in that regard, and that’s what we’ve seen playing out when we look at content-moderation-related legislation globally. The vast majority of legislation that’s been adopted across the globe tends to overreach and do more to undermine human rights than to protect them. So it allows and almost pushes companies to take down too much speech because they want to repress opposition or dissent or free speech in various ways. So we have to be very careful about what we ask governments to do and what we’re expecting of companies. But in both places, we have a lot of work to do to make sure that that digitization process goes forward as we’d like to see. And I agree, the Cybercrime Convention is an interesting area in which some of these issues are playing out. Some of the potential overbreadth we see in the work that’s being done under the Cybercrime Convention is similar to what we’ve seen in other efforts to legislate online speech, in areas like counterterrorism, where sometimes those statutes as well are used in an overbroad way to repress rights. So those are just some initial comments, and thanks again for the efforts.

Joyce Hakmeh:
Thank you very much. We have five minutes, maybe a little bit more, to have maybe a quick discussion. But maybe sort of like a follow-up question to you, Peggy, and maybe, Mike, if you want to respond, and David as well. First of all, you sort of both outlined the complexity of this issue and the very big importance of getting the balance right in terms of who the onus should fall on and how you get to a place where the responsibilities are clear. And in the context of what you described, you talked about some of the practices that some autocratic countries are following in terms of suppressing dissent and not respecting human rights. But we also see some regulatory approaches and initiatives coming out of liberal democracies suggesting approaches that raise human rights concerns, from breaking encryption for ostensibly legitimate purposes and so forth. So how concerned are you about that, and how responsive do you find these countries to the concerns that you raise with them, whether in a kind of national context or more globally?

Peggy Hicks:
I think it’s a really good question and one that not only our organization, but I think many of the civil society organizations that are here today are really looking at that. I think part of what tends to happen is that governments naturally and understandably rightfully want to adopt legislation that works in their context. But the reality is that those models are then exported globally in contexts that can be very different, where there is not the same infrastructure to support and ensure that those laws are interpreted and used in a human rights-respecting way. The example that’s always given is the German NetzDG statute, which was replicated in a variety of ways in a variety of places. But we worry about that now, and the point that you made on legislation that will potentially allow for client-side scanning, for example, which we see as incredibly problematic, given the importance of end-to-end encryption, is a really good example where we understand the concerns that are leading to that type of legislation, but feel very strongly that adoption of measures in that direction could have really deleterious impacts globally and could lead to a much broader problem with the limitations or undermining of encryption.

Joyce Hakmeh:
Thank you. Mike, if you can answer this question while also addressing what could be done in order to sort of understand and avoid those unintended consequences.

Michael Kelly:
Right. Well, at base—and this, of course, is hand in glove with the policy approach from the United Nations—by virtue of the fact that you’re a homo sapiens, you get the same bag of rights. It doesn’t matter what your race is, your gender, your religion, or whatever, and everyone is theoretically bound by that in physical space. Now, we know that’s not always true, and that doesn’t always play out, but what about in digital space? Does everyone get the same bag of rights by virtue of the fact that you’re a digital homo sapiens? Well, what is a digital homo sapiens? Is your avatar, Peggy, going to get the same rights that you do, or Joyce, or does your Facebook account go on after you die, and does it continue to have the rights that you enjoyed while you were alive? We’re in a new frontier here, and it is a huge balancing question, but it also is a definitional question. Where are we? And that’s why the definitions need to be nailed down before the AI revolution comes, and it’s coming very quickly. So there’s a temporal component to this that we really have to be mindful of.

Joyce Hakmeh:
Thank you. David? David, are you still online?

David Satola:
Yes, I am. If I could just add one very quick comment, and it hopefully relates back to something that Peggy mentioned about the universality of rights. I think one of the things that intrigued Mike and me when we went into this research was, does the migration of our daily lives into cyberspace in any way challenge that very basic concept of the universality of rights? And while we recognize that context matters, and again, to Peggy’s point about the Brussels effect and other national or regional laws that have been exported and incorporated out of context, does that also pose a threat to the universality of rights online? So we don’t have the answer to those questions, but I think they’re worth thinking about. Thank you.

Joyce Hakmeh:
Thank you, David. And maybe one final question to you, Peggy. You mentioned the Cybercrime Convention, and the OHCHR has been quite active on that front, you know, publishing sort of commentary and making some observations on the content and how the convention is proceeding. Can you share with us sort of your latest view on where the process is? I don’t know if you yourself covered that specifically or not, but maybe sort of share with us what you think about where we are at the moment and what sort of, you know, where do you think we might be heading?

Peggy Hicks:
Oh, thanks. It’s a good question, but I smiled only because I don’t think it’s a one-minute question. It’s a bit more complex than that. We do still have some issues. We’ve been raising consistently some concerns over how the convention might have the same sort of overbreadth problem. The fact that the types of offenses that are included are those that are punishable by three to four years is, for us, for example, something that raises questions, because we see people being, you know, put in prison for three to four years for a single tweet. So, you know, I think there are some concerns that are still there in terms of criminalization and in terms of the breadth of investigative powers and the effective safeguards that need to be there. But, of course, as has been said, the process is still ongoing, and there’ll be lots of opportunity for those who share with us those concerns, including a number of states, to put them on the table. And, you know, hopefully the negotiation process will continue in a way that moves the convention in the right direction.

Joyce Hakmeh:
Brilliant. Thank you. And I guess maybe the silver lining from all of that is that the process has raised awareness about the complexity of the issues and the kind of, you know, potential for abuse when it comes to not just the procedural powers, but also a very broad scope of criminalization. So I think with that, we will end the session. It was a short session, but very important. Thank you, Michael, David, and Peggy for joining us today. And thank you for everyone who attended. And, yeah, we’ll see you later. Thank you.

David Satola

Speech speed

170 words per minute

Speech length

476 words

Speech time

168 secs

Joyce Hakmeh

Speech speed

181 words per minute

Speech length

861 words

Speech time

286 secs

Michael Kelly

Speech speed

169 words per minute

Speech length

2401 words

Speech time

850 secs

Peggy Hicks

Speech speed

177 words per minute

Speech length

1317 words

Speech time

446 secs