Global Internet Governance Academic Network Annual Symposium | Part 2 | IGF 2023 Day 0 Event #112
8 Oct 2023 04:30h - 06:30h UTC
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Full session report
Robert Gorwa
The European Union (EU) is steadily establishing itself as the foremost regulator of the technology industry. This substantial shift is demonstrated in several regulations the EU has pursued, including the Digital Services Act and the AI Act. The EU's regulatory strategy can be seen as an expanding toolkit with the potential to serve specific strategic objectives in the future. Notably, certain instruments, such as mandated rapid takedown times, are now being integrated into the EU's approach to content regulation.
Nonetheless, it is crucial to acknowledge considerable divergence within the EU's overall digital policy. This divergence stems from contrasting interests within the European Commission itself, where distinct actors are driving differing objectives. It can be witnessed in the discrepancies between the recently initiated European Media Freedom Act and the Commission's publicly declared objective of combating disinformation. Furthermore, preferred institutional arrangements vary substantially among the Commission's departments, contributing further to this divergence.
A renewed interest in European digital constitutionalism and scholarship on digital capitalism have provided fresh theoretical perspectives for understanding the changes in the EU's digital policy. Digital capitalism, interpreted as a battleground between firms and political actors, together with attention to internal EU political conflicts, adds significant explanatory value. Moreover, industrial policy perspectives shed light on geopolitical strategies tied to the reshoring of supply chains and digital sovereignty projects.
Despite these insightful lenses, Robert Gorwa's project, currently in its data collection phase, emphasises the importance of a deeper understanding of the European Commission's actions. It therefore places significant weight on gathering tangible evidence, including securing internal communications and other information via freedom of information requests.
Distinctively, data regulation stands to gain from public procurement, particularly within municipal governance. The Barcelona model, which requires companies to share data with the local community, serves as a prime example. This model reinforces the concept of a localised social contract, with mutual data sharing at its core.
Furthermore, tech policy sees a variety of actors contesting the same regulatory space, culminating in regular clashes between market-driven, rights-driven, and security-driven visions. The central actors in these confrontations are the US, the EU, and China, each propounding its own vision. An explicit example is the influence of US actors on the controversial EU Child Sexual Abuse Material (CSAM) regulation, leading to a 'Brussels effect'. Whilst these measures are aimed at child protection, they have sparked debates over potential infringements on user rights, privacy, and end-to-end encryption.
Amidst all this complexity, it is encouraging to observe the considerable changes that have transpired within a short period, most notably in the sphere of content regulation and transparency. These advances, coupled with the other developments outlined, collectively depict the evolving impact of the EU's regulatory strides on the transnational tech sector.
Sophie Hoogenboom
Sophie Hoogenboom delves into the intricate concept of a global social contract for the digital sphere, specifically the vast, ever-growing realm of data. A social contract is frequently mooted as a potential intervention to streamline and optimise social cooperation within the global digital context. Drawing on Alexander Fink's theory, she notes that the likelihood of a social contract arising increases in communities with shared preferences, common social norms, and smaller size.
However, Hoogenboom critiques the ambitious notion of a sweeping global social contract on data, attributing its potential challenges to the culturally diverse preferences and social norms of the various global communities. She posits that, given the multifaceted contexts, notions of privacy and the universal definition of ‘common good’ could dramatically vary between societies, and the sheer size of the global community could inflate the costs associated with decision-making and monitoring protocols.
Nevertheless, she proposes that a more manageable and immediate first step could be the creation of a social contract at community level, focused on utilising community data for societal betterment. Such a contract could be highly beneficial in fulfilling human rights and propelling progress towards the objectives enshrined in the Sustainable Development Goals. Community data in particular could hold considerable potential for societal betterment, given its relevance to health data and related sectors.
The current analogue social contract, she believes, leaves the potential of data untapped, thus stimulating debates about data decentralisation and its proposition as a global public good. She refrains from taking a hard stance on whether community data should be kept decentralised or placed under a global social contract, suggesting she is still formulating her view on this complex issue.
Hoogenboom advocates for a composite approach in data governance wherein community networks work simultaneously with global networks. She recognises the digital divide in certain parts of the world and underlines the need for inclusivity in the data governance framework.
Overall, she emphasises the importance of contextual understanding in these discussions, asserting that different communities might lay emphasis on distinct aspects of development, subject to their unique needs and challenges. This nuanced approach makes Hoogenboom’s analysis a significant contribution to the ongoing discourse about the need, purpose and potential form of a global social contract for data.
Audience
The analysis suggests a prevalent focus within academic literature on national Artificial Intelligence (AI) strategies, overshadowing the much-needed investigation of regional and global AI strategies. This skew raises pivotal concerns regarding the comprehensive understanding and development of AI strategies worldwide. Moreover, it was noted that the decision to search for the acronym 'AI', rather than the full term 'artificial intelligence', could inadvertently limit the scope of the study, confining the research to the era in which AI became a recognised and frequently used term.
In the realm of governance, the concept of the social contract has come under significant scrutiny, specifically in association with AI. Questions about the necessity of a social contract have been raised, and there are suggestions of a possible departure from traditional state-centric constitutions within the AI sphere. This shift could potentially pave the way for the development of a novel, decentralised social contract within AI governance, thus reflecting the differing nature of governance in this innovative field.
Attention has been directed towards the consequences of the Digital Services Act (DSA) on countries in the Global South, such as Costa Rica and Chile, where it has influenced local legislative creation. However, the European Commission, instrumental in these processes, seemingly harbours internal political issues that require unravelling for a comprehensive understanding of its operations.
There are potential contradictions within the legal framework, specifically between the Disinformation Code of Conduct and the Media Freedom Act. The necessity for clarification in this regard is evident, as ambiguities can hamper the efficacy and effectiveness of these legislative instruments.
Concerns have emerged about AI regulation and hate speech management due to observable inconsistencies in the pertinent legislation. Consequently, there is an urgent call for more diligent digital leadership that ensures consistency across AI regulations, and the importance of accurate and conscientious drafting of legislation is underscored.
On a more positive note, community networks have been recognised as potential facilitators of data governance. The analysis proposes these networks as instrumental infrastructures that could effectively embed data governance regimes, thereby fostering broader partnerships for sustainable development. Looking forward, this suggests a novel approach to managing data, leveraging the inherent strengths of community networks.
Radomir Bolgov
The analysis spotlights the domain of artificial intelligence (AI) policy as being in its fledgling stages. AI policy is a particularly nuanced field, presenting numerous dimensions of complexity, which necessitate comprehensive definition and documentation to ensure clarity of policy objectives. A key portion of the study encompassed a descriptive analysis and the subsequent development of a framework via bibliometric analysis. This approach was utilised to map the principal topics within the field, an exercise that underscored the multidimensional character of the domain.
Scientific output pertaining to AI policies, although demonstrating an uptick over the years, has shown signs of stagnation: the past three years have witnessed a plateau in annual scientific production. This underscores a compelling need for increased research into the effects and implications of AI policies. The analysis unveiled a stark scarcity of studies focussing on either the positive or the negative outcomes of AI policies. Themes such as AI policy evaluation remain significantly underexplored, revealing a lacuna in research that needs addressing.
AI policies have been studied considerably during periods of stability, but, as the analysis emphasises, there is an urgent need for these policies’ analysis within the context of present-day crises like pandemics, conflicts and environmental crises. The reshaping and re-examination of AI policies are warranted by such drastic changes, posing challenges for policy-makers in this field.
Nonetheless, the findings of this analysis were constrained by the choice of keywords and the exclusive reliance on Google Scholar as the research database. Because keyword selection was based on the research team's initial knowledge, it may have confined the scope of the study. The exclusive use of Google Scholar further restricts the breadth and diversity of the literature captured, as it may not offer data of the same quality and consistency as databases such as Scopus and Web of Science.
There is, accordingly, an urgent need for strategic approaches to keyword selection to mitigate these limitations. Findings pertaining to global AI strategies are markedly few, indicating a need for expanded research in this area. This is especially significant in the context of aligning AI policy development with Sustainable Development Goal 9: Industry, Innovation, and Infrastructure. These insights, thus, form a vital basis for setting the agenda for future research initiatives and policy developments within the field of AI.
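To make the screening workflow described above more concrete, the following is a minimal, purely illustrative Python sketch of how a keyword-and-domain filtering pass over an exported bibliography might look. It is not the authors' actual pipeline: the file name, column names, keyword list, and excluded domains are assumptions loosely based on the terms mentioned in the study, and in the study itself deduplication and visualisation were handled in Mendeley and BiblioShiny rather than in custom code.

```python
import csv
import re

# Hypothetical keyword and exclusion lists, loosely based on the search
# terms mentioned in the study; the real query differed and was combined
# with country names.
POLICY_TERMS = ["policy", "strategy", "politics", "initiative",
                "regulating", "governing", "legislation"]
EXCLUDED_DOMAINS = ["agriculture", "education", "health"]

def normalise(title: str) -> str:
    """Lowercase and strip punctuation so near-identical titles match."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def screen(records):
    """Keep unique, English-language records that mention at least one
    policy term and do not fall into the excluded application domains."""
    seen, kept = set(), []
    for rec in records:
        key = normalise(rec.get("title", ""))
        if not key or key in seen:          # automatic duplicate check
            continue
        seen.add(key)
        if rec.get("language", "en").lower() != "en":
            continue
        text = f'{rec.get("title", "")} {rec.get("abstract", "")}'.lower()
        if not any(term in text for term in POLICY_TERMS):
            continue
        if any(domain in text for domain in EXCLUDED_DOMAINS):
            continue
        kept.append(rec)
    return kept

if __name__ == "__main__":
    # "google_scholar_export.csv" is a placeholder for whatever export the
    # reference manager produces; the column names are assumed.
    with open("google_scholar_export.csv", newline="", encoding="utf-8") as f:
        records = list(csv.DictReader(f))
    final = screen(records)
    print(f"{len(records)} records screened down to {len(final)}")
```

The point of such a sketch is only to show where the keyword choices enter the process; broadening the term list or adding databases, as the analysis recommends, would change the filters rather than the overall structure.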
Moderator
Andrea was unable to participate in the discussion due to an unforeseen absence. Despite his absence, he assured the group that he would remain involved by providing comments on the pertinent papers and contributing to the ongoing discussion.
The afternoon session of the GigaNet Annual Symposium featured three papers covering crucial topics: AI policies, EU platform regulation, and a new social contract for data, presented by Radomir Bolgov, Robert Gorwa, and Sophie Hoogenboom respectively. The selection of these presentations reflects the symposium's engagement with Sustainable Development Goal 9: Industry, Innovation, and Infrastructure.
In the interest of optimising the panel discussion, time was managed tightly. The moderator introduced all speakers at the start of proceedings, thereby establishing a clear and streamlined direction for the discussion.
The concept of building a new social contract was central to the discussion, with Sophie Hoogenboom taking the lead. On her account, a social contract presupposes a vacuum or void, which arguably does not presently exist given our current social contracts. The moderator advised Sophie to engage thoroughly with and respond to relevant theories, such as that of Alexander Fink, which could provide beneficial insights into the intricacies of creating a new social contract.
The EU's intricate digital policy and its extensive implications were also discussed. Concerns were raised about the complexity of mapping a policy landscape replete with interconnected, dynamic interactions, such as the roles of the European Commission and the EU External Action Service in shaping digital policy-making.
Returning to the subject of AI, the moderator commended Radomir Bolgov's ongoing research on AI policies and encouraged him to consider related fields such as education and health, where AI's application and impact are profound.
There were questions raised about the size and functionality of social contracts as per Alexander Fink’s theory. The moderator expressed scepticism towards Fink’s claim that smaller groups lead to more effective social contracts, suggesting this could contravene the idea of global social contracts.
In the sphere of public procurement’s potential role in social contracts, examples were referenced from municipal governments around the world, focusing on data sharing as pivotal in shaping localised social contracts. The Barcelona model was cited as a commendable example where companies providing services like transportation platforms must share data with their communities.
Furthermore, the moderator acknowledged the value of incorporating social contract elements, notably data sharing, into public procurement, suggesting that such an approach could foster a more engaged and informed community, echoing the success of the Barcelona model. To conclude, the discussion ranged across the influence of AI in various sectors, EU digital policy, norms of data sharing in public procurement, and emerging trends in social contracts, and outlined different stances, opinions, and ideas for further exploration within these paradigms.
Session transcript
Moderator:
when somebody had to be a discussant at this panel. So I will at least start, but this is a team effort and we're counting on everybody to fill in Andrea's shoes for the moment. Andrea has assured me that he will send you comments on your papers, so we will start that process here. Okay, so it's my pleasure to launch the afternoon session of the GigaNet Annual Symposium. We have three papers that are going to be delivered, and I'll introduce all three straight away so that I don't have to talk in between the sessions, which takes up too much time. So we have Radomir Bolgov, who will be presenting the paper on AI policies as a research domain, preliminary findings of publication pattern analysis. And secondly, on my extreme left, we have Robert Gorwa, who will be presenting a paper on European rules, European tools, mapping the institutional cultures of EU platform regulation. And in the middle, we have Sophie Hoogenboom, who will be presenting her paper, a new social contract for data, question mark. Without further ado, I will pass the floor to Radomir, who's going to start the presentations. You actually get about 10 minutes for your presentation.
Radomir Bolgov:
Thank you very much. The study about AI policies was conducted by me and my colleague, Olga Filatova. We tried to look at artificial intelligence policies and agencies as a research domain. Is there any specific feature of this research domain? There is a recognition and interpretation problem: what does AI policy mean for industry? Are there any trends in these studies? I will share the preliminary findings of this analysis. We approached this in three ways. We developed a descriptive analysis of papers, articles, and authors in order to determine some trends: what AI policy is, what it means, and what the main directions of the research domain are. We developed a framework using bibliometric analysis and created a map of the main and most important topics studied. And we analyzed and grouped these topics and tried to suggest some future research directions. We tried to answer two research questions: what is the state of the art of AI policy research, as reflected by the most cited papers and articles, the most important authors, sources, countries, et cetera; and what are the key topics and key concepts in the literature on AI policies. So, the framework. It is a complex framework, developed drawing on prior research within the literature on AI policies. The stages were as follows. The first was developing the study design. We selected Google Scholar as the database, for reasons of accessibility. The second step was data collection. We searched Google Scholar based on the search strategy I will describe on the next slide, exported the results to EndNote format, and worked in a Mendeley database, which is scientometric software for these purposes. We automatically checked for duplicates, uploaded the collection into BiblioShiny, which is another piece of software for bibliometric analysis, and exported the collection in Excel format. In total we found 1545 publications. The search was conducted by country name and keywords; you can see these words on the slide: policy, strategy, politics, initiatives, regulating, governing, legislation, et cetera. Excuse me. The third stage was data analysis: screening titles in Google Scholar and deleting irrelevant records. We excluded preprints, forewords, et cetera, kept only items in English, and deleted non-relevant publications, for example on agriculture, education, health, et cetera. That left only 178 publications. The final collection was uploaded to BiblioShiny, and we then used this program to visualize the data for interpretation. We can see that annual scientific production increased in recent years, but the increase stopped during the last three years, so we can say that this research domain has reached a certain plateau. We also looked at the most relevant sources and the author co-occurrence network, which shows the collaboration of authors; we can state that these groups of authors work separately from each other. There is a title network, just to illustrate, and the thematic map, which consists of four quadrants: niche topics, motor topics, emerging and declining topics, and basic topics. We can interpret this as showing that topics such as artificial intelligence policies and China are increasing, emerging topics, while at the same time a topic such as AI policy evaluation is not elaborated enough.
So we would like to see the further development of this topic, for example. As for the limitations of this study, we should note that the choice of keywords was determined by our initial knowledge of this topic, and this shapes the findings. We used only Google Scholar; traditionally, Scopus and Web of Science are used for these purposes. Google Scholar does index books, but the data in Scopus and Web of Science are better and more diverse, of course. As for the conclusions, we can state that some topics are not studied enough, such as AI policy effects: what the outcomes of AI policies are, whether they are negative or positive, and what AI policy is per se. There are only a few studies which answer this question, and scholars focus predominantly on national strategies and platforms, as well as on the perspectives of AI policies, so the institutional approach is dominant in these studies. AI policies were previously studied during periods of certainty, but there are now crises such as pandemics, conflicts, sanctions, and environmental crises, so we need to study this domain under contemporary conditions. So that's it, thank you for your attention.
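The thematic map Radomir refers to is, in standard bibliometric practice, built from each theme's centrality (its relevance to the wider field) and density (its internal development), with the medians of those scores dividing the plane into the four quadrants he lists. The sketch below is only an illustration of that classification logic; the theme names, scores, and thresholds are invented, not taken from the study.

```python
from dataclasses import dataclass

@dataclass
class Theme:
    name: str
    centrality: float  # relevance of the theme to the wider field
    density: float     # internal development of the theme

def quadrant(theme: Theme, c_median: float, d_median: float) -> str:
    """Place a theme in one of the four strategic-map quadrants by
    comparing its scores with the median centrality and density."""
    if theme.centrality >= c_median and theme.density >= d_median:
        return "motor theme"
    if theme.density >= d_median:
        return "niche theme"
    if theme.centrality >= c_median:
        return "basic theme"
    return "emerging or declining theme"

# Invented example scores; real values come from the co-word analysis
# produced by the bibliometric software.
examples = [Theme("example theme A", 0.7, 0.6),
            Theme("example theme B", 0.3, 0.2)]
for t in examples:
    print(f"{t.name}: {quadrant(t, c_median=0.5, d_median=0.5)}")
```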
Moderator:
Thank you very much, Radomir. I’ll move directly on to Robert, who will give his paper now, Robert. You don’t have slides, so we’re going to look at you on a big screen behind you as well.
Robert Gorwa:
Oh my goodness, brilliant. I don't know if I've ever been in such high definition before. Hi everyone, thanks so much for being here, it's nice to be in Kyoto, and yeah, I'm grateful for the opportunity to talk a little bit about some work in progress that I'm undertaking with my colleague, the legal scholar Elettra Bietti, who teaches at Northeastern. How's it looking up there? And I'm going to go old school with no slides, so apologies for this. Get ready to be extremely bored. I'll try to sprinkle some jokes in to keep you on your toes. That was the first one. And we're going to talk a little bit about European digital policy, which, again, dry topic, but it's one which is really important at the moment, I think. We've all probably heard conversations about the many different overlapping things that European countries, but also the European Commission as a political actor, are pursuing, from the Digital Services Act to the AI Act and other related regulations. And we've also probably all heard about things like the Brussels effect and the way that things that the EU is doing, because of its market size, are having potential transnational and transboundary impacts. So it seems kind of clear that at this current moment, at least a lot of commentators are picking up on this, the EU is portrayed as the leading tech regulator of the moment. And this project, which is very much a work in progress, so I'm looking forward to hearing comments and feedback on this, was born out of a bit of a dissatisfaction with some of the current work that we're seeing in this space, at least as far as the lack of good centralized resources that one can look at that try to provide a more holistic assessment of what the EU is doing. So, for example, even if you're looking at a subset of tech policy in the EU and you want a single resource that looks at the regulation of online content, so-called platform regulation issues, or other broader related issues, I think it's difficult to find something like this. And there is a reason for this, I think. There's a number of challenges that we, as a research space, are dealing with. The rapid development of EU policymaking in this space means that even people like me, who theoretically are supposed to be following this as my day job, can barely keep up. So we're seeing a huge amount of policy efforts that are being developed across the internet stack, from internet infrastructure to now more kind of assertive, even industrial-policy-related things that talk about supply chains and semiconductors and manufacturing processes of ICTs, as well as kind of the top of the stack, all sorts of different content-related regulations that are kind of at the application layer. And perhaps unsurprisingly, with all of this mobility, we're seeing a lot of disciplinary fragmentation, I think. So you know, the data protection law scholars are doing their own thing, the intermediary liability legal scholars are doing their own thing, and the same kind of pattern is repeated across child safety, media policy, and other related issues. And then kind of on the other side of the aisle, I come from a more political science background, there are also big and ongoing debates in the EU policymaking literature on things like the resurgence of industrial policy and, you know, the bigger kind of interventionist shift in EU industrial policymaking on things like semiconductors.
A lot of folks in this room have worked on digital sovereignty debates and the role that that might be playing, for example, when it comes to cloud infrastructure platforms and projects like Gaia-X. So on top of all of this movement, we're also seeing, I guess, just some structural features which make this a complicated space to navigate, which is how complicated the European Union is as a multi-level political actor. And even if you just look at the European Commission, which is the main kind of policymaking bureaucratic arm of the EU, there's a lot of complex politics that are kind of inside the hood that people don't see. And this actually makes it really kind of puzzling to understand what is going on. Because there's different parts of the commission that have different agendas and are pursuing what oftentimes seem to be contradictory policies. So the project that we have just kind of embarked on this summer, and again, this is very early stage, it's probably a bit too ambitious for a single paper and might even become two papers, kind of has two main overarching goals. So the first thing is we're trying to understand EU digital policymaking as a political project. Whether or not it's actually a coherent political project, that kind of remains to be seen. But we're trying to do a kind of descriptive and institutional mapping of all the different parts of this agenda and kind of what is going on. And then relatedly, part two is trying to look at what is going on from that descriptive point of view and try to map it onto some of the theoretical lenses that are being proposed in different disciplines, different parts of this conversation, to kind of explain the changes that we're seeing, whether we want to call those logics or political mechanisms. So I'll go into those a little bit in depth, maybe starting with the first kind of bucket. So what we've been doing in the project that we've started is we're taking a kind of institutionalist and political economic perspective of trying to think out who the key actors are and what is their policy toolbox. And we've been focusing specifically just on the last four years of the von der Leyen Commission. So just one actor, one commission, but again, a time where we're seeing a huge amount of change. So part of this is pretty basic descriptive work. We're mapping out the ecosystem of all the different parts of the commission that have digital policy competencies. Again, this is interesting because sometimes these aren't actually formal competencies if you look closely. So there are directorate generals, for those of you who don't know, these are departments of the commission. Examples of relevant ones include DG Grow and DG Connect, which is kind of the most classic telecommunications, digital policy one. There's Directorate General Justice, which has been working on issues like online hate speech and disinformation. There's DG Competition, which is working on competition issues. And then there's also ones like DG Home, which is the kind of security-focused actor of the commission. And interestingly, it has been getting more involved on issues like child safety and terrorist content online in recent years, even though that isn't maybe formally part of its mandate. And after doing this kind of mapping, we've been trying to list and map out all the different policies that these DGs have been spearheading, looking both at formal and informal regulatory tools.
And then also finally, we've been trying to kind of figure out the institutional features of these different policies across the different DGs. So basically, how do they do what they do from a kind of functional organizational perspective? And again, this is really interesting and also really complicated. You know, the EU is famous for involving complex networks of different regulators. Sometimes this is decentered across member states. We have different types of enforcement mechanisms. We have different legal justifications, basically questions of institutional design that are at play here. So we're trying to just understand that in a descriptive way, in a kind of more coherent manner. And one of the things that we're seeing here is something I've already alluded to, which is basically that a lot of divergence in EU digital policy seems to at least be partially explained by the different actors that are driving them and the different interests that we can assume those actors have. So there was just some interesting news today, for example, about the new European Media Freedom Act. And you might look at this and say, hey, doesn't the European Commission have a stated interest in combating disinformation? And it's something that they've been doing through different parts of the commission for some years, but there are some analyses and arguments being made that this European Media Freedom Act is actually directly contradictory to the goals of some of the things that they've done under voluntary codes, like the Disinfo Code of Conduct. So anyway, again, part of the just simple kind of institutional analysis here is that that discrepancy at least makes partial sense when you know that it's different parts of the commission, different actors motivated by different goals. And again, that only goes so far. We're trying to just understand also which parts of the commission prefer different types of institutional arrangements. So for example, DG Home substantively has been delegating a lot of stuff through kind of technical solutions like automated moderation, often called upload filters, through the draft CSAM regulation and also the terrorist content regulation. DG Justice likes voluntary codes of conduct. Again, we can talk a little bit about this. Okay, and then part two of this paper, and I'm almost at 10 minutes already, so I'll be really quick. But basically, this is, I think, the perhaps more interesting piece for this room, and it's something I'd be interested in talking with you all a little bit about, is basically trying to map these types of descriptive analysis onto some of the major theoretical lenses and approaches that people have been advancing to try to explain what we're seeing right now in terms of changes in EU digital policy. So a major line of scholarship that is coming out, especially of European legal circles, is this resurgence of interest in European digital constitutionalism. We can talk a little bit more about that. We have public policy scholars that are looking a lot at industrial policy. And thirdly, I think we have a third broad bucket of critical scholars who are looking at digital capitalism, digital
And we’re still working on this part of the project, but what’s interesting is that all of these different lenses have different strengths, and they look at different pieces of what is a very big digital policy ecosystem. So very quickly, the structural kind of analyses of digital capitalism, I think, provide helpful explanations of what’s going on in terms of an inter-actor competition perspective. They capture struggles between firms and political actors. They also kind of capture internal struggles within the EU policy just generally, and its kind of trends, and I guess its internal struggles between, for example, market delegating mechanisms that have always been part of the single market project, to kind of more interventionist efforts to curb the excesses of the market. We then have industrial policy approaches that I think make a lot of sense when we’re trying to explain certain geopolitical or geoeconomic policies relating to, for example, the reshoring of supply chains, digital sovereignty projects like GAIA-X. Those types of analyses, I guess, are less good at explaining the minute details of the kind of procedural bureaucratic frameworks that the EU is developing, or other parts of the EU are developing for online content, for example. And that’s something that digital constitutionalism scholars have been doing in the EU, describing new layers of rules that are being kind of layered on top of industry behavior in an area like content moderation. And what I think is interesting here, and again, this is not fully developed yet, is that there are a number of kind of interesting contradictions between these different approaches and also some weaknesses when you compare them in this way. So, for example, something like digital constitutionalism. coming at it from a political science point of view. There’s an argument that it underplays actor agency in terms of how policy change happens. Maybe it over relies on judicial actors as agents of change in terms of policy making in the EU. Maybe it kind of treats markets too abstractly and isn’t engaging enough with these broader political forces like geopolitics that are kind of inherent in the industrial policy approach. So yeah, so that’s kind of a very high level bird’s eye view of what we’re trying to do in this project again. It’s still in early stages and I think the move that we’re gonna make towards the end is basically trying to see whether or not we can argue for a more kind of comprehensive synthesizing framework that pulls in insights from all of these different frameworks in terms of thinking about digital political economy in the EU across the kind of scopes of norms, markets and geopolitics. So yeah, thanks so much. Excited for the discussion.
Moderator:
Thanks very much. Sophie, there you are. You are a co-host, right? Hello, could you please make Sophie a co-host?
Sophie Hoogenboom:
Hello everyone, my name is Sophie Hoogenboom and I'm a PhD student at the United Nations University CRIS, which is situated in Bruges, Belgium, and the Free University of Brussels. Today I want to talk about a paper that I've been working on, in which I reflect upon the prospect of a new social contract in relation to data and the likelihood of such a contract occurring. What inspired my paper was the fact that I kept seeing different types of actors talking about the need to establish a new social contract in relation to data. For example, the World Bank in 2021 published a report, Data for Better Lives, in which it argued for the need for a new social contract around data sharing, and I have also found similar debates in academia and among journalists. And this raised the question for me at least: all right, we're talking a lot about it, but is there a possibility of such a thing occurring, and are those conditions currently present? I would first like to talk a little bit about what a social contract is. A social contract is often perceived as a written agreement geared towards the installation of a political authority, with the purpose of fostering social cooperation. The document must raise the community out of a state of nature, meaning that prior to it there was no, or not a sufficient, overarching political authority. And thirdly, the members of the community need to enter the agreement voluntarily. We often think about the social contract in relation to the nation state; of course, we are familiar with the works of Locke, Rousseau, Kant, et cetera. However, the fidelity of the social contract in relation to the nation state is also often questioned. It is often referred to as the myth of the social contract, as people struggle to find any sort of empirical evidence of such a social contract pact in the past, and feminist scholars and others have likewise questioned whether such a contract is even possible. In recent years, scholars have started to argue against this and to say: no, a social contract exists, however we can't really find it at the nation-state level, but we can find it at sub-national and global levels. When it comes to the social contract in relation to the digital sphere, we often see people writing and proposing a social contract in relation to the digital realm in general or to specific parts of it. This is often considered to be a result of the fact, and you can see it in the quote on the slide as well, that the digital realm has sometimes been described as an undeveloped frontier, which is reminiscent of a Lockean state of nature, meaning that there is a need to establish a political authority. So we are currently seeing, especially in academia, many people proposing a social contract, for example in relation to virtual worlds, to privacy, to AI, and of course to data.
So this, for me, raised the question: okay, we're talking a lot about these things, but is there a possibility for this to arise? In the tradition of social contract theorists, there's not much attention being paid to the conditions. However, there is one theory that I used in my paper, and this was proposed by Alexander Fink. He looked at the cases of social contracts that we can see empirically, and he drafted this theory. According to Alexander Fink, for a social contract to arise, we first need to have a community that has similar preferences in relation to public goods. Secondly, the members of the group need to share some common social norms for them to be able to come together and draft such a document. Thirdly, he focuses on the size of the group, as he says that the smaller the group, the more likely a social contract is to occur, because there are lower decision-making costs and monitoring costs. So what I've done in the paper is I've structured it alongside these three conditions that Alexander Fink has proposed. I will keep this part brief, because in my paper I go very much in depth, but I would like to not bore you with all the specificities. So first of all I've looked at the first condition: the community needs to have relatively similar preferences. Of course this is where a global social contract becomes a little bit difficult, because the community is so big that we can identify very different preferences in relation to public goods, such as commercial and security preferences. However, we can identify some data, so not all data, but some parts of data, where many actors actually do have a similar preference. And this is what I found mostly for data that we can use for the betterment of society. This also follows the Digital Public Goods Alliance, which has made a distinction between all data and community data. As you can see on the slide, community data refers to data that can be useful for the fulfillment of human rights and the attainment of the Sustainable Development Goals. So to conclude on the first condition, I found that if we follow the theory of Alexander Fink, the community of course has very different preferences in relation to all data; however, we can identify some similarities in the treatment of community data. The second condition that I've reflected upon is the fact that the members need to share a common social norm. Again, because of the size of the global community, we can of course identify varying social norms. I've looked at the differences between collectivist and individualist societies. There is also the definition of a common good: if we say we want to use data that helps the betterment of society, we need to find a consensus on what that means, which creates obstacles. The third one regards privacy: we can identify that norms around privacy are highly contextual. And fourth, the familiarity with the concept of a social contract also impacts the likelihood that people are aware of what a social contract is, and thus also make plans to create one. So, to conclude, I would say that the different social norms could hinder the creation of a global social contract. However, we could perhaps not go immediately to a global level, but focus more on regional initiatives or a topical focus, for example a global social contract on health data. There has been some progress in that field already.
Then, the third condition is the size of the community. And, of course, according to Alexander Fink, as I just said, the smaller the group, the lower the decision-making costs are. I agree with this. However, I do think that the Internet and the digital realm actually make us so much more connected with each other, so in that way I still think it would be possible. And, on top of that, if we were able to make something like this on a global level, we would have a lot of resources in terms of people and money to do so. So, to conclude, I reflected upon the conditions needed, according to Alexander Fink, for a social contract to arise in relation to data, and what I have found is, not very surprisingly, that for now it would not be very likely that in the following years we will see the creation of a global social contract in relation to all data. However, as I said earlier, in terms of community data I do see some possibilities, and I do think it is important to reflect upon this, as we could use this step to realize the full potential of data for the betterment of society. Thank you for listening.
Moderator:
I need a second, all right, I'm now in the head of Andrea. Thank you very much for the papers, thank you very much to the authors for presenting them. I'm going to go through the papers one by one with a few comments and trying to kind of push towards a discussion with the audience as well here. So Radomir, thank you very much for the paper that you presented. It struck me as a very promising reflection on looking at this from the literature perspective, so looking at what people are saying about AI, and I found that very interesting. The methodology is intriguing but I would have thought that the methodology worked on larger numbers and you spent a lot of time reducing the numbers so I was wondering about that because I think that, you know, there is some element of qualitative research that you could also bring into a smaller number. I think you said you got down to 170 in the end. So that's one thing. The second thing is it would be useful probably, and this is boring geeky talk, I think, but it would be useful to, as you mentioned, also look towards Scopus and those other databases, which I think you can still access for free. I don't know whether that's… Okay, you can't. Okay. It's just because I only do it at university. But it may be interesting to expand the database list that you have. Also, I was a bit struck by something you said when you said, I've removed all literature relating to agriculture, education, and health. And yet I think most people, when they write about AI, they write about AI and a field, agriculture, education, or health. And so I was thinking, oh, why did he take them out and not leave them in or put even more in? And that leads to the bigger question, which is about the choice of code words or the choice of keywords. Because I think AI is not… Not everything that is called AI is AI, and not everything that isn't called AI is not AI. That's a surreal comment from a discussant. But it's just that I remember that there was a big discussion a few years ago. Why don't we call it cybernetics? And why are we using this term artificial intelligence to describe this when actually a lot of the things that are going on, machine learning, a lot of these concepts are actually used by different scholars in different disciplines to actually mean something else. So that might be something. That obviously limits the words that you use in the paper. And that limits the scope of the discussion that you can have, which also reflects on the choice of journals. I'm very glad that you chose our flagship journal, Telecommunications Policy, with which we have a special issue every year, but I was also wondering, you know, what was the rationale for the choice of those literatures? Okay, so that's just some thoughts for you. I think I've got everything from my notes. Good. Second, Robert, thanks very much for this paper. It makes me realize that I should have written more. It's really interesting to see you try and map out the European Union's digital policy space. I have tried and failed myself, so I wish you all the best, but I don't hope you do. No, I do hope you do it. There is one other thing that kind of struck me, that actually a lot of EU policy, full stop, is now dealing with digital, right? So your mapping might turn out to be just, okay, what is the EU doing as a whole, right? Because I find that even if you focus on one DG, you find out that they're involved in other projects and things.
So it might be interesting to look, when you’re looking just at the commission, for example, to look at, okay, how does, this would be really interesting, how does DG Connect get involved in inter-service consultations or something like that? How do they try and push forward a sort of research agenda, a policy agenda from that perspective? That might be interesting. And obviously, you have the tensions across different DGs, right, in the sense that there are different approaches, different understandings of what we mean. And traditionally, DG Connect has been very tech-oriented, and DG Justice and so on have been much more legalistic, of course, but I think that kind of interaction is. evolving very much. I think also one of the things you probably want to look into, because you focus on the, although you said the EU, you focus a lot or you talk a lot about the European Commission. I think the European External Action Service is doing quite a bit in digital policy now, engaging across. And you do have, of course, the Parliament’s role in this. And you can also say something about the Council. Sorry to all of those who aren’t involved in EU stuff, but that’s one of the consequences of this. So the other thing, you talked about the EU as a tech regulator in your introduction. And yeah, that seems to be a kind of at least a discourse that is emerging in the mainstream. But I would like you to critically also look at that. Milton and I were just at a workshop a couple of weeks ago that was trying to look at the role of the US, China, and other powers in shaping the way the EU looks at this. And it’s clear that the EU wants to be seen as the tech regulator. And maybe it does take regulatory decisions to that sort of level. But it might be interesting to actually critically look at that as well and see how much of this, due to the way that the EU works in terms of policy formulation, may be based not on tech regulation, but on interest representation in the development of tech regulation. And therefore, I would kind of, this is me not arguing against you doing the mapping, because it needs to be done. But it’s kind of saying, well, you need to look in these specific cases. Maybe it would be good to think about one or two specific cases of policy. Maybe one very obvious policy. 
and one less obvious policy. And this is quite a lot. There, also, you mentioned that you only want to look from 2019 onwards, and you're obviously aware that there's been a massive shift since that period, so that's a good thing to do, but I think there is a history and a legacy to all of this, and it's important to bear that in mind, because that does have implications for this turn. Yes, there are path dependencies, very much there. Okay, I'll leave that there for the moment. We'll go on to Sophie, and of course we've talked briefly about this paper before, so this is, well, my comments will come as a surprise. So first of all, thanks. I was wondering, and you've obviously drawn this upon the literature on social contracts, at a certain point during your presentation I was thinking: you mentioned that there has to be a vacuum, there has to be nothing, a void, before a social contract is signed, but obviously we live in a period of social contracts, so we have to rewrite a contract, and for me, in contract law you can always change your contract, right? So I was wondering how that reflects on your reflections there, because we do have social contracts that do exist, right, and now you want to create a functional social contract in a sense, right? So I was wondering about that. I also thought, and this is not a comment to you but a comment to Fink, I was wondering whether you'd thought critically about how Fink looks at these issues of size and functionality, because for me the question of size is obviously incompatible with what you were describing; you know, a social contract in your case is not between a small group of people, but operates over a larger scale, and then of course in terms of function, it's also vice versa as well for you. So I was wondering whether you would actually rather write a response to Fink first, rather than going into this story, because I really like the idea of you reflecting on what the World Bank is doing and so on and so forth, but I'm wondering if before you do that, you need to have the theorizing up to speed, and so rather than saying, okay, let me talk about Fink, criticize Fink, and then go on and do a justification for what the World Bank is doing, rather say, okay, let me just spend some time reflecting on the critique of the social contract. Okay. Are there questions from the floor? There are questions from the floor. Please just, yes, organize yourself in an open mic fashion. Thanks, Jan Aart.
Audience:
Okay, yeah. My name is Jan Aart Scholte, Leiden University. Question for Radomir. Thanks for that very much. I was wondering if you could do a little bit of assessment of what you found. In other words, when you look at the trends, did you find that there are certain things which are encouraging, and are there certain aspects which worry you? To give one example, I think you said that most of the literature you found was looking at national responses and national strategies. So is the academic literature not looking at regional and global strategies, for example? Question for Sophie. Can you tell us why, I'm not saying that it's not the case, okay, but can you be more explicit why one would be concerned about looking at a social contract? Why is it important, and why is it interesting? And would we necessarily expect a social contract in this sphere to take the same shape as the traditional state-centered constitutional kind of form? In other words, the fact that we don't have a United Nations Charter or a national constitution, does that mean there's not a social contract? Could you have in this sphere a very decentered social contract, for example, just because governance is looking different? Thanks.
Moderator:
Since Ramiro is standing up, we might as well give him the mic and if we then take three questions between the three of you plus Mike. Thank you.
Audience:
Hi, I'm Ramiro from CELE. Radomir, I have a question similar to the one that Jamal posed in terms of selection of terms. By selecting AI something, you've selected an acronym and I think with that decision you are discarding the period in which people did not talk about AI because it was not a thing yet. So I wonder if that was a conscious decision to sort of control the sample that you would get to get only the policy related stuff? I mean, when I read your paper my intuition was that I would have included artificial intelligence, for example, but that would perhaps have led you in a different direction. I have another question for Robert. I love the project you're engaging in. You mentioned that you've found different political issues within the Commission. I wonder if you're at a stage in which you can share that with us or we have to wait for it? Because this is coming from… I work in a research center based in the Global South. We're trying to look into the DSA because of the obvious impact it will have in the world, but especially in Latin America. It has already been used as an inspiration to produce bills in Costa Rica and Chile, and that mapping that you're working on, it's essential to us. So I wonder if you can share with us a little bit more about those political issues within the commission. Thank you. Hi, good afternoon. My name is Nick Benequista. I'm from the Center for International Media Assistance at the National Endowment for Democracy in Washington, D.C. I think my question may be identical, in fact, to Ramiro's question, but I'm gonna give it to you anyways. So Dr. Gorwa, you mentioned the contradictions in the sort of EU constitutionalist perspective that's playing out in these regulatory approaches. You used the contradiction in particular between the Disinformation Code of Conduct and the Media Freedom Act as an example of that, which is of particular interest to my organization and our work. And I think this is where Ramiro and I are probably asking a very similar question. Could you just say a few words about the constellation of actors and the different perspectives, how you're accounting for those contradictions, including the one you see in the Code of Conduct and the Media Freedom Act? Because I agree that sort of analysis would be really interesting to many of us. Thanks.
Moderator:
I suggest we go in the order of presentations. Okay.
Radomir Bolgov:
Thank you for your questions. I will start from the second one, about the keywords. So yes, this was a limitation of our approach: the choice of keywords was determined by our initial knowledge of this domain, so this strategy should be developed further. Thank you for your suggestion. And as for the first question, the second in my order, about the studies on global strategies on AI: yes, there were a few studies on this topic, but only several. We have worked with big masses of information, so in order to analyze these approaches, we work individually with these articles. And so this is a direction for our further research. Thank you for your suggestion.
Robert Gorwa:
Yeah, thank you so much to you both for those questions. I'm heartened that this seems like a potentially interesting project. I fear I don't have that much yet to reveal in terms of concrete findings. And part of that is that we're still kind of diving into things and collecting data. This also relates to your question, Jamal, in terms of trying to get a better picture in terms of what the commission is actually doing. And of course, this is really complicated. In a bunch of the other work I've been doing, I've been doing a lot of freedom of information requests. And I've been trying to get internal emails. And I used that in some past work on the development of national kind of platform regulatory projects, like the German NetzDG, for example, to try to get a little bit of a better picture of these contradictions, not just in terms of what different parts of the commission were telling each other in the things that they, hopefully, didn't redact when I requested them, but also kind of getting a bit of a vision of the different institutional veto points, especially in terms of the negotiation of application of these policies between member states and the commission, which is really, really key. So I don't know. I think what I would say, and this is just pure conjecture, this is kind of how I've been thinking a little bit about the EU and some of the work I've been doing, is that I get this feeling that a lot of the, at least platform regulation agenda, is about kind of adding tools to the toolbox, and then these get used down the line strategically by specific actors in certain situations, right? And this is why, for example, I think a lot of concern, which is often coming from the right place, and I think is completely bang on, for example, about the prospective human rights impacts of certain legislative approaches. We can think about the upload filters conversation, or the conversation that, thankfully, has slowed a little bit, but the conversation around embedding, for example, rapid takedown times into a lot of the content regulation that the EU is doing, and contrasting that with the actual application of what's going on on the ground, where hopefully this is not actually used. It's just more like a potential cudgel that can be used, theoretically. And again, that doesn't hearten folks who are worried about human rights impacts to know that this is a tool that hopefully a regulator that is normatively constrained is not going to use, but I think it's an important part of what is going on, which is just that there are a lot of tools, and then the outcome is gonna be politics. But I know that isn't necessarily helpful to folks who are actually dealing with these issues on the ground, so I'd love to talk more about what you guys are actually seeing in the weeds on this.
Sophie Hoogenboom:
Thank you for your comment. It reminds me of the comment that you also just made, Jamal, about the already existing social contracts. Indeed, I think we could keep it the way it is, in a decentralised way. However, and to be honest I don’t really have a personal opinion yet, I am just reflecting on what I find in the literature, there is a worry that if we leave data to what I would call the analogue social contract, we risk not using the full potential that data can have for, for example, the Sustainable Development Goals or other goals that cross borders. So I think that is an argument you can make, that perhaps we should establish something, and that is also why I specifically mentioned community data: not necessarily all data, but data that we can identify as useful for the global community. That is what I found in the literature, that some people are arguing we need to establish this, and it ties into the debates we also see around digital commons, where people are trying to frame or reconstruct data as a public good, not only as a public good of one nation or of a company in one nation, but as something we might need to think of as a global public good, and for that we might need a social contract to build a mechanism. Thank you.
Audience:
I see that you would like to ask a question. Please go ahead. I’m Michael Nelson with the Carnegie Endowment for International Peace, and this question is for Robert. One of my colleagues is Anu Bradford, who has written a new book called Digital Empires, which builds upon her first book, The Brussels Effect. We’ve been having some discussions about the lack of digital leadership, and in particular how, both in Europe and in the U.S. Congress and in some of our state legislatures, you have this process where there’s a goal, which is to regulate AI or deal with hate speech, everybody has their own way of doing it, everything gets thrown into the draft, and nobody actually makes it into a coherent whole. I started my career in Washington working in the U.S. Senate in 1988. Back then, there was a real pride of authorship. The draftsmen who were writing legislation wanted to make sure that they wrote consistent legislation, and if they didn’t, the people in the administration would point out the inconsistencies, because they were going to have to implement it. That seems to be gone on both sides of the Atlantic. We’re getting pieces of legislation that are just a series of aspirations, and no one seems to notice that some of them directly contradict each other. Well, actually, let me correct that: I notice it, and you notice it, but the parliamentarians don’t notice it. So my hope is that you can give us some reason for optimism. Is there any mechanism, is there anything that might change this tide and force people to build a coherent, consistent whole, rather than just throwing together a lot of things that make them feel good? If you want to see more about what I’ve written on this, you can go to Twitter. I’m at Mike Nelson, because I was the first Mike Nelson, and I use the hashtag ‘when policies collide’. There are literally dozens of examples of this from both the U.S. and Europe.

Thanks, Mike. Hi, I’m Lee McKnight from Syracuse University. I’ve been following Mike around for a decade, so I’m happy to do so again. This is a comment and a question for Sophie in particular, on your work, or your hope perhaps, that there might be some chance of getting a social contract at the community level. I’m wondering if you’ve thought at all about community networks per se as perhaps the instrument or vehicle in which data governance regimes could be embedded, created either generally or specifically by the community, if that’s something that has come to mind. Now I have to do, like Mike, my advertisement: our session at 6 o’clock on Leave No One Behind, the Importance of Data and Development, will discuss exactly some of these issues, in particular as we work across community internet and community networks in Africa with my colleague, Professor Danielle Smith, who I promise to embarrass by putting on the spot right now. Thank you.
Sophie Hoogenboom:
Sorry, wait one second before I sit down and answer the question: you mean community networks in the sense that the community builds a network itself? Yes, yes, exactly. Yes, I think both can work simultaneously. We still need to work on parts of the world that are not connected, and, as I also mentioned, what counts as a public good or as the betterment of society may depend on the context; one community might put emphasis on different aspects than another would. So yes, I think that could very much be a possibility. However, I think combining the data sets afterwards, after establishing such networks, is so vital that the two can work in tandem. It doesn’t have to be either/or, I think. But I will attend your session for sure.
Robert Gorwa:
One thing that came to mind when you were speaking earlier, and maybe it’s not in the language of the social contract, is that there has been some interesting work looking at procurement at the public level in municipal governments, right? Like the Barcelona model, where, at least in theory, companies that came in and provided, for example, transportation platforms would have to share the data with the local community, which isn’t always the case in practice. So maybe that’s also a form of a localised social contract.
Moderator:
Yes, no, definitely.
Sophie Hoogenboom:
But that’s what I meant: I think we could maybe stack them on top of each other, almost. But still, I think it’s important to then make sure that we do share the data among each other.
Robert Gorwa:
But yeah, definitely. And thanks so much for the question, Mike. Two parts. First of all, I met Anu Bradford earlier this year at a conference at NYU, and she was talking about the new book, which I haven’t had a chance to read yet, but it looks interesting. And I think, in a way, that is part of what we’re seeing and what we’re trying to explore in this project, right? She ideal-types the US, EU, and China as having a market-driven, a rights-driven, and a state-driven or security-driven vision of tech policy. And I think much of what we’re seeing is actually all three of those modes inside each jurisdiction, and contestation between them. What we’re seeing in the EU, I would say, is a lot of contradictions driven by different actors with different interests, in different camps like this. I can point to Henry Farrell and Abe Newman’s work on this, looking at transatlantic networks of security actors on data protection in the EU. But another good example is the quite controversial child sexual abuse material draft regulation that the Commission has been working on, where at least critics are writing that it will have a profound impact on user rights, user privacy, and end-to-end encryption, which another part of the Commission is working on through the Digital Services Act. How do we think about that? Again, I think it’s about actor coalitions. One of the really interesting things about the CSAM regulation, for example, is that the main actors driving it are actually US actors who have seized on the EU as a global regulator, due to the Brussels effect, and are trying to hijack that. I have a Lawfare piece on this that I can share, and there has been some interesting recent analysis from netzpolitik.org, the German journalism NGO, about Thorn, a startup slash civil society organisation started by Ashton Kutcher to lobby on child protection issues. They are one of the major actors building alliances in the EU for this kind of policy, because they know it’s easier and probably more effective to do so there than at the US federal level. On the second part of your question, about reasons for optimism: I don’t often have a lot of optimism, but if there is something to come back to, it’s the fact that we’re seeing so much change in a relatively short period of time. From a bird’s-eye view, if we look at content regulation, for example, the amount of transparency and the amount of resources being invested by industry to handle this right now make it a completely different ballgame from, let’s say, 2016. And I think the outcome there has probably been good for users. So I think there’s some reason for hope, just given how new this regulatory field is. Maybe. You didn’t give me much hope, but I can still see it. It might be modest, but hopefully there’s an upward trajectory in the long term.
Moderator:
Dare I add something? I’d heard from the Commission that they are aware of these inconsistencies, and they just say, yeah, we wait for the courts, as you just said, Mike. Different courts take different views. When it’s EU legislation, like the DSA, then it should be the ECJ that deals with it. Are there any last questions? Any last comments that you would like to make to each other? Good luck. Okay, excellent. That’s brilliant, because we’re now back on time, and we even have time to grab coffee, which is just around the corner. If somebody needs to go to the bathroom or grab a coffee, that will be next door, but please be back in three minutes, when we can start the next session. Thanks very much to the three of you, and for all the questions.
Speakers
Audience
Speech speed
163 words per minute
Speech length
1222 words
Speech time
449 secs
Arguments
The academic literature may not be looking at regional and global AI strategies
Supporting facts:
- Most of the literature Radomir found was looking at national responses and national strategies.
Topics: Academic Literature, Artificial Intelligence, Regional Strategy, Global Strategy
The use of the acronym ‘AI’ could limit the scope of the study to the period only when AI became a known concept.
Supporting facts:
- The question’s context is about the conscious decision made in choosing ‘AI’ over ‘artificial intelligence’ for the sample used in the study.
Topics: Artificial Intelligence, Research Scope, Selection of Terms
There is anticipation towards learning about the different political issues within the Commission
Supporting facts:
- Ramiro expresses concern over the impact of the DSA on countries in the Global South, given that it has been used as inspiration for bills in Costa Rica and Chile.
Topics: Political issues, EU Commission
Lack of digital leadership and coherent legislation surrounding regulation of AI and hate speech.
Supporting facts:
- Regulations drafted have a series of aspirations but some directly contradict themselves.
- Parliamentarians do not seem to notice these inconsistencies in legislation.
Topics: AI regulation, Hate speech, Legislation
Potential for community networks as a vehicle for data governance.
Topics: Community networks, Data governance
Report
The analysis suggests a prevalent focus within academic literature on national Artificial Intelligence (AI) strategies, overshadowing the much-needed investigation of regional and global AI strategies. This skewness raises pivotal concerns regarding the comprehensive understanding and development of AI strategies worldwide.
Moreover, it has been noted that the conscious decision to use the acronym ‘AI’, rather than consistently referring to ‘Artificial Intelligence’, could inadvertently limit the scope of future studies, confining the research to the era when AI became a recognised and frequently utilised concept.
In the realm of governance, the concept of the social contract has come under significant scrutiny, specifically in association with AI. Questions about the necessity of a social contract have been raised, and there are suggestions of a possible departure from traditional state-centric constitutions within the AI sphere.
This shift could potentially pave the way for the development of a novel, decentralised social contract within AI governance, thus reflecting the differing nature of governance in this innovative field. Attention has been directed towards the consequences of the Digital Services Act (DSA) on countries in the Global South, such as Costa Rica and Chile, where it has influenced local legislative creation.
However, the European Commission, instrumental in these processes, seemingly harbours internal political issues that require unravelling for a comprehensive understanding of its operations. There are potential contradictions within the legal framework, specifically between the Disinformation Code of Conduct and the Media Freedom Act.
The necessity for clarification in this regard is evident, as ambiguities can hamper the efficacy of these legislative instruments. Concerns have emerged about AI regulation and hate-speech management owing to observable inconsistencies in the pertinent legislation. Consequently, there is an urgent call for more diligent digital leadership that guarantees consistency across AI regulations, and the importance of accurate and conscientious drafting of legislation is underscored.
On a more positive note, community networks have been recognised as potential facilitators of data governance. The analysis proposes these networks as instrumental infrastructures that could effectively embed data governance regimes, thereby fostering broader partnerships for sustainable development. Looking forward, this suggests a novel approach to managing data, leveraging the inherent strengths of community networks.
Moderator
Speech speed
168 words per minute
Speech length
2281 words
Speech time
817 secs
Arguments
Andrea could not be present for the discussion and has said she will send comments on the papers
Supporting facts:
- Andrea has said she will send comments on the papers
Topics: Absence, Review
Three papers will be presented during the afternoon session of the GigaNet Annual Symposium
Supporting facts:
- Radomir Bolgov will be presenting the paper on AI policies
- Robert Gorwa is presenting a paper on EU platform regulation
- Sophie Hoogenboom will be presenting her paper on a new social contract for data
Topics: GIGNET Annual Symposium, Presentations
For building a new social contract, there should be a vacuum or void, but we currently live in a period of social contracts
Topics: Social Contract, Current World
The EU’s digital policy space mapping will be a challenging task due to its wide and interconnected nature
Supporting facts:
- EU commission’s interaction with other DGs is dynamic
- EU External Action Service has a significant role in digital policy making
Topics: EU Digital Policy
Moderator encouraged Radomir to incorporate more distinct fields in his AI research
Supporting facts:
- AI and specific fields like education, health, agriculture have significant intersection
- Not all terms referred to as ‘AI’ are really AI
Topics: Radomir’s Research on AI
Moderator expressed doubts over Fink’s theory of size and functionality of social contracts
Supporting facts:
- There’s a contradiction in Fink’s point of smaller group leading to more effective social contract in context of global social contracts
Topics: Alexander Fink’s Theory, Social Contract
Procurement at the public level in municipal governments can consider social contract aspects, such as data sharing with the local community.
Supporting facts:
- Barcelona model where companies providing services like transportation platforms share their data with the community
Topics: Public procurement, Data sharing, Municipal governments, Social contract
Report
Andrea was unable to participate in the discussion due to an unforeseen absence. Despite this, she gave assurance that she would remain involved by providing comments on the pertinent papers and thereby contributing to the ongoing discussion.
The afternoon session of the GigaNet Annual Symposium comprises three papers covering the crucial topics of AI policies, EU platform regulation, and a new social contract for data, presented by Radomir Bolgov, Robert Gorwa, and Sophie Hoogenboom respectively.
The selection of these presentations reflects a focus on Industry, Innovation, and Infrastructure. In the interest of keeping the panel discussion on schedule, time is managed closely, and the moderator introduces all speakers at the start of proceedings, establishing a clear and streamlined direction for the discussion.
The concept of building a new social contract was central to the discussion, with Sophie Hoogenboom taking the lead. A point raised in response was that creating a fresh social contract presupposes a vacuum or void, which does not presently exist, since we already live under existing social contracts.
The moderator advised Sophie to engage thoroughly with, and respond to, relevant theories such as that of Alexander Fink, which could provide useful insights into the intricacies of the proposed creation of a new social contract. The EU’s intricate digital policy and its extensive implications were also discussed.
Concerns were raised about the complexity of mapping a policy landscape replete with interconnected, dynamic interactions, such as the roles of the European Commission and the EU External Action Service in shaping digital policy-making. Returning to the subject of AI, the moderator commended Radomir Bolgov’s ongoing research on AI policies.
The moderator expressed a positive sentiment, encouraging Radomir to delve into related fields such as education and health, where AI’s application and impact are profound. There were questions raised about the size and functionality of social contracts as per Alexander Fink’s theory.
The moderator expressed scepticism towards Fink’s claim that smaller groups lead to more effective social contracts, suggesting this could contravene the idea of global social contracts. In the sphere of public procurement’s potential role in social contracts, examples were referenced from municipal governments around the world, focusing on data sharing as pivotal in shaping localised social contracts.
The Barcelona model was cited as a commendable example where companies providing services like transportation platforms must share data with their communities. Furthermore, the moderator agreed with and positively acknowledged the incorporation of social contract elements, notably data sharing, into public procurement.
It was suggested that such an approach could foster a more engaged and informed community, echoing the success of the Barcelona model. To conclude, the discussion engaged with a wide range of topics from the influence of AI in various sectors and EU digital policies, to the norms of data sharing in public procurement and emerging trends in social contracts.
Different stances, opinions, and ideas for further exploration within these paradigms were then outlined.
Radomir Bolgov
Speech speed
97 words per minute
Speech length
1015 words
Speech time
629 secs
Arguments
The research on AI policies as a domain is complex and multidimensional
Supporting facts:
- AI policy is a nascent field which needs to be defined and documented
- Conducted descriptive analysis, developed a framework using bibliometric analysis, and mapped major topics in the field.
Topics: AI policy, Research domain
Annual scientific production has plateaued in the last three years
Supporting facts:
- Scientific output on AI policies has increased over the years but has plateaued in recent years
Topics: Scientific production, AI policy
The choice of keywords was determined by the team’s initial knowledge
Supporting facts:
- Choice of keywords was limited
Topics: Artificial Intelligence, Research methodology
Few studies available on global strategies on AI
Supporting facts:
- Worked with large amount of information, analyzed approaches individually
Topics: Artificial Intelligence, Global strategies
Report
The analysis spotlights the domain of artificial intelligence (AI) policy as being in its fledgling stages. AI policy is a particularly nuanced field, presenting numerous dimensions of complexity, which necessitate comprehensive definition and documentation to ensure clarity of policy objectives.
A key portion of the study encompassed a descriptive analysis and the subsequent development of a framework via bibliometric analysis. This approach was utilised to map the principal topics within the field, an exercise that underscored the multidimensional character of the domain.
Scientific output pertaining to AI policies, although demonstrating an uptick over the years, has shown signs of stagnation in recent times: the past three years have witnessed a plateau in annual scientific production. This underscores a compelling need for increased research surrounding the effects and implications of AI policies.
The analysis unveiled a stark scarcity of studies focussing on either the positive or the negative outcomes resulting from AI policies. Alarmingly, themes such as AI policy evaluation are significantly underexplored, revealing a lacuna in research that needs addressing. AI policies have been studied considerably during periods of stability, but, as the analysis emphasises, there is an urgent need for these policies’ analysis within the context of present-day crises like pandemics, conflicts and environmental crises.
The reshaping and re-examination of AI policies are warranted by such drastic changes, posing challenges for policy-makers in this field. Nonetheless, the findings of this analysis were constrained by the choice of keywords and the singular reliance on Google Scholar as the research database.
The limitations evident in the choice of keywords are to be acknowledged. Given that the selection was based on the research team’s initial knowledge, this could potentially confine the scope of the study. The exclusive use of Google Scholar further restricts the breadth and diversity of research, considering that this database may not be as holistic as other databases such as Scopus and Web of Science.
There is, accordingly, an urgent need for strategic approaches to keyword selection to mitigate these limitations. Findings pertaining to global AI strategies are markedly few, indicating a need for expanded research in this area. This is especially significant in the context of aligning AI policy development with Sustainable Development Goal 9: Industry, Innovation, and Infrastructure.
These insights, thus, form a vital basis for setting the agenda for future research initiatives and policy developments within the field of AI.
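To make the keyword-scope limitation discussed above concrete, the following is a minimal, purely illustrative Python sketch; it is not the authors’ actual bibliometric pipeline, and the example titles and search terms are invented. It shows how an acronym-only query (‘AI’) can retrieve far fewer records than a query that also includes the spelled-out term and common synonyms.

```python
# Purely illustrative sketch of the keyword-scope issue; the document titles
# and search terms below are invented, not taken from the study.
import re

documents = [
    "National AI strategies: a comparative overview",
    "Artificial intelligence policy in the European Union",
    "Machine intelligence governance before the deep learning era",
    "Regional strategies for artificial intelligence in Latin America",
]

def matches(text: str, terms: list[str]) -> bool:
    """Case-insensitive whole-word match for any of the given terms."""
    return any(
        re.search(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE)
        for term in terms
    )

narrow_terms = ["AI"]  # acronym only, as in the original keyword choice
broad_terms = ["AI", "artificial intelligence", "machine intelligence"]

narrow_hits = [d for d in documents if matches(d, narrow_terms)]
broad_hits = [d for d in documents if matches(d, broad_terms)]

print(f"Acronym-only query retrieves {len(narrow_hits)} of {len(documents)} titles")
print(f"Broader query retrieves {len(broad_hits)} of {len(documents)} titles")
```

In a real workflow the same principle applies at the database-query stage, which is why a more systematic approach to keyword selection, and the use of additional databases such as Scopus or Web of Science, would broaden coverage.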
Robert Gorwa
Speech speed
187 words per minute
Speech length
3794 words
Speech time
1220 secs
Arguments
EU is portrayed as the leading tech regulator of the moment.
Supporting facts:
- Many conversations about the many different overlapping things that European countries and the European Commission are pursuing from the Digital Services Act to the AI Act and other related regulations.
- Effects of EU’s regulations due to market size could potentially have transnational and transboundary impacts.
Topics: EU digital policy, Tech regulation
There is substantial divergence in EU digital policy
Supporting facts:
- Different actors in the European Commission have different interests, motivating them to pursue different goals. For example, discrepancies could be seen in new European Media Freedom Act and the stated goals of the Commission on combating disinformation.
- Differences in institutional arrangements preferred by various parts of the commission also contribute to divergence. For example, DG Home has been delegating tasks through solutions like automated moderation, while DG Justice prefers voluntary codes of conduct.
Topics: EU digital policy
Robert Gorwa’s project is still in data collection stage.
Topics: AI Regulation, EU Commission
Robert Gorwa expressed a need to understand better what the EU commission is actually doing through more concrete findings.
Supporting facts:
- He has been working on getting internal emails and information through freedom of information requests in order to get a better picture of these contradictions
Topics: EU Commission, Policy Understanding
Public level procurement, especially in municipal governments can play a role in data regulation.
Supporting facts:
- The Barcelona model requires companies to share data with the local community.
Topics: public procurement, municipal governments, data regulation
Different sectors are contesting the same jurisdiction in tech policy – US, EU, and China often see conflicts between market-driven, rights-driven, and security-driven visions
Supporting facts:
- Anu Bradford, in her new book Digital Empires, ideal-types the US, EU, and China as market-driven, rights-driven, and state-driven respectively.
- Henry Farrell and Abe Newman’s work on transatlantic networks of security actors on data protection in the EU.
Topics: Tech Policy, Jurisdiction
Significant changes have occurred in a short period of time, most noticeably in content regulation and transparency.
Topics: Tech Policy, Content Regulation, Transparency
Report
The European Union (EU) is steadily establishing itself as the foremost regulator in the technological industry. This substantial shift is demonstrated in several regulations the EU has pursued, comprising the Digital Services Act and the AI Act. The EU’s regulatory strategy is considered to be an expanding toolkit with the potential to serve specific strategic objectives in the future.
Notably, certain regulations, such as those mandating rapid takedown times, are now being integrated into the EU’s approach to content regulation. Nonetheless, it’s crucial to acknowledge a considerable divergence within the EU’s overall digital policy. This divergence stems from the contrasting interests within the European Commission itself where distinct actors are driving differing objectives.
This can be witnessed in discrepancies between the recently initiated European Media Freedom Act and the commission’s publicly declared objectives to combat disinformation. Further, optimal institutional arrangements vary substantially amongst the commission’s departments, contributing further to this divergence. A renewed interest in European digital constitutionalism and the rise of digital capitalism have provided fresh theoretical perspectives to understand the alterations within the EU’s digital policy.
Digital capitalism, interpreted as a battleground between firms and political actors, in addition to internal EU political conflicts, adds significant value. Moreover, industrial policy perspectives shed light on geopolitical strategies tied to the reshoring of supply chains and digital sovereignty projects.
Despite these insightful views, Robert Gorwa’s project, currently situated in its data collection phase, emphasises the importance of a deeper understanding of the EU commission’s actions. Hence, it necessitates a more significant focus on tangible findings, including securing internal communications and information via freedom of information requests.
Distinctively, data regulation stands to gain from public-level procurement, particularly within municipal governance. The Barcelona model, which commands companies to share data with the local community, serves as a prime example of this. This model reinforces the concept of a localised social contract, with mutual data sharing forming its nucleus.
Furthermore, tech policy witnesses a variety of sectors contesting the same jurisdiction, culminating in regular clashes between market-driven, rights-driven, and security-driven visions. The central actors involved in these confrontations are the US, EU, and China, each propounding their unique vision.
An explicit example of this is the US actors influencing the controversial EU Child Sexual Abuse Material (CSAM) regulations, leading to the ‘Brussels effect’. Whilst these measures are aimed at child protection, they have sparked debates over potential infringements on user rights, privacy, and end-to-end encryption.
Amidst all this complexity, it’s encouraging to observe the considerable changes that have transpired within a short period, most notably in the sphere of content regulation and transparency. These advances, coupled with the other developments outlined, collectively depict a landscape of the evolving impacts of EU’s regulatory strides within the transnational tech sector.
Sophie Hoogenboom
Speech speed
185 words per minute
Speech length
2106 words
Speech time
684 secs
Arguments
Sophie Hoogenboom discusses the possibility of a global social contract in relation to data
Supporting facts:
- A social contract is often seen as an agreement towards a political authority to foster social cooperation
- The concept of a social contract is often mentioned with relation to the digital sphere
- Alexander Fink’s theory suggests that a social contract is more likely to arise when a community has similar preferences, common social norms, and is smaller in size
Topics: Social contract, Digital sphere, Data
We might need to establish a social contract for community data if we want to fully use its potential for global purposes, such as achieving sustainable development goals.
Supporting facts:
- The concern that potential of data for achieving goals that cross borders under current analog social contract might not be fully utilized.
- Preserving data in a decentralized way could risk not tapping into its full potential.
- Some people argue for the need to establish a mechanism for treating data as a global public good.
Topics: Community Data, Digital Commons, Sustainable Development Goals, Global Public Good
Supports the idea of community networks in governing data
Supporting facts:
- She believes both global and community level networks can work simultaneously in governance of data.
- She points out that we still need to work on parts of the world that are not connected.
Topics: Community Networks, Data Governance
Report
Sophie Hoogenboom delves into the intricate concept of a global social contract pertaining to the digital sphere, specifically the vast, ever-growing realm of data. An essential facet of such theoretical frameworks, a social contract is frequently mooted as a potential intervention to streamline and optimise social cooperation within the global digital context.
Drawing on Alexander Fink’s theory, she maintains that the likelihood of a social contract arising increases in communities with shared preferences, common social norms, and smaller sizes. However, Hoogenboom critiques the ambitious notion of a sweeping global social contract on data, attributing its potential challenges to the culturally diverse preferences and social norms of the various global communities.
She posits that, given the multifaceted contexts, notions of privacy and the universal definition of ‘common good’ could dramatically vary between societies, and the sheer size of the global community could inflate the costs associated with decision-making and monitoring protocols.
Nevertheless, she proposes that a more manageable and immediate initial step could be the creation of a social contract at the community level, focusing on utilising community data for societal betterment. Such a contract could be highly beneficial in fulfilling human rights and propelling progress towards the ambitious objectives set out in the Sustainable Development Goals.
Community data, specifically, could hold considerable potential for societal betterment, given its relevance in the field of health data and related sectors. The current analogue social contract, she believes, leaves the potential of data untapped, thus stimulating debates about data decentralisation and its proposition as a global public good.
She refrains from taking a hard stance on whether community data should be kept decentralised or placed under a global social contract, suggesting she is still formulating her view on this complex issue. Hoogenboom advocates for a composite approach in data governance wherein community networks work simultaneously with global networks.
She recognises the digital divide in certain parts of the world and underlines the need for inclusivity in the data governance framework. Overall, she emphasises the importance of contextual understanding in these discussions, asserting that different communities might lay emphasis on distinct aspects of development, subject to their unique needs and challenges.
This nuanced approach makes Hoogenboom’s analysis a significant contribution to the ongoing discourse about the need, purpose and potential form of a global social contract for data.