Data first in the AI era

10 Jul 2025 13:00h - 13:45h

Session at a glance

Summary

This discussion focused on the critical need for international data governance frameworks in the AI era, featuring experts from major international organizations including the ILO, OECD, UNICEF, and civil society groups. The panelists emphasized that while national data governance frameworks exist, they are insufficient given that most data flows across borders to cloud systems beyond national control. Steve Macfeely argued that international principles are needed to establish guardrails for data exchange between different jurisdictions with varying ideologies around digital sovereignty.


The Global Digital Compact, adopted as part of the UN’s Pact for the Future, was highlighted as providing a unique opportunity to advance international data governance through a multi-stakeholder working group with equal representation from governments and non-state actors. The discussion emphasized that data governance must be human rights-based, with particular attention to protecting children’s rights, privacy, and dignity. Speakers stressed that children and young people should participate in shaping data governance frameworks since they will be most affected by these decisions.


Cybersecurity was identified as inseparable from data governance, with experts noting that governance without security is like “a constitution without a judiciary.” The panelists agreed that AI has brought unprecedented attention to data governance issues, though many organizations are rushing to adopt AI without proper data governance foundations. Key challenges identified include ensuring equitable access to data and its benefits, addressing power asymmetries between different stakeholders, and managing the tension between convenience and data protection. The discussion concluded that effective data governance requires balancing individual agency with collective benefits through a new social contract for the digital age.


Key points

## Major Discussion Points:


– **Need for International Data Governance Frameworks**: The panelists emphasized that national data governance alone is insufficient in our interconnected digital world. With data flowing across borders to cloud services and different jurisdictions with varying ideologies (“three digital kingdoms”), international cooperation and shared principles are essential to ensure data is treated with respect and consistency globally.


– **Human Rights and Child-Centric Approach to Data Governance**: The discussion highlighted the importance of grounding data governance in human rights principles, particularly focusing on children’s rights. This includes protecting privacy and dignity, ensuring autonomy over data use, preventing algorithmic bias that could limit children’s development, and involving young people in shaping data governance policies.


– **Cybersecurity as Essential to Data Governance**: The panelists stressed that data governance and cybersecurity are inseparable – data governance without cybersecurity is like “a constitution without a judiciary.” Cybersecurity enables and enforces data governance policies, ensuring access controls, data integrity, and privacy protections are actually implemented rather than just outlined on paper.


– **AI’s Impact on Data Governance Urgency**: The rise of AI has brought unprecedented attention to data governance issues, with AI systems requiring massive datasets often collected without consent. While AI has elevated the political importance of data governance, it has also created new challenges around data extraction, bias, and the need for transparency in training datasets.


– **Equity and Access as Core Challenges**: A central theme was ensuring equitable access to both data and the benefits derived from data. This includes addressing power asymmetries between different stakeholders, ensuring marginalized communities aren’t excluded from governance conversations, and developing business models that distribute AI and data benefits more fairly across global populations.


## Overall Purpose:


The discussion aimed to explore the critical need for international data governance frameworks in the AI era, examining how different stakeholders can collaborate to create ethical, secure, and equitable approaches to managing data across borders while protecting human rights and enabling innovation.


## Overall Tone:


The tone was professional and collaborative throughout, with panelists building on each other’s points constructively. There was a sense of urgency about addressing data governance challenges, balanced with cautious optimism about opportunities for progress through initiatives like the Global Digital Compact. The discussion maintained a practical focus on real-world implementation challenges while emphasizing the human impact of data governance decisions.


Speakers

– **Rafael Diez de Medina** – Chief Statistician of the International Labour Organization, moderator/host of the panel


– **Steve Macfeely** – Chief Statistician and Director of Statistics and Data at the OECD


– **Claire Melamed** – CEO of the Global Partnership for Sustainable Development Data


– **Francesca Bosco** – Chief Strategy and Partnerships Officer at the Cyber Peace Institute


– **Friederike Schuur** – Chief Data Governance and Strategy at UNICEF


– **Audience** – Multiple audience members who asked questions during the Q&A session, including:


– Someone from the Office of the High Commissioner for Human Rights working on human rights and digital technology


– Someone from Brazil


– Someone from the Department of Commerce


– An assistant professor studying AI policy at the Korea Advanced Institute of Science and Technology (KAIST)


**Additional speakers:**


None – all speakers were included in the provided list of speaker names.


Full session report

# International Data Governance in the AI Era: Panel Discussion Report


## Introduction and Context


This panel discussion took place as a side event during the AI for Good conference, moderated by Rafael Diez de Medina, Chief Statistician of the International Labour Organization. The panel brought together experts from major international organisations to examine the need for international data governance frameworks in the AI era. The distinguished panel featured Steve Macfeely, Chief Statistician and Director of Statistics and Data at the OECD; Claire Melamed, CEO of the Global Partnership for Sustainable Development Data; Francesca Bosco, Chief Strategy and Partnerships Officer at the Cyber Peace Institute; and Friederike Schuur, Chief Data Governance and Strategy at UNICEF.


The discussion was particularly timely given the adoption of the UN’s Global Digital Compact in September 2024 as part of the Pact for the Future, which establishes new mechanisms for international cooperation on digital governance issues.


## The Inadequacy of National Data Governance Frameworks


### The Reality of Data Flows


Steve Macfeely opened with a fundamental challenge to conventional thinking about data sovereignty: “Most of our data are going straight to the cloud, and after that we have no idea where those data are going… very few countries control the data in their country.” He argued that whilst governments may believe they have control over data within their borders, the reality is that most data flows to cloud services beyond any single nation’s jurisdiction.


Macfeely introduced the concept of “three digital kingdoms” representing different approaches to data control, though he noted these create fundamental challenges for international data exchange as each operates under different assumptions about who should control data and for what purposes.


### Data as Human Identity


Perhaps most significantly, Macfeely reframed the discussion by observing: “There’s a phrase now, we are our data.” This conceptualisation elevated data governance from a technical issue to something fundamentally about human identity and dignity, influencing the entire subsequent discussion.


## AI as a Catalyst for Data Governance Attention


### The Inconvenient Truth About AI’s Role


Macfeely provided a candid assessment: “We have to thank AI that we’re having this conversation. Data governance has been important for a long time, but nobody cared less about it until artificial intelligence surfaced.” This observation highlighted how AI’s prominence has finally brought necessary political attention to data governance issues that experts had been raising for years.


### AI’s Unprecedented Data Appetite


Friederike Schuur warned that “AI opens door to pervasive data extraction far exceeding anything seen before, threatening trust.” She provided a concrete example of how AI systems are being developed for everyday tasks: “There’s going to be an AI agent that’s going to book your dinner… it’s going to know where you want to go, what you want to eat, who you want to eat with.”


Francesca Bosco noted that “AI systems trained on enormous datasets scraped without consent create challenges of opacity, bias, and security risks,” emphasising how current AI development practices often bypass traditional consent mechanisms.


## The Global Digital Compact as a Governance Opportunity


Claire Melamed highlighted the Global Digital Compact as providing an opportunity for advancing international data governance through a multi-stakeholder working group with equal representation between governments and non-state actors. She emphasised that this balanced representation model represents a departure from traditional state-led international governance mechanisms.


Importantly, Melamed clarified that any international framework would complement rather than replace national data governance systems, recognising legitimate national roles whilst acknowledging that purely national approaches are insufficient for cross-border data flows.


## Protecting Children in Digital Spaces


### The Right to Make Mistakes


Friederike Schuur brought crucial attention to children’s vulnerabilities in digital environments, warning about educational platforms that “record everything that a child makes.” She expressed concern that comprehensive data collection could lead to children being “slotted into a particular development path because of something that they have done at one point.”


Schuur introduced a powerful concept: “Childhood really means you get a second, a third, a fourth, a fifth, and so many chances because you deserve it.” This principle challenges data governance systems to account for human development over time, ensuring that early data points don’t create permanent constraints on children’s future opportunities.


She also emphasised involving children directly in data governance conversations, noting that they “understand the issues well” and should participate in shaping governance agendas that will affect them.


## Cybersecurity as Governance Foundation


### The Constitution and Judiciary Analogy


Francesca Bosco provided a memorable insight: “Data governance without cybersecurity is like a constitution without a judiciary – it might outline rights and responsibilities, but it cannot enforce or protect them.” This positioned cybersecurity not as a technical add-on but as fundamental to the entire governance structure.


Bosco explained her organisation’s mission: “The Cyber Peace Institute works to protect vulnerable organisations… we work with hospitals, schools, humanitarian organisations.” She emphasised that cyberattacks affect “real people” and can cause “double victimisation of beneficiaries” when personal information is compromised.


### Addressing Power Asymmetries


Bosco highlighted “asymmetries of power and protection” in current arrangements, observing that data governance frameworks are “disproportionately shaped by actors in technologically advanced economies” whilst “most affected actors” are excluded from governance conversations.


## Equity and Access Challenges


### The Commodification Problem


Steve Macfeely identified “equity of access to data” as “the big issue,” arguing that “as data become more and more valuable, as people recognise the value of it, it’s naturally going to be commodified and that means ownership.” This highlighted challenges around ensuring fair access and preventing concentration of data resources among already powerful actors.


Claire Melamed emphasised addressing “business models and commercial parameters” to ensure “equitable distribution of benefits from data,” recognising that technical solutions alone are insufficient without addressing underlying economic structures.


## Practical Implementation Challenges


### The Convenience-Privacy Trade-off


An audience member from Brazil raised the practical challenge of how people “trade convenience for data” without fully understanding risks, citing “employees using their own account of ChatGPT without an institutional and corporate account to upload corporate documents.”


### The Expertise Gap


An audience member from the Office of the High Commissioner for Human Rights posed a fundamental question: “How much agency can we give them regarding their own data when even experts don’t know how data can be used?” This highlighted the tension between individual autonomy principles and the practical reality that even sophisticated users may not fully understand implications of their data choices.


### Global AI Development


An academic from KAIST raised concerns about “under-investment in AI and data systems in areas like the African continent,” noting that “communities need data collection for AI systems to work without harm.” This highlighted tensions between inclusive AI development and potentially exploitative data collection practices.


The same academic introduced the concept of “data donation,” asking whether people might be willing to donate data for beneficial purposes, similar to blood donation.


## Areas of Consensus and Remaining Tensions


### Strong Agreement


The panellists demonstrated consensus on several principles: equity of access to data represents the core challenge; data governance must be human rights-based with particular attention to vulnerable populations; and international cooperation is necessary whilst complementing rather than replacing national frameworks.


### Different Approaches to Equity


Whilst agreeing on equity’s importance, panellists emphasised different approaches: Macfeely focused on ownership and commodification issues; Melamed emphasised regulating business models; and Schuur prioritised rights-based approaches with special attention to children.


### Data Requirements for AI


A tension emerged around data needs for effective AI. Schuur argued that “delivering valuable AI services doesn’t require very large datasets,” whilst the academic audience member emphasised data collection from underrepresented communities to ensure AI systems work without causing harm.


## Looking Forward


The discussion revealed both the complexity of international data governance challenges and potential for collaborative solutions. As Claire Melamed noted in closing, the goal is creating “a social contract around data” that balances individual rights with collective benefits.


The panellists consistently returned to human dimensions of data governance, rejecting purely technical framings in favour of approaches recognising data as fundamentally about human identity and dignity. The Global Digital Compact’s multi-stakeholder approach represents a significant opportunity to test new models for international cooperation on these critical challenges.


The path forward requires sustained collaboration across sectors, attention to power imbalances and capacity building needs, and creative approaches to balancing individual agency with collective benefits. The current moment of AI-driven attention to data governance provides a unique opportunity for meaningful progress on these fundamental challenges.


Session transcript

Rafael Diez de Medina: So, good afternoon. We are very happy to start our event now on Data First in the AI Era, the Case for Data Governance. This afternoon, we are going to have, I think, an interesting discussion on various topics around data governance. I think some years ago, we were talking about data revolution, but now I think the revolution is well-established, and we are suffering, or all are, under an avalanche of data produced by many sources. But of course, artificial intelligence came unexpectedly to disrupt everything and to overrun all our initial thoughts of how the data revolution was going to be tamed or something like that. I think now we are all immersed in this new environment, an ecosystem of data that is affecting us all in all aspects of our lives. It has geopolitical implications and implications for our daily lives. We are producing millions and millions and trillions of data every moment. So more than ever, I think the discussion around how this should or not be governed, it’s more than topical. And I think this is the interesting part of this panel in particular. We are having different discussions around different global governance, national governance. So I think it will be very interesting to hear from our distinguished panelists. I am Rafael Diez de Medina, the Chief Statistician of the International Labour Organization. And I am very happy to host distinguished speakers today. Let me introduce them and then start and kick off the discussion. We have Steve Macfeely, the Chief Statistician and Director of Statistics and Data at the OECD. We have Claire Melamed, CEO of the Global Partnership for Sustainable Development Data. We have Francesca Bosco, Chief Strategy and Partnerships Officer at the Cyber Peace Institute. And we have Friederike Schuur, Chief Data Governance and Strategy at UNICEF. So we are very happy and lucky to have all of them, who have a long experience on the issues that we are going to speak about. So to kick off, I will go directly because we have a limited time. We have different, I would say, initial thoughts. And I will start with Steve. And with very concrete questions: why do we need international data governance in addition to national and regional data governance frameworks? Why start with principles?


Steve Macfeely: OK, good afternoon, everybody. I’m glad to see so many people here. So the question, why international data governance? And I think this is a really good question because it’s the question that I get challenged on most. So I’ve discussed this with many countries. And they say, well, we have our own national data governance plan. We have our own national data governance strategy. That’s enough. And honestly, I think that’s a fallacy. I think it’s a very reassuring fallacy. But it’s one that doesn’t stand up to scrutiny. So we hear a lot today about national data sovereignty. And I would ask everybody to think about what that means in practice. So it’s a very reassuring term, but very few countries control the data in their country. Most of our data are going straight to the cloud, and after that we have no idea where those data are going. And this is why we need some sort of an international agreement or component to ensure that we have some sort of guardrails, some guidelines on how to exchange data from one jurisdiction to another. So in the literature we talk about the three digital kingdoms, which is really based around individual sovereignty, state sovereignty, and commercial sovereignty, and you can probably guess how they align geographically. And it’s not clear how we exchange data between those three kingdoms or those three jurisdictions because the ideologies are so different. And this is really why we need some sort of an international framework that helps us to exchange our data safely. I would remind you, when we talk about data, oftentimes we’re tempted to look at this as an economic proposition only. This is about securing the digital economy. But it’s much, much more than that. I mean, our data are essentially who we are. There’s a phrase now, we are our data. I mean, there’s so much of our life, as Raphael said, is recorded. So our aspirations, our dreams, our privacy, our health status, everything is up in the cloud. And if those data are moving to jurisdictions that don’t treat them with the same respect that I would like them to be treated where I live, then I think we have a problem. And I think we have a right as citizens of the world to demand that our information are treated with respect. So very quickly then to finish up, why principles? Principles is a good way to start, I think, because this is a tricky conversation. So I think if we can agree on basic principles which set out the high level broad brush aims and aspirations that we would like to achieve, I think that’s a good way, it’s a good way to set a North Star. We can agree on those, I think, relatively quickly, I hope. If anybody would like to see one proposition, we’ve published a paper on what we think would be good principles. But there are many others and I think we need to discuss that. Then after that, I think we can get into the nuts and bolts of how we would actually implement some sort of an agreement. Thank you.


Rafael Diez de Medina: Thank you, Steve. I will go now to Claire and ask her what is the opportunity created through the Pact for the Future and the Global Digital Compact for advancing international data governance in practice?


Claire Melamed: Thank you very much. I think there are two levels to this. There’s obviously the Global Digital Compact more broadly sets out a framework for international cooperation, shared norms, a shared global agenda on a broad range of topics around this, of which data is one, around the broad area of AI and digital cooperation. That in itself has huge value and will have, I suspect, ramifications that will unfold over time as the initiatives that fall out of that develop. But it also presents a very specific and important opportunity on this topic of data governance, which is that in the Global Digital Compact, which, as we all know, was agreed as part of the Pact for the Future last September, there is a specific mandate provided to begin a multi-stakeholder process on data governance. I think that presents us with a, you know, so far, I think, unique opportunity. There are huge numbers of data governance processes. As Steve said, there are a huge number of principles that have been developed, of different pilots and initiatives. And, you know, it’s not a problem that is suffering from lack of attention per se. It’s a problem that is suffering, I would say, from a lack of sort of the kind of attention that can deliver sustained and coordinated, and, you know, fully agree with Steve in that, you know, this has to be something that we look at on a global level. So it’s that kind of sustained, the sort of pathway to that sort of global agreement that I think to date, we haven’t had in the system, despite all of the many initiatives that have been going on. And I think it’s that which the Global Digital Compact offers us the potential for. It’s a really interesting process. I’m slightly intimidated sitting here with the two people who are leading that process, Peter Major, who is the chair of the working group that has been set up, and Aral from UNCTAD, who’s leading the secretariat. But the Global Digital Compact sets up a working group, which is interesting by nature of being a multi-stakeholder working group. It contains even numbers of members from governments, representing member states from all of the different regions represented in the United Nations, and an equal number of non-government stakeholders. And I think, you know, anyone who’s been around this week and has seen the sort of vibrancy of the conversation, which, you know, has been, I think, in at least the panels I’ve been at this week, very evenly balanced between governments, private sector, civil society. You know, it’s an absolutely necessary way to have the conversation given the way the market is, the way technology is developing, the way all of this works. So I think we have an opportunity through this group, through the many consultations and interactions that will be possible with this group while it goes about its work, to do some of the things that, as Steve said, absolutely need to happen, which is to pull together the many, many things that do exist and create some sort of framework, some sort of pathway for delivering that global perspective, not to displace the different national frameworks, but to provide that layer that will allow them to talk to each other in the way that the technology, frankly, demands that we do.


Rafael Diez de Medina: Thank you, Claire. Thank you a lot. And Friederike, you champion child rights-based and child-centric data governance. Do tell us why.


Friederike Schuur: Well, that’s a very short and sweet question. I love it. Thank you all for coming. It’s really a pleasure to be here to speak alongside all of you. What’s important here is that data, it’s not just an economic commodity. We really have to think about the relationship, when we speak about data governance, between enabling innovation, fostering really vibrant digital economies, but also at the same time protecting and advancing the interests and the rights of people. And that also includes young people and children. Of course, I work for UNICEF, so this is very close to my heart. And there’s an opportunity also for us to really think about some foundational documents, in particular in the United Nations system. And for all of us, the Universal Declaration of Human Rights and the CRC, the Convention on the Rights of the Child, offer us a grounding really for the dialogue that we can have on international data governance, facilitated through some of the mechanisms that Claire just mentioned. Really, it’s a case where old laws have new relevance for new technologies, because they continue to really stand and they continue to provide us with a very solid foundation that we can build upon as we think about how we want to move forward when it comes to data and when it comes to AI, and how we can realize really the benefits of data and AI equitably and for all. And to make that a bit more specific, what does it actually mean? Like human rights-based data governance, child rights-based data governance? I can’t be comprehensive here, we have very little time today for this conversation, but let me pull out a few specifics. One that I want to lead with is really privacy and protection. Now Steve, you just mentioned our data are who we are, and then I add to that, just like us, our data deserves protection and we deserve privacy. And reflecting a bit on the sort of sibling conference that is happening right now, the AI for Good conference, agentic AI, super hot right now, right? Like there’s a risk where we again trade convenience for data, and it is increased now compared to where we were when sort of digital services, think about the emails that we all have, our private emails that we sort of subscribe to, right? Something that we have to start thinking about. Another element, the second one, is really about dignity and autonomy and how we can think about data governance, putting in place data governance that helps protect dignity and that helps enable autonomy. Part of that is also to give individuals, but also groups and communities, control over the use of their data. It’s very hard to understand these days how data is actually used when you engage with digital services, and it makes it difficult to really have that autonomy. But it goes further, like if we think about children growing up, developing, they have a right to develop to their full potential. That also means making mistakes without being afraid of the repercussions. But now think about educational platforms in the classrooms, right, that record everything that a child makes. Now we have to make sure that that is not going to slot them into a particular development path because of something that they have done at one point. I mean, childhood really means you get a second, a third, a fourth, a fifth, and so many chances because you deserve it, because that’s how you have to… move forward.
Think about also on that point agentic AI and how it might affect the socio-affective development of children as the environment keeps reacting to them. So these are questions that do touch on data governance, because data governance is one of the core and crucial inputs also into AI, of course. The last point I wanted to pull out is really around participation by children and young people also in shaping how we move forward with the data governance agenda. Children and young people should have an opportunity to sort of express their view and also help us guide how we set up international data governance. We’ve done that actually at the last UN World Data Forum. Some of you might have been there. For example, Steve, you were interviewed by one of our youth speakers. We had a delegation of more than 20 children and young people who sort of attended. It was very meaningful to them to be there, because they got to ask all their questions and, most importantly, they got to express their views. They understand a lot about a technical issue such as data governance. They’re worried about a lot of things. They see the opportunity that is inherent in AI, but they’re also worried about what it might mean for the planet. A lot of children in rural communities are worried about not being able to sort of be connected to that movement that offers opportunities, to them but maybe not to them, because they’re part of the unconnected. But there’s another benefit: if you listen to children, you actually understand where the real value lies that we have to realize. Data governance is not a technical issue, right, it is one about realizing benefits to real people, and that includes our future generations. And so that participation by children actually helps us with what benefits all of us ultimately, which is really making sure that data governance serves to shape innovation and really help bring about digital economies that are equitable and that really drive the benefit for society. Thank you.


Rafael Diez de Medina: Thank you, Friederike. I think it’s very clear that we have these discussions between the global and national frameworks for governance. But thank you for giving us the human part of the governance and the data governance. They need to have that. But you also touch on important things like privacy and certain things that Francesca, I would ask you about, because data governance is not only standing up by itself. Can you speak a bit about the role of cybersecurity for strong data governance and the risks if we fail to bring these two together?


Francesca Bosco: Thank you so much. And it’s a pleasure to be here with such a distinguished speakers and thanks a lot for the participation. So your observation is absolutely correct. So data governance and cybersecurity are inseparable. And I often think about data governance without cybersecurity is like a constitution without a judiciary in a way, because it might outline like rights and responsibilities, but it cannot enforce or protect them. So we have to think about them in the same way. Conversely, cybersecurity without governance, risk could become a tool that it’s often, let’s say used for surveillance or exclusion. So I think that really together, they form a sort of like pillar of responsible data stewardship. And I like to think about cybersecurity as an enabler of data governance. Because data governance is really establishing the strategic framework of like rules, responsibilities, policies for managing data ethically and lawfully, but cybersecurity ensures that those rules are actually followed, protected. And there are some, let’s say key concept that maybe we can share. I know that we have limited time, but just to give some food for thoughts. So for example, in terms of like access control, governance tell us who should have access to the data and cybersecurity ensures that only those people do. When we think about, it was mentioned before, also data integrity and availability, governance has set the expectations for data quality and continuity and cybersecurity protect against, for example, tampering, loss of ransomware induced disruptions. When we think about privacy enforcement, as you just mentioned, on one hand, governance aligns with regulations like GDPR, notably, At the same time, cybersecurity ensures that those policies are enforced through tools like, for example, encryption, secure data transfer, data masking. So it really goes hand in hand. And because the question was around the risk, when we think about risk-based prioritization, not all data carries equal risk. And so cybersecurity tools like, I’m thinking like a threat modeling, vulnerability scanning, for example, help identify which data asset required the most protection and oversight. Let me bring it to, let’s say, to two last points. One is really related to, okay, what it means in practice. And I can tell you what we are facing. We are a civil society organization, we’re based in Geneva, but the mandate is global. And we have the mission, basically, to expose the real consequences, the real harm that cyberattacks are causing on society, and to provide the free protection, free cybersecurity protection to, I would say, the most vulnerable organizations. And in doing this, we have to, we are at the same time, let’s say, data provider in a way, because we work a lot with the data, collecting data about the cyberattacks, collecting data about the organization that we’re working with. And at the same time, we are building capacity of those organizations in understanding the risk, if, I mean, if data are not protected correctly, and how to better do so. And I really like what Friederike was mentioning in terms of like, it’s about real people. One key mission that we have is also to increase the understanding that we need to give a human dimension to data. And I mean, obviously, I speak about, let’s say, in a way, the dark side, meaning, for example, I mean, the real impact of cyberattacks. 
And too often, we think about, for example, cyberattacks on data as just impacting, let’s say, the economic infrastructure or the devices that are attacked. Well, behind that, there are data, there are the data, for example, of those organizations that are working in development and humanitarian settings. And attacking those data doesn’t mean just, allow me to say, attacking the organization’s data, but it means also attacking the data of the beneficiaries, for example, risking double victimization. So we have to start thinking more about people and the relevance of data about the people. So extending beyond, I would say, the traditional concerns such as privacy, information integrity, because the results can really devastate the life of ordinary people, basically. And allow me to finish with, okay, so what? Because I’m working for civil society, I’m always trying to be very concrete. So I think that to ensure, let’s say, stronger, more resilient data governance, cybersecurity must be built in from the start. It’s still too often, and this is why I very much welcome the question and the opportunity, too often cybersecurity is still seen as an afterthought. And so, very practically speaking: security by design. So embed access control, encryption, monitoring in governance frameworks from the ground up. Rights-based cybersecurity. It’s a pleasure to be here with such distinguished speakers, also because we are all talking about, in a way, from the same view, that we need to embed the human rights principles, like privacy, dignity, freedom of expression, that align with cybersecurity practices. Understand the contextual sensitivity. So prioritise protection for high-risk data, for example, and high-risk actors, such as biometric data in refugee contexts, health data in fragile states. And it was mentioned before, also the international dimension. It’s super key to follow what is happening when it comes to the international, for example, global norm settings. And I’m thinking specifically of one process that we are very active in, and it just ended the last cycle, the open-ended working group, for example, the UN open-ended working group. And it’s super important because it’s an opportunity for the multi-stakeholder community, as Claire was mentioning, to tap basically into governance and improve accountability and deterrence.


Rafael Diez de Medina: Thank you so much. I think we have set the stage for a very, I think, interesting discussion and we, of course, we have several dimensions, we have touched on the key aspects and we have left many others that we may have that opportunity to hear from you all. But just to kick off with the panelists, I think you have touched on some of these areas, but it would be good to see or to hear from you. What is the one core issue or challenge that effective data governance must address? Who wants to?


Steve Macfeely: I think equity of access is going to be the big issue. As data become more and more valuable, as people recognize the value of it, it’s naturally going to be commodified and that means ownership. So I think ownership and access are going to be really, really challenging issues in the future.


Claire Melamed: Okay, if Steve hadn’t said that first, I probably would have said that. But I think just to follow on from that, I think once you have ownership and access, there’s also then the question of what are the sort of business models and the sort of commercial models and the parameters within which they’re regulated, which allows people to benefit from that access and which controls the distribution of that benefit. I mean, I think we’ve seen it, you know, with the growth of social media, and obviously social media runs on data too. So that’s not separate to this conversation, but it’s perhaps a sort of first generation of these technologies which are now evolving into all kinds of other things. So it gives us a bit of a sort of signal as to the way that, if left unchecked, largely unchecked, these commercial models are going to develop and the way that data, however it’s owned, is going to evolve and be used. So I think we need to think about, you know, ownership per se from a sort of rights point of view, but also from a kind of economics point of view. I never feel like we talk enough about economics in these conversations. How can we set up the business models and the rules around them to make sure that that ownership is translated into business models which can spread the benefits in an equitable way?


Friederike Schuur: Well, if Steve hadn’t said it, and if Claire hadn’t said it, I mean it’s equity of access to data and equity of access to the benefits that can be derived from the data. It’s critical. Because so much flows from equity of access to the benefits from the data. And I think linked to that, now I can add something, I must add something, is I think really capacity development for empowerment. And by that I mean organizations, but I also mean citizens. So that they are better equipped to make their own voice and their own interests heard in the conversation through the channels that we also need to increasingly open up for them.


Francesca Bosco: And I think, very much linked to what was said before, the challenge, allow me to make two points. One is the redress of the asymmetries of power, agency, and protection that are kind of deriving exactly from the equity point. And the reason is because data governance frameworks, let’s admit it, are disproportionately shaped by actors, I would say, with basically most of them in technologically advanced economies, in a way. And so those most affected by data-related decisions are often excluded, basically, from the governance conversation. And so this imbalance basically leads to extractive data practices, unrepresentative data sets, and unequal protections. And together with the unequal protections, the second point that I want to make is that we still see that international law and data governance must evolve with the changing threat landscape, and we are not there yet. So I think these two points: asymmetries of power, and the gap in speed that remains between the evolving threat landscape and law and policies.


Rafael Diez de Medina: Thank you. Thank you very much. And now, of course, we are in the AI for Good conference. So how does AI excitement and adoption put a pressure on data governance? And how must data governance evolve for safe and responsible AI? Is the question. So we can start.


Francesca Bosco: I mean, I always feel like the black sheep, meaning that I’m biased. I’m always thinking about, like, okay, what can go wrong? So no, but I mean, I think that is what I’m thinking, and maybe that’s also, let’s say, my role in this panel. Believe me, I’m also a very optimistic person in general. Let’s start, let’s say, with the fundamentals, meaning that AI systems, and particularly I’m thinking of large models like GPT-4 and Llama, are trained on enormous data sets that are scraped from the internet, right? So these data sets are often collated without consent. It was mentioned before, the lack of transparency, and this is really resulting in some major challenges. So I’m thinking, for example, of opacity: we rarely know what data went, for example, into the training corpus, and this undermines accountability, for example, or reproducibility. Well, for sure, I mean, it’s a sort of common discussion. But I mean, for example, the bias and harm. So marginalized communities are often overrepresented in surveillance data and underrepresented, for example, in linguistic and cultural data. And I’m thinking about security risks, so AI models can be reverse engineered, basically, to extract training data, or, for example, targeted with data poisoning and manipulated. So again, I mean, I’m here to speak about the potential risks, and this is why I’m highlighting those, and I leave the floor to my colleagues to highlight some potential benefits.


Friederike Schuur: Well, on that note, you know, I mean, I like grounded optimism, but sometimes we do have to construct the ground upon which we can stand, and that, I think, is what we’re doing with this conversation, in terms of the particular pressures on data governance because of the onset of AI and how it’s evolving. Being here at the conference, I think when we talk about AI assistants, so those are the Alexas of the world, when we talk about agentic AI, so like AI agents that are starting to actually complete tasks for us, there was in one of the keynotes the example of an agent that is booking a dinner for me and my friends, right? Super convenient. And I think many of us are probably already enjoying the convenience of some of the new AI tools that we have at our disposal. But really, we must emphasize that they’re really opening up the door towards pervasive data extraction that by far exceeds anything that we have seen so far, and that is a really big risk. And then thinking about how we can safeguard trust, because, I mean, in the end, I think a lot comes down to trust, trust amongst people, trust from people to organizations, that is really what we have to safeguard, right? And when it comes to that, I think one is we need to actually help build an understanding of what is actually happening on the back end, so to speak, of the services that are providing this kind of convenience. We have to think about their, perhaps I can’t say the word, remunerated, fair payment perhaps also for data where we feel it’s fine to sort of commoditize them in the way that that kind of approach would actually allow. And really what it comes down to is, I mentioned trust, trust also that is necessary for us to keep believing in things that we are seeing. And that is also something that is put increasingly under pressure.


Claire Melamed: Thank you. I mean, I agree absolutely with all of what’s been said about the sort of risks to individuals and to whole systems if we allow AI in a sense to sort of plunder data unchecked and all of the various risks of that. I think there’s also the other side of this, which is, it’s very much in the interests of those who are developing AI models to get the data governance right. I don’t know whether anybody was in the hall on Tuesday listening to the interview with Will.i.am, who I’m a little bit too old to appreciate the music, but I certainly appreciate the insights. And he said, if you have poor data practices, guess what? You’re going to have, expletive deleted, bad AI. And I think there is a very strong interest among AI companies as well for data governance practices to get that right, to maintain the trust upon which the flows of data, upon which all AI depends, are maintained. So I think there’s a common interest in a sense here. I think it is, you know, it’s funny, I’ve been here since Tuesday, listened to lots of conversations about AI and governance and so on. And there’s a lot of that sort of Will.i.am quote, you know, I’ve heard a lot of sort of, oh, but of course data is terribly important and we have to govern it. Oh, but now we’re actually going to talk about the interesting stuff, which is the models itself and so on. So there’s a kind of acknowledgement that it’s really important. I think it is obviously driving. some of the sort of increased political traction that we’re seeing in data, you know, the UN Working Group, some of the, you know, governments are perhaps taking more interest in data governance than they have ever have done before, because it’s obviously become much more important across a range of interests, whether that’s security or economics or rights and so on, but somehow I feel like it still hasn’t quite got itself into the heart of this conversation where it needs to be. So I would say in answer to the question, I think that the sort of AI and the obvious connection between data governance and AI has increased the political interest, which is really my concern here. I think unless we have that political interest, we’re not going to get any of the things that we want in terms of regulation and governance. It’s raised it up the agenda, but I would say there’s probably still some way to go.


Rafael Diez de Medina: Thank you.


Steve Macfeely: Thank you. Yeah, I’m just going to repeat what Claire said in a different way. I mean, we have to thank AI that we’re having this conversation. Data governance has been important for a long time, but nobody cared less about it until artificial intelligence surfaced. The reason we have the new group hosted by UNCTAD is because of the Global Digital Compact. When the UN developed the, when the chief executives board of the UN signed off on the principles and the broad white paper on data governance, the big challenge was to find a home. Where do we land this issue? And everybody agreed that data governance was really important, but it wasn’t important enough that anybody would want to discuss it. So, digitalization and AI have given us the platform. So, we’ve kind of come in the back door, and the challenge we have now is to help people interested in AI to understand that AI governance is not just about data governance, but that AI governance as a whole can’t happen without data governance, that there’s a sequential order and data governance is a prerequisite to AI governance, and that’s an unfortunate, inconvenient truth. It’s one that maybe people are slowly coming around to and, as Claire said, I mean, they kind of tolerate it, but we have to help them to understand that this is really important for them to help their objectives, so yeah.


Francesca Bosco: I do want to add one point, and that’s very interesting, what you’re mentioning, because in our experience, we are supporting, for example, many under-resourced organizations, and that’s interesting because with the hype on AI, we receive many requests. We developed our own responsible AI approach, methodology, principles and also guidelines, and so we receive many requests of support from those organizations to set up their own policies. And the first question that I ask is, but do you have a responsible data policy? Do you know how you collect the data? I mean, what is your data governance framework? And they don’t. So I think it’s extremely important what you’re mentioning, because also in practice, that’s the reality that we’re living in. Because of the hype and the focus on AI, we’re forgetting about the basics and the essentials to, yeah, to both develop but also apply AI responsibly.


Rafael Diez de Medina: Okay, thank you. I think we have all the elements now to open the floor for questions to the panelists and to add to the things that we have been discussing. I think there are interesting points on AI and how AI is impacting data governance, and the opposite. I think it’s important now to hear from you. Yeah, please.


Audience: Hi. So I work with the Office of the High Commissioner for Human Rights, on issues of human rights and digital technology. I want to ask you, what is the role of the consumer, so basically the end user, in data governance? How much agency can we give them regarding their own data, considering that they do not know, because even experts don’t know nowadays, in what ways their data can be used? So where do you demarcate that the governance will be done by the entities which are governing them? Through democracy we have given them that agency, that you can go on certain aspects of my life, but how much agency do I get in governing that data?


Rafael Diez de Medina: Okay, now, yes, let’s collect a couple more questions and then we open the floor.


Audience: Oh, thank you very much for this very interesting debate, very inspiring. I’m from Brazil. I would like to quote what our colleague from UNICEF has said, which is really critical in this debate: trade convenience for data. I thought this is a very important point because it relates to our behavior, and we have seen a very rapid adoption of AI-based applications like ChatGPT. And what we see in many organizations, even in government, is employees using their own account of ChatGPT, without an institutional and corporate account, to upload corporate documents or a contract, without being aware that this behavior is very risky. So I think that what people are doing is trade convenience for data, because they want to review or they want to translate a piece of a document or something, but they don’t care about what they are doing. And in many cases there is not even a corporate policy that would guide what to do with ChatGPT, for instance. This is a very basic example that is happening, I guess, everywhere. Thank you.


Rafael Diez de Medina: Yeah, thank you. Yes, please, if you can introduce yourself, push the button.


Audience: Does it come close? We got it. So, just coming back to the point that was raised on data governance and the idea that we do have a national data governance framework, and then whatever we come up with has got to acknowledge that autonomy in terms of having a national perspective of a governance framework. But when we then look at data as a commodity, doesn’t that allow us to push the boundaries towards a more globally accepted standard when it comes to data governance, and globally accepted adherence frameworks when it comes to standards of data governance frameworks? Thank you.


Rafael Diez de Medina: Any other question? Yes, you.


Audience: different countries we hear that same thing and so how do you square the fact that Like there is, I think when we talk about data governance, we don’t necessarily acknowledge that the people who had access to the internet first came from urban areas, suburban areas, and even rural areas do not have kind of data that is necessary in order to kind of like fit for service. And that is true both across the US, but it’s true globally, particularly in areas where there are smaller areas where, there are smaller areas, thank you so much. There are smaller areas where languages are spoken that are not necessarily national or not necessarily represented in large-scale internet datasets, because those people are still not consistently connected to the internet. And so when we’re talking about AI and data governance, how do we square that circle of the fact that we want AI adoption across the world so that everyone can see the benefits of AI solutions and AI systems, but also that means that there is going to have to be some data collection from these communities in order for those systems to work in a way that does not imminently harm them and for there to be the kind of investment in those communities that these communities and countries and areas have been asking for. And I’m thinking particularly in the African continent because what we’re seeing is an under-investment in AI there and an under-investment in data systems there. Thank you. I don’t work for the UN. I’m an academic, I’m an assistant professor at the Korea Advanced Institute of Science and Technology Policy, KAIST. And I study AI policy and more specifically, I study how data can be managed, especially in energy and transportation technologies. And I recently wrote a paper on data donation and how the two main consequences of data collection is one, the environmental problems caused by data centers and two, the privacy issues. And obviously my argument was that both can be solved by data donation, you know, privacy issue, you donate your data so that’s solved. With environmental sustainability, with more data donation, the quality of the data will be higher, less missingness, which means we will eventually need to collect less data and save less data because right now the data center is just saving way too much data in general, just a lot of trash there. So I was wondering whether there’s any discussion going on at the UN level on data donation and what your thoughts were.


Rafael Diez de Medina: Okay, thank you so much for these questions. I think we will have many, many others, but we have unfortunately a constraint of time. So I will ask you to react and pick up what you think.


Steve Macfeely: So lots of interesting questions and perspectives. The one I’d like to pick up is on the gentleman from the Department of Commerce. I would agree, but I’m gonna push back slightly as well. The digital divide created a data divide. So, okay, so that means any AI models, we have a representativity issue, but it’s not purely because the data weren’t there. There’s a lot of models. So in health models, we’ve seen a lot of models were trained on male data only. That wasn’t because of any paucity of data. That was a choice that AI modelers made. So I think we have to be careful not to broad brush. So the data divide, the digital divide, exists, but it’s diminishing all the time. So I think the arguments you’re making, in fact, just reinforce the arguments for data governance. As countries start increasing their digitalization of data, it’s all the more reason that this topic becomes urgent and they put in place good governance before they start adopting widespread AI models and AI usage, because otherwise, we’re gonna see the problems replicating that didn’t have anything to do with data paucity. And I see you, we can have a bilateral, but I see you disagreeing, which is good.


Francesca Bosco: I can take the one from the gentleman next to me, specifically related to the challenges, let’s say, to our responsible behavior internally. What I mentioned before is that at a certain point, even being, let’s say, a tech-savvy, a cyber-savvy organization, we faced indeed a very similar problem. And I remember, after the advent of ChatGPT, during one of the, basically a full house, I simply asked, how many of you are using ChatGPT? And all the room went with their hands up. And I was like, okay, we have an issue here, let’s close the shop for one second maybe. And this is why we went into a process of developing our own, one that makes sense for us, responsible AI approach to the use of AI and the development of AI, because we also develop some AI-based tools. What I really suggest is to indeed try to understand which are the needs, so it’s not AI first but needs first, using ChatGPT for example in a professional environment, and address the specific needs of the organization. This means also that it’s not, let’s say, a one-time effort, but for example in the responsible AI approach that we developed, we went from principles into actual guidance, guidelines, embedding, let’s say, staff consultation across the different steps, but also envisaging regular capacity building, meaning regularly updating what you create as a framework and building the capacity internally to actively use the framework, because a framework without being used is useless.


Friederike Schuur: We have to wrap up, so I’m going to keep it very short, but I just wanted to add one point to what Steve said in response to your remark: delivering valuable services through AI does not need to require very large datasets. I think it’s important that we keep that in mind, because beyond being able to serve the global population more equitably, there are other benefits as well, and sustainability is really just one of them.


Claire Melamed: Thank you, let me take this one, because there is a question on agency that hasn’t been answered, and I think the question on agency and the question on trading convenience for data are similar. We don’t want to get into a situation where we become purist about it, insisting that we must have total agency and can never trade convenience for data. What we want, and this brings us back to data governance, is an environment like we have in every other area. The basis of having a functioning society, of choosing to live together and having a government, is that you trade off a certain amount of individual autonomy against the benefits you get: the security, the collective action, the division of labor, and all the things you benefit from by living in a society. Data is no different. The challenge we are facing here is not whether we should do it or not, but what is the basis of that social contract, essentially, that will mean we can do it in ways that have consent and that deliver obvious benefits.


Rafael Diez de Medina: Okay, thank you. Unfortunately we have to wrap up and finish, but I think we had an interesting discussion. We have touched on key issues, particularly how data governance is a prerequisite for AI, or for sound AI, and also the ethical dimensions and the risks involved in all of this. So thank you so much to the speakers, and thank you for your interest. Of course, we have only touched the tip of the iceberg of this important and emerging topic of data governance. Thank you so much. Thank you.


S

Steve Macfeely

Speech speed

162 words per minute

Speech length

1038 words

Speech time

382 seconds

National data sovereignty is a fallacy since most data goes to the cloud with no control over where it goes

Explanation

Macfeely argues that while countries claim national data sovereignty, very few actually control the data within their borders. Most data flows directly to cloud services, leaving countries with no knowledge or control over where their data ultimately resides.


Evidence

Most of our data are going straight to the cloud, and after that we have no idea where those data are going


Major discussion point

Need for International Data Governance


Topics

Legal and regulatory


Agreed with

– Claire Melamed

Agreed on

International coordination is necessary beyond national frameworks


Three digital kingdoms (individual, state, commercial sovereignty) need international framework for safe data exchange

Explanation

Macfeely describes three different approaches to digital sovereignty based on different ideologies and geographic alignments. He argues that because these approaches are so different, an international framework is needed to facilitate safe data exchange between these jurisdictions.


Evidence

In the literature we talk about the three digital kingdoms, which is really based around individual sovereignty, state sovereignty, and commercial sovereignty, and you can probably guess how they align geographically


Major discussion point

Need for International Data Governance


Topics

Legal and regulatory


Equity of access to data will be the biggest issue as data becomes more commodified

Explanation

Macfeely identifies equity of access as the primary challenge for effective data governance. As data becomes increasingly valuable and recognized as such, it will naturally be treated as a commodity, leading to issues of ownership and unequal access.


Evidence

As data become more and more valuable, as people recognize the value of it, it’s naturally going to be commodified and that means ownership


Major discussion point

Core Challenges in Data Governance


Topics

Economic | Human rights


AI has raised political interest in data governance, but data governance is a prerequisite to AI governance

Explanation

Macfeely acknowledges that AI has brought much-needed attention to data governance issues, but emphasizes that proper AI governance cannot happen without first establishing data governance. He argues there is a sequential order where data governance must come first.


Evidence

Data governance has been important for a long time, but nobody cared less about it until artificial intelligence surfaced. When the UN developed the principles and the broad white paper on data governance, the big challenge was to find a home


Major discussion point

AI’s Impact on Data Governance


Topics

Legal and regulatory


Digital divide creates data divide, but representativity issues also result from choices made by AI modelers

Explanation

Macfeely agrees that digital divides create data representation problems, but argues that many AI bias issues aren’t due to lack of data availability. Instead, they result from deliberate choices made by AI developers about which data to include in their models.


Evidence

In health models, we’ve seen a lot of models were trained on male data only. That wasn’t because of any paucity of data. That was a choice that AI modelers made


Major discussion point

Practical Implementation Challenges


Topics

Human rights | Development


Disagreed with

– Audience member (academic)

Disagreed on

Causes of AI bias and representativity issues


C

Claire Melamed

Speech speed

161 words per minute

Speech length

1464 words

Speech time

543 seconds

Global Digital Compact provides unique opportunity for sustained, coordinated global agreement on data governance

Explanation

Melamed argues that while there have been many data governance initiatives and principles developed, the Global Digital Compact offers something unique – a pathway to sustained, coordinated global agreement. She emphasizes that the problem isn’t lack of attention but lack of coordinated action.


Evidence

There are huge numbers of data governance processes. It’s not a problem that is suffering from a lack of attention per se. It’s a problem that is suffering from a lack of the kind of attention that can deliver sustained and coordinated


Major discussion point

Need for International Data Governance


Topics

Legal and regulatory


Agreed with

– Steve Macfeely

Agreed on

International coordination is necessary beyond national frameworks


Multi-stakeholder working group with equal government and non-government representation offers necessary balanced approach

Explanation

Melamed highlights the importance of the working group’s structure, which includes equal representation from government and non-government stakeholders. She argues this balanced approach is essential given how technology markets work and how these issues affect multiple sectors.


Evidence

The working group contains even numbers of members from governments, representing member states from all of the different regions represented in the United Nations, and an equal number of non-government stakeholders


Major discussion point

Need for International Data Governance


Topics

Legal and regulatory


Business models and commercial parameters need regulation to ensure equitable distribution of benefits from data

Explanation

Melamed argues that beyond ownership and access issues, there’s a need to focus on the economic models and regulatory frameworks that govern how benefits from data are distributed. She suggests that current commercial models, if left unchecked, will not lead to equitable outcomes.


Evidence

We’ve seen with the growth of social media and obviously social media runs on data too. So it gives us a bit of a signal as to the way that if left unchecked, largely unchecked, these commercial models are going to develop


Major discussion point

Core Challenges in Data Governance


Topics

Economic | Human rights


Agreed with

– Steve Macfeely
– Friederike Schuur

Agreed on

Equity of access to data and its benefits is the core challenge


AI companies have strong interest in getting data governance right since poor data practices lead to bad AI

Explanation

Melamed points out that AI developers themselves have a vested interest in proper data governance because poor data practices result in poor AI systems. She argues this creates a common interest between AI companies and those advocating for better data governance.


Evidence

Will.i.am said, if you have poor data practices, guess what? You’re going to have, expletive deleted, bad AI. There is a very strong interest among AI companies as well for data governance practices to get that right


Major discussion point

AI’s Impact on Data Governance


Topics

Economic | Legal and regulatory


Agreed with

– Steve Macfeely
– Francesca Bosco
– Rafael Diez de Medina

Agreed on

Data governance is a prerequisite for AI governance


Data governance should establish social contract basis for trading individual autonomy for collective benefits

Explanation

Melamed argues that rather than seeking total individual agency over data, society should establish a social contract similar to other areas of governance. This would involve trading some individual autonomy for collective benefits, but with proper consent and obvious benefits.


Evidence

You trade off certain individual autonomy against the benefits that you get like the security and the collective action and the division of labor and all the things that you benefit from by living in a society and data is no different


Major discussion point

Practical Implementation Challenges


Topics

Human rights | Legal and regulatory


F

Friederike Schuur

Speech speed

181 words per minute

Speech length

1508 words

Speech time

498 seconds

Data governance must balance innovation and economic benefits with protecting rights of people, including children

Explanation

Schuur argues that data governance should not treat data merely as an economic commodity but must consider the relationship between enabling innovation and protecting human rights. She emphasizes that this includes the specific rights and interests of children and young people.


Evidence

Data, it’s not just an economic commodity. We really have to think about the relationship when we speak about data governance between enabling innovation, fostering really vibrant digital economies, but also at the same time protecting and advancing the interests and the rights of people


Major discussion point

Human Rights and Child-Centric Data Governance


Topics

Human rights | Children rights


Agreed with

– Steve Macfeely
– Francesca Bosco

Agreed on

Data has human dimensions that must be protected


Privacy and protection are fundamental – our data deserves protection just like we do

Explanation

Schuur emphasizes that privacy and protection are core elements of human rights-based data governance. She argues that just as humans deserve protection, so does their data, especially given the increasing risks from new AI technologies that trade convenience for data.


Evidence

Just like us, our data deserves protection and we deserve privacy. Agentic AI, super hot right now, right? Like there’s a risk where we again trade convenience for data, and it is increased now compared to where we were


Major discussion point

Human Rights and Child-Centric Data Governance


Topics

Human rights | Privacy and data protection


Children need dignity, autonomy, and control over their data, plus right to make mistakes without permanent consequences

Explanation

Schuur argues that children’s developmental needs require special consideration in data governance. She emphasizes that children need the ability to make mistakes without permanent consequences, which is threatened by educational platforms that record everything and could limit future opportunities.


Evidence

Think about educational platforms in the classrooms that record everything that a child makes. We have to make sure that that is not going to slot them in to a particular development path because of something that they have done at one point


Major discussion point

Human Rights and Child-Centric Data Governance


Topics

Children rights | Human rights


Children and young people should participate in shaping data governance agenda and understand the issues well

Explanation

Schuur advocates for meaningful participation of children and young people in data governance discussions. She argues that they understand technical issues well and can provide valuable insights about benefits and concerns, helping ensure data governance serves future generations.


Evidence

We had a delegation more than 20 children and young people who attended. They understand a lot about a technical issue such as data governance. They’re worried about a lot of things. They see the opportunity that is inherent in AI but they’re also worried what it might mean for the planet


Major discussion point

Human Rights and Child-Centric Data Governance


Topics

Children rights | Human rights


Capacity development for empowerment of organizations and citizens is critical for participation in governance conversations

Explanation

Schuur identifies capacity development as essential for enabling meaningful participation in data governance. She argues that both organizations and individual citizens need to be better equipped to advocate for their interests and participate in governance discussions.


Evidence

I think really capacity development for empowerment. And by that I mean organizations, but I also mean citizens. So that they are better equipped to make their own voice and their own interests heard in the conversation


Major discussion point

Core Challenges in Data Governance


Topics

Development | Capacity development


AI opens door to pervasive data extraction far exceeding anything seen before, threatening trust

Explanation

Schuur warns that new AI technologies, particularly agentic AI that can complete tasks autonomously, enable unprecedented levels of data extraction. She argues this threatens the trust that is fundamental to the relationship between people and organizations.


Evidence

When we talk about agentic AI so like AI agents that are starting to actually complete tasks for us there was in one of the keynotes the example of an agent that is booking a dinner for me and my friends right like super convenient but really we must emphasize like that they’re really opening up the door towards pervasive data extraction


Major discussion point

AI’s Impact on Data Governance


Topics

Human rights | Privacy and data protection


Delivering valuable AI services doesn’t require very large datasets

Explanation

Schuur challenges the assumption that effective AI requires massive datasets. She argues that valuable AI services can be delivered with smaller datasets, which has benefits for equity, sustainability, and serving global populations more effectively.


Evidence

To deliver valuable services through AI does not need to require very large data sets and I think it’s important that we keep that in mind because there are other benefits in addition to also being able to serve the global population more equitably


Major discussion point

Practical Implementation Challenges


Topics

Development | Sustainable development


F

Francesca Bosco

Speech speed

153 words per minute

Speech length

1780 words

Speech time

695 seconds

Data governance without cybersecurity is like a constitution without a judiciary – cannot enforce or protect rights

Explanation

Bosco argues that data governance and cybersecurity are inseparable, using the analogy that data governance without cybersecurity is like having laws without enforcement mechanisms. She emphasizes that cybersecurity is essential for actually implementing and protecting the rights and responsibilities outlined in data governance frameworks.


Evidence

Data governance without cybersecurity is like a constitution without a judiciary in a way, because it might outline like rights and responsibilities, but it cannot enforce or protect them


Major discussion point

Cybersecurity and Data Governance Integration


Topics

Cybersecurity | Legal and regulatory


Cybersecurity enables data governance by ensuring access control, data integrity, privacy enforcement, and risk-based prioritization

Explanation

Bosco explains how cybersecurity serves as an enabler of data governance across multiple dimensions. She details how cybersecurity tools and practices ensure that governance policies are actually implemented and enforced in practice.


Evidence

Governance tells us who should have access to the data, and cybersecurity ensures that only those people do. Governance aligns with regulations like GDPR, notably. At the same time, cybersecurity ensures that those policies are enforced through tools like encryption, secure data transfer, and data masking


Major discussion point

Cybersecurity and Data Governance Integration


Topics

Cybersecurity | Privacy and data protection


Cyberattacks on data have human consequences, affecting real people and causing double victimization of beneficiaries

Explanation

Bosco emphasizes the human dimension of cybersecurity by explaining how cyberattacks on organizational data don’t just affect the organizations but also harm the people they serve. She argues that attacks on development and humanitarian organizations can lead to double victimization of already vulnerable populations.


Evidence

Attacking those data doesn’t mean just attacking the organization data, but it means also attacking the data of the beneficiaries, for example, risking for double victimization


Major discussion point

Cybersecurity and Data Governance Integration


Topics

Cybersecurity | Human rights


Agreed with

– Steve Macfeely
– Friederike Schuur

Agreed on

Data has human dimensions that must be protected


Security by design, rights-based cybersecurity, and contextual sensitivity are essential for resilient data governance

Explanation

Bosco outlines practical approaches for integrating cybersecurity into data governance from the beginning. She advocates for embedding security measures from the ground up, aligning cybersecurity with human rights principles, and prioritizing protection based on risk levels and contexts.


Evidence

Security by design. So embed access control, encryption, monitoring in governance frameworks from the ground up. Prioritise protection for high-risk data and high-risk actors, for example biometric data in refugee contexts, health data in fragile states


Major discussion point

Cybersecurity and Data Governance Integration


Topics

Cybersecurity | Human rights


Asymmetries of power and protection exist, with most affected actors excluded from governance conversations

Explanation

Bosco identifies power imbalances as a core challenge in data governance, noting that those most affected by data-related decisions are often excluded from governance discussions. She argues that current frameworks are disproportionately shaped by actors from technologically advanced economies.


Evidence

Data governance frameworks, let’s admit it, are disproportionately shaped by actors, I would say, with basically most of them, they are in technologically advanced economies in a way. And so the most affected by data-related decisions often are excluded, basically, from governance conversation


Major discussion point

Core Challenges in Data Governance


Topics

Human rights | Development


AI systems trained on enormous datasets scraped without consent create challenges of opacity, bias, and security risks

Explanation

Bosco outlines the fundamental problems with how current AI systems are trained, emphasizing that large language models use data scraped from the internet without consent or transparency. She identifies this as creating multiple risks including lack of accountability, bias, and security vulnerabilities.


Evidence

AI systems, and particularly I’m thinking of large models like GPT-4, Llama, are trained on enormous data sets that are scraped from the internet. These data sets are often collated without consent. We rarely know what data went into the training corpus, and this undermines accountability


Major discussion point

AI’s Impact on Data Governance


Topics

Human rights | Privacy and data protection


Organizations focus on AI policies while lacking basic responsible data governance frameworks

Explanation

Bosco describes a practical problem where organizations rush to develop AI policies due to the current hype, but lack fundamental data governance frameworks. She emphasizes that responsible AI cannot be implemented without first establishing how data is collected and governed.


Evidence

We receive many requests of support from those organization to set up their own policies. And the first question that I ask is, but do you have like a responsible data policy? Do you know how you collect the data? And they don’t


Major discussion point

AI’s Impact on Data Governance


Topics

Legal and regulatory | Capacity development


Agreed with

– Steve Macfeely
– Claire Melamed
– Rafael Diez de Medina

Agreed on

Data governance is a prerequisite for AI governance


Need-first approach rather than AI-first, with regular capacity building and framework updates

Explanation

Bosco advocates for a practical approach to implementing responsible AI that starts with understanding organizational needs rather than rushing to adopt AI. She emphasizes the importance of ongoing capacity building and regular updates to frameworks as technology evolves.


Evidence

Try to understand what the needs are, so it’s not AI first but need first. This also means it’s not a one-time effort; for example, in the responsible AI approach that we developed we went from principles into actual guidance and guidelines, embedding staff consultation across the different steps


Major discussion point

Practical Implementation Challenges


Topics

Capacity development | Legal and regulatory


A

Audience

Speech speed

155 words per minute

Speech length

872 words

Speech time

337 seconds

People trade convenience for data without understanding risks, like using ChatGPT with corporate documents

Explanation

An audience member from Brazil highlighted how people, including government employees, are rapidly adopting AI tools like ChatGPT without understanding the risks. They use personal accounts to upload corporate or sensitive documents for translation or review, trading convenience for data security without proper institutional policies.


Evidence

We have seen a very rapid adoption of AI-based applications like ChatGPT, and what we see in many organizations, even in government, is that employees are using their own ChatGPT accounts, without an institutional or corporate account, to upload corporate documents or a contract, without being aware that this behavior is very risky


Major discussion point

Practical Implementation Challenges


Topics

Cybersecurity | Privacy and data protection


R

Rafael Diez de Medina

Speech speed

120 words per minute

Speech length

908 words

Speech time

451 seconds

We are experiencing an avalanche of data from many sources, disrupted by AI’s unexpected arrival

Explanation

Diez de Medina argues that while the data revolution was initially established and somewhat predictable, artificial intelligence came unexpectedly to disrupt all initial thoughts about how the data revolution would be managed. This has created an overwhelming flow of data that affects all aspects of life with geopolitical implications.


Evidence

We were talking about the data revolution, but now I think the revolution is well-established, and we are all suffering under an avalanche of data produced by many sources. But of course, artificial intelligence came unexpectedly to disrupt everything and to overrun all our initial thoughts


Major discussion point

AI’s Impact on Data Governance


Topics

Legal and regulatory | Development


Data governance discussion is more topical than ever due to the new ecosystem affecting all aspects of life

Explanation

Diez de Medina emphasizes that the discussion around data governance has become extremely relevant because we are now immersed in a new data ecosystem that affects every aspect of our lives. He argues that with millions and trillions of data points being produced every moment, the question of how this should be governed is more important than ever.


Evidence

We are all immersed in this new environment, an ecosystem of data that is affecting us all in all aspects of our lives. It has geopolitical implications and implications for our daily lives. We are producing millions and millions and trillions of data points every moment


Major discussion point

Need for International Data Governance


Topics

Legal and regulatory | Human rights


Data governance is a prerequisite for sound AI and addresses ethical risks

Explanation

In his closing remarks, Diez de Medina summarizes the panel discussion by emphasizing that data governance is not just important alongside AI development, but is actually a prerequisite for sound AI implementation. He also highlights that the discussion covered the ethical considerations and risks involved in data governance.


Evidence

We have touched on key issues and particularly how data governance is a prerequisite for AI or sound AI and also the ethical and the risks that we have in all this


Major discussion point

AI’s Impact on Data Governance


Topics

Legal and regulatory | Human rights


Agreements

Agreement points

Equity of access to data and its benefits is the core challenge

Speakers

– Steve Macfeely
– Claire Melamed
– Friederike Schuur

Arguments

Equity of access is going to be the big issue. As data become more and more valuable, as people recognize the value of it, it’s naturally going to be commodified and that means ownership


Business models and commercial parameters need regulation to ensure equitable distribution of benefits from data


Equity of access to data and equity of access to the benefits that can be derived from the data. It’s critical. Because so much flows from equity of access to the benefits from the data


Summary

All three speakers identified equity of access as the fundamental challenge in data governance, recognizing that as data becomes commodified, ensuring fair access and distribution of benefits becomes critical


Topics

Economic | Human rights


Data governance is a prerequisite for AI governance

Speakers

– Steve Macfeely
– Claire Melamed
– Francesca Bosco
– Rafael Diez de Medina

Arguments

AI governance cannot happen without data governance; there’s a sequential order, and data governance is a prerequisite to AI governance


AI companies have strong interest in getting data governance right since poor data practices lead to bad AI


Organizations focus on AI policies while lacking basic responsible data governance frameworks


Data governance is a prerequisite for AI or sound AI and also the ethical and the risks that we have in all this


Summary

All speakers agreed that proper data governance must come before AI governance, with AI development dependent on sound data practices


Topics

Legal and regulatory


AI has brought necessary attention to data governance

Speakers

– Steve Macfeely
– Claire Melamed

Arguments

We have to thank AI that we’re having this conversation. Data governance has been important for a long time, but nobody cared less about it until artificial intelligence surfaced


AI and the obvious connection between data governance and AI has increased the political interest, which is really my concern here


Summary

Both speakers acknowledged that while data governance was always important, AI development has finally brought the necessary political attention and urgency to these issues


Topics

Legal and regulatory


International coordination is necessary beyond national frameworks

Speakers

– Steve Macfeely
– Claire Melamed

Arguments

National data sovereignty is a fallacy since most data goes to the cloud with no control over where it goes


Global Digital Compact provides unique opportunity for sustained, coordinated global agreement on data governance


Summary

Both speakers agreed that national data governance frameworks alone are insufficient and that international coordination and agreements are essential


Topics

Legal and regulatory


Data has human dimensions that must be protected

Speakers

– Steve Macfeely
– Friederike Schuur
– Francesca Bosco

Arguments

Our data are essentially who we are. There’s a phrase now, we are our data


Data governance must balance innovation and economic benefits with protecting rights of people, including children


Cyberattacks on data have human consequences, affecting real people and causing double victimization of beneficiaries


Summary

All three speakers emphasized that data governance is not just a technical or economic issue but fundamentally about protecting people and their rights


Topics

Human rights


Similar viewpoints

Both speakers emphasized the need to address power imbalances and build capacity so that affected communities can meaningfully participate in data governance discussions

Speakers

– Friederike Schuur
– Francesca Bosco

Arguments

Capacity development for empowerment of organizations and citizens is critical for participation in governance conversations


Asymmetries of power and protection exist, with most affected actors excluded from governance conversations


Topics

Development | Human rights


Both speakers warned about the unprecedented scale of data extraction enabled by AI systems and the risks this poses to privacy and trust

Speakers

– Friederike Schuur
– Francesca Bosco

Arguments

AI opens door to pervasive data extraction far exceeding anything seen before, threatening trust


AI systems trained on enormous datasets scraped without consent create challenges of opacity, bias, and security risks


Topics

Human rights | Privacy and data protection


Both speakers noted that AI bias and governance problems are not just due to technical limitations but result from deliberate choices and lack of foundational frameworks

Speakers

– Steve Macfeely
– Francesca Bosco

Arguments

Digital divide creates data divide, but representativity issues also result from choices made by AI modelers


Organizations focus on AI policies while lacking basic responsible data governance frameworks


Topics

Human rights | Legal and regulatory


Unexpected consensus

AI industry’s self-interest in data governance

Speakers

– Claire Melamed
– Steve Macfeely

Arguments

AI companies have strong interest in getting data governance right since poor data practices lead to bad AI


AI has raised political interest in data governance, but data governance is a prerequisite to AI governance


Explanation

It was unexpected to see consensus that AI companies themselves have strong incentives for good data governance, creating potential alignment between industry interests and governance advocates rather than opposition


Topics

Economic | Legal and regulatory


Smaller datasets can deliver valuable AI services

Speakers

– Friederike Schuur

Arguments

Delivering valuable AI services doesn’t require very large datasets


Explanation

This challenges the common assumption that effective AI requires massive datasets, suggesting more sustainable and equitable approaches to AI development are possible


Topics

Development | Sustainable development


Social contract approach to data governance

Speakers

– Claire Melamed

Arguments

Data governance should establish social contract basis for trading individual autonomy for collective benefits


Explanation

The framing of data governance as a social contract similar to other areas of governance was an unexpected but compelling way to think about balancing individual rights with collective benefits


Topics

Human rights | Legal and regulatory


Overall assessment

Summary

The speakers demonstrated remarkable consensus on fundamental principles: equity of access as the core challenge, data governance as prerequisite to AI governance, need for international coordination, and the human dimensions of data protection. They agreed on both the problems (power imbalances, AI hype overshadowing data governance basics) and solutions (capacity building, rights-based approaches, multi-stakeholder processes).


Consensus level

High level of consensus with complementary expertise rather than conflicting viewpoints. This strong agreement among diverse stakeholders (statisticians, civil society, international organizations) suggests a mature understanding of data governance challenges and creates a solid foundation for policy development and implementation through initiatives like the Global Digital Compact.


Differences

Different viewpoints

Causes of AI bias and representativity issues

Speakers

– Steve Macfeely
– Audience member (academic)

Arguments

Digital divide creates data divide, but representativity issues also result from choices made by AI modelers


Under-investment in AI and data systems in areas like the African continent, with communities needing data collection for AI systems to work without harm


Summary

Macfeely argues that AI bias isn’t just due to lack of data availability but deliberate choices by AI modelers, while the audience member emphasizes structural under-investment and lack of representation in datasets as the primary issue


Topics

Human rights | Development


Unexpected differences

Scope of data requirements for effective AI

Speakers

– Friederike Schuur
– Audience member (academic)

Arguments

Delivering valuable AI services doesn’t require very large datasets


Communities need data collection for AI systems to work without harm, particularly in under-invested areas


Explanation

This disagreement is unexpected because both speakers are concerned with equity and inclusion, yet they have opposing views on whether large datasets are necessary for effective AI. Schuur argues for efficiency with smaller datasets, while the academic argues that more data collection is needed for underrepresented communities


Topics

Development | Sustainable development


Overall assessment

Summary

The speakers showed remarkable consensus on fundamental principles while differing mainly on implementation approaches and emphasis. Key areas of alignment included the need for international data governance, the importance of equity and human rights, and the recognition that AI has elevated the urgency of data governance issues


Disagreement level

Low to moderate disagreement level with high strategic alignment. The disagreements were primarily about methods and emphasis rather than fundamental goals, which suggests a strong foundation for collaborative action. The main tension appears to be between different approaches to achieving equity – whether through technical efficiency, regulatory frameworks, or increased representation – rather than disagreement about the importance of equity itself



Takeaways

Key takeaways

International data governance is essential because national data sovereignty is largely illusory – most data flows to the cloud beyond national control, requiring global frameworks for safe data exchange between different digital sovereignty models


The Global Digital Compact provides a unique opportunity through its multi-stakeholder working group to create sustained, coordinated global agreements on data governance that can complement rather than replace national frameworks


Data governance must be human rights-based and child-centric, ensuring privacy, dignity, autonomy, and meaningful participation of children and young people in shaping governance frameworks


Cybersecurity and data governance are inseparable – cybersecurity acts as the enforcement mechanism for data governance policies, while governance without security cannot protect rights


Equity of access to data and its benefits is the core challenge, as data commodification creates ownership issues and power asymmetries that exclude affected communities from governance conversations


AI has elevated data governance politically but also created new pressures – data governance is a prerequisite to AI governance, not an afterthought, and poor data practices inevitably lead to problematic AI systems


There is a fundamental gap between AI adoption enthusiasm and basic data governance implementation, with organizations rushing to develop AI policies while lacking foundational responsible data frameworks


Resolutions and action items

Continue development of the UN multi-stakeholder working group on data governance established through the Global Digital Compact


Develop and implement security-by-design approaches that embed cybersecurity into data governance frameworks from the ground up


Create capacity building programs to empower organizations and citizens to participate meaningfully in data governance conversations


Establish need-first rather than AI-first approaches in organizations, with regular capacity building and framework updates


Ensure meaningful participation of children and young people in data governance processes, building on successful models like the UN World Data Forum youth delegation


Unresolved issues

How to balance individual agency over personal data with the practical reality that even experts don’t fully understand how data can be used


How to address the fundamental tension between trading convenience for data while maintaining meaningful consent and control


How to ensure equitable AI development and data collection in underrepresented communities, particularly in Africa and rural areas, without perpetuating extractive practices


How to establish fair business models and commercial parameters that ensure equitable distribution of benefits from data use


How to bridge the gap between the three digital kingdoms (individual, state, and commercial sovereignty) with their different ideological approaches to data


How to address the environmental impact of data centers and excessive data storage while maintaining AI system effectiveness


How to create enforceable international agreements when data governance frameworks are predominantly shaped by technologically advanced economies


Suggested compromises

Accept that some trade-off between individual autonomy and collective benefits is necessary, similar to other social contracts, but establish clear frameworks for consent and benefit-sharing


Recognize that delivering valuable AI services doesn’t require very large datasets, allowing for more sustainable and equitable approaches


Start with agreed-upon principles as a foundation for international data governance, then work toward more detailed implementation mechanisms


Develop contextual sensitivity in data protection that prioritizes high-risk data and high-risk actors rather than applying uniform approaches


Create frameworks that complement rather than replace national data governance systems, providing the international layer needed for cross-border data flows


Thought provoking comments

Most of our data are going straight to the cloud, and after that we have no idea where those data are going… very few countries control the data in their country… our data are essentially who we are. There’s a phrase now, we are our data.

Speaker

Steve Macfeely


Reason

This comment fundamentally challenged the notion of national data sovereignty by exposing it as potentially illusory. It reframed data from a technical resource to an extension of human identity, elevating the stakes of the governance discussion from economic to existential.


Impact

This set the foundational tone for the entire discussion, establishing that data governance isn’t just about policy but about protecting human essence. It influenced subsequent speakers to adopt a more human-centered approach, with Friederike emphasizing children’s rights and Francesca discussing real human impacts of cyberattacks.


Think about educational platforms in the classrooms that record everything that a child makes. Now we have to make sure that that is not going to slot them into a particular development path because of something that they have done at one point. Childhood really means you get a second, a third, a fourth, a fifth, and so many chances because you deserve it.

Speaker

Friederike Schuur


Reason

This comment introduced a profound temporal dimension to data governance – the idea that data persistence can violate the fundamental nature of childhood development. It highlighted how AI systems could inadvertently create permanent consequences from temporary childhood behaviors.


Impact

This shifted the conversation from abstract principles to concrete, emotionally resonant scenarios. It influenced the discussion toward considering vulnerable populations and introduced the concept that data governance must account for human development over time, not just static privacy rights.


Data governance without cybersecurity is like a constitution without a judiciary… it might outline rights and responsibilities, but it cannot enforce or protect them.

Speaker

Francesca Bosco


Reason

This analogy brilliantly illustrated the interdependence of governance frameworks and enforcement mechanisms. It moved beyond viewing cybersecurity as a technical add-on to positioning it as fundamental to the entire governance structure.


Impact

This comment integrated cybersecurity into the core governance discussion rather than treating it as a separate technical concern. It influenced the conversation to consider implementation and enforcement as integral to governance design, not afterthoughts.


We have to thank AI that we’re having this conversation. Data governance has been important for a long time, but nobody cared less about it until artificial intelligence surfaced… that’s an unfortunate inconvenient truth.

Speaker

Steve Macfeely


Reason

This meta-observation about the discussion itself was remarkably candid, acknowledging that data governance only gained political traction through AI hype rather than its inherent importance. It revealed the political dynamics driving policy attention.


Impact

This comment provided crucial context for understanding why data governance is suddenly urgent and influenced the discussion toward recognizing both the opportunity and challenge of riding AI’s coattails to achieve better data governance.


Trade convenience for data – I thought this is a very important point because it relates to our behavior… employees are using their own account of ChatGPT without an institutional and corporate account to upload corporate documents… without being aware that this behavior is very risky.

Speaker

Audience member from Brazil


Reason

This comment grounded the abstract governance discussion in immediate, relatable behavior that everyone in the room likely recognized. It highlighted the gap between governance frameworks and actual human behavior driven by convenience.


Impact

This shifted the conversation from high-level policy to practical implementation challenges. It influenced subsequent responses to acknowledge that governance must account for human psychology and convenience-seeking behavior, not just create perfect frameworks.


What we want… is an environment like we have in every other area, that’s the basis of having a functioning, choosing to live together in a society… you trade off certain individual autonomy against the benefits that you get… data is no different… what is the basis of that social contract essentially.

Speaker

Claire Melamed


Reason

This comment reframed the entire data governance challenge as a fundamental question of social contract theory, connecting it to centuries of political philosophy about balancing individual rights with collective benefits.


Impact

This provided a unifying framework for understanding all the various tensions discussed – between convenience and privacy, individual and collective benefits, national and international governance. It elevated the discussion from technical policy to fundamental questions of how societies organize themselves.


Overall assessment

These key comments transformed what could have been a technical policy discussion into a profound exploration of human identity, social contracts, and the fundamental challenges of governing in a digital age. The most impactful comments consistently brought abstract concepts down to human-scale consequences – from children’s development being constrained by educational data to employees unconsciously trading corporate security for convenience. The discussion evolved from initial framings of technical governance challenges to deeper questions about how societies balance individual autonomy with collective benefits, how we protect human development and dignity in data systems, and how we create enforceable frameworks rather than just aspirational principles. The candid acknowledgment that data governance only gained attention through AI hype added crucial political realism to the conversation, while the social contract framing provided a unifying lens for understanding the various tensions and trade-offs discussed throughout.


Follow-up questions

How do we exchange data safely between the three digital kingdoms (individual sovereignty, state sovereignty, and commercial sovereignty) given their different ideologies?

Speaker

Steve Macfeely


Explanation

This addresses a fundamental challenge in international data governance where different jurisdictions have conflicting approaches to data control and exchange


How can we set up business models and rules around data ownership to ensure benefits are spread in an equitable way?

Speaker

Claire Melamed


Explanation

This explores the economic dimensions of data governance that are often overlooked in discussions focused primarily on rights and technical aspects


How can we ensure that educational platforms recording children’s data don’t slot them into particular development paths based on early mistakes?

Speaker

Friederike Schuur


Explanation

This addresses the long-term implications of data collection on children’s development and the right to make mistakes without permanent consequences


How might agentic AI affect the socio-affective development of children as their environment keeps reacting to them?

Speaker

Friederike Schuur


Explanation

This explores the psychological and developmental impacts of AI systems that continuously respond to and learn from children’s behavior


How do we address the asymmetries of power, agency, and protection in data governance when frameworks are disproportionately shaped by actors in technologically advanced economies?

Speaker

Francesca Bosco


Explanation

This highlights the need to include affected communities in governance conversations and address global inequities in data governance influence


How can international law and data governance evolve to keep pace with the changing threat landscape?

Speaker

Francesca Bosco


Explanation

This addresses the gap between rapidly evolving cybersecurity threats and the slower pace of legal and policy development


What is the role of the consumer/end-user in data governance, and how much agency can we give them regarding their own data when even experts don’t know how data can be used?

Speaker

Audience member from Office of the High Commissioner for Human Rights


Explanation

This explores the balance between individual agency and institutional governance in data protection


How do we address the behavior of trading convenience for data, particularly in organizational settings where employees use personal AI accounts for work without understanding the risks?

Speaker

Audience member from Brazil


Explanation

This addresses practical challenges in implementing responsible AI use within organizations and the need for better awareness and policies


How do we square the circle between wanting global AI adoption while needing to collect data from underrepresented communities to make AI systems work without harming them?

Speaker

Academic audience member


Explanation

This addresses the tension between inclusive AI development and the data collection requirements that may exploit already marginalized communities


What are the UN’s thoughts on data donation as a solution to both privacy issues and environmental problems caused by data centers?

Speaker

Assistant professor from KAIST


Explanation

This explores alternative models for data sharing that could address multiple challenges simultaneously


How can we develop AI services that deliver value without requiring very large datasets, particularly to serve global populations more equitably and sustainably?

Speaker

Friederike Schuur


Explanation

This explores more efficient and equitable approaches to AI development that don’t rely on massive data collection


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.