From Technical Safety to Societal Impact: Rethinking AI Governance

20 Feb 2026 13:00h - 14:00h


Session at a glance

Summary

This discussion at the AI Impact Summit focused on moving beyond purely technical approaches to AI safety toward more comprehensive, multidisciplinary governance frameworks that address real-world societal impacts. The session was co-hosted by Virginia Dignum and Jeanna Matthews from ACM’s Technology Policy Council, featuring two rounds of panelists from diverse backgrounds including policymakers, researchers, and activists.


The central argument presented was that AI systems fail not just due to technical flaws, but because they are embedded in institutional, economic, and political systems that shape their impact on communities. Dr. Lourino Chemane from Mozambique emphasized that AI governance must prioritize human and social protection, requiring input from law, social sciences, education, and affected communities. Dame Wendy Hall criticized the male-dominated nature of AI leadership, arguing that without diversity in decision-making, ethical AI development is impossible.


Several panelists highlighted the extractive nature of current AI development, with concerns about data centers depleting local resources and AI systems excluding marginalized communities whose languages aren’t represented in major models. The discussion revealed tensions between technical innovation and social responsibility, with some arguing that technology development should remain unrestricted while input and output phases require regulation.


The conversation emphasized the need for “AI metrology” – a science of measuring AI’s social impact – and called for more precise accountability regarding trade-offs in AI systems. Panelists stressed that historical patterns suggest benefits won’t automatically reach everyone without deliberate intervention. The session concluded with a call for active insistence on inclusive AI development rather than passive hope for equitable outcomes.


Key points

Major Discussion Points:

Moving Beyond Technical Safety to Holistic AI Governance: The central theme emphasized that AI safety cannot be addressed through technical measures alone (model alignment, robustness, benchmarks) but must include institutional, economic, political, and social contexts where AI systems are deployed and cause real-world impact.


Lack of Diversity and Inclusion in AI Decision-Making: Multiple panelists highlighted the concerning absence of women and marginalized communities in high-level AI discussions and decision-making processes, with Dame Wendy Hall pointedly noting that “50% of the population weren’t included yesterday, the women” and arguing that “if it’s not diverse, it’s not ethical.”


Need for Multidisciplinary Approaches and “AI Metrology”: Speakers called for incorporating perspectives from law, social sciences, education, ethics, and affected communities into AI governance, with proposals for developing a “science of AI” or “AI metrology” to systematically study socio-technical systems.


Real-World Extractive and Exploitative Impacts: Discussion of concrete harms including data center environmental impacts, exclusion of tribal languages from AI systems, exploitation of data annotation workers, and the widening socioeconomic divide, particularly in countries like India.


The Imperative to “Insist” Rather Than Hope for Change: The conversation concluded with a call to action, emphasizing that positive outcomes won’t happen automatically and require active political engagement, with the recognition that only when 51% of political will is mobilized do meaningful regulations emerge.


Overall Purpose:

The discussion aimed to shift the discourse around AI safety from a narrow technical focus to a broader, more inclusive approach that considers social, institutional, and political contexts. The session sought to identify concrete ways to make AI governance more multidisciplinary and accountable to real-world impacts on diverse communities.


Overall Tone:

The discussion began with a formal, academic tone but became increasingly critical and urgent throughout. Speakers expressed frustration with the status quo, particularly the lack of meaningful diversity in AI leadership and the gap between inclusive rhetoric and exclusive practice. The tone shifted from diplomatic policy discussion to more pointed criticism of power structures, culminating in direct calls for political action and resistance to current approaches. The overall atmosphere was one of constructive dissatisfaction with existing frameworks and a demand for more substantive change.


Speakers

Speakers from the provided list:


Virginia Dignum – Co-host of the session, Chair of the Technology Policy Council of ACM


Lourino Chemane – Chairman of the board of the National Institute of Information and Communication Technology in Mozambique, leading the national strategy on AI for Mozambique


Dame Wendy Hall – Regius Professor of Computer Science, Associate Vice President and Director of the Web Science Institute at the University of Southampton, former member of the United Nations high-level expert advisory body


Yannis Ioannidis – Current president of ACM, Professor at the University of Athens


Sara Hooker – Co-founder and president of Adaption Labs, formerly with Cohere and other organizations


Jibu Elias – Researcher and activist who examines how technology and innovation institutions receive knowledge, labor, and legitimacy


Neha Kumar – Associate professor at Georgia Tech in the School of Interactive Computing, President of the Special Interest Group on Computer Human Interaction


Merve Hickok – President and policy director for Center for AI and Digital Policy


Rasmus Andersen – Works with the Tony Blair Institute for Global Change, advises leaders around the world at the prime ministerial or presidential level on navigating AI


Tom Romanoff – Director of policy for ACM, manages policy committees


Jeanna Matthews – Co-host of the second session (role/title not specified in detail)


Participant – Audience member asking questions


Speaker 2 – Unidentified speaker


Additional speakers:


Gina Matthews – Named by Virginia Dignum as co-host; apparently a transcription variant of Jeanna Matthews, who is listed above


Full session report

This comprehensive discussion at the AI Impact Summit represented a critical examination of how AI safety discourse must evolve beyond purely technical considerations to address the complex socio-political realities of AI deployment. Co-hosted by Virginia Dignum and Jeanna Matthews from ACM’s Technology Policy Council, the session brought together diverse voices from policymakers, researchers, industry leaders, and activists across two panel rounds to challenge fundamental assumptions about AI governance.


Reframing AI Safety: From Technical Robustness to Systemic Impact

The session opened with Virginia Dignum’s foundational argument that fundamentally reframed the AI safety debate. Rather than viewing AI failures as primarily technical problems requiring algorithmic solutions, she argued that “AI systems do not fail simply because of flaws in the model architecture or in the data or in the alignment technique. They fail or they produce harm because they are embedded in institutional, economic and political systems.” This perspective shifted the entire discussion from model alignment and benchmark performance towards examining deployment contexts, governance capacity, and the lived realities of communities affected by AI systems.


Dr Lourino Chemane, leading Mozambique’s national AI strategy development, provided concrete policy perspectives on this holistic approach. He emphasised that AI governance must prioritise “the protection of people, not only systems,” requiring continuous human oversight and institutional accountability. Mozambique’s approach integrates ethical and social assessment whilst addressing infrastructure sovereignty through regulations for data centres and cloud computing, particularly focusing on protecting vulnerable populations including women, children, and youth.


The Diversity Crisis and Summit Critique

Dame Wendy Hall delivered perhaps the most provocative intervention of the session, directly challenging both the summit’s format and claims of inclusivity. She opened with a frank critique: “I have a love-hate relationship with this summit. It’s too big. There’s too much going on, and not enough actual real debate about the core.” Her pointed observation that “50% of the population weren’t included yesterday, the women” highlighted the stark contradiction between rhetoric about “AI for all” and the reality of male-dominated leadership in AI governance. Hall’s assertion that “if it’s not diverse it’s not ethical” connected representation directly to ethical outcomes.


This critique resonated throughout both panel rounds, with multiple speakers building upon themes of exclusion. Neha Kumar emphasised the need to examine “who is making decisions, who is being benefited, and who is part of the design process,” advocating for learning from feminist and women’s studies to ask fundamental questions about power and participation.


Jibu Elias provided stark examples from the Indian context, highlighting how tribal populations remain excluded from AI benefits because their languages aren’t represented in major AI systems. He also raised concerns about “AI psychosis” among elderly users and described the extractive nature of AI infrastructure development, including how data centres are built through manipulation of local communities and exploitation of natural resources, particularly groundwater in water-scarce regions.


Technical Innovation and Social Responsibility

The discussion revealed nuanced perspectives on balancing technical development with social governance. Yannis Ioannidis, ACM’s president, proposed distinguishing between “safety of AI and safety of AI use,” arguing that “the technology, there’s no issue, there’s no social issue in the safety of the technology itself.” He advocated for allowing technical innovation to proceed whilst regulating both input data selection and output applications.


Sara Hooker, co-founder and president of Adaption Labs, brought industry experience to questions of trade-offs and transparency. She challenged the notion that AI systems can be universally safe, arguing that “there are almost certainly trade-offs in place” and calling for explicit acknowledgement of what systems don’t cover or haven’t tested for. Her practical proposal focused on requiring model providers to report language coverage, safety parameters, and limitations rather than making universal claims about safety.


The Call for AI Measurement and Evidence

Dame Wendy Hall introduced the concept of systematic measurement for what she termed “social machines” – socio-technical systems created by the interaction of technology and society. Drawing parallels to the UK’s National Physical Laboratory and AI Security Institute, she proposed developing systematic approaches for studying AI’s social impact, noting that the network is called “AI measurement” rather than safety due to political preferences.


Hall cited Australia’s social media ban experiment as an example of the need to study unintended consequences systematically. She mentioned plans to launch a journal in this field, representing a step towards institutionalising broader approaches to AI safety research.


Government’s Integrating Role

Despite representing different sectors, several panellists converged on government’s crucial role in AI governance. Rasmus Andersen, advising government leaders through the Tony Blair Institute, argued that “the only force in the world that can really take all those considerations together and think about the partial perspectives that technical people have, that civil society has, that industry has” is government, however imperfect these institutions may be.


Merve Hickok, representing the Center for AI and Digital Policy, contextualised these concerns within broader technological development patterns, arguing that “history does not show us that it’s going to be cool” and emphasising that technological benefits don’t automatically reach everyone without deliberate intervention.


Political Mobilisation and Implementation

Tom Romanoff conducted an interactive exercise with the audience that powerfully demonstrated the gap between abstract support for AI safety and concrete willingness to implement enforcement mechanisms. Through a series of hand-raising questions, he illustrated his “49-51% rule” analysis – that meaningful regulatory action only occurs when more than half of political stakeholders are convinced of the need for change.


This connected to broader questions about accountability raised by Jeanna Matthews, who provocatively asked whether AI safety efforts are serious without “recovery, retribution, remuneration” and consequences for harmful AI deployment.


Practical Pathways Forward

Despite the critical tone of much discussion, panellists offered concrete proposals:


Transparency Requirements: Requiring model providers to report language coverage, safety parameters, and testing limitations as an achievable first step towards accountability.


Multidisciplinary Integration: Genuine collaboration across disciplines, with examples like Mozambique’s strategy integrating law, social sciences, education, and ethics into national AI policy.


Systematic Evidence Collection: Moving beyond reactive responses towards proactive understanding of AI’s societal effects through longitudinal studies and systematic measurement.


Political Engagement: Recognition that effective AI safety requires active citizen engagement and political advocacy rather than relying solely on industry self-regulation or technical solutions.


Unresolved Tensions

The discussion highlighted fundamental tensions that remain unresolved. The balance between innovation freedom and social responsibility continues to generate debate, with questions about how to address extractive development patterns whilst maintaining technological progress. The practical mechanisms for ensuring meaningful participation of marginalised communities in AI governance remain unclear, particularly given the technical complexity and global scale of AI deployment.


The shift from “existential risk” focus at previous summits like Bletchley to more practical governance concerns was noted, though speakers agreed that current approaches remain inadequate for addressing the breadth of challenges identified.


Conclusion: From Hope to Insistence

The session concluded with Jeanna Matthews’ observation that “we are not going to get happiness for all and wellness for all unless we insist.” This shift from passive hope to active insistence encapsulated the session’s trajectory from identifying problems towards demanding action.


The discussion demonstrated remarkable consensus across diverse stakeholder groups on fundamental principles: that AI safety must encompass social and institutional considerations, that current governance approaches are inadequate, and that meaningful change requires active political engagement. The panellists’ commitment to developing a collaborative report suggests recognition that these conversations must continue beyond summit settings.


Ultimately, the session succeeded in reframing AI safety from a narrow technical concern to a broad question of social justice, democratic participation, and equitable development. Whether this reframing translates into meaningful policy change will depend on the sustained political engagement and citizen advocacy that panellists called for in their concluding remarks.


Session transcript

Virginia Dignum

Thank you. Thank you. If you just want to stand here in front, they want to take a picture of all of us. Yes, you have to sit there. Okay. Good morning, everybody. Thank you very much for being here. My name is Virginia Dignum. I will be co-hosting this session with my colleague Jeanna Matthews there. We both are the chairs of the Technology Policy Council of ACM. And today we are here to discuss how to move beyond technical safety, looking at aspects of multidisciplinarity, governance, and real-world

impact. Across global AI discussions, safety is too often framed in technical terms: model alignment, red teaming, benchmark performance, frontier containment, and so on. These tools matter, and their further development is crucial. But they don't address the core question, or at least one of the core questions: what determines whether AI systems produce human and societal value or harm in real deployment contexts? That's what we are going to discuss in this session. AI systems, as we all know, do not operate in isolation. Their impact is shaped by deployment context, by governance capacity, by incentive structures, and by the lived reality of the communities that use and are impacted by these systems. As such, AI systems do not fail simply because of flaws in the model architecture or in the data or in the alignment technique.

They fail, or they produce harm, because they are embedded in institutional, economic and political systems. So we will have an open discussion with the panelists. There will be two rounds of panelists. And I would like to start by inviting Dr. Lourino Chemane, who is the chairman of the board of the National Institute of Information and Communication Technology in Mozambique, where he is at this moment leading the national strategy on AI for Mozambique. Please.

Lourino Chemane

Thank you. I would like to start by thanking you for the invitation to join this panel and also by congratulating the government of India for hosting this AI Impact Summit. Going directly to the topic of this panel: as part of our exercise of crafting the national AI strategy, we looked at this topic of safety. For us, working from the policy formulation point of view, safety is the protection of people, not only systems. So AI governance must prioritize human, social, and institutional impact, going beyond technical metrics such as robustness, accuracy, or algorithm alignment. We also look at it from the perspective of multidisciplinary governance, grounded in the real-world context of AI use.

For us, effective AI policies require input from law, social sciences, education, labor, ethics, and affected communities: the inclusion of the people, and how they will feel safe in using these technologies. We also look at continuous human oversight and institutional accountability. People must know what's in the black box: how these systems are designed, whether they are functional or not, and whether the decisions made by the algorithms that affect their lives have taken their concerns into consideration in the design phase. We also look at the protection of children, young people and women. From the studies that were conducted, women, children and youth are the first victims of the bad application of AI.

We also look at the ethical and social assessment. Mozambique is one of the pilot countries adopting the UNESCO principles of ethics in adopting AI, and we are looking also at the dimensions defined by UNESCO in this perspective. Sharing what we are doing in the country now: in Mozambique we are drafting, as I mentioned, our national AI strategy with the support of UNESCO, and I thank Professor Virginia, who is the leading expert in our team, as well as the other experts from UNESCO for their contributions. We are also drafting our data policy and its implementation strategy, because we believe that data is a fundamental element for AI systems. We are reviewing our national cybersecurity strategy; the data that we are collecting now show that there are already cybersecurity-related problems arising from the use of AI models.

We just adopted in Mozambique the regulation for the construction and operation of data centers and also the regulation for cloud computing, because we believe that infrastructure is a fundamental and key element for the sovereignty of our country when it comes to safety, but also, from the policy point of view, for the democratic system and all other dimensions. We also look at it from the digital government point of view. So we are reviewing our interoperability framework related to data, to make sure that in adopting AI in public administration we address our main objective of improving efficiency and efficacy in delivering public services. For us, these are the elements that will be contained in the overall digital transformation strategy that, if everything goes as planned, will be approved by our government during the course of this year.

We are learning a lot at this summit and gathering important elements that will help us to uplift and improve our work in crafting these documents. Thank you for the opportunity to be part of this session.

Virginia Dignum

Thank you very much, Dr. Chemane. I understand that you have to move to another session, so feel free to leave whenever you need to go. We understand the complexities of the program. Now I would like to ask Dame Wendy Hall, Regius Professor of Computer Science, Associate Vice President and Director of the Web Science Institute at the University of Southampton, and also a former member of the United Nations high-level expert advisory body, to give us some provocative statements. They will be. Good. Provoke us.

Dame Wendy Hall

I'm fed up with just toeing the party line. So I will… I have to first apologize, because I have to leave at 11. I'm supposed to be on three panels at the moment, and I also have a lunch date at midday in town. So, that's my morning. I want to say, I think, three things. One is, what's really… Four. If you know Monty Python, nobody expected the Spanish Inquisition. Anyway, so first of all, it's been wonderful to be in India. I love India, and I have a love-hate relationship with this summit. It's too big. There's too much going on, and not enough actual real debate about the core. There's going to be some sort of platitude statement come out today.

Yeah. And I've just come back from the UN. Our advisory board and the new scientific panel get together; they've got a panel going on. At the moment, the dialogue that's starting in the AI for Good conference in Geneva in July, we hope, will be a real dialogue. I don't know what form it's going to take yet. But we have to knock the world leaders' heads together. Now, I'm going to say something which also really struck me at this conference. Thank you. Is that working? Yes? I love it: you know, in India, AI means all-inclusive. But 50% of the population weren't included yesterday, the women. Right? There were no women.

The CEOs of every country, every company: there was one lady CEO, from Accenture, I think. There were a couple of ladies on the panels at the end. It was all men, the alpha males of this world. Right? The world leaders that spoke, the CEOs that spoke: this world is dominated by men. And my mantra has always been, in terms of the lack of women (and some other diversity points as well, but mainly women): if it's not diverse, it's not ethical. People don't really understand what that means. What it means is, if you haven't got a diversity of people discussing a problem, how are you going to actually sort out the biases? If you haven't got women at the top level making these decisions, trying to set up the guidelines… I mean, your comment was, yeah, we want to make sure of the safety of women and children. Well, let's include the women and children in the discussions. My third point is that we are watching… I mean, I'm very into watching these experiments; I did it all through the web. And we need to learn how to monitor what's going on, so that we can say what is the right direction to go in the future.

It means collecting data and evidence and doing longitudinal studies, and it takes time. But take, for example, what Australia is doing with social media. We've heard at this conference several other… for teenagers. I mean, didn't Macron… Who was there yesterday? Macron said under 15 in France. Our Prime Minister, who constantly changes his mind, so I don't suppose it will happen, but he's talked… Sorry, that's a joke for any Brits in the audience, but there aren't many. He's saying 16 in the UK; someone out of Spain is saying 16. There will be unintended consequences of that. Making a ban like that without thinking about the nuances of… Well, what happens if… Well, first of all, the kids are ingenious enough to get round it.

And then they're back on the dark side of things again, even worse than before, because they're doing it in secret. What happens when they start to use social media? How do we train them to do it properly? That's my worry about a ban like that. I mean, it's very brave of Australia to do it first, and we can watch. They're saying that in six months' time they'll have some evidence of how many under-16s are still on social media, but the behavioural issues take much, much longer to explore than that. And we have to get over this fact that, whilst the technology is going on apace, because the alpha males are driving it, just worrying about technical safety maybe, we can't say, well, it's all going too fast, we can't do anything. We have to study this stuff. And I think this is what I want the ACM to do. In my keynote talk, on whatever day it was, Wednesday, on the main stage, I talked about two things happening in the UK. One is that our National Physical Laboratory, which is the sort of equivalent of NIST in America, has just launched, with government backing, a centre for AI measurement. The other is the AI Security Institute in the UK and the other security institutes that are growing up around the world. That network is now being called, largely driven by the US, because Trump doesn't want to call it anything to do with safety (I can't believe I just said that; anyway, he was the man that drank bleach in Covid), the network for AI measurement. And I think this is a breakthrough. I mean, I love AI for science, but we need to think about the science of AI. And that's socio-technical, and I'm starting to call these things social machines, as we did on the web. That came from Tim Berners-Lee: the idea of technology and society coming together to create artefact systems that wouldn't have existed if they hadn't come together. And the technology doesn't understand society at the moment; most of society doesn't understand this technology. But together those two systems will create socio-technical systems, or social machines, and I want to build a science of studying social machines. And it will be called AI measurement, or AI metrology. I love that word; I've learnt to say it. It's a cool script. Everything's Greek to us. I love the yogurt; don't you love Greek yogurt? So sorry, I'm finishing there: AI metrology. And we're going to launch… I'm chair of the ACM Publications Committee, or co-chair; he's president. We're going to launch a journal, the first journal in this area, and it will be associated with pulling together work and sharing the data that people are collecting…

Virginia Dignum

Thank you, Wendy, very important points. And again, if you understand that you have to leave, you just leave; we understand that. So, for the rest of us on the panel: we started the session talking about how AI safety needs to be more than just technical robustness. I love your idea of the social machines, of this AI metrology. Yes, it is… yeah, with me only sometimes, probably, but I did my best. Anyway, I would like to bring you into the discussion. Both Dr. Chemane and Wendy Hall gave us examples of issues that we really need to include in going beyond this idea of technical robustness. Even if systems perform exactly as they have been designed, and safely designed, they will still probably be causing harm, which is not just a technical failure but also a failure of inclusion, a failure of imagination. So I would like to get your opinions on where you think we can start changing the discourse from a purely technical approach to a broader, inclusive, societal and institutional approach to the discussion on AI safety, on AI measurement, and so on.

And I would like to start with this question, which is for all of you, beginning with Professor Yannis Ioannidis, who is the current president of ACM and also a professor at the University of Athens.

Yannis Ioannidis

Thank you very much for having me on this panel. I'm a technical person, very sociable, but technical; that's my expertise. So I want to separate the issue of safety of AI and talk about safety of AI use. For me, in my technical mind, there is the AI technology, which is the algorithms, the models, and so on, as distinct from the use of this technology, the use of the software that is AI. And we are using this software both at the beginning, with the input that we give it, and at the output, when we say: I have an artificial intelligence, I have an agent, and so on, to do this or that or the other.

The technology: there's no issue, there's no social issue in the safety of the technology itself. It's like the car, whether it's working or not. There is no issue of safety. And innovation in that regard has to be let free, like the human mind and all the innovators, to progress. Robustness and not having bugs are an issue there, but it's a day in the park for us software engineers and computing scientists. The use is the important thing, and sometimes the key thing that people are talking about is the end result, the model. We put it in the judge's hands, we put it in the doctor's hands, we put it in the youth's hands in terms of social media, and so on.

This we have to work on, measure, and potentially regulate; and in any case all sciences, like it was said before, especially the humanities, philosophers, ethicists, legal people, cognitive scientists and so on, have to come together to address this. But there is also the input side, which is again humans doing it. Humans are determining the first parameters from which the systems start to be trained. The data that we feed it, it's again humans that are choosing it. And as much as we have to regulate, or measure, or think about the end result (the model, the humanoid or non-humanoid robot that is telling us to do this or that, or the agent), at the same level of importance we have to think about what to do with what comes in. Humans are using it; different humans are feeding it. And I think the safety must start from there. We should not let the input side run free; even at that level we have to have the different sciences, the different technologies, civic society represented there. Having an AI use whatever data we happen to have, or whatever data generates billion-dollar industries: that's wrong. I mean, there is a right and a wrong here, and we have to be on the right side of that. So, as a quick wrap-up, so that others can express their opinion: technology should be running free, but both input and output and result should be in the…

Virginia Dignum

Thank you, Wendy. Thank you, Dr. Chemane. See you soon. Okay, let's continue the discussion. Sara, Sara Hooker, you are the co-founder and president, I believe, of Adaption Labs, a very young company. You have been before with Cohere and with other developing organizations. What do you think about this balance, or tension, between technical robustness, the technical safety measures, and the need to understand more of the environment, the context, the social context in which systems are built? And how can technologists, those who develop systems like yourself, build them while being aware of this type of tension, and of the insertion of these systems into very concrete, real-world domains?

Sara Hooker

Typically my work has been in how you build extremely large systems at the frontier of what's possible. I think it's interesting; I'll share a few things. One, I think what Wendy was getting at is that one of the biggest signals of whether you actually care about safety is what the forms of prestige and power look like. That was mainly her comment. She's saying, you know, we are at the pinnacle of where we all gather to discuss these things, and the way resources have actually been allocated doesn't show that people are serious, which I think is fair. You have to look to the surrounding environment to understand whether people are serious about safety or whether it's just a panel title, candidly.

And maybe today it’s just a panel title. I think in general my philosophy about these forums is that you have to look six months out to actually get a signal of what has happened. That doesn’t mean that they’re not critical. I frankly don’t know if the expectation should be anymore that we have universal rules for AI. It’s not clear to me that that should be the outcome of these forums. So I think decidedly, if you’re going in with that expectation, you’re going to be very disappointed because I don’t think that’s going to happen at this forum or at the next one. But I do think it’s worth asking, well, where are we going as a conversation about safety and the precision of it?

Because for me, that's the most interesting part. Time is very valuable; it's our most precious resource. So for me, the more precise the conversation, the better. If I look at the overarching arc from Bletchley to now, we've had four summits, and we'll have the fifth. It's worth asking: has it become more precise? Candidly, and thank goodness, yes. I still remember Bletchley, where it was all about existential risk six months out, and there were protests and hunger strikes from people who thought machines were taking over, but no precision to the conversation, no accountability for where those timelines were coming from. And then I look at now, and we have a very messy conversation about safety.

Certainly everyone has a different view. It’s still a blanket term, but at least it’s more accountable to what is the real -world impact of these conversations and the technology that we build. Because when I started my career as a computer scientist, we were just in research conferences. I mean, I think the fact that ACM is so well represented on this panel speaks to the origins of, like, you know, a very narrow group of people who work in a very academic community, and now our technology is used everywhere. So it’s a much more important conversation to have. So, one, I think we have gotten more precise, but it’s still very murky what people mean. Here’s the other thing I’ll say.

I think there's often a desire in these conversations, where the technical meets the ecosystem, to say, oh, well, safety has to be everything to everyone. And frankly, that's not a precise conversation either, because the truth is there are trade-offs. When you build systems, there are trade-offs. Too often, when these conversations enter this arena, there's a misconception about the sheer difficulty of actually imposing constraints on these systems. So the other thing I'll say is that the biggest thing that has to come out is an understanding of what you give up, because you do give up something. The big thing for me, you know, I work a lot on language, so my big ask is simply that model providers report what languages they cover.

Report, essentially, what the safety parameters do and do not cover, and what they haven't tested for. This sounds like a simple ask, but I think it's actually quite precise, and what it establishes is: what have we given up? What are you confident about? What have we given up? There are many versions of this, but too often in conversations like this, and this is my ask, we end up just circling around, saying we want safety, we need everyone's perspectives in the model. The truth is that's also a naive statement, because it is almost certainly the case that there will be some trade-off. Someone will not be represented.

Someone will be represented. What I think these forums are actually very useful for, having us all at the same conference, is galvanizing ecosystems where you can make your own constraints and trade-offs, but also having a discussion about, you know, the models being shipped to serve billions of people. We have these static, monolithic models that are served the same way: what are the trade-offs they have made? As someone who has built these models, there are almost certainly trade-offs in place. So we need to understand the state of the world as well as where we want to go. And it's okay if there are clearly things left out.

It's more that they have to be stated out loud. That's my wish list, yeah. So maybe I'll leave it there and pass it on. I think you were next; go for it.

Virginia Dignum

Thank you very much. Thank you, Sara. And indeed, next: Jibu Elias, you are a researcher, but you are also an activist who examines how technology and innovation institutions receive knowledge, labor, and legitimacy. So help us make sense of what AI safety means for society; that seems to be what you do.

Jibu Elias

I was more interested in the real-world consequences of the panel title, but wonderful conversations from Sara and Wendy and everyone here. When I look back at how technology has shaped my understanding of the world, I feel like an idiot, because I grew up watching animated shows like The Jetsons and all these futuristic shows, believing that the more advanced technology gets, the better our world will be. I grew up as this idealist kid who thought that when AI came there would be no inequality; I was an AI kid back then. And nowadays, when I look at these things... I mean, there has been phenomenal work done by computer scientists like the people present here on the panel, Sara and everyone, right?

On the technical aspects of things. But more and more, we are seeing AI become political; it's becoming a larger sociopolitical construct in general. What concerns me more is its exploitative and extractive nature. Sara mentioned Bletchley, where the talk was all about existential risk, but I think we are now at a point where we all agree that the accumulated risks have become more worrying. At the same time, I've been tracking people who've been using these tools, people who've been impacted by them, and those who were excluded from the benefits of this kind of technology, right? If you go around states like Telangana, Chhattisgarh, or Jharkhand, there are big groups of tribal populations.

You know, their languages are not represented in Gemini or anything, right? And I know everybody wants to impose Hindi on all of us, but sorry, Hindi is still not the national language of India. But what about them? How do they get access? More and more, what I'm seeing is the socioeconomic divide becoming wider, especially in countries like India. And, you know, it's fascinating that we've been celebrating the data centers we've been building. I had firsthand experience of a much-celebrated data center in Telangana, in a place called Mekaguda. I don't want to mention the company associated with it, but how it was built, how the people were manipulated, how the groundwater is being extracted, right?

In a place where there is water scarcity, you know. And when I asked the company, hey, this happened, and I have a close association with that organization, they said, we interacted with the community leaders. So what I did was reach out to the sarpanches, the village heads. They had no idea what the company meant. So essentially, I mean, in India we know what that means: reaching out to community leaders, bribing the politicians. But that's the larger thing I'm worried about. And the people who are using this technology, you know, now some people are talking about terms like AI psychosis. I don't know how valid those terms are. But it's fascinating: me and my executive director at the Mozilla Foundation have been chatting about how elderly people are using these models.

It's very fascinating and worrying at the same time. You know, we often put our attention on younger folks. It's funny at the same time, but still. So my larger question is: where are we headed going forward? Yesterday the gentleman from the US was telling us that everyone should use a US AI stack; I think people in Denmark will have a good idea of how the US treats its strategic partners. Yeah. So my larger question is: where are we headed, right? Are we still going to have this extractive nature, you know, with the data annotation workers who are building these models? I will stop here, looking forward to the next level of conversation.

Virginia Dignum

Unfortunately, we are now at our second round of the panel, and, like everything we all complain about, it will happen: we all say our thing, and the dialogue will need to happen outside in the corridor. We really hope, after this meeting, to try to combine all that has been said into some kind of ask or report. But anyway, now we are moving to the second part of the panel. We were all going to be on the same panel, but there weren't enough chairs, so we are splitting into two. Patience with us. Jeanna, you are my proxy. Okay.

Jeanna Matthews

Okay, everyone. Thank you so much for being here for the second part of our session, and thank you to all of the panelists who are joining me here on stage. I think we're going to do something a little different than the first panel did: I would like everyone to just quickly introduce themselves. Neha, would you start?

Neha Kumar

Hi everyone, I am Neha Kumar, and I'm an associate professor at Georgia Tech in the School of Interactive Computing. I'm also president of the Special Interest Group on Computer-Human Interaction, so this summit is really a coming together of many different worlds for me. I actually grew up in Delhi, so it's been about coming home, but also a lot of these conversations have been coming to me for a long time, and I'm really excited to be here. A lot of the conversations we've been having are very active right now in the discipline of human-computer interaction, HCI, which some of you might know, and it's great to see how central human centricity is to what we've been discussing.

And third, something much closer to my own area of study is looking at HCI and technology use in the context of social impact. This has been named in many different ways over the years: social good, social impact, societal impact, public interest, whatever you want to call it. But really, it's an area we've been studying for many, many years, before AI was on the scene. So I would say that we're looking at multidisciplinarity in this panel, and to me there's a lot of learning that could be happening from the many disciplines that have been actively looking at some of these questions. Agreed, the platform we're looking at is different.

It’s unprecedented in many ways. At the same time, there’s a lot that we have to learn from as well. So I’ll stop there.

Virginia Dignum

Thank you, Neha. Merve Hickok?

Merve Hickok

I'm the president and policy director of the Center for AI and Digital Policy. We are an independent think tank working globally at the intersection of AI policy and human rights, democratic values, and the rule of law. So I would like to take a more expansive view of safety and governance at large; more to come on that. Thank you.

Virginia Dignum

Rasmus?

Rasmus Andersen

Yes, I think this works now. My name is Rasmus Andersen. I work with the Tony Blair Institute, where I advise leaders around the world, at the prime ministerial or presidential level but also at the line-minister level, on navigating AI: what it means for them, and how they can both deliver results to citizens with AI and avoid harm to their citizens. So the question of safety comes up a lot, but it's usually not at the top of leaders' minds, and for me it's really about helping them realize their long-term, informed self-interest: what is the world likely to look like in 2030, in 2035?

How can you best make sure that your country, your constituents, and your citizens are in the best possible position as the world changes very rapidly? Thank you.

Virginia Dignum

Tom?

Tom Romanoff

Is this one working? Great. I am not James; I am Tom Romanoff. I am the director of policy for ACM, where I help manage the policy committees; Jeanna and Virginia chair our global committee. We also have regional committees across the world, including the United States, Europe, Asia, India, Africa, and the Asia-Pacific region. My job at ACM is to help the computer science folks translate their recommendations, on harms or issues they see in the technology, to policymakers, and to engage those policymakers on behalf of ACM. Before that, I was at a think tank in Washington, D.C., where I worked with Congress, and I have been working in tech policy for many years now.

Jeanna Matthews

Okay. So in the interest of time, I'm going to get right to a very provocative question. We've been hearing wellness for all, happiness for all, in the presence of fairly extractive and exploitative potential. Does history tell us that it just works out, that it's going to be great for everyone, or do there have to be some musts, not just good intentions or shoulds? If we are not seeing things like recovery, retribution, and remuneration, if we don't see people going to jail when they do bad things with AI, are we serious about AI safety?

Merve Hickok

So no, history does not show us that it's going to be fine. And history is definitely a good indicator, which means we need to fight harder this time around and try to raise that level, right? History is always the story of the powerful, of the winner, of who gets to decide the narrative. We are seeing that again today: the narratives around what safety is, what the evaluations should be, where the money should go, whether we should regulate or not, whether it should be "should" or "must", are always the narrative of the powerful. And as Dame Wendy Hall mentioned, the representation was very much the same kind of people throughout the higher-level conversations yesterday.

So I think, first and foremost, the narrative needs to change in safety as well. So far, the most prominent safety issues have been framed around nuclear, cybersecurity, chemical weapons, and so on, or existential risk, which is another story. Yes, maybe we should talk about those. But there are real consequences right now for people's rights, freedoms, ability to live with dignity, and people's right to participate in democracy and democratic processes. All of these are being undermined, and as an organization with those three issues in our mission, we are seeing them more and more under pressure. So this is the time to raise your voices, as citizens, as consumers, as professionals in your own right, and try to change the narrative.

Because otherwise it’s going to just be a repeat of history.

Jeanna Matthews

Well said. Neha?

Neha Kumar

Yeah, coming back to something Wendy said about claiming to be all-inclusive while having no women in decision-making places: I think that is something we should really be thinking about. Do we have a history of being inclusive? What inclusivity have we been practicing in our innermost circles? It's easy enough to say that the poorest of the poor should have access to this AI, but how are we doing on being all-inclusive? So I think there are lessons from disciplines such as feminist and women's studies that we can learn from, to really ask the who question: Who is making decisions? Who is benefiting? Who is part of the design process?

That's one. Second, I would say, learning from design, one of the disciplines I've trained in: zooming out is great, and that's where we have value. We talk about inclusivity, we talk about diversity, we talk about all these great-sounding words, but when we zoom in, what are we actually doing? I think a lot of the dialogue we've been having is in this disembodied state, where we talk about infrastructure, and data, and interoperability, and processes, but who is benefiting? The panelists before me also talked about aging, so people who are more vulnerable: where are they in the conversation?

And lastly, with regard to development studies: what are the benefits of development, really? We want development and impact, and that's what we're talking about here for five days at the summit. But we know from historical perspectives that development hasn't worked out so well for many people and many countries across the globe, so how do we make sure we don't repeat those same mistakes? These questions have to be very much part of the conversation, so that it's safety of the human, of the body, of our values, of our communities and the social structures that are so critical to us. Thank you.

Jeanna Matthews

Rasmus?

Rasmus Andersen

Yeah, I think we're not seeing people go to jail; I'm not sure we have seen a case of that just yet. There are ongoing lawsuits over suicides among young people, et cetera. But I do think we will see a moment pretty soon where something goes pretty wrong, and then we're going to have a decision about what we do with that. This is a very dark parallel, but some people said we needed World War II to get the UN and the other systems that were put in place to avoid it happening again. And I think it's a matter of time before we get something like that, and we will have to make those decisions.

And currently, I'm not super confident that we will interpret those events correctly, that we will have a realistic view of what might change and how we might prevent them from happening again. It could be people leveraging these systems, organized crime. Very recently, we successfully had Elon Musk and Grok stop allowing people to create non-consensual nude deepfakes, which had happened in the millions. So we have, well, that's not small, but we will have much bigger things than that. And I do think that when that happens, we will have to weigh pros and cons, costs and benefits.

When we regulate things, we don't regulate risks down to zero. You know, when you get into a car, there's a risk something will happen, but you still need to get places. With safety, we have to take some of the same lessons, as Merve mentioned, from nuclear, from flights. It used to be that when you got on an airplane, something like 200 to 1,000 times more of them crashed than today, and we've reduced that level of risk very far down. And I do think that the political level, while it needs technical inputs, is the only force in the world that can really take all those considerations together and weigh the partial perspectives that technical people have, that civil society has, that industry has.

Really, the only place it all comes together, imperfectly, is government, and that's why it's so important that we are here, however imperfect these summits are.

Jeanna Matthews

Tom?

Tom Romanoff

All right, something a little different. I would like everybody in the room to raise their hand if you think safety is an important aspect of AI deployment. Great. Keep your hands up. Now, take your hand down if you think that safety should be enforced on the outputs of AI. Oh, wow. Okay. Take your hand down if you think that laws should apply to the outputs of AI rather than to AI itself. Okay? All right, you can go ahead and put your hands down. It wasn't as dramatic as I thought it would be. So I'm going to talk a little bit about the 49-51% rule. Across all political spectrums, no matter where you are in the world, there's this idea that you only need 51% of the political willpower to start passing regulations, and 49% won't get it done.

It applies in the business world as well: with 51% of the board control or equity in a company, you basically control that company, right? Lobbyists have an extreme incentive to keep support from crossing that 51% threshold, so that no action happens in the political space, right? So across all of our governments here, there are private entities, and I don't want to just say private sector because they're important, that have a stake in the regulatory space. And it's not until 51% of those politicians, until that political will gets to that threshold, that you'll start seeing some changes.

And so you see examples of that, as my colleague here mentioned, with deepfakes or nudification applications causing worldwide outrage. You started seeing governments across the spectrum say: that is something at least 51% of our population does not want. And so they start moving toward regulating, or enforcing current laws, to punish that kind of action. I say all this because there is also a conversation around moderates, right? We don't know where the technology is going. We have computer scientists, we have civil society, screaming about the need for action, for security within the stack. And the rest of the world are moderates: they're still engaged, they're still engaging with this AI.

They're still figuring out what it can do. And it's not until some kind of action happens, some kind of consequence, some kind of issue, that people wake up to the folks who've been screaming about it for years. So what I encourage everybody here to do is: don't be a moderate. Pick a side and start encouraging your politicians, your family, your community. Educate them. Figure out ways to communicate the very heady technical aspects of security within the AI stack to the common person, to the person who can understand it. That's when you're going to start seeing the regulations roll out.

Jeanna Matthews

I think that's a great place to end, because we are not going to get happiness for all and wellness for all unless we insist. We're all going to have to insist; it's not going to come automatically. So asking each of us what we are going to do to insist is a really good place to end. We started this session a little late, but I've been told they would really like us to try to end on time, so I will leave it there. We would love to engage you in conversation out in the hall after this session is over. Thank you to all the panelists in the first session, and also all of us up here. Thank you so much. Thank you all. Indeed, I think there is actually time for one or two questions. Now there are too many questions; I have to vote. Okay: sir there, and the lady there.

Participant

A very short question. Actually, it's not a question, it's a suggestion to the gentleman with the beard on that side, whose name I missed. Yeah, Jibu: go take a look at Sarvam. I think this idea of Hindi and other-language imposition is going to die very soon; nobody will impose it down the line.

Sure, thank you so much for the provocative discussion; this is what I was hoping to get at the India AI Impact Summit. My question is about how regulatory artifacts like dataset cards, model cards, system cards, rigorous evaluations, and user feedback can now be extended to cover multiple languages, multiple contexts, and multiple cultures. I think a lot of hard work

Merve Hickok

These artifacts are being used as well, but a model might perform really well in English while we know these systems are not as safe, secure, or performant in many languages other than English, languages that are not as resource-rich as English. So, great question: they need to be dynamic, and they need to reflect languages. I will also say, just very briefly following up on this, that these are things governments can require of model providers releasing models in your jurisdiction, and so far they are not. Thank you very much. We could insist; we need to insist. California, for example, started this. I just want to…

Virginia Dignum

I think we can just continue the discussion, and I hope we will; today is just a start. I also hope that, together with all the panelists, we will be able to create some kind of model for next year, and we will hopefully facilitate and continue this discussion. I would ask all the panelists of the first and second rounds to stay here for a memento from the organisation, and I would like to thank you all for being here, and all the panelists again, of course. Thank you.

V

Virginia Dignum

Speech speed

62 words per minute

Speech length

1141 words

Speech time

1103 seconds

Safety beyond technical metrics

Explanation

Virginia stresses that AI safety cannot be limited to technical robustness, accuracy or alignment. It must incorporate multidisciplinary governance, societal context and real‑world impact to protect people and values.


Evidence

“And today we are here to discuss how to move beyond technical safety and looking at aspects of multidisciplinarity, governance, and real world” [1]. “Across global AI discussion, safety is too often being framed in technical terms” [7]. “What do you think about this balance or tension between the technical robustness, the technical safety measures, and the need for understanding more the environment, the context, the social context in which systems are built?” [3].


Major discussion point

Broadening AI Safety Beyond Technical Metrics


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development


Policy recommendations for AI safety

Explanation

Virginia calls for concrete policy work that translates the broader safety conversation into actionable national AI strategies and models for the coming year.


Evidence

“AI safety needs to be more than just the technical robustness” [4]. “I also hope that we will be able together with all the panelists to create some kind of model for the next year” [91].


Major discussion point

Policy, Regulation, and Political Will for Enforcing AI Safety


Topics

Artificial intelligence | The enabling environment for digital development


L

Lourino Chemane

Speech speed

160 words per minute

Speech length

573 words

Speech time

213 seconds

Governance beyond technical metrics

Explanation

Lourino argues that AI governance must prioritize human, social and institutional impact, treating safety as the protection of people rather than just system performance.


Evidence

“So we look at AI governance must prioritize human, social, and institutional impact, going beyond technical metrics such as robustness, accuracy, or algorithm alignment” [2]. “For our safety, we look at it as the protection of people, not only systems” [8]. “We also look at the multidisciplinary governance, grounded in the world context of use of AI” [17]. “We look also from the continuous human oversight and institutional accountability” [18].


Major discussion point

Broadening AI Safety Beyond Technical Metrics


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development


National AI policy and infrastructure safety

Explanation

Lourino describes concrete steps in Mozambique to regulate data‑center construction, cloud computing and cyber‑security as part of a national AI strategy that safeguards sovereignty and public safety.


Evidence

“We just adopted in Mozambique the regulation for the construction and operation of data centers and also the regulation for cloud computing, because we believe that infrastructure is a fundamental and key element for sovereignty of our country in terms of when it comes to safety” [100]. “We are reviewing our national cyber security strategy” [101]. “We are also drafting our data policy and its implementation strategy, because we believe that data is a fundamental element for AI system” [102].


Major discussion point

Policy, Regulation, and Political Will for Enforcing AI Safety


Topics

Building confidence and security in the use of ICTs | The enabling environment for digital development


Inclusive policy formulation

Explanation

She emphasizes that effective AI policies need input from law, social sciences, education, labor, ethics and affected communities.


Evidence

“For us, effective AI policies require input from law, social sciences, education, labor, ethics, and affected communities” [21].


Major discussion point

Policy, Regulation, and Political Will for Enforcing AI Safety


Topics

The enabling environment for digital development | Artificial intelligence


D

Dame Wendy Hall

Speech speed

147 words per minute

Speech length

1140 words

Speech time

462 seconds

AI measurement and social machines

Explanation

Wendy proposes a new field of AI measurement or AI metrology to study socio‑technical “social machines”, linking technical artefacts with societal impact.


Evidence

“I want to build a science of studying social machines and it will be called AI measurement or AI metrology” [78].


Major discussion point

Measurement, Metrology, and Transparency Mechanisms


Topics

Monitoring and measurement | Artificial intelligence


Diversity as ethical prerequisite

Explanation

She stresses that without gender and broader diversity AI safety cannot be ethical, because bias‑free design requires women and other under‑represented groups at decision‑making levels.


Evidence

“if it’s not diverse it’s not ethical people don’t really understand what that means that means is if you haven’t got a diversity of people discussing a problem how are you going to actually sort out the biases if you haven’t got women at the top level making these decisions” [46].


Major discussion point

Inclusion, Diversity, and Representation in AI Governance


Topics

Closing all digital divides | Human rights and the ethical dimensions of the information society


Y

Yannis Ioannidis

Speech speed

140 words per minute

Speech length

537 words

Speech time

229 seconds

Separate safety of AI technology from safety of AI use

Explanation

Yannis argues that safety must be considered not only for the algorithmic artefacts but also for how humans feed, interact with and deploy them.


Evidence

“So I want to separate the issue of safety of AI and talk about safety of AI use” [10]. “we have to think about what to do with what comes in and humans are using it different humans are feeding it… safety must start from there… both input and output and result should be in the” [14].


Major discussion point

Broadening AI Safety Beyond Technical Metrics


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Need for systematic measurement and regulation

Explanation

He calls for interdisciplinary work to measure and regulate AI inputs, outputs and the surrounding data, involving humanities, law and ethics.


Evidence

“This we have to work on, measure, regulate potentially and in any case all sciences like it was said before, especially the humanities, philosophers, ethicists, legal people, cognitive scientists and so on have to come together to address this” [75]. “The data that we feed it, it’s again humans that are choosing it” [65].


Major discussion point

Measurement, Metrology, and Transparency Mechanisms


Topics

Monitoring and measurement | Data governance | Artificial intelligence


S

Sara Hooker

Speech speed

191 words per minute

Speech length

918 words

Speech time

287 seconds

Precise safety conversation and trade‑offs

Explanation

Sara highlights that safety discussions are currently vague; they need to explicitly state trade‑offs, report omitted safety parameters and be grounded in the surrounding environment.


Evidence

“But I do think it’s worth asking, well, where are we going as a conversation about safety and the precision of it?” [26]. “And, frankly, that’s not a precise conversation either, because the truth is there are tradeoffs” [28]. “Report essentially, like, what they say that the safety parameters are not, and report what they don’t cover or they haven’t tested for” [30]. “the more precise the conversation, the better” [34]. “When you build systems, there are tradeoffs” [32].


Major discussion point

Measurement, Metrology, and Transparency Mechanisms


Topics

Monitoring and measurement | Artificial intelligence


Make trade‑offs explicit

Explanation

She urges that reports should list what safety aspects were omitted, making the underlying compromises visible to stakeholders.


Evidence

“Report essentially, like, what they say that the safety parameters are not, and report what they don’t cover or they haven’t tested for” [30]. “What are the trade‑offs that they have made, you know?” [82].


Major discussion point

Measurement, Metrology, and Transparency Mechanisms


Topics

Monitoring and measurement | Artificial intelligence


J

Jibu Elias

Speech speed

155 words per minute

Speech length

658 words

Speech time

253 seconds

Extractive and exploitative AI impacts

Explanation

Jibu points out that AI development often exploits data‑annotation workers and creates extractive economic relations, raising concerns about fairness and sustainability.


Evidence

“And what concerns me more is its exploitative and extractive nature” [61]. “Are we still going to have this extractive nature, you know, the data annotation workers who are building these models, right?” [66].


Major discussion point

Socio‑economic and Environmental Impacts of AI Deployment


Topics

Social and economic development | Human rights and the ethical dimensions of the information society


AI as a sociopolitical construct

Explanation

He notes that AI is increasingly political, shaping power dynamics and governance structures.


Evidence

“But more and more, we are seeing AI now becoming more political” [41]. “It’s becoming a larger sociopolitic construct in general” [113].


Major discussion point

Socio‑economic and Environmental Impacts of AI Deployment


Topics

Social and economic development | Human rights and the ethical dimensions of the information society


N

Neha Kumar

Speech speed

163 words per minute

Speech length

643 words

Speech time

236 seconds

Feminist perspective on who benefits

Explanation

Neha calls for AI design to ask explicitly who benefits, drawing on feminist and women’s studies to surface hidden power relations.


Evidence

“who is benefiting?” [36]. “lessons from disciplines such as feminist and women’s studies that we can learn from to really ask the who question” [53]. “Who is part of the design process?” [54]. “Who is making decisions?” [56]. “Who is being benefited?” [59].


Major discussion point

Inclusion, Diversity, and Representation in AI Governance


Topics

Closing all digital divides | Human rights and the ethical dimensions of the information society


Human‑centric safety framing

Explanation

She stresses that safety must protect human bodies, values and community structures, not just technical performance.


Evidence

“And I think these have to be very much part of the conversations so that it’s safety of the human, of the body, of our values, of just our communities, our structures, social structures that are so critical to us” [35].


Major discussion point

Broadening AI Safety Beyond Technical Metrics


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence


M

Merve Hickok

Speech speed

148 words per minute

Speech length

454 words

Speech time

182 seconds

Narrative shift toward inclusive safety

Explanation

Merve argues that safety narratives are currently dominated by powerful actors and must be reframed to include marginalized voices and rights‑based concerns.


Evidence

“So I would like to take a more expansive view of safety and governance at large” [5]. “So I think first and foremost, the narrative needs to change in safety as well” [6]. “is always the narrative of the powerful” [68]. “history is always a story of the powerful, of the winner, like who gets to decide the narrative” [70]. “So this is the time to get your voices up as citizens, as consumers, as professionals in your own right, and try to change the narrative” [69].


Major discussion point

Inclusion, Diversity, and Representation in AI Governance


Topics

Closing all digital divides | Human rights and the ethical dimensions of the information society


Human rights impacts of AI

Explanation

She highlights that AI can threaten rights, dignity and democratic participation, urging policies that protect these fundamental values.


Evidence

“real consequences right now on people’s rights, freedoms, ability to live with their dignity, and people’s rights to participate in democracy, and democratic processes” [110]. “We need to insist” [94].


Major discussion point

Socio‑economic and Environmental Impacts of AI Deployment


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence


R

Rasmus Andersen

Speech speed

158 words per minute

Speech length

568 words

Speech time

214 seconds

Long‑term foresight in AI policy

Explanation

Rasmus stresses that leaders need to consider the long‑term societal picture (2030‑2035) when shaping AI strategies, ensuring citizen safety and wellbeing.


Evidence

“helping them often realize the long-term best interest, informed self-interest of what will actually, what is the world likely to look like in 2030, in 2035?” [40]. “How they both deliver results to citizens with AI and also avoid them” [25].


Major discussion point

Policy, Regulation, and Political Will for Enforcing AI Safety


Topics

The enabling environment for digital development | Artificial intelligence


Political thresholds for regulation

Explanation

He notes that achieving regulatory change depends on surpassing a political threshold (the 51% rule) and mobilising public advocacy.


Evidence

“And I do think that the political level, while we need technical inputs, the only force in the world” [93]. “How can you best make sure that your country and your constituents and citizens are in the best possible position as the world will change very rapidly?” [112].


Major discussion point

Policy, Regulation, and Political Will for Enforcing AI Safety


Topics

The enabling environment for digital development | Artificial intelligence


T

Tom Romanoff

Speech speed

155 words per minute

Speech length

628 words

Speech time

242 seconds

Enforce safety on AI outputs

Explanation

Tom argues that legal frameworks should target the outcomes produced by AI systems rather than the underlying models themselves.


Evidence

“Now, take your hand down if you think that safety should be enforced on the output of AI outcomes” [22]. “Take your hand down if you think that laws should apply to the outputs of AI rather than AI itself” [42].


Major discussion point

Policy, Regulation, and Political Will for Enforcing AI Safety


Topics

The enabling environment for digital development | Artificial intelligence


51% political rule for regulation

Explanation

He explains that regulatory adoption typically requires a 51% majority among policymakers, and that lobbyists work to keep the threshold just below that level.


Evidence

“And it’s not until 51% of those politicians or that political regulator or that regulation gets – it’s to that threshold that you’ll start seeing some changes” [95]. “Lobbyists have an extreme incentive to not push anybody past that 51% or 49% in order to have an action in the political space, right?” [120]. “So I’m going to talk a little bit about the 49–51% rule” [121].


Major discussion point

Policy, Regulation, and Political Will for Enforcing AI Safety


Topics

The enabling environment for digital development | Artificial intelligence


P

Participant

Speech speed

126 words per minute

Speech length

141 words

Speech time

67 seconds

Multilingual regulatory artifacts

Explanation

The participant calls for extending dataset, model and system cards, as well as evaluation frameworks, to cover multiple languages, contexts and cultures.


Evidence

“how can regulatory artifacts like data set cards model cards system cards rigorous evaluations user feedback now be extended to cover multiple languages multiple contexts and multiple cultures” [74]. “They need to be dynamic and they need to reflect languages” [73]. “My big ask is just report what languages model providers cover” [76]. “So it might perform really good in English, but we know that these systems are not safe or secure or perform that well in many different languages that are not English” [77].


Major discussion point

Measurement, Metrology, and Transparency Mechanisms


Topics

Monitoring and measurement | Closing all digital divides


J

Jeanna Matthews

Speech speed

145 words per minute

Speech length

277 words

Speech time

113 seconds

Insist on proactive AI safety

Explanation

Jeanna argues that AI safety will not come automatically; all stakeholders must actively insist on robust safeguards and concrete actions to protect people and societies.


Evidence

“It’s not going to come automatically.” [3]. “We’re all going to have to insist.” [4]. “I think that’s a great place to end because we are not going to get happiness for all and wellness for all unless we insist.” [7]. “So asking each of us to ask ourselves a question, what are we going to do to insist, I think is a really good place to end.” [10].


Major discussion point

Policy, Regulation, and Political Will for Enforcing AI Safety


Topics

Artificial intelligence | The enabling environment for digital development | Human rights and the ethical dimensions of the information society


Question historical assumptions about AI benefits

Explanation

She questions whether history shows that AI will automatically benefit everyone, arguing that mandatory safeguards (“musts”) are needed rather than good intentions alone.


Evidence

“Does history tell us that it’s going to be great for everyone, just works out, or there have to be some musts, not just good intentions or shoulds?” [8].


Major discussion point

Broadening AI Safety Beyond Technical Metrics


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Highlight extractive and exploitative AI potential

Explanation

AI systems can be built on extractive, exploitative practices; safety discussions must address these socioeconomic dimensions to ensure equitable outcomes.


Evidence

“We’ve been seeing wellness for all, happiness for all, in the presence of a fairly extractive and exploitive potential.” [11].


Major discussion point

Socio‑economic and Environmental Impacts of AI Deployment


Topics

Social and economic development | Human rights and the ethical dimensions of the information society


Demand accountability for AI misuse

Explanation

Effective AI safety requires legal and institutional mechanisms that hold individuals accountable when AI is used to cause harm.


Evidence

“If we are not seeing things like recovery, retribution, remuneration, we don’t see people going to jail when they do bad things with AI.” [13].


Major discussion point

Policy, Regulation, and Political Will for Enforcing AI Safety


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Call for seriousness about AI safety

Explanation

The panel must treat AI safety as a serious, non‑optional agenda rather than a peripheral concern.


Evidence

“Are we serious about AI safety?” [9].


Major discussion point

Broadening AI Safety Beyond Technical Metrics


Topics

Artificial intelligence | The enabling environment for digital development


S

Speaker 2

Speech speed

60 words per minute

Speech length

1 word

Speech time

1 seconds

Call for active engagement

Explanation

The speaker urges participants simply to ‘be’: present, attentive, and proactive in the collective effort to shape AI safety and governance.


Evidence

“be” [15].


Major discussion point

Policy, Regulation, and Political Will for Enforcing AI Safety


Topics

Artificial intelligence | The enabling environment for digital development


Agreements

Agreement points

AI safety must go beyond technical metrics to include human, social, and institutional considerations

Speakers

– Virginia Dignum
– Lourino Chemane
– Dame Wendy Hall
– Sara Hooker

Arguments

AI systems fail not due to technical flaws but because they are embedded in institutional, economic and political systems


AI safety must prioritize human, social, and institutional impact beyond technical metrics like robustness and accuracy


AI measurement and metrology should become a new scientific discipline studying socio-technical systems


AI systems create trade-offs that must be explicitly acknowledged rather than claiming universal benefit


Summary

Multiple speakers agreed that traditional technical approaches to AI safety are insufficient and that broader socio-technical considerations are essential for effective AI governance


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development


Multidisciplinary approaches are essential for effective AI governance

Speakers

– Lourino Chemane
– Yannis Ioannidis
– Neha Kumar
– Tom Romanoff

Arguments

Effective AI policies require input from law, social sciences, education, labor, ethics, and affected communities


Technology should run free but both input and output should be regulated with multidisciplinary involvement


Learning from disciplines like feminist studies, design, and development studies can inform better AI governance


Computer scientists must translate technical recommendations to policymakers for effective governance


Summary

Speakers consistently emphasized the need for diverse disciplinary perspectives including law, social sciences, ethics, design, and policy expertise in AI governance


Topics

Artificial intelligence | Capacity development | The enabling environment for digital development


Marginalized communities are systematically excluded from AI benefits and decision-making

Speakers

– Dame Wendy Hall
– Jibu Elias
– Neha Kumar
– Merve Hickok

Arguments

AI governance lacks diversity, particularly women’s representation in leadership positions, which undermines ethical decision-making


Marginalized communities like tribal populations are excluded from AI benefits due to language barriers and lack of representation


True inclusivity requires examining who is making decisions, who benefits, and who participates in the design process


The narrative around AI safety is controlled by powerful actors, excluding voices of affected communities


Summary

All speakers agreed that current AI development and governance systematically excludes marginalized groups, particularly women, tribal populations, and other underrepresented communities


Topics

Closing all digital divides | Human rights and the ethical dimensions of the information society | Artificial intelligence


Transparency and accountability measures are needed from AI model providers

Speakers

– Sara Hooker
– Merve Hickok
– Participant
– Speaker 2

Arguments

Model providers should report what languages they cover, safety parameters, and what they don’t test for


Governments can require transparency measures from model providers operating in their jurisdictions


Regulatory artifacts like model cards need to be extended to cover multiple languages, contexts, and cultures


AI systems may perform well in English but are not safe, secure, or performant in many other languages


Summary

Speakers agreed on the need for comprehensive transparency requirements for AI systems, particularly regarding language coverage, safety parameters, and performance across different contexts


Topics

Artificial intelligence | Monitoring and measurement | Data governance


Active political engagement and citizen action are necessary for AI safety

Speakers

– Tom Romanoff
– Jeanna Matthews
– Merve Hickok
– Rasmus Andersen

Arguments

Political action requires reaching a 51% threshold of support, necessitating public education and advocacy


Citizens must actively insist on safety measures rather than expecting automatic benefits


Historical patterns show technology doesn’t automatically benefit everyone without deliberate intervention


Only governments can integrate all perspectives and make comprehensive decisions about AI governance


Summary

Speakers agreed that meaningful AI safety requires active political engagement, citizen advocacy, and government action rather than relying on voluntary industry measures


Topics

The enabling environment for digital development | Human rights and the ethical dimensions of the information society | Artificial intelligence


Similar viewpoints

Both speakers emphasized that AI safety must center on protecting people within broader institutional contexts rather than focusing solely on technical system performance

Speakers

– Virginia Dignum
– Lourino Chemane

Arguments

AI systems fail not due to technical flaws but because they are embedded in institutional, economic and political systems


Safety should focus on protection of people, not just systems, requiring continuous human oversight and institutional accountability


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Both speakers highlighted the critical importance of examining who holds decision-making power in AI development and the need for genuine diversity in leadership positions

Speakers

– Dame Wendy Hall
– Neha Kumar

Arguments

AI governance lacks diversity, particularly women’s representation in leadership positions, which undermines ethical decision-making


True inclusivity requires examining who is making decisions, who benefits, and who participates in the design process


Topics

Human rights and the ethical dimensions of the information society | Closing all digital divides


Both speakers emphasized the extractive and exploitative nature of current AI development patterns and the need for deliberate intervention to prevent harm

Speakers

– Jibu Elias
– Merve Hickok

Arguments

AI development has extractive consequences including environmental damage and exploitation of local communities


Historical patterns show technology doesn’t automatically benefit everyone without deliberate intervention


Topics

Environmental impacts | Human rights and the ethical dimensions of the information society | Social and economic development


Both emphasized the need for comprehensive transparency about AI system capabilities and limitations across different languages and cultural contexts

Speakers

– Sara Hooker
– Participant

Arguments

Model providers should report what languages they cover, safety parameters, and what they don’t test for


Regulatory artifacts like model cards need to be extended to cover multiple languages, contexts, and cultures


Topics

Artificial intelligence | Monitoring and measurement | Closing all digital divides


Unexpected consensus

Government as the primary integrator of AI governance perspectives

Speakers

– Rasmus Andersen
– Tom Romanoff
– Merve Hickok

Arguments

Only governments can integrate all perspectives and make comprehensive decisions about AI governance


Political action requires reaching a 51% threshold of support, necessitating public education and advocacy


Governments can require transparency measures from model providers operating in their jurisdictions


Explanation

Despite representing different backgrounds (government advisor, policy advocate, civil society), these speakers unexpectedly agreed on the central role of government in AI governance, which contrasts with common industry preferences for self-regulation


Topics

The enabling environment for digital development | Artificial intelligence


Technical innovation should remain unrestricted while regulating applications

Speakers

– Yannis Ioannidis
– Sara Hooker

Arguments

Technology should run free but both input and output should be regulated with multidisciplinary involvement


AI systems create trade-offs that must be explicitly acknowledged rather than claiming universal benefit


Explanation

Both a technical academic and an industry researcher agreed on separating technical development from application regulation, which is unexpected given typical debates about regulating AI development itself


Topics

Artificial intelligence | The enabling environment for digital development


Need for longitudinal studies and evidence-based approaches

Speakers

– Dame Wendy Hall
– Rasmus Andersen

Arguments

Longitudinal studies and evidence collection are needed to monitor AI’s real-world effects over time


Only governments can integrate all perspectives and make comprehensive decisions about AI governance


Explanation

A computer scientist and a government policy advisor unexpectedly agreed on the importance of evidence-based, long-term approaches to AI governance, bridging the gap between academic research and policy implementation


Topics

Monitoring and measurement | Artificial intelligence | Building confidence and security in the use of ICTs


Overall assessment

Summary

The speakers demonstrated strong consensus on moving beyond technical safety measures to holistic AI governance, the need for multidisciplinary approaches, addressing systematic exclusion of marginalized communities, requiring transparency from AI providers, and the necessity of active political engagement for meaningful change


Consensus level

High level of consensus across diverse stakeholder groups (academics, industry, civil society, government advisors) on fundamental principles, suggesting these issues transcend traditional sectoral boundaries. The agreement implies that current AI governance approaches are inadequate and that comprehensive reform involving multiple stakeholders and government action is necessary for effective AI safety and governance.


Differences

Different viewpoints

Scope of AI safety regulation – technical vs. holistic approach

Speakers

– Yannis Ioannidis
– Virginia Dignum
– Lourino Chemane

Arguments

Technology should run free but both input and output should be regulated with multidisciplinary involvement


AI systems fail not due to technical flaws but because they are embedded in institutional, economic and political systems


AI safety must prioritize human, social, and institutional impact beyond technical metrics like robustness and accuracy


Summary

Ioannidis advocates for keeping AI technology development unrestricted while regulating inputs and outputs, whereas Dignum and Chemane argue for more comprehensive governance that addresses institutional and social contexts from the start


Topics

Artificial intelligence | The enabling environment for digital development | Human rights and the ethical dimensions of the information society


Transparency requirements vs. trade-off acknowledgment

Speakers

– Sara Hooker
– Merve Hickok

Arguments

AI systems create trade-offs that must be explicitly acknowledged rather than claiming universal benefit


Governments can require transparency measures from model providers operating in their jurisdictions


Summary

Hooker focuses on acknowledging inevitable trade-offs and limitations in AI systems, while Hickok emphasizes that governments should actively require comprehensive transparency measures from providers


Topics

Artificial intelligence | Data governance | The enabling environment for digital development


Unexpected differences

Role of government regulation in AI development

Speakers

– Yannis Ioannidis
– Rasmus Andersen

Arguments

Technology should run free but both input and output should be regulated with multidisciplinary involvement


Only governments can integrate all perspectives and make comprehensive decisions about AI governance


Explanation

Both speakers support government involvement but disagree on scope – Ioannidis wants to keep core technology development unrestricted while Andersen sees government as the central coordinating body for all AI governance decisions. This is unexpected as both come from technical/policy backgrounds but have different views on innovation freedom


Topics

Artificial intelligence | The enabling environment for digital development | Building confidence and security in the use of ICTs


Overall assessment

Summary

The main disagreements center around the balance between technical innovation freedom and comprehensive governance, the scope of transparency requirements, and the mechanisms for ensuring inclusive representation in AI development


Disagreement level

Moderate disagreement with significant implications – while speakers generally agree on the need to move beyond purely technical safety measures, they differ substantially on implementation approaches, regulatory scope, and the role of different stakeholders. These differences could lead to conflicting policy recommendations and governance frameworks


Partial agreements

Partial agreements

All three speakers agree that current AI governance lacks proper representation and diversity, but they propose different solutions – Hall focuses on gender representation and scientific measurement, Kumar emphasizes learning from established disciplines like feminist studies, and Hickok calls for changing power narratives

Speakers

– Dame Wendy Hall
– Neha Kumar
– Merve Hickok

Arguments

AI governance lacks diversity, particularly women’s representation in leadership positions, which undermines ethical decision-making


True inclusivity requires examining who is making decisions, who benefits, and who participates in the design process


The narrative around AI safety is controlled by powerful actors, excluding voices of affected communities


Topics

Human rights and the ethical dimensions of the information society | Closing all digital divides | Artificial intelligence


Both agree on the need for systematic monitoring and government involvement, but Hall emphasizes scientific measurement and evidence collection while Andersen focuses on government as the integrating institution for all stakeholder perspectives

Speakers

– Dame Wendy Hall
– Rasmus Andersen

Arguments

Longitudinal studies and evidence collection are needed to monitor AI’s real-world effects over time


Only governments can integrate all perspectives and make comprehensive decisions about AI governance


Topics

Monitoring and measurement | The enabling environment for digital development | Artificial intelligence


Both agree that change requires active effort and won’t happen automatically, but Romanoff focuses on political mobilization strategies while Hickok emphasizes historical patterns of power and the need to fight against them

Speakers

– Tom Romanoff
– Merve Hickok

Arguments

Political action requires reaching a 51% threshold of support, necessitating public education and advocacy


Historical patterns show technology doesn’t automatically benefit everyone without deliberate intervention


Topics

The enabling environment for digital development | Human rights and the ethical dimensions of the information society | Capacity development




Takeaways

Key takeaways

AI safety must move beyond purely technical measures to encompass human, social, and institutional impacts, as AI systems fail not due to technical flaws but because they are embedded in broader societal systems


Lack of diversity and inclusion in AI governance, particularly the absence of women and marginalized communities in decision-making roles, fundamentally undermines ethical AI development


AI development has extractive and exploitative consequences that widen socioeconomic divides, with benefits unevenly distributed across populations and communities


There is an urgent need for a new scientific discipline of ‘AI metrology’ or measurement to study socio-technical systems through longitudinal studies and evidence collection


Model providers should be required to transparently report what languages they cover, safety parameters, and limitations rather than claiming universal applicability


Political action and citizen advocacy are essential for AI safety, which requires reaching a threshold of public support and active insistence rather than an expectation that benefits will arrive automatically


Effective AI governance requires genuine multidisciplinary collaboration involving law, social sciences, education, ethics, and affected communities, not just technical experts


Historical patterns demonstrate that technology does not automatically benefit everyone without deliberate intervention and regulatory oversight


Resolutions and action items

ACM to launch a new journal focused on AI measurement/metrology as the first publication in this emerging field


Panelists committed to creating a collaborative report or model based on the discussion to continue the dialogue beyond the summit


Call for model providers to report language coverage, safety parameters, and testing limitations as a transparency measure


Governments should require transparency measures from AI model providers operating in their jurisdictions


Citizens and professionals urged to actively educate communities and advocate with politicians rather than remaining moderate on AI safety issues


Unresolved issues

How to effectively balance innovation freedom in AI technology development with necessary regulation of AI system deployment and use


Specific mechanisms for ensuring meaningful inclusion of marginalized communities and women in AI governance beyond aspirational statements


How to address the extractive nature of AI development while maintaining technological progress and economic benefits


Practical implementation of multidisciplinary governance structures that can effectively integrate diverse perspectives


How to establish accountability mechanisms that result in actual consequences when AI systems cause harm


Methods for extending regulatory frameworks like model cards to effectively cover multiple languages, contexts, and cultures


How to conduct meaningful longitudinal studies of AI impact when technology is evolving rapidly


Whether universal AI governance rules are achievable or desirable given diverse global contexts and needs


Suggested compromises

Separate regulation of AI technology development (which should remain free for innovation) from AI system deployment and use (which should be regulated with multidisciplinary input)


Focus on transparency and explicit acknowledgment of trade-offs in AI systems rather than claiming universal benefit or safety


Governments can serve as the integrating force that balances technical, civil society, and industry perspectives even if imperfectly


Start with precise, achievable transparency requirements (like language coverage reporting) rather than attempting comprehensive universal AI governance


Learn from existing regulatory frameworks in nuclear, aviation, and other high-risk domains while adapting to AI’s unique characteristics


Combine technical safety measures with broader institutional and social safeguards rather than viewing them as competing approaches


Thought provoking comments

AI systems do not fail simply because of flaws in the model architecture or in the data or in the alignment technique. They fail or they produce harm because they are embedded in institutional, economic and political systems.

Speaker

Virginia Dignum


Reason

This opening statement fundamentally reframes AI safety from a purely technical problem to a socio-technical one, challenging the dominant narrative that focuses on algorithmic fixes. It introduces the core thesis that safety cannot be divorced from the systems and contexts in which AI operates.


Impact

This comment set the entire tone and direction of the discussion, establishing the framework that all subsequent panelists would build upon. It moved the conversation away from technical metrics toward examining power structures, governance, and real-world deployment contexts.


Everyone’s, I love, you know, in India, AI means all-inclusive. But 50% of the population weren’t included yesterday, the women… if it’s not diverse it’s not ethical… if you haven’t got a diversity of people discussing a problem how are you going to actually sort out the biases if you haven’t got women at the top level making these decisions

Speaker

Dame Wendy Hall


Reason

This was a bold, direct critique of the summit itself and the broader AI governance ecosystem. Hall connected the lack of diversity in decision-making directly to ethical outcomes, making the abstract concept of ‘inclusive AI’ concrete and immediate.


Impact

This comment created a pivotal moment that shifted the discussion from theoretical safety concerns to examining the very structures of power and representation in AI governance. It introduced a meta-critique that influenced how subsequent speakers framed their contributions, with multiple panelists referencing representation and inclusion.


I want to separate the issue of safety of AI and talk about safety of AI use… The technology, there’s no issue, there’s no social issue in the safety of the technology itself. It’s like the car, whether it’s working or not… The use is the important thing

Speaker

Yannis Ioannidis


Reason

This comment introduced a crucial distinction between the technology itself and its application, challenging the premise that AI technology is inherently social. It offered a different perspective that separated technical robustness from social impact.


Impact

This created a productive tension in the discussion, forcing other panelists to grapple with where exactly social considerations should enter the AI development pipeline. It led to more nuanced discussions about input data, deployment contexts, and the boundaries between technical and social responsibility.


The biggest thing that has to come out is an understanding of what you give up, because you give up something… too often we end up just circling around and saying we want safety, we need perspectives of everyone in the model. And the truth is that’s also a naive statement, because it is almost certainly the fact that there will be some trade-off

Speaker

Sara Hooker


Reason

This comment introduced much-needed pragmatism to the discussion by acknowledging that safety measures involve trade-offs rather than universal solutions. It challenged the idealistic notion that AI can be made safe for everyone simultaneously.


Impact

This shifted the conversation toward more concrete, actionable discussions about transparency and accountability. It moved the panel away from aspirational statements toward practical considerations of how to make trade-offs visible and accountable.


I grew up as this idealist kid who thought when AI comes there will be no inequality… more and more, we are seeing AI now becoming more political. It’s becoming a larger sociopolitical construct in general. And what concerns me more is its exploitative and extractive nature

Speaker

Jibu Elias


Reason

This personal reflection powerfully illustrated the gap between technological promises and lived realities, particularly from a Global South perspective. It grounded abstract discussions in concrete examples of extraction and exclusion.


Impact

This comment brought a crucial perspective on how AI development affects marginalized communities, shifting the discussion toward questions of digital colonialism and extractive practices. It influenced the second panel’s focus on who benefits from AI development.


All right, something a little different. I would like everybody in the room to raise their hand if you think safety is an important aspect of the AI deployment… Take your hand down if you think that safety should be enforced on the output of AI outcomes

Speaker

Tom Romanoff


Reason

This interactive exercise brilliantly demonstrated the gap between abstract agreement on safety and concrete willingness to implement enforcement mechanisms. It made visible the political dynamics that prevent action on AI safety.


Impact

This moment created a powerful visual demonstration of the 49-51% rule Romanoff discussed, showing how consensus on principles doesn’t translate to consensus on action. It energized the discussion toward more concrete political strategies and the need for advocacy.


Overall assessment

These key comments fundamentally transformed what could have been a typical academic discussion about AI safety into a critical examination of power, representation, and political action. The progression moved from Dignum’s theoretical reframing, through Hall’s direct challenge to existing power structures, to increasingly concrete discussions about trade-offs, extraction, and political mobilization. The comments created a cascading effect where each intervention built upon previous ones, ultimately shifting the conversation from ‘how do we make AI safe?’ to ‘who gets to decide what safety means, and how do we ensure those decisions serve broader human flourishing rather than narrow interests?’ The discussion evolved from technical considerations to fundamental questions about democracy, representation, and justice in the age of AI.


Follow-up questions

How can regulatory artifacts like data set cards, model cards, system cards, rigorous evaluations, and user feedback be extended to cover multiple languages, multiple contexts, and multiple cultures?

Speaker

Participant


Explanation

This addresses the critical gap in AI safety measures that are primarily designed for English and Western contexts, but need to work across diverse linguistic and cultural settings globally.


What are the unintended consequences of social media bans for teenagers, and how can we study the behavioral impacts over time?

Speaker

Dame Wendy Hall


Explanation

She highlighted that countries like Australia, France, and the UK are implementing age restrictions without understanding long-term behavioral consequences, and emphasized the need for longitudinal studies to understand the real impact.


How can we develop a science of studying ‘social machines’ or socio-technical systems through AI metrology?

Speaker

Dame Wendy Hall


Explanation

She proposed creating a new field called AI metrology to scientifically study how AI technology and society come together to create systems that wouldn’t exist without both components.


How can model providers be required to report what languages they cover, what safety parameters they have tested, and what they haven’t covered or tested for?

Speaker

Sara Hooker


Explanation

This addresses the need for transparency about trade-offs and limitations in AI systems, particularly regarding language coverage and safety testing scope.


How can we ensure representation of tribal populations and minority languages in AI systems like Gemini?

Speaker

Jibu Elias


Explanation

He highlighted that tribal populations in Indian states like Telangana, Chhattisgarh, and Jharkhand are excluded from AI benefits because their languages aren’t represented in major AI systems.


What are the environmental and social impacts of data center construction, particularly regarding groundwater extraction and community consultation?

Speaker

Jibu Elias


Explanation

He raised concerns about extractive practices in data center development, including water scarcity issues and inadequate community engagement.


How can we study and address ‘AI psychosis’ and the impact of AI systems on elderly users?

Speaker

Jibu Elias


Explanation

He mentioned emerging concerns about psychological impacts of AI use, particularly among vulnerable populations like the elderly, which requires further research.


How can we ensure that AI development doesn’t repeat the mistakes of historical development approaches that failed many people and countries?

Speaker

Neha Kumar


Explanation

She emphasized learning from development studies to avoid repeating patterns where development initiatives didn’t benefit the intended populations.


How can we collect data and evidence through longitudinal studies to monitor AI’s real-world impact and guide future policy decisions?

Speaker

Dame Wendy Hall


Explanation

She stressed the need for systematic data collection and long-term studies to understand AI’s actual effects on society, similar to how aviation safety was improved over time.


How can we move from ‘shoulds’ to ‘musts’ in AI safety – from good intentions to enforceable requirements with real consequences?

Speaker

Jeanna Matthews


Explanation

She questioned whether AI safety efforts are serious without enforcement mechanisms, accountability, and real consequences for harmful AI deployment.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.