From Technical Safety to Societal Impact: Rethinking AI Governance
20 Feb 2026 13:00h - 14:00h
Summary
The panel opened by stating that AI safety is often framed only in technical terms such as model alignment and benchmark performance, but the discussion must move beyond these to address multidisciplinarity, governance, and real-world impact [14-20]. Speakers emphasized that AI systems do not fail solely because of model flaws; their harms arise from the institutional, economic, and political contexts in which they are deployed [21-24].
Lourino Chemane argued that safety should be understood as the protection of people, requiring AI governance that integrates law, ethics, social sciences, education, labor, and the voices of affected communities [31-36]. He highlighted the need for comprehensive data policies, cybersecurity measures, and interoperable digital-government frameworks to secure national AI strategies and infrastructure [43-48].
Wendy Hall criticized the summit’s lack of gender diversity and warned that safety must include systematic monitoring, longitudinal studies, and the creation of AI measurement and “social-machines” metrology to capture socio-technical effects [78-84][89-103]. Yannis Ioannidis distinguished between the safety of AI technology itself and the safety of its use, calling for regulation of both inputs and outputs and for multidisciplinary oversight [108-119][120-124]. Sara Hooker noted that safety conversations have become more precise since Bletchley, though “safety” remains a blanket term, and stressed the importance of acknowledging trade-offs and transparently reporting what model capabilities are omitted [135-146][147-166][167-185].
Jibu Elias warned that AI is increasingly a sociopolitical and extractive force that widens socioeconomic gaps and can cause environmental harms such as water depletion from data-center projects [192-207][208-224]. Neha Kumar underscored the relevance of human-centred HCI research and called for genuine inclusivity, asking who designs, benefits from, and decides on AI systems [233-239][285-303]. Merve Hickok broadened safety to encompass human rights and democratic values, arguing that historical power narratives must be challenged to protect citizens [242-245][271-281]. Rasmus Andersen, who advises political leaders, stressed the need to consider long-term societal impacts and to embed safety in policy before harms materialize [248-256]. Tom Romanoff described ACM’s role in translating technical concerns into policy recommendations for lawmakers worldwide [261-265]. Jeanna Matthews posed a provocative question about whether good intentions alone suffice, highlighting the need for enforceable safeguards and accountability [266-270].
The session closed with Virginia Dignum asserting that achieving inclusive, multidisciplinary AI safety will require ongoing dialogue, concrete governance tools, and collective insistence from all stakeholders [375-378].
Keypoints
Major discussion points
– Broadening AI safety beyond technical metrics – Virginia Dignum opened the session by stressing that AI safety is often framed only in terms of model alignment, robustness, and benchmarks, but real-world value or harm depends on deployment context, governance, and institutional factors [14-19]. Panelists echoed this, noting that safety must prioritize human, social, and institutional impact and draw on law, ethics, education, and affected communities [31-34].
– Inclusion and diversity as essential for safe AI – Multiple speakers highlighted the systematic exclusion of women, children, and marginalized groups from AI decision-making. Wendy Hall pointed out the all-male composition of the summit’s leadership and argued that “if it’s not diverse it’s not ethical” [78-85]. Jibu Elias warned that tribal languages are omitted from major models, illustrating cultural exclusion [202-205]. Neha Kumar called for concrete answers to “who is making decisions?” and stressed the gap between inclusive rhetoric and actual practice [285-293].
– Policy, regulation, and institutional frameworks are needed – Mozambique’s effort to draft a national AI strategy, data policy, and regulations for data centres and cloud computing shows how governance structures shape safety [42-48]. Rasmus Andersen described advising governments on long-term AI impacts and the need to embed safety in public-service delivery [250-256]. Tom Romanoff explained ACM’s role in turning technical recommendations into policy actions [261-265], while Merve Hickok called for a broader view of safety that links AI policy to human-rights and democratic values [242-245].
– Measuring AI systems and acknowledging trade-offs – Wendy Hall introduced the concept of “AI metrology” – a science of measuring social machines and their societal effects [57-68]. Sara Hooker stressed that safety discussions must be precise, expose what has been sacrificed in model design, and require transparent reporting of coverage and omitted safety tests [164-176].
– Urgent need for accountability and proactive enforcement – Panelists warned that history shows safety only improves after crises. Jeanna Matthews asked whether good intentions are enough, and Merve Hickok argued that narratives of safety must shift from optional evaluation to mandatory protection of rights [267-279]. Tom Romanoff illustrated the “51% rule” of political will needed to pass regulations and urged participants to move from “moderate” to active advocacy [326-338]. The session closed with a collective call to “insist” on concrete actions for inclusive, accountable AI safety [359-363].
Overall purpose / goal of the discussion
The panel aimed to re-frame AI safety from a narrow technical problem to a multidisciplinary challenge that integrates governance, policy, societal impact, and inclusive participation, and to generate concrete ideas for future frameworks, standards, and accountability mechanisms.
Overall tone and its evolution
The conversation began formally and optimistically, focusing on the need for broader perspectives [14-19]. It quickly turned critical, with speakers highlighting exclusion, tokenism, and the gap between rhetoric and practice [78-85][285-293]. As the dialogue progressed, it became constructive and solution-oriented, introducing concepts like AI metrology, trade-off reporting, and policy roadmaps [57-68][164-176][250-256]. The final segment adopted an urgent, activist tone, urging participants to move beyond discussion to concrete advocacy and enforcement [267-279][326-338][359-363].
Speakers
– Virginia Dignum – Co-host of the session and co-chair of the Technology Policy Council of ACM; expert in AI policy, governance, and multidisciplinary safety frameworks [S15].
– Lourino Chemane – Chairman of the Board of the National Institute of Information and Communication Technology (Mozambique) and lead of Mozambique’s national AI strategy; focuses on AI policy, governance, and safety from a national-level perspective [S10].
– Dame Wendy Hall – Regius Professor of Computer Science, Associate Vice-President and Director of the Web Science Institute at the University of Southampton; former member of the United Nations high-level expert advisory body; expertise in computer science, web science, and AI governance [S3].
– Yannis Ioannidis – President of the ACM and Professor at the University of Athens; specialist in computer science and AI safety from a technical standpoint [S2].
– Sara Hooker – Co-founder and President of Adaption Labs (formerly with Cohere); AI researcher focusing on large language models, safety, and the societal impact of AI [S1].
– Jibu Elias – Researcher and activist examining how technology and innovation institutions acquire knowledge, labor, and legitimacy; concentrates on AI’s sociopolitical and extractive dimensions [transcript].
– Participant – Audience member who raised a question about multilingual safety and regulatory artifacts; no formal title or affiliation provided [S11][S12][S13].
– Neha Kumar – Associate Professor at Georgia Tech, School of Interactive Computing; President of the ACM SIGCHI (Special Interest Group on Computer-Human Interaction); expertise in human-computer interaction, social impact of technology, and inclusive design [transcript].
– Merve Hickok – President and Policy Director for the Center for AI and Digital Policy, an independent think-tank working at the intersection of AI policy, human rights, democratic values, and the rule of law [S18][S19].
– Tom Romanoff – Director of Policy for the ACM, overseeing global and regional policy committees; former Washington, D.C. think-tank professional who worked with U.S. Congress on tech policy [S20][S21].
– Jeanna Matthews – Co-host of the second session of the panel; involved in organizing and moderating the discussion [S22].
– Rasmus Andersen – Advisor at the Tony Blair Institute for Global Change, providing AI guidance to heads of state and senior ministers; expertise in AI policy advisory and strategic planning for governments [S23][S24].
Additional speakers:
– Gina Matthews – The transcript’s rendering of Jeanna Matthews (see above); named by Virginia Dignum as co-host of the session and co-chair of the Technology Policy Council of ACM [S15].
The session opened with Virginia Dignum reminding the audience that AI safety is often reduced to technical notions such as model alignment, red-team testing and benchmark performance, yet these tools “matter” but “do not address the core question” of what determines whether AI creates societal value or harm when deployed [14-20]. She argued that AI systems are never isolated; their impact is shaped by deployment context, governance capacity, incentive structures and the lived realities of the communities that use them, so failures often stem from institutional, economic and political embedding rather than from model flaws alone [21-24].
Dr. Lourino Chemane, chair of Mozambique’s National Institute of Information and Communication Technology, reframed safety as the protection of people, not merely of systems. He stressed that AI governance must prioritise human, social and institutional impact and be grounded in multidisciplinary input from law, ethics, education, labour, social sciences and the affected communities [31-36]. Mozambique is drafting a national AI strategy, a data policy and a cybersecurity strategy, and has already adopted regulations for data-centre construction and cloud computing to safeguard national sovereignty and democratic processes [42-48]. He also highlighted the need for interoperable digital-government frameworks to ensure that AI improves public-service efficiency while remaining safe [46-48].
Dame Wendy Hall criticised the summit’s lack of gender diversity, noting that “50% of the population weren’t included yesterday, the women” and that the panels were dominated by “alpha males” [78-85]. She introduced the concept of “AI metrology” – a science of measuring “social machines” to capture socio-technical effects [57-68] – and cited concrete initiatives such as the UN high-level expert advisory board, the upcoming AI for Good conference in Geneva (July), the UK National Physical Laboratory’s AI Measurement Centre, and the AI Security Institute as steps toward operationalising AI metrology [57-68]. Hall warned that safety requires systematic monitoring and longitudinal studies, citing Australia’s social-media age-restriction experiment and the unintended consequences of bans that may drive youth to hidden platforms [89-103].
After Hall’s remarks, Virginia Dignum thanked her, acknowledged that Hall needed to leave, and posed a question to the panel about shifting the discourse from a purely technical approach to a broader societal one [104-105].
Yannis Ioannidis distinguished the safety of the technology (the algorithm/model) from the safety of its use, likening the technology to a car that is either working or not [111-115]. He emphasized that the real safety concerns lie in the data-input stage and the deployment-output stage, both of which require regulation and multidisciplinary oversight involving humanities, legal, ethical and civic-society experts [118-124].
Sara Hooker reflected on the evolution of the safety debate, observing that early discussions were vague and centred on existential risk, whereas today the conversation is “messier” but more accountable to real-world impact [151-156]. She noted that the term “safety” remains a blanket term, that trade-offs are inevitable, and that transparent reporting of which safety parameters are covered, which languages are supported and what trade-offs have been made is essential [164-176][167-185]. Hooker also warned that prestige and resource allocation signal how seriously safety is taken, and that panel titles alone do not guarantee substantive action [135-146][147-166].
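Hooker’s ask lends itself to a machine-readable artefact. The sketch below, in Python, shows one minimal way a provider could publish a coverage-and-trade-off disclosure alongside a model; the schema, field names and example values are illustrative assumptions, not an existing standard or any provider’s actual practice.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class SafetyCoverageReport:
    """Hypothetical disclosure of what a model's safety testing did and did not cover."""
    model_name: str
    languages_covered: list[str]        # languages with dedicated safety evaluation
    languages_unevaluated: list[str]    # shipped, but not safety-tested
    harms_evaluated: list[str]          # safety categories that were red-teamed
    harms_omitted: list[str]            # explicitly acknowledged gaps
    tradeoffs: list[str] = field(default_factory=list)  # stated design trade-offs

# All values below are invented for illustration.
report = SafetyCoverageReport(
    model_name="example-model-v1",
    languages_covered=["en", "hi", "fr"],
    languages_unevaluated=["sat", "gon"],  # e.g. Santali, Gondi
    harms_evaluated=["self-harm", "cybercrime"],
    harms_omitted=["medical advice in low-resource languages"],
    tradeoffs=["refusal rate relaxed for creative-writing prompts"],
)
print(json.dumps(asdict(report), indent=2))  # publishable alongside the model weights
```

The point is not this particular format but the obligation Hooker describes: stating out loud, in a comparable form, what was left out.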
Jibu Elias warned that AI is increasingly a sociopolitical construct with exploitative and extractive dimensions. He cited the omission of tribal languages from major models, the imposition of Hindi as a national language, and the environmental damage caused by a data-centre in Telangana that depleted groundwater and involved community bribery [202-210][211-224]. Elias highlighted the emerging concern of “AI psychosis” among vulnerable users and critiqued the US-centric AI stack being promoted globally, questioning whether this extractive model will continue [215-224].
Neha Kumar, an HCI scholar, reinforced the human-centred perspective, urging the panel to ask “who is making decisions, who is being benefited, who is part of the design process?” [285-293]. She argued that inclusive rhetoric often remains disembodied, focusing on infrastructure and data without addressing lived impacts on women, children and marginalised groups [294-303]. Kumar suggested drawing on feminist, women’s studies and development studies to interrogate power dynamics and avoid repeating historical development failures [285-303].
Merve Hickok broadened safety to encompass human rights, democratic values and the rule of law. She argued that the prevailing safety narrative is an “evaluation” driven by powerful interests and called for a shift to mandatory, rights-based safeguards that protect citizens’ freedoms, dignity and democratic participation [242-245][271-281]. Hickok emphasized that such artefacts must be dynamic, cover multiple languages and cultures, and can be mandated by governments (e.g., the California precedent) [364-371].
Rasmus Andersen, advising leaders at the Tony Blair Institute for Global Change, stressed long-term foresight, urging policymakers to consider how AI will affect citizens in 2030-35 and to embed safety in public-service delivery [250-256]. He cited ongoing lawsuits concerning suicides among young people and the deepfake regulation example (the Elon Musk/Grok incident) as evidence that significant harms are already emerging [250-256]. Andersen noted that governments are the only arena where imperfect technical, civil-society and industry perspectives can be reconciled, making state-level coordination essential [322-324].
Tom Romanoff described the ACM’s role in translating technical safety concerns into policy action. He explained that the ACM’s policy office works with regional committees worldwide to convey researchers’ recommendations to legislators [261-265]. Romanoff introduced the “51% rule”, stating that regulatory change occurs only when support exceeds the 51% threshold, whereas 49% support is insufficient, and urged participants to move from “moderate” to active advocacy [326-338]. He highlighted the need for concrete artefacts – model cards, dataset cards and user-feedback mechanisms – to be mandated by governments [364-371].
During the audience Q&A, a participant requested multilingual, culturally aware model-card, dataset-card and system-card evaluations. Hickok responded that such artefacts must be dynamic, cover multiple languages and cultures, and can be mandated by governments, citing the California precedent [359-363][364-371].
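One way to read that exchange concretely: extend the same card per language, so that “dynamic” coverage becomes a queryable record of which evaluations were run where. Again a hedged sketch in Python; the evaluation names and statuses are hypothetical, not a mandated format.

```python
# Hypothetical multilingual slice of a model card: one entry per language,
# recording which safety evaluations were run and their outcome.
multilingual_card = {
    "en": {"toxicity_eval": "passed", "deepfake_misuse_eval": "passed"},
    "hi": {"toxicity_eval": "passed", "deepfake_misuse_eval": "not_run"},
    "sw": {"toxicity_eval": "not_run", "deepfake_misuse_eval": "not_run"},
}

def unevaluated_gaps(card: dict) -> dict:
    """Per language, list the evaluations a regulator would see as open gaps."""
    return {
        lang: [name for name, status in evals.items() if status == "not_run"]
        for lang, evals in card.items()
    }

# A jurisdiction mandating disclosure could require this list to be published
# (or empty) before a model is released there.
print(unevaluated_gaps(multilingual_card))
```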
Jeanna Matthews posed a provocative question about whether history shows that AI will automatically benefit everyone or whether enforceable “musts” are required. She warned that good intentions alone are insufficient and that without binding safeguards, “people won’t go to jail when they do bad things with AI” [266-270][359-363].
Finally, Virginia Dignum synthesised the discussion, reiterating that safety must move beyond technical robustness to an inclusive, multidisciplinary approach that addresses governance, institutional capacity and societal impact [104-105]. She announced the intention to develop a collaborative AI-safety governance model within the next year and to produce a post-summit report with concrete recommendations [375-379]. The session closed with a shared acknowledgement that achieving inclusive, accountable AI safety will require ongoing dialogue, concrete standards such as multilingual model-card disclosures, and sustained advocacy from both technical and policy communities [359-363][364-371].
Overall, the panel reached strong consensus that AI safety is a socio-technical challenge demanding multidisciplinary governance, inclusive design, systematic measurement and outcome-oriented regulation. Points of contention remained around the primary locus of safety (technology versus use), the preferred horizon for measurement (long-term longitudinal studies versus immediate trade-off reporting), and whether coordination should be led by governments or multistakeholder bodies such as the ACM. Agreed-upon action items include finalising Mozambique’s AI strategy and data policy, launching an ACM-sponsored journal on AI measurement/metrology, drafting a post-summit report with concrete recommendations, and urging governments to require multilingual, culturally aware model-card disclosures. Unresolved issues – operationalising inclusive governance structures, defining legal liability for harmful AI outputs, and balancing rapid innovation with the time needed for longitudinal safety studies – were identified as priorities for future research and policy work.
Thank you. If you just want to stand here in front, they want to make a picture of all of us. Yes, you have to sit there. Okay. Good morning, everybody. Thank you very much for being here. My name is Virginia Dignum. I will be co-hosting this session with my colleague Jeanna Matthews there. We both are the chairs of the Technology Policy Council of ACM. And today we are here to discuss how to move beyond technical safety, looking at aspects of multidisciplinarity, governance, and real-world
impact. Across global AI discussions, safety is too often framed in technical terms: model alignment, red teaming, benchmark performance, frontier containment, and so on. These tools matter, and their further development is crucial. But they don’t address the core question, or at least one of the core questions: what determines whether AI systems produce human and societal value or harm in real deployment contexts? That’s what we are going to discuss in this session. AI systems, as we all know, do not operate in isolation. Their impact is shaped by deployment context, by governance capacity, by incentive structures, and by the lived reality of the communities that use and are impacted by these systems. As such, AI systems do not fail simply because of flaws in the model architecture or in the data or in the alignment technique.
They fail, or they produce harm, because they are embedded in institutional, economic and political systems. So we will have an open discussion with the panelists. It will be two rounds of panelists. And I would like to start by inviting Dr. Lourino Chemane, who is the chairman of the board of the National Institute of Information and Communication Technology in Mozambique, where he is at this moment leading the national strategy on AI for Mozambique. Please.
Thank you. I would like to start by thanking you for the invitation to join this panel, and also to congratulate the government of India on hosting this AI Impact Summit. Going directly to the topic of this panel: as part of our exercise of crafting the national AI strategy, we looked at this topic of safety, and we look at it from the policy formulation point of view. For us, safety is the protection of people, not only systems. So AI governance must prioritize human, social, and institutional impact, going beyond technical metrics such as robustness, accuracy, or algorithm alignment. We also look at it from the perspective of multidisciplinary governance, grounded in the real-world context of use of AI.
For us, effective AI policies require input from law, social sciences, education, labor, ethics, and affected communities. So the inclusion of the people, and how they will feel safe in using these technologies. We also look at continuous human oversight and institutional accountability. People must know what’s in the black box: how systems are designed, whether they’re functional or not, and whether the decisions made by the algorithm, and the factors affecting their lives, have taken their views into consideration from the design phase. We also look to the protection of children, young people and women. From the studies that were conducted, women, children and youth are the first victims of bad applications of AI.
We also look at ethical and social assessment. Mozambique is one of the pilot countries adopting the UNESCO principles on the ethics of AI, and we are also looking at the dimensions defined by UNESCO in this perspective. Sharing what we are doing in the country now: in Mozambique we are drafting, as I mentioned, our national AI strategy with the support of UNESCO, and I thank Professor Virginia, who is the leading expert in our team, with the contribution of other experts from UNESCO. We are also drafting our data policy and its implementation strategy, because we believe that data is a fundamental element for AI systems. We are reviewing our national cybersecurity strategy; the data that we are collecting now show that there are already cybersecurity-related problems arising from early use of AI models.
We just adopted in Mozambique the regulation for the construction and operation of data centers, and also the regulation for cloud computing, because we believe that infrastructure is a fundamental and key element for the sovereignty of our country when it comes to safety, but also, from the policy point of view, for the democratic system and all other dimensions. But we also look at it from the digital government point of view. So we are also reviewing our data interoperability framework to make sure that, in adopting AI in the public administration, we address our main objective of improving efficiency and efficacy in delivering public services. For us, these are the elements that will be contained in the overall digital transformation strategy that, if everything goes as planned, will be approved by our government during
this year. And we are learning a lot at this summit, gathering important elements that will help us to uplift and improve our work in crafting these elements. Thank you for the opportunity to be part of this session.
Thank you very much, Dr. Chemane. I understand that you have to move to another session, so feel free to leave whenever you need to go. We understand the complexities of the program. Now I would like to ask Dame Wendy Hall, Regius Professor of Computer Science, Associate Vice President and Director of the Web Science Institute at the University of Southampton, and also a former member of the United Nations high-level expert advisory body, to give us some provocative statements. They will be. Good. Provoke us.
I’m fed up with just toeing the party line. So I will… I have to first apologize, because I have to leave at 11. I’m supposed to be on three panels at the moment, and I also have a lunch date at midday in town. So, that’s my morning. I want to say, I think, three things. One is, what’s really… Four. If you know Monty Python, nobody expected the Spanish Inquisition. Anyway, so first of all, it’s been wonderful to be in India. I love India, and I have a love-hate relationship with this summit. It’s too big. There’s too much going on, and not enough actual real debate about the core. There’s going to be some sort of platitude statement come out today.
Yeah. And I’ve just come back from the UN, where our advisory board and the new scientific panel got together. They’ve got a panel going on. The dialogue that’s starting at the AI for Good conference in Geneva in July, we hope, will be a real dialogue. I don’t know what form it’s going to take yet. But we have to knock the world leaders’ heads together. Now, I’m going to say something which also really struck me. Thank you. Is that working? Yes? At this conference. You know, in India, AI means all-inclusive. But 50% of the population weren’t included yesterday: the women. Right? There were no women.
The CEOs of every country, every company – there was one lady CEO, from Accenture, I think. There were a couple of ladies on the panels at the end. It was all men, the alpha males of this world. The world leaders that spoke, the CEOs that spoke: this world is dominated by men. And my mantra has always been, in terms of the lack of women, and some other diversity points as well, but mainly women: if it’s not diverse, it’s not ethical. People don’t really understand what that means. What it means is, if you haven’t got a diversity of people discussing a problem, how are you going to actually sort out the biases, if you haven’t got women at the top level making these decisions, trying to set up the guidelines? I mean, your comment was, yeah, we want to make sure of the safety of women and children. Well, let’s include the women and children in the discussions. My third point is that we are watching – I mean, I’m very into watching these experiments; I did it all through the web – and we need to learn how to monitor what’s going on, so that we can say what is the right direction to go in the future.
It means collecting data and evidence and doing longitudinal studies, and it takes time. But take, for example, what Australia is doing with social media. We’ve heard at this conference several other… for teenagers. I mean, didn’t Macron… Who was there yesterday? Macron said under 15 in France. Our Prime Minister, who constantly changes his mind, so I don’t suppose it will happen, but he’s talked… Sorry, that’s a joke for any Brits in the audience, but there aren’t many. He’s saying 16 in the UK; someone from Spain is saying 16. There will be unintended consequences of that. Making a ban like that without thinking about the nuances of… Well, what happens if… Well, first of all, the kids are ingenious enough to get round it.
And then they’re back on the dark side of things again, even worse than before, because they’re doing it in secret. What happens when they start to use social media? How do we train them to do it properly? That’s my worry about a ban like that. I mean, it’s very brave of Australia to do it first, and we can watch; they’re saying in six months’ time they’ll have some evidence of how many under-16s are still on social media. But the behavioral issues take much, much longer to explore than that. And we have to get over this fact that, whilst the technology is going on apace, because the alpha males are driving it, worrying maybe only about technical safety, we can’t say, well, it’s all going too fast, we can’t do anything. We have to study this stuff. And I think this is what I want the ACM to do. I talked in my keynote talk, this is my last point, by the way, on whatever day it was, Wednesday, on the main stage, about two things happening in the UK. One is our National Physical Laboratory, which is the sort of equivalent of NIST in America, has just launched, with government backing, a centre for AI measurement. The other is the AI Security Institute in the UK, and the other security institutes that are growing up around the world. That network is now being called, largely driven by the US, because Trump doesn’t want to call it anything to do with safety – I can’t believe I just said that; anyway, he was the man that drank bleach in Covid – they’re calling their network the network for AI measurement. And I think this is a breakthrough. I mean, I love AI for science, but we need to think about the science of AI. And that’s socio-technical, and I’m starting to call these things social machines, as we did on the web. That came from Tim Berners-Lee: the idea of technology and society coming together to create artefact systems that wouldn’t have existed if they hadn’t come together. The technology doesn’t understand society at the moment, and most of society doesn’t understand this technology, but together those two systems will create socio-technical systems, or social machines, and I want to build a science of studying social machines. And it will be called AI measurement, or AI metrology. I love that word; I’ve learnt to say it. Everything’s Greek to us. I love the yogurt, don’t you love Greek yogurt? So sorry, I’m finishing there. AI metrology. And we’re going to launch – I’m chair of the ACM publications committee, or co-chair; he’s president – the first journal in this area, and it will be associated with pulling together work and sharing the data that people are collecting to…
Thank you, Wendy, very important points. And again, you can leave when you have to; we understand that. So, for the rest of us on the panel: we started the session talking about how AI safety needs to be more than just technical robustness. I love your idea of the social machines, of this AI metrology. Now I would like to bring you into the discussion. Both Dr. Chemane and Wendy Hall gave us examples of issues that we really need to include in going beyond this idea of technical robustness. Even if systems perform exactly as they have been designed, and safely designed, they will still probably be causing harm, which is not just a technical failure but also a failure of inclusion, a failure of imagination. So I would like to get your opinions on where you think we can start changing the discourse, from a purely technical approach to a broader, inclusive, societal, institutional approach to the discussion on AI safety, on AI measurement, and so on.
And I would like to start this question, which is for all of you, with Professor Yannis Ioannidis, who is the current president of ACM and also a professor at the University of Athens.
Thank you very much for having me on this panel. I’m a technical person, very sociable, but technical, that’s my expertise. So I want to separate the issue of safety of AI from the safety of AI use. In my technical mind, there is the AI technology, which is the algorithms, the models, and so on, as distinct from the use of this technology, the use of the software that is based on AI. And we are using this software both at the beginning, with the input that we give it, and at the output, when we create what is called an artificial intelligence, an agent, and so on, to do this or that or the other.
The technology – there’s no social issue in the safety of the technology itself. It’s like the car: whether it’s working or not. There is no issue of safety. And innovation in that regard has to be let free, like the human mind and all the innovators, to progress. Robustness and not having bugs are an issue there, but it’s a day in the park for us software engineers and computing scientists. The use is the important thing, and sometimes the key thing that people are talking about is the end result, the model. We put it in the judge’s hands, we put it in the doctor’s hands, we put it in the youth’s hands in terms of social media, and so on.
This we have to work on, measure, regulate potentially, and in any case all sciences, like it was said before, especially the humanities, philosophers, ethicists, legal people, cognitive scientists and so on, have to come together to address this. But there is also the input side, which is again humans doing it. Humans are determining the first parameters where the systems start to be trained. The data that we feed it, it’s again humans that are choosing it. And as much as we have to regulate or measure or think about the end result – the model, the humanoid or non-humanoid robot that is telling us to do this or that, or the agent – at the same level of importance we have to think about what to do with what comes in, and which humans are using it, which humans are feeding it. I think safety must start from there. We should not let the input side run free; even at that level, we have to have the different sciences, the different technologies, civic society represented there. And having an AI use whatever data we happen to have, or whatever data generates billion-dollar industries, is wrong. I mean, there is a right and a wrong here, and we have to be on the right side of that. So, as a quick wrap-up, so that others can express their opinion: technology should be running free, but both input and output and result should be in the…
Thank you, Wendy. Thank you, Dr. Chemane. Thank you. See you soon. Okay. Okay, let’s continue the discussion. Sara, Sara Hooker, you are the co-founder and president, I believe, of Adaption Labs, a very young company, I believe. You have been before with Cohere and with other developing organizations. What do you think about this balance, or tension, between technical robustness, the technical safety measures, and the need to understand more of the environment, the context, the social context in which systems are built? And how can we technologists, those that develop like yourself, be developing systems while being aware of this type of tension, and also of the insertion of the systems into very concrete, real-world domains?
And typically it’s been how do you build extremely large systems at the frontier of what’s possible. I think it’s interesting. I’ll share a few things. So one, I think what Wendy was getting to is that one of the biggest signals of whether you actually care about safety is what the forms of prestige and power look like. I think that’s mainly her comment. She’s saying, you know, we are at the pinnacle of where we all gather to discuss these things. And the way resources have actually been allocated doesn’t show that people are serious, which I think is fair. I think you have to look to the surrounding environment to understand if people are serious or not about safety or whether it’s just a panel title, candidly.
And maybe today it’s just a panel title. I think in general my philosophy about these forums is that you have to look six months out to actually get a signal of what has happened. That doesn’t mean that they’re not critical. I frankly don’t know if the expectation should be anymore that we have universal rules for AI. It’s not clear to me that that should be the outcome of these forums. So I think decidedly, if you’re going in with that expectation, you’re going to be very disappointed because I don’t think that’s going to happen at this forum or at the next one. But I do think it’s worth asking, well, where are we going as a conversation about safety and the precision of it?
Because for me, that’s the most interesting part. Time is very valuable. It’s our most precious resource. And so for me, the more precise the conversation, the better. I do think if I look at the overarching arc from Bletchley to now, we’ve had now four summits. We’ll have the fifth. It’s worth asking, has it become more precise? Candidly, and thank goodness, yes. I still remember Bletchley, where it was all about existential risk and six months from now, and there were protests and hunger strikes from people who thought machines were taking over, but no precision to the conversation, no accountability for where these timelines were coming from. And then I look to now, and now we have a very messy conversation about safety.
Certainly everyone has a different view. It’s still a blanket term, but at least it’s more accountable to what is the real -world impact of these conversations and the technology that we build. Because when I started my career as a computer scientist, we were just in research conferences. I mean, I think the fact that ACM is so well represented on this panel speaks to the origins of, like, you know, a very narrow group of people who work in a very academic community, and now our technology is used everywhere. So it’s a much more important conversation to have. So, one, I think we have gotten more precise, but it’s still very murky what people mean. Here’s the other thing I’ll say.
I think there’s often desire in these conversations about where technical meets the ecosystem to say, oh, well, safety has to be everything to everyone. And, frankly, that’s not a precise conversation either, because the truth is there are tradeoffs. When you build systems, there are tradeoffs. And too often when these conversations enter this arena, there’s a misconception about the sheer difficulty of how do you actually impose constraints on these systems. So the other thing I’ll say is the biggest thing that has to come out is an understanding of what you give up, because you give up something. The big things for me are, you know, I work a lot on language. My big ask is just report what languages model providers cover.
Report, essentially, what they say the safety parameters are and are not, and report what they don’t cover or haven’t tested for. This sounds like a simple ask, but I think this is actually quite precise. And what it establishes is: what have we given up? What are you confident about? What have we given up? There are many versions of this, but too often, and this is my ask, in conversations like this we end up just circling around and saying we want safety, we need perspectives of everyone in the model. And the truth is that’s also a naive statement, because it is almost certainly the fact that there will be some trade-off. Someone will not be represented.
Someone will be represented. And actually, what I think these forums are very useful for, having us all in the same conference, is about galvanizing ecosystems where you can make your own constraints and trade -offs, but also having a discussion about, you know, for the models that are being shipped that serve billions of people, we have these static monolithic models that are served the same way. What are the trade -offs that they have made, you know? And that’s, you know, as someone who’s built these models, there are almost certainly trade -offs in place. So we need to understand the state of the world as well as where we want to go. And it’s okay if there are clearly, you know, things left out.
It’s more that they have to be stated out loud. That’s my wish list, yeah. So maybe I’ll leave it there, and I’ll pass it on. I think you were next. Go for it.
Thank you very much. Thank you, Sara. And indeed, next one: Jibu Elias. You are a researcher, but you are also an activist who examines how technology and innovation institutions acquire knowledge, labor, and legitimacy. So help us make sense of what safety, AI safety, means for society. That seems to be what you do.
I was more interested in the real-world consequences of the panel title, but wonderful conversations by Sara and Wendy and all here. So, when I look back at how technology has shaped my understanding of the world, I feel like an idiot, because I grew up in a time watching animated shows like the Jetsons and all these futuristic shows, believing that the more advanced the technology gets, the better our world will be. I grew up as this idealist kid who thought that when AI comes, there will be no inequality. I was an AI kid back then. And nowadays, when I look at these things… I mean, there has been phenomenal work done by computer scientists, like the people present here on the panel, Sara and everyone, right?
On the technical aspects of things. But more and more, we are seeing AI becoming more political. It’s becoming a larger sociopolitical construct in general. And what concerns me more is its exploitative and extractive nature. I think Sara mentioned Bletchley, where the talk was all about existential risk. But now I think we are all at a point where we are agreeing that the accumulated risks have become more worrying at the same time. I’ve been tracking people who’ve been using these tools, people who’ve been impacted by them, and those who were excluded from the benefits of this kind of technology, right? If you go around states like Telangana, Chhattisgarh, Jharkhand, there are big groups of tribal populations.
You know, their languages are not represented in Gemini or anything, right? And I know everybody wants to impose Hindi on all of us, but sorry, Hindi is not the national language of India. But what about them? How do they get access? So more and more, what I’m seeing is the socioeconomic divide becoming wider, especially in countries like India. And, you know, it’s fascinating that we’ve been celebrating the data centers that we’ve been building. I mean, I had firsthand experience of a data center that’s very much celebrated in Telangana, in a place called Make a Good. I don’t want to mention the company associated with it, but how it was built, how the people were manipulated, how the groundwater was being extracted, right?
In a place where there is water scarcity, you know. And when I asked the company – you know, hey, this happened, and I have a close association with that organization – they said, we interacted with the community leaders. So what I did, I reached out to the sarpanch. He has no idea what they mean. So essentially there’s a lot of, you know… I mean, in India we know what that means, reaching out to community leaders: bribing the politicians. But those are the larger things I’m worried about. And the people who are using this technology – you know, now some people are talking about terms like AI psychosis. I don’t know how valid those terms are. But it’s fascinating to see. Me and my executive director at the Mozilla Foundation have been chatting about how elderly people are using these models.
It’s very fascinating, and it’s worrying at the same time. You know, we often put our attention on younger folks. But, I mean, it’s funny at the same time, but still. So my larger question is about the way forward. Like yesterday, the gentleman from the US was telling us that everyone should use a US AI stack. I think people in Denmark will have a good idea of how the US treats its strategic partners. Yeah. So my larger question is: where are we headed, right? Are we still going to have this extractive nature, you know, for the data annotation workers who are building these models, right? So I will stop here, looking forward to the next level of conversation.
Unfortunately, we have our second round of the panel, and, like everything we all are complaining about, it will happen: we all say our thing, and the dialogue will need to be done outside in the corridor. But we really hope, after this meeting, to try to combine all that has been said into some kind of ask or report. Anyway, now we are moving to the second part of the panel. We were all going to be in the same panel, but there weren’t enough chairs, so we are splitting into two. Patience with us. You are proxy. Okay. Okay, everyone, thank you so much for being here in the second part of our session, and thank you to all of the panelists who are joining me here on stage. I think we’re going to do something a little different than the first panel did. I would like everyone to just quickly introduce themselves. Neha, would you start?
Hello. Check. Okay. Hi, everyone. I am Neha Kumar, and I’m an associate professor at Georgia Tech in the School of Interactive Computing. I’m also president of the Special Interest Group on Computer-Human Interaction. So this summit is really a coming together of many different worlds for me. I actually grew up in Delhi, so it’s been about coming home, but also a lot of people have been coming to me for a long time, and I’m really excited to be here. A lot of the conversations we’ve been having are conversations that are really very, very active right now in the discipline of human-computer interaction, HCI, as some of you might know it, and it’s great to see how central human centricity is to what we’ve been discussing.
And third, something that’s been much closer to my own area of study is really looking at HCI and technology use in the context of social impact. And this has been named in many different ways over the years: social good, social impact, societal impact, public interest, whatever you want to call it. But really, it’s an area that we’ve been studying for many, many years before AI was on the scene. And so I would say that we’re looking at multidisciplinarity in this panel, and to me, there’s a lot of learning that could be happening from many of these disciplines that have been actively looking at some of these questions. Agreed, the platform that we’re looking at is different.
It’s unprecedented in many ways. At the same time, there’s a lot that we have to learn from as well. So I’ll stop there.
Thank you, Neha. Thank you, Jeanna. Merve Hickok?
I’m the president and policy director for Center for AI and Digital Policy. We are an independent think tank working globally at the intersection of AI policy and human rights, democratic values, and rule of law. So I would like to take a more expansive view of safety and governance at large. More to come on that. Thank you.
Rasmus?
Yes. I think this works now. Yes, my name is Rasmus Andersen. I work with the Tony Blair Institute for Global Change, where I advise leaders around the world at the prime ministerial or presidential level, but also at the line-minister level, on navigating AI. What does it mean for them? How do they both deliver results to citizens with AI and also avoid harm to their citizens? And so the question of safety comes up a lot, but it’s also usually not at the top of leaders’ minds. For me, it’s really about helping them realize their long-term, informed self-interest: what is the world actually likely to look like in 2030, in 2035?
How can you best make sure that your country and your constituents and citizens are in the best possible position as the world will change very rapidly? Thank you.
Tom?
Is this one working? Great. I am not James. I am Tom Romanoff. I am the director of policy for ACM, where I help manage the policy committees; Jeanna and Virginia chair our global committee. We also have regional committees across the world, including the United States, Europe, Asia, India, Africa, and the APEC regions. My job at ACM is to help the computer science folks translate their recommendations, on harms or issues that they see in the technology, to policymakers, and to engage those policymakers on behalf of ACM. Before that, I was at a think tank in Washington, D.C., so I worked with Congress and have been working in tech policy for many years now.
Okay. So in the interest of time, I’m going to get right to a very provocative question, which is: we’ve been seeing wellness for all, happiness for all, in the presence of a fairly extractive and exploitative potential. Does history tell us that it’s going to be great for everyone, that it just works out? Or do there have to be some musts, not just good intentions or shoulds? If we are not seeing things like recovery, retribution, remuneration, if we don’t see people going to jail when they do bad things with AI, are we serious about AI safety?
So no, history does not show us that it’s going to be cool. And history is definitely a good indicator, which means that we need to fight harder this time around and try to get that level up, right? History is always a story of the powerful, of the winner, of who gets to decide the narrative. And we are seeing that again today: the narratives around what is safety, what should be the evaluations, where should the money go, whether we should regulate or not, whether it should be “should” or “must” – it is always the narrative of the powerful. And as Dame Wendy Hall mentioned, the representation was very much the same kind of people throughout the higher-level conversations yesterday.
So I think, first and foremost, the narrative needs to change in safety as well. So far, I think it’s been an evaluation; the most important safety issues have been around nuclear, cybersecurity, chemical weapons, etc., or existential risk, which is another story. Yes, maybe we should talk about those. But there are real consequences right now for people’s rights, freedoms, ability to live with their dignity, and people’s right to participate in democracy and democratic processes. All of these are undermined, and as an organization where those three issues are in our mission, we are seeing this more and more under pressure. So this is the time to get your voices up, as citizens, as consumers, as professionals in your own right, and try to change the narrative.
Because otherwise it’s going to just be a repeat of history.
Well said. Neha?
Yeah, I think coming back to something that Wendy said, right, about being all-inclusive at the same time as having no women around in decision-making places, I think that that is something we should really be thinking about. I mean, do we have a history of being inclusive? What inclusivity have we been practicing in our innermost circles? It’s easy enough to say that the poorest of the poor should have access to this AI, but how are we doing on being all-inclusive? So I think there are lessons from disciplines such as feminist and women’s studies that we can learn from to really ask the who question. Who is making decisions? Who is being benefited? Who is part of the design process?
That’s one. Second, I would say, learning from design, which is one of the disciplines that I’ve trained in: thinking about zooming out is great, and that’s where we have value. We talk about inclusivity. We talk about diversity. We talk about all these great-sounding words. But then when we zoom in, what are we actually doing? I think that a lot of the dialogue that we’ve been having is in this disembodied state, where we talk about infrastructure, and we talk about data, and we talk about interoperability, and we talk about processes. But who is benefiting? The panelists before me also talked about aging, so people who are… more vulnerable – where are they in the conversation?
And lastly, with regard to development studies: thinking about what the benefits of development really are. We want development and impact, and that’s what we’re talking about here for five days at the summit. But we know from historical perspectives that development hasn’t worked out so well for so many people and so many countries across the globe, and how are we making sure that we don’t repeat those same mistakes? I think these have to be very much part of the conversations, so that it’s safety of the human, of the body, of our values, of our communities, our social structures that are so critical to us. Thank you.
Rasmus?
Yeah, I think we’re not seeing people go to jail. I’m not sure we have seen something yet where that’s really the case. There are lawsuits ongoing on suicides among young people, et cetera. But I do think that we will see a moment pretty soon where something does go pretty wrong, and then we’re going to have a decision on what we do with that. Some people, and this is a very dark parallel, some people said we needed to have World War II to have the UN and the other systems that were put in place to avoid that happening again. And, yeah, I think it’s a matter of time until we get something, and we will have to make those decisions.
And currently, I’m not super confident that we will interpret those events correctly, that we will have a realistic view of what might change and how we might prevent them from happening again. And it could be people leveraging them, organized crime. I mean, very recently we’ve successfully had Elon Musk and Grok stop allowing people to create non-consensual deepfake nudes, which had happened in the millions. So that’s not small, but we’ll have much bigger things than that. And I do think, still, when that happens, we will have to think about both pros and cons, and costs and benefits. When we regulate things, we don’t regulate risks down to zero.
You know, when you get into a car, there’s a risk something will happen, but you still need to get places. And with safety, we do have to take some of the same lessons, as Merve mentioned, from nuclear, from flights. You know, it used to be that when you got on an airplane, something like 200 or 1,000 times more of them crashed than today, and we’ve reduced that level of risk very far down. And I do think that the political level, while we need technical inputs, is the only force in the world that can really take all those considerations together and think about the partial perspectives that technical people have, that civil society has, that industry has.
Really, the only place it comes together imperfectly is at government, and that’s why it’s so important that we are here, however imperfect these summits are.
Tom?
All right, something a little different. I would like everybody in the room to raise their hand if you think safety is an important aspect of AI deployment. Great. Keep your hands up. Now, take your hand down if you think that safety should be enforced on the output of AI outcomes. Oh, wow. Okay. Take your hand down if you think that laws should apply to the outputs of AI rather than AI itself. Okay? All right, you can go ahead and put your hands down. It wasn’t as dramatic as I thought it would be. So I’m going to talk a little bit about the 49/51% rule. Across all political spectrums, no matter where you are in the world, there’s this idea that you only need 51% of the political willpower to start passing regulations, and 49% won’t get it done.
It applies in the business world as well: if you have 51% of the board control or equity in a company, you basically control that company, right? Lobbyists have an extreme incentive not to push anybody past that 51% or 49% in order to have an action in the political space. So across all of our governments in here, there are – I don’t want to say private sector, because they’re important, but there are private entities that would like to have an action in the regulatory space. And it’s not until 51% of those politicians, or that political regulator, or that regulation reaches that threshold that you’ll start seeing some changes. And you see examples of that with the example that my colleague here mentioned, with deepfakes or nudification applications causing worldwide outrage.
And you started seeing governments across the spectrum say, that’s something that at least 51 % of our population does not want. And so they start moving towards regulating or enforcing current laws to punish that kind of action. And so I say all this because there is also this conversation around moderates, right? We don’t know where the technology is going. We have computer scientists. We have civil society screaming about the need for action, for security within the stack, right? And the rest of the world are moderates. They’re still engaged. They’re still engaging this AI. They’re still figuring out what it can be doing. And it’s not until some kind of action happens, some kind of consequence, some kind of…
issue happens that people wake up to the folks who have been screaming about it for years. So what I encourage everybody in here to do is: don’t be a moderate. Pick a side and start encouraging your politicians, your family, your community. Educate them. Figure out ways to communicate the very heady technical aspects of security within the AI stack to the common person, to the person who can understand it. That is when you’re going to start seeing the regulations roll out.
I think that’s a great place to end, because I think we are not going to get happiness for all and wellness for all unless we insist. We are all going to have to insist; it is not going to come automatically. So asking each of us what we are going to do to insist is, I think, a really good place to end. We started this session a little late, but I have been told that they would really like us to try to end on time, so I will leave it there. We would love to engage you in conversation out in the hall after this session is over. Thank you to all the panelists in the first session, and also all of us up here. Thank you so much. Thank you all.
Indeed, I think there is actually time for one or two questions. Maybe now there are too many questions and I have to vote. Okay, sir there, and the lady there.
So, a very short question. Actually, it’s not a question, it’s a suggestion to the gentleman with the beard on that side, whose name I missed. Yeah, Jibu: go get some life at Sarvam. I think that agenda of Hindi and other languages is going to die very soon, so you have to get some life into that Hindi imposition and all those things; nobody will impose it down the line. Sure, thank you.
Thank you so much for the provocative discussion; this is what I was hoping to get from the India AI Impact Summit. My question is about how regulatory artifacts like dataset cards, model cards, system cards, rigorous evaluations and user feedback can now be extended to cover multiple languages, multiple contexts and multiple cultures, where I think a lot of hard work is being done as well.
So a model might perform really well in English, but we know that these systems are not as safe or secure, and do not perform that well, in many languages that are not English or as well‑resourced as English. So, great question: these artifacts need to be dynamic and they need to reflect languages. And I will also say, just very briefly following up on this, that these are things governments can require of model providers before they release models in your jurisdiction, and so far they are not. Thank you very much. We could insist. We need to insist. California, for example, started this. I just want to…
I think we can just continue the discussion, and I hope we will; today is just a start. I also hope that, together with all the panelists, we will be able to create some kind of model for next year with the measures, and we will hopefully facilitate and continue this discussion. I would ask all the panellists of the first and of the second round to stay here for a memento from the organisation, and I would like to thank you all for being here, and to thank all the panellists again, of course. Thank you so much. Thank you.
Virginia stresses that AI safety cannot be limited to technical robustness, accuracy or alignment. It must incorporate multidisciplinary governance, societal context and real‑world impact to protect people.
“AI safety is often reduced to technical notions such as model alignment, red‑team testing and benchmark performance, but these tools do not address the core question of whether AI creates societal value or harm; AI impact is shaped by deployment context, governance capacity, incentive structures and lived realities.”
The knowledge base describes the summit discussion as moving beyond purely technical approaches to AI safety toward multidisciplinary governance frameworks that address real-world societal impacts, confirming Dignum’s point [S1].
“The summit lacked gender diversity, with women under‑represented and panels dominated by ‘alpha males’.”
An IGF 2023 workshop notes a gender disparity in standards work and limited involvement of diverse stakeholders, echoing Hall’s criticism of gender imbalance at the summit [S103].
“Safety concerns lie primarily in the data‑input stage and the deployment‑output stage, requiring regulation and multidisciplinary oversight involving humanities, legal, ethical and civic‑society experts.”
UN commentary highlights that AI governance must be multifaceted, including prevention, mitigation, human-rights-based policy and community engagement, providing context for the need to regulate data and deployment phases and to involve a broad set of disciplines [S56]; the summit’s broader call for multidisciplinary governance also supports this view [S1].
The panel displayed strong consensus that AI safety cannot be reduced to technical robustness alone; it requires multidisciplinary governance, inclusive design, systematic measurement, and outcome‑oriented regulation. Participants from technical, policy, civil‑society, and regional backgrounds converged on these themes, while emphasizing the need for concrete tools (model‑cards, AI metrology) and long‑term foresight.
Consensus was high across most speakers, indicating a shared understanding that future AI governance must integrate technical, social, and legal dimensions. This broad agreement creates a solid foundation for developing collaborative frameworks, standards, and policy recommendations in the coming year.
At the same time, clear disagreements emerged around the primary locus of safety (technology vs. use), the preferred measurement horizon (long‑term longitudinal studies vs. immediate precise reporting), and the institutional arena best suited to coordinate diverse stakeholder inputs (government vs. multistakeholder bodies such as ACM).
Consensus on these points was moderate to high: while participants share the overarching goal of safer AI, they diverge on methodological and institutional pathways, which may impede the formulation of unified policy recommendations and could lead to fragmented governance approaches.
The discussion was shaped by a series of pivotal interventions that moved the conversation from a generic, technical framing of AI safety to a richly layered, socio‑political analysis. Virginia Dignum’s opening set the agenda, but it was the successive challenges (Lourino Chemane’s people‑first definition, Dame Wendy Hall’s critique of gender exclusion and call for AI metrology, Sara Hooker’s exposure of power‑driven safety signals, Jibu Elias’s concrete examples of extractive harms, and Tom Romanoff’s 49–51% rule) that acted as turning points, each redirecting focus, introducing new concepts, and heightening the urgency for actionable governance. Collectively, these comments deepened the panel’s understanding of safety as an interdisciplinary, inclusive, and politically contested issue, steering the dialogue toward concrete accountability mechanisms and a call for activist engagement.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.