From Technical Safety to Societal Impact: Rethinking AI Governance

20 Feb 2026 13:00h - 14:00h


Session at a glance
Summary, keypoints, and speakers overview

Summary

The panel opened by stating that AI safety is often framed only in technical terms such as model alignment and benchmark performance, but the discussion must move beyond these to address multidisciplinarity, governance, and real-world impact [14-20]. Speakers emphasized that AI systems do not fail solely because of model flaws; their harms arise from the institutional, economic, and political contexts in which they are deployed [21-24].


Lourino Chemane argued that safety should be understood as the protection of people, requiring AI governance that integrates law, ethics, social sciences, education, labor, and the voices of affected communities [31-36]. He highlighted the need for comprehensive data policies, cybersecurity measures, and interoperable digital-government frameworks to secure national AI strategies and infrastructure [43-48].


Wendy Hall criticized the summit’s lack of gender diversity and warned that safety must include systematic monitoring, longitudinal studies, and the creation of an AI measurement and “social-machines” metrology to capture socio-technical effects [78-84][89-103]. Yannis Ioannidis distinguished between the safety of AI technology itself and the safety of its use, calling for regulation of both inputs and outputs and for multidisciplinary oversight [108-119][120-124]. Sara Hooker noted that safety conversations have become more accountable to real-world impact, yet “safety” remains a blanket term, stressing the importance of acknowledging trade-offs and transparently reporting which model capabilities and safety tests are omitted [135-146][147-166][167-185].


Jibu Elias warned that AI is increasingly a sociopolitical and extractive force that widens socioeconomic gaps and can cause environmental harms such as water depletion from data-center projects [192-207][208-224]. Neha Kumar underscored the relevance of human-centred HCI research and called for genuine inclusivity, asking who designs, benefits from, and decides AI systems [233-239][285-303]. Merve Hickok broadened safety to encompass human rights and democratic values, arguing that historical power narratives must be challenged to protect citizens [242-245][271-281]. Rasmus Andersen stressed the importance of advising political leaders to consider long-term societal impacts and of embedding safety in policy before harms materialize [248-256]. Tom Romanoff described ACM’s role in translating technical concerns into policy recommendations for lawmakers worldwide [261-265]. Jeanna Matthews posed a provocative question about whether good intentions alone suffice, highlighting the need for enforceable safeguards and accountability [266-270].


The session closed with Virginia Dignum asserting that achieving inclusive, multidisciplinary AI safety will require ongoing dialogue, concrete governance tools, and collective insistence from all stakeholders [375-378].


Keypoints

Major discussion points


Broadening AI safety beyond technical metrics – The session was opened by stressing that AI safety is often framed only in terms of model alignment, robustness, and benchmarks, but real-world value or harm depends on deployment context, governance, and institutional factors [14-19]. Panelists echoed this, noting that safety must prioritize human, social, and institutional impact and draw on law, ethics, education, and affected communities [31-34].


Inclusion and diversity as essential for safe AI – Multiple speakers highlighted the systematic exclusion of women, children, and marginalized groups from AI decision-making. Wendy Hall pointed out the all-male composition of the summit’s leadership and argued that “if it’s not diverse it’s not ethical” [78-85]. Jibu Elias warned that tribal languages are omitted from major models, illustrating cultural exclusion [202-205]. Neha Kumar called for concrete answers to “who is making decisions?” and stressed the gap between inclusive rhetoric and actual practice [285-293].


Policy, regulation, and institutional frameworks are needed – Mozambique’s effort to draft a national AI strategy, data policy, and regulations for data centres and cloud computing shows how governance structures shape safety [42-48]. Rasmus Andersen described advising governments on long-term AI impacts and the need to embed safety in public-service delivery [250-256]. Tom Romanoff explained ACM’s role in turning technical recommendations into policy actions [261-265], while Merve Hickok called for a broader view of safety that links AI policy to human rights and democratic values [242-245].


Measuring AI systems and acknowledging trade-offs – Wendy Hall introduced the concept of “AI metrology” – a science of measuring social machines and their societal effects [57-68]. Sara Hooker stressed that safety discussions must be precise, expose what has been sacrificed in model design, and require transparent reporting of coverage and omitted safety tests [164-176].


Urgent need for accountability and proactive enforcement – Panelists warned that history shows safety only improves after crises. Jeanna Matthews asked whether good intentions are enough, and Merve Hickok argued that narratives of safety must shift from optional evaluation to mandatory protection of rights [267-279]. Tom Romanoff illustrated the “51% rule” of political will needed to pass regulations and urged participants to move from “moderate” to active advocacy [326-338]. The session closed with a collective call to “insist” on concrete actions for inclusive, accountable AI safety [359-363].


Overall purpose / goal of the discussion


The panel aimed to re-frame AI safety from a narrow technical problem to a multidisciplinary challenge that integrates governance, policy, societal impact, and inclusive participation, and to generate concrete ideas for future frameworks, standards, and accountability mechanisms.


Overall tone and its evolution


The conversation began formally and optimistically, focusing on the need for broader perspectives [14-19]. It quickly turned critical, with speakers highlighting exclusion, tokenism, and the gap between rhetoric and practice [78-85][285-293]. As the dialogue progressed, it became constructive and solution-oriented, introducing concepts like AI metrology, trade-off reporting, and policy roadmaps [57-68][164-176][250-256]. The final segment adopted an urgent, activist tone, urging participants to move beyond discussion to concrete advocacy and enforcement [267-279][326-338][359-363].


Speakers

Virginia Dignum – Co-host of the session and Chair of the Technology Policy Council of ACM; expert in AI policy, governance, and multidisciplinary safety frameworks [S15].


Lourino Chemane – Chairman of the Board of the National Institute of Information and Communication Technology (Mozambique) and lead of Mozambique’s national AI strategy; focuses on AI policy, governance, and safety from a national-level perspective [S10].


Dame Wendy Hall – Regius Professor of Computer Science, Associate Vice-President and Director of the Web Science Institute at the University of Southampton; former member of the United Nations high-level expert advisory body; expertise in computer science, web science, and AI governance [S3].


Yannis Ioannidis – President of the ACM and Professor at the University of Athens; specialist in computer science and AI safety from a technical standpoint [S2].


Sara Hooker – Co-founder and President of Adaption Labs (formerly with Cohere); AI researcher focusing on large language models, safety, and the societal impact of AI [S1].


Jibu Elias – Researcher and activist examining how technology and innovation institutions acquire knowledge, labor, and legitimacy; concentrates on AI’s sociopolitical and extractive dimensions [transcript].




Participant – Audience member who raised a question about multilingual safety and regulatory artifacts; no formal title or affiliation provided [S11][S12][S13].


Neha Kumar – Associate Professor at Georgia Tech, School of Interactive Computing; President of the ACM SIGCHI (Special Interest Group on Computer-Human Interaction); expertise in human-computer interaction, social impact of technology, and inclusive design [transcript].


Merve Hickok – President and Policy Director for the Center for AI and Digital Policy, an independent think-tank working at the intersection of AI policy, human rights, democratic values, and the rule of law [S18][S19].


Tom Romanoff – Director of Policy for the ACM, overseeing global and regional policy committees; former Washington, D.C. think-tank professional who worked with U.S. Congress on tech policy [S20][S21].


Jeanna Matthews – Co-host of the session and co-chair of the Technology Policy Council of ACM; involved in organizing and moderating the discussion [S22].


Rasmus Andersen – Advisor at the Tony Blair Institute for Global Change, providing AI guidance to heads of state and senior ministers; expertise in AI policy advisory and strategic planning for governments [S23][S24].






Full session report
Comprehensive analysis and detailed insights

The session opened with Virginia Dignum reminding the audience that AI safety is often reduced to technical notions such as model alignment, red-team testing and benchmark performance, yet these tools “matter” but “do not address the core question” of what determines whether AI creates societal value or harm when deployed [14-20]. She argued that AI systems are never isolated; their impact is shaped by deployment context, governance capacity, incentive structures and the lived realities of the communities that use them, so failures often stem from institutional, economic and political embedding rather than from model flaws alone [21-24].


Dr. Lourino Chemane, chair of Mozambique’s National Institute of Information and Communication Technology, reframed safety as the protection of people, not merely of systems. He stressed that AI governance must prioritise human, social and institutional impact and be grounded in multidisciplinary input from law, ethics, education, labour, social sciences and the affected communities [31-36]. Mozambique is drafting a national AI strategy, a data policy and a cybersecurity strategy, and has already adopted regulations for data-centre construction and cloud computing to safeguard national sovereignty and democratic processes [42-48]. He also highlighted the need for interoperable digital-government frameworks to ensure that AI improves public-service efficiency while remaining safe [46-48].


Dame Wendy Hall criticised the summit’s lack of gender diversity, noting that “50% of the population weren’t included yesterday, the women” and that the panels were dominated by “alpha males” [78-85]. She introduced the concept of “AI metrology” – a science of measuring “social machines” to capture socio-technical effects [57-68] – and cited concrete initiatives such as the UN high-level expert advisory board, the upcoming AI for Good conference in Geneva (July), the UK National Physical Laboratory’s AI Measurement Centre, and the AI Security Institute as steps toward operationalising AI metrology [57-68]. Hall warned that safety requires systematic monitoring and longitudinal studies, citing Australia’s social-media age-restriction experiment and the unintended consequences of bans that may drive youth to hidden platforms [89-103].


After Hall’s remarks, Virginia Dignum thanked her, acknowledged that Hall needed to leave, and posed a question to the panel about shifting the discourse from a purely technical approach to a broader societal one [104-105].


Yannis Ioannidis distinguished the safety of the technology (the algorithm/model) from the safety of its use, likening the technology to a car that is either working or not [111-115]. He emphasized that the real safety concerns lie in the data-input stage and the deployment-output stage, both of which require regulation and multidisciplinary oversight involving humanities, legal, ethical and civic-society experts [118-124].


Sara Hooker reflected on the evolution of the safety debate, observing that early discussions were vague and centred on existential risk, whereas today the conversation is “messier” but more accountable to real-world impact [151-156]. She noted that the term “safety” remains a blanket term, that trade-offs are inevitable, and that transparent reporting of which safety parameters are covered, which languages are supported and what trade-offs have been made is essential [164-176][167-185]. Hooker also warned that prestige and resource allocation signal how seriously safety is taken, and that panel titles alone do not guarantee substantive action [135-146][147-166].


Jibu Elias warned that AI is increasingly a sociopolitical construct with exploitative and extractive dimensions. He cited the omission of tribal languages from major models, the imposition of Hindi as a national language, and the environmental damage caused by a data-centre in Telangana that depleted groundwater and involved community bribery [202-210][211-224]. Elias highlighted the emerging concern of “AI psychosis” among vulnerable users and critiqued the US-centric AI stack being promoted globally, questioning whether this extractive model will continue [215-224].


Neha Kumar, an HCI scholar, reinforced the human-centred perspective, urging the panel to ask “who is making decisions, who is being benefited, who is part of the design process?” [285-293]. She argued that inclusive rhetoric often remains disembodied, focusing on infrastructure and data without addressing lived impacts on women, children and marginalised groups [294-303]. Kumar suggested drawing on feminist, women’s studies and development studies to interrogate power dynamics and avoid repeating historical development failures [285-303].


Merve Hickok broadened safety to encompass human rights, democratic values and the rule of law. She argued that the prevailing safety narrative is an “evaluation” driven by powerful interests and called for a shift to mandatory, rights-based safeguards that protect citizens’ freedoms, dignity and democratic participation [242-245][271-281].


Rasmus Andersen, advising leaders at the Tony Blair Institute, stressed long-term foresight, urging policymakers to consider how AI will affect citizens in 2030-35 and to embed safety in public-service delivery [250-256]. He cited ongoing lawsuits concerning suicides among young people and the deep-fake regulation example (the Elon Musk/Grok incident) as evidence that significant harms are already emerging [250-256]. Andersen noted that governments are the only arena where imperfect technical, civil-society and industry perspectives can be reconciled, making state-level coordination essential [322-324].


Tom Romanoff described the ACM’s role in translating technical safety concerns into policy action. He explained that the ACM’s policy office works with regional committees worldwide to convey researchers’ recommendations to legislators [261-265]. Romanoff introduced the “51% rule”, stating that regulatory change occurs only when support exceeds the 51% threshold, whereas 49% support is insufficient, and urged participants to move from “moderate” to active advocacy [326-338]. He highlighted the need for concrete artefacts (model cards, dataset cards and user-feedback mechanisms) to be mandated by governments [364-371].


During the audience Q&A, a participant requested multilingual, culturally aware model-card, dataset-card and system-card evaluations. Hickok responded that such artefacts must be dynamic, cover multiple languages and cultures, and can be mandated by governments, citing the California precedent [359-363][364-371].


Jeanna Matthews posed a provocative question about whether history shows that AI will automatically benefit everyone or whether enforceable “musts” are required. She warned that good intentions alone are insufficient and that without binding safeguards, “people won’t go to jail when they do bad things with AI” [266-270][359-363].


Finally, Virginia Dignum synthesised the discussion, reiterating that safety must move beyond technical robustness to an inclusive, multidisciplinary approach that addresses governance, institutional capacity and societal impact [104-105]. She announced the intention to develop a collaborative AI-safety governance model within the next year and to produce a post-summit report with concrete recommendations [375-379]. The session closed with a shared acknowledgement that achieving inclusive, accountable AI safety will require ongoing dialogue, concrete standards such as multilingual model-card disclosures, and sustained advocacy from both technical and policy communities [359-363][364-371].


Overall, the panel reached strong consensus that AI safety is a socio-technical challenge demanding multidisciplinary governance, inclusive design, systematic measurement and outcome-oriented regulation. Points of contention remained around the primary locus of safety (technology versus use), the preferred horizon for measurement (long-term longitudinal studies versus immediate trade-off reporting), and whether coordination should be led by governments or multistakeholder bodies such as the ACM. Agreed-upon action items include finalising Mozambique’s AI strategy and data policy, launching an ACM-sponsored journal on AI measurement/metrology, drafting a post-summit report with concrete recommendations, and urging governments to require multilingual, culturally aware model-card disclosures. Unresolved issues (operationalising inclusive governance structures, defining legal liability for harmful AI outputs, and balancing rapid innovation with the time needed for longitudinal safety studies) were identified as priorities for future research and policy work.


Session transcript
Complete transcript of the session
Virginia Dignum

Thank you. Good morning, everybody. Thank you very much for being here. My name is Virginia Dignum. I will be co-hosting this session with my colleague Jeanna Matthews there. We both are the chairs of the Technology Policy Council of ACM. And today we are here to discuss how to move beyond technical safety, looking at aspects of multidisciplinarity, governance, and real-world

impact. Across global AI discussions, safety is too often framed in technical terms: model alignment, red teaming, benchmark performance, frontier containment, and so on. These tools matter, and their further development is crucial. But they don’t address the core question, or at least one of the core questions: what determines whether AI systems produce human and societal value or harm in real deployment contexts? That’s what we are going to discuss in this session. AI systems, as we all know, do not operate in isolation. Their impact is shaped by deployment context, by governance capacity, by incentive structures, and by the lived reality of the communities that use and are impacted by these systems. As such, AI systems do not fail simply because of flaws in the model architecture or in the data or in the alignment technique.

They fail, or they produce harm, because they are embedded in institutional, economic and political systems. So we will have an open discussion with the panelists. There will be two rounds of panelists. And I would like to start by inviting Dr. Lourino Chemane, who is the chairman of the board of the National Institute of Information and Communication Technology in Mozambique, where he is at this moment leading the national strategy on AI for Mozambique. Please.

Lourino Chemane

Thank you. I would like to start by thanking you for the invitation to join this panel and also to congratulate the government of India for hosting this AI Impact Summit. Going directly to the topic of this panel: as part of our exercise of crafting the national AI strategy, we looked at this topic of safety. For us, working from the policy formulation point of view, safety is the protection of people, not only systems. AI governance must prioritize human, social, and institutional impact, going beyond technical metrics such as robustness, accuracy, or algorithm alignment. We also look at it from the perspective of multidisciplinary governance, grounded in the real-world context of the use of AI.

For us, effective AI policies require input from law, social sciences, education, labor, ethics, and affected communities: the inclusion of the people, and how they will feel safe in using these technologies. We also look at continuous human oversight and institutional accountability. People must know what’s in the black box: how systems are designed, whether they are functional, and whether the decisions made by the algorithm that affect their lives have taken their feelings into consideration in the design phase. We also look at the protection of children, young people and women. From the studies that were conducted, women, children and youth are the first victims of the bad application of AI.

We also look at ethical and social assessment. Mozambique is one of the pilot countries adopting the UNESCO principles of ethics in adopting AI, and we are looking also at the dimensions defined by UNESCO in this perspective. Sharing what we are doing in the country now: in Mozambique we are drafting, as I mentioned, our national AI strategy with the support of UNESCO, and I thank Professor Virginia, who is the leading expert in our team, with the contribution of other experts from UNESCO. We are also drafting our data policy and its implementation strategy, because we believe that data is a fundamental element for AI systems. We are reviewing our national cybersecurity strategy; the data that we are collecting now shows that there are already cybersecurity-related problems arising from the use of AI models.

We just adopted in Mozambique the regulation for the construction and operation of data centers and also the regulation for cloud computing, because we believe that infrastructure is a fundamental and key element for the sovereignty of our country when it comes to safety, but also, from the policy point of view, for the democratic system and all other dimensions. But we also look at it from the digital government point of view. So we are reviewing also our interoperability framework related to data, to make sure that in adopting AI in the public administration we address our main objective of improving efficiency and efficacy in delivering public services. For us, these are the elements that will be contained in the overall digital transformation strategy that, if everything goes as planned, will be approved by our government this year.

We are learning a lot at this summit and gathering important elements that will help us to uplift and improve our work in crafting these elements. Thank you for the opportunity to be part of this session.

Virginia Dignum

Thank you very much, Dr. Chemane. I understand that you have to move to another session, so feel free to leave whenever you need to go. We understand the complexities of the program. Now I would like to ask Dame Wendy Hall, Regius Professor of Computer Science, Associate Vice President and Director of the Web Science Institute at the University of Southampton, and also a former member of the United Nations high-level expert advisory body, to give us some provocative statements. They will be. Good. Provoke us.

Dame Wendy Hall

I’m fed up with just toeing the party line. So I will… I have to first apologize, because I have to leave at 11. I’m supposed to be on three panels at the moment, and I also have a lunch date at midday in town. So, that’s my morning. I want to say, I think, three things. One is… Four. If you know Monty Python, nobody expected the Spanish Inquisition. Anyway, first of all, it’s been wonderful to be in India. I love India, and I have a love-hate relationship with this summit. It’s too big. There’s too much going on, and not enough actual real debate about the core. There’s going to be some sort of platitude statement come out today.

Yeah. And I’ve just come back from the UN; our advisory board and the new scientific panel got together. They’ve got a panel going on at the moment. The dialogue that’s starting in the AI for Good conference in Geneva in July, we hope will be a real dialogue. I don’t know what form it’s going to take yet. But we have to knock the world leaders’ heads together. Now, I’m going to say something which also really struck me. Thank you. Is that working? Yes? At this conference. I love that in India, AI means all-inclusive. But 50% of the population weren’t included yesterday: the women. Right? There were no women.

The CEOs of every country, every company: there was one lady CEO, from Accenture, I think. There were a couple of ladies on the panels at the end. It was all men. The alpha males of this world. The world leaders that spoke, the CEOs that spoke: this world is dominated by men. And my mantra has always been, in terms of the lack of women (and some other diversity points as well, but mainly women): if it’s not diverse, it’s not ethical. People don’t really understand what that means. What it means is, if you haven’t got a diversity of people discussing a problem, how are you going to actually sort out the biases? If you haven’t got women at the top level making these decisions, trying to set up the guidelines… I mean, your comment was, yeah, we want to make sure of the safety of women and children. Well, let’s include the women and children in the discussions. My third point is that we are watching… I mean, I’m very into watching these experiments. I did it all through the web, and we need to learn how to monitor what’s going on, so that we can say what is the right direction to go in the future.

It means collecting data and evidence and doing longitudinal studies, and it takes time. But take, for example, what Australia is doing with social media. We’ve heard at this conference several other… for teenagers. I mean, didn’t Macron… Who was there yesterday? Macron said under 15 in France. Our Prime Minister, who constantly changes his mind, so I don’t suppose it will happen, but he’s talked… Sorry, that’s a joke for any Brits in the audience, but there aren’t many. He’s saying 16 in the UK; someone out of Spain saying 16. There will be unintended consequences of that. Making a ban like that without thinking about the nuances of… Well, what happens if… Well, first of all, the kids are ingenious enough to get round it.

And then they’re back on the dark side of things again, even worse than before, because they’re doing it in secret. What happens when they start to use social media? How do we train them to do it properly? That’s my worry about a ban like that. I mean, it’s very brave of Australia to do it first, and we can watch; they’re saying in six months’ time they’ll have some evidence of how many under-16s are still on social media, but the behavioural issues take much, much longer to explore than that. And we have to get over this fact that whilst the technology is going on apace, because the alpha males are driving it without, you know, worrying about anything but technical safety maybe, we can’t say, well, it’s all going too fast, we can’t do anything. We have to study this stuff. And I think this is what I want the ACM to do. In my keynote talk (this is my last point, by the way) on the main stage on Wednesday, I talked about two things happening in the UK. One is that our National Physical Laboratory, which is the sort of equivalent of NIST in America, has just launched, with government backing, a centre for AI measurement. The other is the AI Security Institute in the UK and the other security institutes that are growing up around the world. That network, largely driven by the US, because Trump doesn’t want to call it anything to do with safety (I can’t believe I just said that; anyway, he was the man that drank bleach in Covid), is now calling itself the network for AI measurement. And I think this is a breakthrough. I love AI for science, but we need to think about the science of AI, and that’s socio-technical. I’m starting to call these things social machines, as we did on the web; that came from Tim Berners-Lee, the idea of technology and society coming together to create artefact systems that wouldn’t have existed if they hadn’t come together. The technology doesn’t understand society at the moment, and most of society doesn’t understand this technology, but together those two systems will create socio-technical systems, or social machines, and I want to build a science of studying social machines. It will be called AI measurement, or AI metrology. I love that word; I’ve learnt to say it. It’s a cool script. Everything’s Greek to us. I love the yogurt, don’t you love Greek yogurt? Sorry, I’m finishing there. AI metrology. And we’re going to launch (I’m chair of the ACM publications committee, or co-chair; he’s president) the first journal in this area, and it will be associated with pulling together work and sharing the data that people are collecting.

Virginia Dignum

Thank you, Wendy, very important point. And again, when you have to leave, you just leave; we understand that. So, for the rest of us on the panel: we started the session talking about how AI safety needs to be more than just technical robustness. I love your idea of the social machines, of this AI metrology. Now I would like to bring you into the discussion. Both Dr. Chemane and Wendy Hall gave us examples of issues that we really need to include in going beyond this idea of technical robustness. Even if systems perform exactly as they have been designed, and safely designed, they will still probably be causing harm, which is not just a technical failure but also a failure of inclusion, a failure of imagination. So I would like to get your opinions on where you think we can start changing the discourse from a purely technical approach to a broader, inclusive, societal and institutional approach to the discussion on AI safety, on AI measurement, and so on.

And I would like to start this question, which is for all of you, with Professor Yannis Ioannidis, who is the current president of ACM, and also a professor at the University of Athens.

Yannis Ioannidis

Thank you very much for having me on this panel. I’m a technical person, very sociable, but technical, that’s my expertise. So I want to separate the issue of the safety of AI from the safety of AI use, and that’s where I’m going to be. In my technical mind there is the AI technology, which is the algorithms, the models, and so on, as distinct from the use of this technology, the use of the software that is built on AI. And we are using this software both at the beginning, with the input that we give it, and at the output, when we say: I have an artificial intelligence, I have an agent, and so on, to do this or that or the other.

The technology, there’s no issue, there’s no social issue in the safety of the technology itself. It’s like the car: whether it’s working or not. There is no issue of safety there. And innovation in that regard has to be let free, like the human mind, for all the innovators to progress on that. Robustness and not having bugs are an issue there, but that’s a day in the park for us software engineers and computing scientists. The use is the important thing, and sometimes the key thing that people are talking about is the end result, the model. We put it in the judge’s hands, we put it in the doctor’s hands, we put it in the youth’s hands in terms of social media, and so on.

This we have to work on, measure, potentially regulate, and in any case all the sciences, like it was said before, especially the humanities (philosophers, ethicists, legal people, cognitive scientists and so on) have to come together to address this. But there is also the input side, which is again humans doing it. Humans are determining the first parameters where the systems are starting to be trained. The data that we feed it, it’s again humans that are choosing it. And as much as we have to regulate or measure or think about the end result, the model, the humanoid or non-humanoid robot that is telling us to do this or that, or the agent, at the same level of importance we have to think about what to do with what comes in. Different humans are using it, different humans are feeding it, and I think safety must start from there. We should not let the input side run for free; even at that level we have to have the different sciences, the different technologies, civil society represented. Having an AI trained on whatever data we happen to have, or whatever data generates billion-dollar industries, is wrong. I mean, there is a right and a wrong here, and we have to be on the right side of that. So, as a quick wrap-up, so that others can express their opinion: technology should be running free, but both input and output and result should be in the

Virginia Dignum

Thank you, Wendy. Thank you, Dr. Chemane. See you soon. Okay, let’s continue the discussion. Sara, Sara Hooker, you are the co-founder and president, I believe, of Adaption Labs, a very young company. You were before with Cohere and with other developing organizations. What do you think about this balance, or tension, between technical robustness, the technical safety measures, and the need to understand more of the environment, the social context, in which systems are built? And how can we technologists, those who develop, like yourself, be developing systems while being aware of this type of tension, and also of the insertion of these systems into very concrete, real-world domains?

Sara Hooker

And typically it’s been how do you build extremely large systems at the frontier of what’s possible. I think it’s interesting. I’ll share a few things. So one, I think what Wendy was getting to is that one of the biggest signals of whether you actually care about safety is what the forms of prestige and power look like. I think that’s mainly her comment. She’s saying, you know, we are at the pinnacle of where we all gather to discuss these things. And the way resources have actually been allocated doesn’t show that people are serious, which I think is fair. I think you have to look to the surrounding environment to understand if people are serious or not about safety or whether it’s just a panel title, candidly.

And maybe today it’s just a panel title. I think in general my philosophy about these forums is that you have to look six months out to actually get a signal of what has happened. That doesn’t mean that they’re not critical. I frankly don’t know if the expectation should be anymore that we have universal rules for AI. It’s not clear to me that that should be the outcome of these forums. So I think decidedly, if you’re going in with that expectation, you’re going to be very disappointed because I don’t think that’s going to happen at this forum or at the next one. But I do think it’s worth asking, well, where are we going as a conversation about safety and the precision of it?

Because for me, that’s the most interesting part. Time is very valuable. It’s our most precious resource. And so for me, the more precise the conversation, the better. I do think, if I look at the overarching arc from Bletchley to now, we’ve had four summits, and we’ll have the fifth. It’s worth asking: has it become more precise? Candidly, and thank goodness, yes. I still remember Bletchley, where it was all about existential risk six months out, and there were protests and hunger strikes from people who thought machines were taking over, but no precision to the conversation, no accountability for where these timelines were coming from. And then I look to now, and now we have a very messy conversation about safety.

Certainly everyone has a different view. It’s still a blanket term, but at least it’s more accountable to what is the real -world impact of these conversations and the technology that we build. Because when I started my career as a computer scientist, we were just in research conferences. I mean, I think the fact that ACM is so well represented on this panel speaks to the origins of, like, you know, a very narrow group of people who work in a very academic community, and now our technology is used everywhere. So it’s a much more important conversation to have. So, one, I think we have gotten more precise, but it’s still very murky what people mean. Here’s the other thing I’ll say.

I think there’s often desire in these conversations about where technical meets the ecosystem to say, oh, well, safety has to be everything to everyone. And, frankly, that’s not a precise conversation either, because the truth is there are tradeoffs. When you build systems, there are tradeoffs. And too often when these conversations enter this arena, there’s a misconception about the sheer difficulty of how do you actually impose constraints on these systems. So the other thing I’ll say is the biggest thing that has to come out is an understanding of what you give up, because you give up something. The big things for me are, you know, I work a lot on language. My big ask is just report what languages model providers cover.

Report essentially, like, what they say the safety parameters are, and report what they don’t cover or haven’t tested for. This sounds like a simple ask, but I think this is actually quite precise. And what it establishes is: what have we given up? What are you confident about? What have we given up? There are many versions of this, but too often, and this is my ask, in conversations like this we end up just circling around and saying we want safety, we need the perspectives of everyone in the model. And the truth is that’s also a naive statement, because it is almost certainly the fact that there will be some trade-off. Someone will not be represented.

Someone will be represented. And actually, what I think these forums are very useful for, having us all in the same conference, is about galvanizing ecosystems where you can make your own constraints and trade -offs, but also having a discussion about, you know, for the models that are being shipped that serve billions of people, we have these static monolithic models that are served the same way. What are the trade -offs that they have made, you know? And that’s, you know, as someone who’s built these models, there are almost certainly trade -offs in place. So we need to understand the state of the world as well as where we want to go. And it’s okay if there are clearly, you know, things left out.

It’s more that they have to be stated out loud. That’s my wish list, yeah. So maybe I’ll leave it there, and I’ll pass it on. I think you were next. Go for it.

Virginia Dignum

Thank you very much. Thank you, Sara. And indeed, next one: Jibu Elias, you are a researcher, but you are also an activist who examines how technology and innovation institutions receive knowledge, labor, and legitimacy. So help us make sense of what AI safety means for society; that seems to be what you do.

Jibu Elias

I was more interested in the real-world consequences of the panel title, but wonderful conversations by Sara and Wendy and all here. So, when I look back at how technology has shaped my understanding of the world, I feel like an idiot, because I grew up in a time watching animated shows like The Jetsons, all these futuristic shows, believing that the more advanced the technology gets, the better our world will be. I grew up as this idealist kid who thought that when AI comes there will be no inequality; I was an AI kid back then. And nowadays, when I look at these things, I mean, there has been phenomenal work done by computer scientists like the people present here on the panel, Sara and everyone, right?

On the technical aspects of things. But more and more, we are seeing AI becoming more political. It’s becoming a larger sociopolitical construct in general. And what concerns me more is its exploitative and extractive nature. I think Sara mentioned Bletchley, where the talk was all about existential risk; but now I think we are all at a point where we are agreeing that the accumulated risks have become more worrying. At the same time, I’ve been tracking people who’ve been using these tools, people who’ve been impacted by them, and those who were excluded from the benefits of this kind of technology, right? If you go around states like Telangana, Chhattisgarh, Jharkhand, there are big groups of tribal populations.

You know, their languages are not represented in Gemini or anything, right? And I know everybody wants to impose Hindi on all of us, but sorry, Hindi is not the national language of India. But what about them? How do they get access? So more and more, what I’m seeing is the socioeconomic divide becoming wider, especially in countries like India. And, you know, it’s fascinating that we’ve been celebrating the data centers that we’ve been building. I mean, I had first-hand experience of a data center that’s very much celebrated in Telangana, in a place called Mekaguda. I don’t want to mention the company associated with it, but how it was built, how the people were manipulated, how the groundwater was being extracted, right?

In a place where there is water scarcity, you know. And when I asked the company, you know, hey, this happened (and I have a close association with that organization), they said: we interacted with the community leaders. So what I did, I reached out to the sarpanch. He had no idea what they meant. So essentially there’s a lot of, you know, I mean, in India we know what that means, reaching out to community leaders: bribing the politicians. But that’s the larger thing I’m worried about. And the people who are using this technology, you know, now some people are talking about terms like AI psychosis. I don’t know how valid those terms are. But it’s fascinating to see that the executive director of the Mozilla Foundation and I have been chatting about how elderly people are using these models.

It’s very fascinating and it’s worrying at the same time. You know, we often put our attention on younger folks. I mean, it’s funny at the same time, but still. So, going forward: yesterday the gentleman from the US was telling us that everyone should use a US AI stack. I think people in Denmark will have a good idea of how the US treats its strategic partners, yeah. So my larger question is, where are we headed, right? Are we still going to have this extractive nature, you know, with the data annotation workers who are building these models, right? So I will stop here, looking forward to the next level of conversation.

Virginia Dignum

Unfortunately, we are at our second round of the panel, and, like everything we all complain about, it will happen here too: we all say our thing, and the dialogue will need to happen outside in the corridor. We really hope, after this meeting, to try to combine all that has been said into some kind of ask or report. But anyway, now we are moving to the second part of the panel. We were all going to be on the same panel, but there weren’t enough chairs, so we are splitting into two. Patience with us. Okay. Okay, everyone, thank you so much for being here in the second part of our session, and thank you to all of the panelists who are joining me here on stage. I think we’re going to do something a little different than the first panel did. I would like everyone to just quickly introduce themselves. Neha, would you start?

Neha Kumar

Hello, check, okay. Hi everyone, I am Neha Kumar, and I’m an associate professor at Georgia Tech in the School of Interactive Computing. I’m also president of the Special Interest Group on Computer-Human Interaction. So this summit is really a coming together of many different worlds for me. I actually grew up in Delhi, so it’s been about coming home. But also, a lot of people have been coming to me for a long time, and I’m really excited to be here. A lot of the conversations we’ve been having are conversations that are really very, very active right now in the discipline of human-computer interaction, HCI, some of you might know it, and it’s great to see how central human centricity is to what we’ve been discussing.

And third, something that’s been much closer to my own area of study is really looking at HCI and technology use in the context of social impact. This has been named in many different ways over the years: social good, social impact, societal impact, public interest, whatever you want to call it. But really, it’s an area that we’ve been studying for many, many years before AI was on the scene. And so I would say that we’re looking at multidisciplinarity in this panel, and to me there’s a lot of learning that could be happening from many of these disciplines that have been actively looking at some of these questions; agreed that the platform we’re looking at is different.

It’s unprecedented in many ways. At the same time, there’s a lot that we have to learn from as well. So I’ll stop there.

Virginia Dignum

Thank you, Neha. Merve Hickok?

Merve Hickok

I’m the president and policy director of the Center for AI and Digital Policy. We are an independent think tank working globally at the intersection of AI policy and human rights, democratic values, and the rule of law. So I would like to take a more expansive view of safety and of governance at large. More to come on that. Thank you.

Virginia Dignum

Rasmus?

Rasmus Andersen

Yes. I think this works now. Yes, my name is Rasmus Andersen. I work with the Tony Blair Institute for Global Change, where I advise leaders around the world at the prime-ministerial or presidential level, but also at the line-minister level, on navigating AI. What does it mean for them? How do they both deliver results to citizens with AI and also avoid harm to their citizens? And so the question of safety comes up a lot, but it’s usually not at the top of leaders’ minds, and for me it’s really about helping them realize their long-term, informed self-interest: what is the world actually likely to look like in 2030, in 2035?

How can you best make sure that your country and your constituents and citizens are in the best possible position as the world will change very rapidly? Thank you.

Virginia Dignum

Tom?

Tom Romanoff

Is this one working? Great. I am not James; I am Tom Romanoff. I am the director of policy for ACM, where I help manage the policy committees; Jeanna and Virginia chair our global committee. We also have regional committees across the world, including the United States, Europe, Asia, India, Africa, and the APAC region. So my job at ACM is to help the computer science folks translate their recommendations, on harms or issues that they see in the technology, to policymakers, and to engage those policymakers on behalf of ACM. Before that, I was at a think tank in Washington, D.C., so I worked with Congress and have been working in tech policy for many years now.

Jeanna Matthews

Okay. So in the interest of time, I’m going to get right to a very provocative question, which is: we’ve been hearing wellness for all, happiness for all, in the presence of a fairly extractive and exploitative potential. Does history tell us that it’s going to be great for everyone, that it just works out? Or do there have to be some musts, not just good intentions or shoulds? If we are not seeing things like recovery, retribution, remuneration, if we don’t see people going to jail when they do bad things with AI, are we serious about AI safety?

Merve Hickok

So no, history does not show us that it’s going to be cool. And history is definitely a good indicator, which means that we need to fight harder this time around and try to raise that level, right? History is always a story of the powerful, of the winner: who gets to decide the narrative. And we are seeing that again today, in the narratives around what safety is, what the evaluations should be, where the money should go, whether we should regulate or not. Whether it should be a ‘should’ or a ‘must’ is always the narrative of the powerful. And, as Dame Wendy Hall mentioned, the representation was very much the same kind of people throughout the higher-level conversations yesterday.

So I think first and foremost, the narrative needs to change in safety as well. So far, and I think it’s been an evolution, the most important safety issues have been around nuclear, cybersecurity, chemical weapons, etc., or existential risk, which is another story. Yes, maybe we should talk about those. But there are real consequences right now for people’s rights, freedoms, ability to live with their dignity, and people’s right to participate in democracy and democratic processes. All of these are undermined, and as an organization where those three issues are in our mission, we are seeing this more and more under pressure. So this is the time to raise your voices, as citizens, as consumers, as professionals in your own right, and try to change the narrative.

Because otherwise it’s going to just be a repeat of history.

Jeanna Matthews

Well said. Neha?

Neha Kumar

Yeah, I think coming back to something that Wendy said, right, about being all-inclusive at the same time as having no women around in decision-making places: I think that is something we should really be thinking about. I mean, do we have a history of being inclusive? What inclusivity have we been practicing in our innermost circles? It’s easy enough to say that the poorest of the poor should have access to this AI, but how are we doing on being all-inclusive? So I think there are lessons from disciplines such as feminist and women’s studies that we can learn from to really ask the who question. Who is making decisions? Who is benefiting? Who is part of the design process?

That’s one. Second, I would say, learning from design, which is one of the disciplines that I’ve trained in: thinking about zooming out is great, and that’s where we have value. We talk about inclusivity. We talk about diversity. We talk about all these great-sounding words. But then, when we zoom in, what are we actually doing? I think that a lot of the dialogue we’ve been having is in this disembodied state where we talk about infrastructure, and we talk about data, and we talk about interoperability, and we talk about processes, but who is benefiting? The panelists before me also talked about aging, so people who are more vulnerable: where are they in the conversation?

And lastly, with regard to development studies: thinking about what the benefits of development really are. We want development and impact, and that’s what we’re talking about here for five days at the summit. But we know from historical perspectives that development hasn’t worked out so well for so many people and so many countries across the globe, and how are we making sure that we don’t repeat those same mistakes? I think these have to be very much part of the conversations, so that it’s safety of the human, of the body, of our values, of our communities, of the social structures that are so critical to us. Thank you.

Jeanna Matthews

Rasmus?

Rasmus Andersen

Yeah, I think we’re not seeing people go to jail; I’m not sure we have yet seen something where that’s really the case. There are lawsuits ongoing, on suicides among young people, et cetera. But I do think that we will see a moment pretty soon where something does go pretty wrong, and then we’re going to have a decision on what we do with that. Some people, and this is a very dark parallel, some people said we needed to have World War II to have the UN and the other systems that were put in place to avoid that happening again. And, yeah, I think it’s a matter of time until we get something like that, and we will have to make those decisions.

And currently, I think I’m not super confident that we will interpret those events correctly, that we will have a realistic view of what might change and how we might prevent them from happening again. And it could be people leveraging them, organized crime. I mean, very recently we’ve successfully had Elon Musk and Grok stop allowing people to create non-consensual deepfake nudes, which had happened in the millions. So that’s not small, but we’ll have much bigger things than that. And I do think, still, when that happens, we will have to think about both pros and cons, costs and benefits. When we regulate things, we don’t regulate risks down to zero.

You know, when you get into a car, there’s a risk something will happen, but you still need to get places. And with safety, I think we do have to take some of the same lessons, as Merve mentioned, from nuclear, from flights. You know, it used to be that when you got on an airplane, something like 200 or 1,000 times more of them crashed than today, and we’ve reduced that level of risk very far down. And I do think that the political level, while we need technical inputs, is the only force in the world that can really take all those considerations together and think about the partial perspectives that technical people have, that civil society has, that industry has.

Really, the only place it comes together imperfectly is at government, and that’s why it’s so important that we are here, however imperfect these summits are.

Jeanna Matthews

Tom?

Tom Romanoff

All right, something a little different. I would like everybody in the room to raise their hand if you think safety is an important aspect of AI deployment. Great. Keep your hands up. Now, take your hand down if you think that safety should be enforced on the outputs of AI. Oh, wow. Okay. Take your hand down if you think that laws should apply to the outputs of AI rather than to AI itself. Okay? All right, you can go ahead and put your hands down. It wasn’t as dramatic as I thought it would be. So I’m going to talk a little bit about the 49-51% rule. Across all political spectrums, no matter where you are in the world, there’s this idea that you only need 51% of the political willpower to start passing regulations, and 49% won’t get it done.

It applies in the business world as well: if you have 51% of the board control or equity in a company, you basically control that company, right? Lobbyists have an extreme incentive not to push anybody past that 51% threshold, in order to prevent an action in the political space, right? So across all of our governments in here, there are private, I don’t want to say private sector because they’re important, but there are private entities that have a stake in action in the regulatory space. And it’s not until 51% of those politicians, or that political will, gets to that threshold that you’ll start seeing some changes. And so you see examples of that with the deepfakes my colleague here mentioned, or with nudification applications causing worldwide outrage.

And you started seeing governments across the spectrum say: that’s something that at least 51% of our population does not want. And so they start moving towards regulating, or enforcing current laws, to punish that kind of action. And so I say all this because there is also this conversation around moderates, right? We don’t know where the technology is going. We have computer scientists, we have civil society, screaming about the need for action, for security within the stack, right? And the rest of the world are moderates. They’re still engaged. They’re still engaging with this AI, still figuring out what it can do. And it’s not until some kind of action happens, some kind of consequence, some kind of

issue happens, that people wake up to the folks who’ve been screaming about it for years. And so what I encourage everybody in here: do not be a moderate. Pick a side and start encouraging your politicians, your family, your community. Educate them. Figure out ways to communicate the very heady technical aspects of security within the AI stack to the common person, in terms they can understand. And that’s when you’re going to start seeing the regulations roll out.

Jeanna Matthews

I think that’s a great place to end, because I think we are not going to get happiness for all and wellness for all unless we insist. We’re all going to have to insist; it’s not going to come automatically. So asking each of us to ask ourselves the question, what are we going to do to insist, I think is a really good place to end. I think we started this session a little late, but I’ve been told that they would really like us to try to end on time, so I will leave it there. But we would love to engage you in conversation out in the hall after this session is over. Thank you to all the panelists in the first session, and also to all of us up here. Thank you so much. Thank you all. Indeed, I think there is actually time for one question, or maybe two. Now there are too many questions; I have to vote. Okay: sir there, and the lady there.

Participant

So, a very short question. Actually, it’s not a question, it’s a suggestion to the gentleman with the beard on that side, whose name I missed. Yeah, Jibu: go have a look at Sarvam. I think that this agenda of Hindi and other languages is going to die very soon, so you have to move past that Hindi-imposition concern; nobody will impose it down the line.

Participant

Sure. Thank you so much for the provocative discussion; this is what I was hoping to get at the India AI Impact Summit. My question is about how regulatory artifacts like dataset cards, model cards, system cards, rigorous evaluations, and user feedback can now be extended to cover multiple languages, multiple contexts, and multiple cultures. I think a lot of hard work…

Merve Hickok

…being used as well. So a model might perform really well in English, but we know that these systems are not safe or secure, and do not perform that well, in many languages that are not English or as well-resourced as English. So, great question: they need to be dynamic and they need to reflect languages. And I will also say, just very briefly following up on this, that these are things that governments can require of model providers in order to release models in your jurisdiction, and so far they are not. Thank you very much. We could insist. We need to insist. States like California started this. I just want to just…

Virginia Dignum

I think we can just continue the discussion, and I hope we will; today is just a start. I also hope that we will be able, together with all the panelists, to create some kind of model, some measures, for next year, and we will hopefully facilitate and continue this discussion. I would ask all the panellists of the first and of the second round to stay here for a memento from the organisation, and I would like to thank you all for being here, and all the panellists again, of course. Thank you so much. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (15)
Factual NotesClaims verified against the Diplo knowledge base (3)
Confirmedhigh

“AI safety is often reduced to technical notions such as model alignment, red‑team testing and benchmark performance, but these tools do not address the core question of whether AI creates societal value or harm; AI impact is shaped by deployment context, governance capacity, incentive structures and lived realities.”

The knowledge base describes the summit discussion as moving beyond purely technical approaches to AI safety toward multidisciplinary governance frameworks that address real-world societal impacts, confirming Dignum’s point [S1].

Confirmedmedium

“The summit lacked gender diversity, with women under‑represented and panels dominated by “alpha males”.”

An IGF 2023 workshop notes a gender disparity in standards work and limited involvement of diverse stakeholders, echoing Hall’s criticism of gender imbalance at the summit [S103].

Additional Contexthigh

“Safety concerns lie primarily in the data‑input stage and the deployment‑output stage, requiring regulation and multidisciplinary oversight involving humanities, legal, ethical and civic‑society experts.”

UN commentary highlights that AI governance must be multifaceted, including prevention, mitigation, human-rights-based policy and community engagement, providing context for the need to regulate data and deployment phases and to involve a broad set of disciplines [S56]; the summit’s broader call for multidisciplinary governance also supports this view [S1].

External Sources (104)
S1
From Technical Safety to Societal Impact Rethinking AI Governanc — -Sara Hooker- Co-founder and president of Adaption Labs, formerly with Cohera and other developing organizations
S2
From Technical Safety to Societal Impact Rethinking AI Governanc — -Yannis Ioannidis- Current president of ACM, Professor at the University of Athens
S3
From Technical Safety to Societal Impact Rethinking AI Governanc — -Dame Wendy Hall- Regius Professor of Computer Science, Associate Vice President and Director of the Web Science Institu…
S4
EQUAL Global Partnership Research Coalition Annual Meeting | IGF 2023 — Barhanu Nugusi, the Pan-African Youth Ambassador for Internet Governance, is actively working on internet-related issues…
S5
Session — – Eliud Kibii: Journalist, political analyst and editor Mwende Njiraini: Okay, good morning, good afternoon and good ev…
S6
Closing Ceremony and Orientation for WAIGF 2025 — – Abilahi Eliassu: Cybersecurity analyst at National Information Technology Development Agency Audience: Good evening e…
S8
https://dig.watch/event/india-ai-impact-summit-2026/press-briefing-by-hmit-ashwani-vaishnav-on-ai-impact-summit-2026-l-day-5 — Anybody else on front row? Anyone? Okay, please. Anybody else? Anybody in third row? Okay. Please. Anyone else? Yes, …
S9
https://dig.watch/event/india-ai-impact-summit-2026/advancing-scientific-ai-with-safety-ethics-and-responsibility — And I guess there’s, we, in the recommendation from the RAND Europe that I was, you know, helping out with is that we re…
S10
S11
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Participant** – (Role/title not specified – appears to be Dr. Esther Yarmitsky based on context)
S12
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — – **Participant**: Role/Title: Not specified, Area of expertise: Not specified
S13
Leaders TalkX: ICT application to unlock the full potential of digital – Part II — – **Participant**: Role/Title not specified, Area of expertise not specified
S14
https://dig.watch/event/india-ai-impact-summit-2026/from-technical-safety-to-societal-impact-rethinking-ai-governanc — I think we can just continue the discussion and I hope we’ll do. This is today just a start. I also hope that we will be…
S15
From Technical Safety to Societal Impact Rethinking AI Governanc — -Jeanna Matthews- Co-host of the session, Chair of the Technology Policy Council of ACM (mentioned by Virginia Dignum but …
S16
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Tatjana Titareva: Thank you so much. Today’s session’s focus is to discuss the roadmap for AI Policy Lab that we have de…
S18
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — Moderator:Thank you very much, Ivana. And as you say, new technologies create new problems sometimes, but they can also …
S20
https://dig.watch/event/india-ai-impact-summit-2026/from-technical-safety-to-societal-impact-rethinking-ai-governanc — All right, something a little different. I would like everybody in the room to raise their hand if you think safety is a…
S21
From Technical Safety to Societal Impact Rethinking AI Governanc — Is this one working? Great. I am not James. I am Tom Romanoff. I am the director of policy for ACM, where I help manage …
S22
From Technical Safety to Societal Impact Rethinking AI Governanc — -Jeanna Matthews- Co-host of the session, Chair of the Technology Policy Council of ACM (mentioned by Virginia Dignum but …
S23
https://dig.watch/event/india-ai-impact-summit-2026/from-technical-safety-to-societal-impact-rethinking-ai-governanc — And currently, I think I’m not super confident that we will interpret those events correctly, that we will have a realis…
S24
From Technical Safety to Societal Impact Rethinking AI Governanc — Yes. I think this works now. Yes, my name is Rasmus Andersen. I work with the Tony Blair Institute of Government where I…
S25
Closing remarks – Charting the path forward — Bouverot argues for comprehensive inclusion in AI governance discussions, extending beyond just governmental participati…
S26
AI That Empowers Safety Growth and Social Inclusion in Action — “investors should ask whether there is clear board level responsibility on AI risk whether executive incentives are alig…
S27
Dynamic Coalition Collaborative Session — The panelists’ emphasis on moving beyond purely technical approaches toward comprehensive frameworks addressing economic…
S28
Parliamentary Session 3 Click with Care Protecting Vulnerable Groups Online — High level of consensus with significant implications for policy development. The agreement across different stakeholder…
S29
High-level AI Standards panel — 3. **Include**: Engaging diverse stakeholders beyond traditional technical communities The discussion highlighted the n…
S30
S31
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — Ernst Noorman: Thank you very much, Zach, and thank you, Rasmus, for your words. While leaders at this moment gather in …
S32
World Economic Forum Panel Discussion: Global Economic Growth in the Age of AI — Large language models have demonstrated dangerous capabilities, including documented cases of AI systems coaching childr…
S33
Main Session on Artificial Intelligence | IGF 2023 — Finally, it was suggested that an independent multi-stakeholder panel should be implemented for important technologies t…
S34
AI governance debated at IGF 2025: Global cooperation meets local needs — At theInternet Governance Forum (IGF) 2025 in Norway, an expert panel convened to examine the growing complexity of arti…
S35
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — And so those are the sorts of conversations I have. I think, you know, in the AI space, I think you can look at countrie…
S36
UNGA/DAY 1/PART 2 — The advancement of AI is outpacing regulation and responsibility, with its control concentrated in a few hands. (UN Secr…
S37
Open Forum #79 Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — **Human Control and Oversight**: Despite different approaches, speakers across perspectives emphasized the importance of…
S38
The Overlooked Peril: Cyber failures amidst AI hype — This has become evident in recent years concerning the security of digital products due to several high-effect cyberatta…
S39
Building Indias Digital and Industrial Future with AI — The panel discussion, expertly moderated by Debashish Chakraborty, revealed a sophisticated understanding of the challen…
S40
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This distinction has profound implications for risk mitigation strategies. Safety requires internal controls and model v…
S41
Policymaker’s Guide to International AI Safety Coordination — As the final substantive comment, this provided a provocative reframing that challenged participants to consider whether…
S42
Artificial intelligence (AI) – UN Security Council — Furthermore, another critical responsibility discussed is the implementation of robust safety measures to prevent misuse…
S43
Global AI Governance: Reimagining IGF’s Role & Impact — Paloma Lara-Castro: Thank you, Liz. Hi, everyone. Thank you for the space. I’m representing Derechos Digitales. We are a…
S44
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — She notes that legal certainty which can only be provided through regulations is necessary. The panel also explored the…
S45
Lightning Talk #245 Advancing Equality and Inclusion in AI — Bjorn Berge: Thank you very much, Sara, and very good afternoon to all of you. Let me first start by congratulating Norw…
S46
Democratizing AI Building Trustworthy Systems for Everyone — And the US, this is the man again who drank bleach during COVID, says no regulation. So we can’t talk about the network …
S47
AI experts ask governments to introduce algorithmic impact assessments — In apaper released by artificial intelligence (AI) experts from the AI Now Institute, governments are invited to conduct…
S48
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Examples of missing stakeholders include women’s rights organizations, trade unions, journalists, researchers who should…
S49
WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World — Gautam brought attention to the lack of capacity in developing nations to implement or create AI standards, highlighting…
S50
Framework to Develop Gender-responsive Cybersecurity Policy | IGF 2023 WS #477 — In conclusion, the analysis brings attention to several key aspects of gender equality and cybersecurity policies. It hi…
S51
From principles to practice: Governing advanced AI in action — Discussion of different governance approaches being implemented across regions and stakeholder groups Legal and regulat…
S52
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — This comment reinforced the toolkit approach discussed in the first segment by validating the need for flexible, adaptiv…
S53
Main Session | Policy Network on Artificial Intelligence — Anita Gurumurthy: Sure, I can do that. Am I audible? Okay. Thank you. I just wanted to commend the report, and especia…
S54
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S55
From Technical Safety to Societal Impact Rethinking AI Governanc — Virginia stresses that AI safety cannot be limited to technical robustness, accuracy or alignment. It must incorporate m…
S56
What is it about AI that we need to regulate? — A key distinction emerged around technical versus broader governance issues. InWorkshop 344 on WSIS+20 Technical Layer, …
S57
From principles to practice: Governing advanced AI in action — Brian Tse: right now? First of all, it’s a great honor to be on this panel today. To ensure that AI could be used as a f…
S58
The Dawn of Artificial General Intelligence? / DAVOS 2025 — Yoshua Bengio advocates for substantial investment in AI safety research alongside the development of AI capabilities. H…
S59
Tech Transformed Cybersecurity: AI’s Role in Securing the Future — Furthermore, the analysis underscores the importance of considering regional regulations and governance in cybersecurity…
S60
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S61
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Another important point emphasized in the analysis is the significance of involving users and technical experts in the p…
S62
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Inclusion of all relevant stakeholders is seen as crucial for effective AI standards. The inclusivity of diverse perspec…
S63
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Joanna Bryson: Hi, yeah, sure. Thanks very much and sorry not to be in Oslo. I wanted to come specifically to your quest…
S64
Informal Stakeholder Consultation Session — Digital transformation affects every sector, so coordinated policymaking helps ensure coherence and better outcomes for …
S65
Main Topic 3: Europe at the Crossroads: Digital and Cyber Strategy 2030 — The disagreement level was moderate and constructive. Speakers generally agreed on core goals like improving cybersecuri…
S66
High Level Session 1: Losing the Information Space? Ensuring Human Rights and Resilient Societies in the Age of Big Tech — Effective governance requires clear separation of roles rather than treating all stakeholders as equals in multi-stakeho…
S67
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — To ensure responsible AI deployment, Egypt has issued a charter that provides guidelines for promoting citizen well-bein…
S68
High-level AI Standards panel — Need to embrace a socio-technical paradigm that goes beyond technical aspects to include societal considerations
S69
Advancing Scientific AI with Safety Ethics and Responsibility — And also, very importantly, how we have to also see it from the context of, you know, people doing their own thing, DIY …
S70
AI Governance Dialogue: Steering the future of AI — Because principles and declarations alone are not enough. We need technical standards that translate high level commitme…
S71
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Agents functions as invaluable teammates, unlocking productivity gains and time savings, which we all want more of. Howe…
S72
Four seasons of AI:  From excitement to clarity in the first year of ChatGPT — Dealing with risks is nothing new for humanity, even if AI risks are new. In environment and climate fields, there is a …
S73
Toward Collective Action_ Roundtable on Safe & Trusted AI — This comment introduces a temporal framework that prioritizes immediate, observable risks over speculative future threat…
S74
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Audience:Thank you. My name is Sonny. I’m from the National Physical Laboratory of the United Kingdom. There’s a few wor…
S75
Towards a Safer South Launching the Global South AI Safety Research Network — Crampton argues that evaluations must be continuous and supported by large‑scale infrastructure investments to track mod…
S76
From Technical Safety to Societal Impact Rethinking AI Governanc — Virginia stresses that AI safety cannot be limited to technical robustness, accuracy or alignment. It must incorporate m…
S77
Advancing Scientific AI with Safety Ethics and Responsibility — “Those we’ll put in a higher risk category compared to something which is just working, let’s say, on certain animals wh…
S78
High-level AI Standards panel — The discussion highlighted the need for enhanced collaboration among standards organisations to address AI’s complexity …
S79
Lightning Talk #245 Advancing Equality and Inclusion in AI — Bjorn Berge: Thank you very much, Sara, and very good afternoon to all of you. Let me first start by congratulating Norw…
S80
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Hannah Taieb:Real diversity is very important indeed, and it all depends on the models and business models. Algorithms a…
S81
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — Monica Lopez: Okay, yes. So, can you hear me okay? Yes? All right. Well, first of all, thank you for the forum organiz…
S82
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Closing off research could create power asymmetries and solidify the current power positions in the AI industry. Another…
S83
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Maria Paz Canales:Thank you. Thank you very much for the invitation for being here. I think that the benefit of being al…
S84
From principles to practice: Governing advanced AI in action — Discussion of different governance approaches being implemented across regions and stakeholder groups Legal and regulat…
S85
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — This comment reinforced the toolkit approach discussed in the first segment by validating the need for flexible, adaptiv…
S86
Policymaker’s Guide to International AI Safety Coordination — “institutionalizing it should be a priority.”[119]. “We need to start thinking how we can build structures and perhaps i…
S87
Democratizing AI Building Trustworthy Systems for Everyone — – Peter Mattson- Wendy Hall – Wendy Hall- Other panelists While both advocate for measurement, Mattson focuses on tech…
S88
https://dig.watch/event/india-ai-impact-summit-2026/democratizing-ai-building-trustworthy-systems-for-everyone — And the US, this is the man again who drank bleach during COVID, says no regulation. So we can’t talk about the network …
S89
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S90
https://dig.watch/event/india-ai-impact-summit-2026/from-technical-safety-to-societal-impact-rethinking-ai-governanc — So I think first and foremost, the narrative needs to change in safety as well. So far it has been, I think it’s been an…
S91
Harmonizing High-Tech: The role of AI standards as an implementation tool — Philippe Metzger:Thank you, Bilel. Maybe to be as succinct as possible, just would like to mention four areas, which I t…
S92
AI for social good: the new face of technosolutionism — Abeba Birhane presents a critical analysis of AI systems and their impact on society, arguing that current AI technologi…
S93
Global Enterprises Show How to Scale Responsible AI — The implementation challenge extends beyond organisational commitment to practical tooling and automation. Gurnani empha…
S94
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240/2/OEWG 2025 — Mozambique: Mr. Chair, thank you for giving us the floor. With regard to application of international law to the use …
S95
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 part 3 — Mozambique: Distinguished Chair, since it’s our first intervention in this session, the Mozambique delegation commends…
S96
Main Session 2: The governance of artificial intelligence — Human Rights and Ethical Considerations Human rights | Legal and regulatory Mashologu emphasizes that AI governance mu…
S97
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S98
Agenda item 6: other matters — Mozambique: Thank you, Chair. Mozambique will speak out for national capacity. Mozambique delegation recognize that c…
S99
MahaAI Building Safe Secure & Smart Governance — “The answer is intelligent governance”[1]. “Governance frameworks must evolve as the artificial intelligence evolves”[2]…
S100
State of play of major global AI Governance processes — Hiroshi Yoshida from Japan discussed the country’s active role in international AI governance, including the Hiroshima A…
S101
WS #98 Towards a global, risk-adaptive AI governance framework — During the Q&A session, the importance of standards in AI governance was discussed. Speakers highlighted the need for te…
S102
Revitalising trust with AI: Boosting governance and public services — AI is reshaping public governance, offering innovative ways to enhance services and restore trust in institutions. The d…
S103
Internet standards and human rights | IGF 2023 WS #460 — In addition to the gender disparity, there is a noted lack of involvement from governments and their agencies, including…
S104
Panel Discussion: 01 — “The percentage of people that have access.”[19]. “Quality AI enabled services.”[9]. “They have to benefit from healthca…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Virginia Dignum
4 arguments, 62 words per minute, 1141 words, 1103 seconds
Argument 1
AI safety must incorporate governance, deployment context, and societal impact, not just technical robustness (Virginia Dignum)
EXPLANATION
Virginia argues that focusing solely on technical measures such as model alignment and benchmarking overlooks the broader factors that determine AI’s real‑world value or harm. She stresses that governance capacity, incentive structures, and the lived realities of affected communities shape AI outcomes.
EVIDENCE
She notes that safety is often framed in technical terms like model alignment and red-teaming, but the core question is what determines whether AI produces societal value or harm, emphasizing the role of deployment context, governance, and institutional systems [14-24]. Later she reiterates the need to move beyond pure technical robustness when discussing the panel’s focus [104-105].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to go beyond technical robustness and include multidisciplinary governance and societal context is emphasized in the discussion on technical safety versus societal impact [S1].
MAJOR DISCUSSION POINT
Expanding AI safety beyond technical metrics
AGREED WITH
Lourino Chemane, Yannis Ioannidis, Dame Wendy Hall, Sara Hooker, Merve Hickok, Neha Kumar, Jibu Elias
DISAGREED WITH
Yannis Ioannidis, Tom Romanoff
Argument 2
Panel emphasizes moving beyond technical safety to multidisciplinary policy frameworks (Virginia Dignum)
EXPLANATION
Virginia frames the session as a call to shift AI safety discussions from narrow technical concerns to broader, multidisciplinary policy approaches. She highlights the importance of integrating law, social sciences, and governance structures into AI safety work.
EVIDENCE
In her opening remarks she states the session will discuss moving beyond technical safety toward multidisciplinarity, governance, and real-world impact, and she invites panelists to address these broader issues [14-24][104-105].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel calls for multidisciplinary policy frameworks are echoed in the Dynamic Coalition Collaborative Session that stresses moving beyond purely technical approaches [S27].
MAJOR DISCUSSION POINT
Policy, regulation, and institutional capacity
Argument 3
Even perfectly designed AI systems can cause harm if societal inclusion and imagination are lacking.
EXPLANATION
Dignum points out that safety failures may arise from a lack of inclusive perspectives and imagination about broader impacts, not merely from technical flaws in the system.
EVIDENCE
She observes that systems may perform exactly as designed yet still cause harm because of failures of inclusion and imagination, emphasizing the need to broaden safety considerations beyond technical design [104-105].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The argument that inclusion and imagination are essential to prevent harm aligns with the broader governance perspective that safety cannot be limited to technical design [S1].
MAJOR DISCUSSION POINT
Expanding AI safety beyond technical metrics
Argument 4
The panel should develop a concrete collaborative model for AI safety governance to be implemented in the next year.
EXPLANATION
Dignum expresses the intention to work with all panelists to produce a shared model or report that will guide AI safety efforts in the coming year.
EVIDENCE
She states hope to create some kind of model for the next year and to combine panelists’ input into a report, indicating a concrete plan for ongoing collaboration [375-379].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for a concrete collaborative model matches the Dynamic Coalition session’s focus on creating actionable, multi-stakeholder frameworks [S27] and the proposal for an independent multi-stakeholder panel on critical AI infrastructure [S33].
MAJOR DISCUSSION POINT
Policy, regulation, and institutional capacity
Lourino Chemane
2 arguments, 160 words per minute, 573 words, 213 seconds
Argument 1
Safety is the protection of people and requires multidisciplinary governance, human oversight, and ethical standards (Lourino Chemane)
EXPLANATION
Lourino defines AI safety as the protection of people, not just systems, and calls for governance that integrates law, ethics, education, labor, and affected communities. Continuous human oversight and institutional accountability are essential to ensure safe AI deployment.
EVIDENCE
He outlines that safety means protecting people, prioritising human, social and institutional impact, and that effective AI policies need input from law, social sciences, ethics, and affected communities, as well as continuous human oversight and protection of women, children, and youth [30-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safety as protection of people, requiring law, ethics, and continuous human oversight, is highlighted in the technical-to-societal safety discussion [S1] and reinforced by calls for human oversight in autonomous systems [S37].
MAJOR DISCUSSION POINT
Expanding AI safety beyond technical metrics
AGREED WITH
Virginia Dignum, Yannis Ioannidis, Dame Wendy Hall, Sara Hooker, Merve Hickok, Neha Kumar, Jibu Elias
Argument 2
National AI strategies must address infrastructure sovereignty, cybersecurity, and digital government interoperability (Lourino Chemane)
EXPLANATION
Lourino explains Mozambique’s ongoing work on a national AI strategy, emphasizing data policy, cybersecurity, regulation of data centres and cloud computing, and an interoperability framework for public administration. These elements are seen as essential for sovereign, safe AI deployment.
EVIDENCE
He describes drafting a data policy, reviewing the national cybersecurity strategy, adopting regulations for data-centre construction and cloud computing, and updating the interoperability framework to improve efficiency of public services [43-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
National AI strategy elements such as data policy, cybersecurity, and interoperability are discussed in the AI-driven cyber-defense briefing for developing nations [S30] and in comparative country approaches to AI governance [S35].
MAJOR DISCUSSION POINT
Socio‑political and environmental impacts of AI deployment
Yannis Ioannidis
2 arguments, 140 words per minute, 537 words, 229 seconds
Argument 1
Distinguish safety of AI technology from safety of AI use; emphasize regulation of inputs and outputs (Yannis Ioannidis)
EXPLANATION
Yannis separates the technical safety of AI models from the safety of their use, arguing that the latter—how inputs are chosen and how outputs are applied—requires regulation and multidisciplinary oversight. He stresses that both the data fed into models and the contexts in which they are deployed must be governed.
EVIDENCE
He states that the technology itself (algorithms, models) does not raise safety issues, but the use, including the data inputs and the decisions made by humans, must be measured, regulated, and involve multiple disciplines [108-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The distinction between technical safety and use-case regulation, focusing on inputs and outputs, is articulated in the human-rights-focused AI governance session [S18] and the Policymaker’s Guide to AI safety coordination [S41].
MAJOR DISCUSSION POINT
Expanding AI safety beyond technical metrics
AGREED WITH
Virginia Dignum, Lourino Chemane, Dame Wendy Hall, Sara Hooker, Merve Hickok, Neha Kumar, Jibu Elias
Argument 2
Safety must be enforced through law on AI outputs, not just on the technology itself (Yannis Ioannidis)
EXPLANATION
Yannis argues that legal frameworks should target the outcomes produced by AI systems rather than only the underlying technology. This approach ensures accountability for harms that arise in real‑world deployments.
EVIDENCE
He explicitly says safety must start from regulating both input size and output, suggesting that laws should apply to AI outputs rather than merely the technology [108-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Legal frameworks targeting AI outputs rather than the underlying technology are advocated in the same human-rights-oriented discussion [S18] and reinforced by policy guidance emphasizing outcome-based regulation [S41].
MAJOR DISCUSSION POINT
Policy, regulation, and institutional capacity
AGREED WITH
Tom Romanoff, Jeanna Matthews, Merve Hickok, Participant
Dame Wendy Hall
3 arguments, 147 words per minute, 1140 words, 462 seconds
Argument 1
Lack of diversity undermines ethical AI; inclusive representation is essential for true safety (Dame Wendy Hall)
EXPLANATION
Wendy points out that AI discussions and leadership are dominated by men, which she argues compromises ethical outcomes. She stresses that without gender and broader diversity, AI systems cannot be truly safe or unbiased.
EVIDENCE
She observes that 50% of the population (women) were not represented at the summit, noting the all-male panel and emphasizing that lack of diversity leads to ethical blind spots [78-88].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The critique of gender-imbalanced panels and the link to ethical blind spots are echoed in the equality and inclusion lightning talk [S45] and the broader governance call for diverse participation [S1].
MAJOR DISCUSSION POINT
Diversity, inclusion, and representation in AI governance
AGREED WITH
Neha Kumar, Jibu Elias, Virginia Dignum, Merve Hickok
Argument 2
Propose “AI metrology” and the study of “social machines” to systematically measure AI impact (Dame Wendy Hall)
EXPLANATION
Wendy introduces the concept of AI metrology, a systematic science for measuring AI’s socio‑technical impact, likening AI systems to “social machines”. She calls for dedicated research institutes and a new journal to advance this field.
EVIDENCE
She describes the launch of a UK centre for AI measurement, the AI Security Institute, and her vision of studying “social machines” and establishing “AI metrology” with a dedicated journal [90-120].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of AI metrology and “social machines” as a systematic measurement discipline is introduced in the technical-to-societal safety discussion [S1] and further developed by the UK AI measurement institute initiative [S46].
MAJOR DISCUSSION POINT
Measurement, transparency, and accountability
AGREED WITH
Sara Hooker, Participant, Virginia Dignum, Tom Romanoff
Argument 3
AI safety requires long‑term monitoring and longitudinal studies to understand delayed consequences.
EXPLANATION
Hall argues that collecting data over extended periods is essential to assess the real impact of AI interventions, especially when immediate bans may have unintended effects.
EVIDENCE
She mentions the need for longitudinal studies, referencing Australia’s age-restriction experiment on social media and noting that behavioral issues take much longer to surface, highlighting the difficulty of assessing impacts with short-term measures [89-103].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for long-term, longitudinal monitoring of AI impacts is highlighted in the World Economic Forum panel on dangerous AI capabilities and the importance of extended observation periods [S32].
MAJOR DISCUSSION POINT
Measurement, transparency, and accountability
Sara Hooker
1 argument, 191 words per minute, 918 words, 287 seconds
Argument 1
Safety discussions need precision, acknowledgment of trade‑offs, and transparent reporting of what is sacrificed (Sara Hooker)
EXPLANATION
Sara argues that AI safety conversations must become more precise, openly acknowledge trade‑offs, and require clear reporting of what safety parameters are covered or omitted. She sees this transparency as essential for accountability.
EVIDENCE
She notes that prestige and power allocation reveal seriousness about safety, calls for precise conversation, highlights trade-offs in model design, and asks for reporting of language coverage and safety gaps as a concrete step [135-186].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for precise safety conversations, explicit trade-off reporting, and algorithmic impact assessments are reflected in the expert recommendation for mandatory impact assessments [S47].
MAJOR DISCUSSION POINT
Expanding AI safety beyond technical metrics
AGREED WITH
Dame Wendy Hall, Rasmus Andersen, Tom Romanoff
Neha Kumar
2 arguments, 163 words per minute, 643 words, 236 seconds
Argument 1
Human‑centred design and HCI research highlight the need for inclusive, context‑aware AI systems (Neha Kumar)
EXPLANATION
Neha emphasizes that human‑computer interaction research has long studied social impact, user‑centred design, and inclusive technology, providing a foundation for AI systems that respect context and diverse user needs.
EVIDENCE
She describes her background in HCI, the study of social impact, and the importance of learning from disciplines that have examined inclusive design for years before AI emerged [233-239].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-centred design principles and the importance of inclusive, context-aware AI are underscored in the multistakeholder AI governance forum that stresses human-rights-based design [S31].
MAJOR DISCUSSION POINT
Expanding AI safety beyond technical metrics
AGREED WITH
Virginia Dignum, Lourino Chemane, Yannis Ioannidis, Dame Wendy Hall, Sara Hooker, Merve Hickok, Jibu Elias
Argument 2
Question who decides, who benefits, and who is involved in design to avoid exclusion (Neha Kumar)
EXPLANATION
Neha calls for critical reflection on decision‑making power, beneficiary identification, and inclusive design processes, warning that current dialogues often ignore who actually gains from AI deployments.
EVIDENCE
She asks who makes decisions, who benefits, and who is part of the design process, linking these questions to feminist studies, design thinking, and development studies, and notes the lack of women and vulnerable groups in current conversations [285-303].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The imperative to identify decision-makers, beneficiaries, and inclusive design processes is emphasized in the closing remarks on charting an inclusive AI governance path [S25] and the high-level AI standards panel’s call for diverse stakeholder engagement [S29].
MAJOR DISCUSSION POINT
Diversity, inclusion, and representation in AI governance
AGREED WITH
Dame Wendy Hall, Jibu Elias, Virginia Dignum, Merve Hickok
Merve Hickok
2 arguments, 148 words per minute, 454 words, 182 seconds
Argument 1
Safety must protect human rights, democratic values, and be driven by an expanded governance narrative (Merve Hickok)
EXPLANATION
Merve frames AI safety as a matter of safeguarding human rights, democratic participation, and rule of law, arguing that safety narratives need to shift toward protecting freedoms and dignity.
EVIDENCE
She describes her organization’s focus on AI policy, human rights, democratic values, and the need for an expanded safety narrative that addresses rights, freedoms, and democratic participation [242-246][271-283].
MAJOR DISCUSSION POINT
Diversity, inclusion, and representation in AI governance
AGREED WITH
Virginia Dignum, Lourino Chemane, Yannis Ioannidis, Dame Wendy Hall, Sara Hooker, Neha Kumar, Jibu Elias
Argument 2
Change the safety narrative to focus on rights, freedoms, and democratic participation (Merve Hickok)
EXPLANATION
Merve argues that the current safety narrative is dominated by powerful interests and must be reframed to centre human rights, democratic processes, and equitable participation.
EVIDENCE
She notes that history shows narratives are set by the powerful, stresses the need to shift safety discussions toward rights, freedoms, and democratic participation, and calls for collective action to change the narrative [271-283].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The recommendation to shift the safety narrative toward rights, freedoms, and democratic participation aligns with the UNGA emphasis on common standards for human-rights protection [S36] and the multistakeholder AI governance discussion [S31].
MAJOR DISCUSSION POINT
Measurement, transparency, and accountability
AGREED WITH
Yannis Ioannidis, Tom Romanoff, Jeanna Matthews, Participant
Jibu Elias
3 arguments, 155 words per minute, 658 words, 253 seconds
Argument 1
Language and cultural exclusion (e.g., tribal groups) illustrate extractive AI practices (Jibu Elias)
EXPLANATION
Jibu highlights how AI systems often ignore minority languages and cultures, leading to extractive practices that marginalise tribal communities, especially in India.
EVIDENCE
He mentions tribal populations in Telangana, Chhattisgarh, and Jharkhand whose languages are not represented in models like Gemini, and describes how data-centre projects have been built without community consent, harming local water resources [192-224].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The problem of language and cultural exclusion in AI models is highlighted in the broader governance discussion that stresses inclusion beyond technical metrics [S1].
MAJOR DISCUSSION POINT
Diversity, inclusion, and representation in AI governance
AGREED WITH
Dame Wendy Hall, Neha Kumar, Virginia Dignum, Merve Hickok
Argument 2
AI deployment can be exploitative; data‑center construction can harm local communities and resources (Jibu Elias)
EXPLANATION
Jibu argues that AI deployment can be exploitative, citing data‑centre construction that extracts groundwater and manipulates local communities, illustrating environmental and social harms beyond technical failures.
EVIDENCE
He recounts a data-centre built in Telangana that extracted groundwater in a water-scarce area, with companies interacting only with community leaders and politicians, reflecting exploitative practices [208-214].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The environmental and community harms linked to AI infrastructure, such as data-centre water extraction, are discussed in the AI-driven cyber-defense briefing on national strategies and infrastructure sovereignty [S30].
MAJOR DISCUSSION POINT
Socio‑political and environmental impacts of AI deployment
Argument 3
AI systems may generate new mental‑health challenges, such as ‘AI psychosis’, especially among vulnerable groups like the elderly.
EXPLANATION
Elias raises concerns that emerging AI technologies could lead to novel psychological issues and affect older populations, indicating a need to consider health impacts beyond technical performance.
EVIDENCE
He mentions the term “AI psychosis,” admits uncertainty about its validity, and describes conversations with a foundation about elderly people using AI models, signaling emerging health concerns [215-220].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Emerging mental-health risks from AI, including concerns about “AI psychosis,” are mentioned in the World Economic Forum panel on dangerous AI capabilities and the need for careful monitoring of societal impacts [S32].
MAJOR DISCUSSION POINT
Expanding AI safety beyond technical metrics
Rasmus Andersen
2 arguments, 158 words per minute, 568 words, 214 seconds
Argument 1
Leaders need foresight on long‑term AI effects to safeguard citizens by 2030‑35 (Rasmus Andersen)
EXPLANATION
Rasmus stresses that policymakers must consider the long‑term trajectory of AI up to 2030‑35 to ensure that citizens are protected from emerging risks, emphasizing strategic foresight in AI governance.
EVIDENCE
He explains his advisory role at the Tony Blair Institute, helping leaders anticipate AI’s impact on citizens and plan for the world in 2030-35 [248-256].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Strategic foresight for AI governance up to 2030-35 is advocated in the multistakeholder AI governance forum that calls for long-term scenario planning [S31].
MAJOR DISCUSSION POINT
Socio‑political and environmental impacts of AI deployment
AGREED WITH
Dame Wendy Hall, Sara Hooker, Tom Romanoff
Argument 2
Effective AI safety governance requires government to serve as the central hub where technical, civil‑society, and industry perspectives converge.
EXPLANATION
Andersen observes that the only place where the imperfect perspectives of technologists, civil society, and industry can be coordinated is within governmental structures, underscoring the pivotal role of state actors.
EVIDENCE
He states that government is the only place where these perspectives come together, however imperfectly, highlighting the importance of the summit’s presence for aligning diverse viewpoints [322-324].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of government as the coordinating hub for imperfect technical, civil-society, and industry inputs is highlighted in the proposal for an independent multi-stakeholder panel on critical AI infrastructure [S33] and the call for legal certainty through regulation [S44].
MAJOR DISCUSSION POINT
Socio‑political and environmental impacts of AI deployment
Tom Romanoff
3 arguments, 155 words per minute, 628 words, 242 seconds
Argument 1
ACM’s role is to bridge technical recommendations with policymakers worldwide (Tom Romanoff)
EXPLANATION
Tom describes ACM’s function of translating technical AI safety concerns into policy advice for governments, acting as a conduit between researchers and decision‑makers.
EVIDENCE
He outlines his position as director of policy for ACM, managing policy committees and connecting computer-science experts with policymakers across regions [258-265].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
ACM’s function as a conduit between technical experts and policymakers is described in the multistakeholder AI governance session that emphasizes bridging technical advice to policy action [S31].
MAJOR DISCUSSION POINT
Policy, regulation, and institutional capacity
Argument 2
Highlight political‑will thresholds (51 % rule) and the need for advocacy to translate technical risks into policy (Tom Romanoff)
EXPLANATION
Tom introduces the “51 % rule”, explaining that a majority threshold is needed for regulatory change, and urges participants to become advocates to push political will toward AI safety regulations.
EVIDENCE
He conducts a hand-raising exercise, explains the 51 % rule for political and corporate decision-making, gives examples such as deep-fake regulation, and calls for active advocacy rather than moderation [326-358].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion of a 51 % political-will threshold and the need for advocacy to drive regulation mirrors the consensus-building parliamentary session that stresses coordinated stakeholder responses [S28].
MAJOR DISCUSSION POINT
Measurement, transparency, and accountability
AGREED WITH
Dame Wendy Hall, Rasmus Andersen, Sara Hooker
DISAGREED WITH
Virginia Dignum, Yannis Ioannidis
Argument 3
Translating technical AI safety concerns into understandable language for the public is essential to drive regulatory change.
EXPLANATION
Romanoff stresses that without clear communication of technical risks to lay audiences, it will be difficult to generate the political will needed for effective AI regulation.
EVIDENCE
He urges participants to “Educate them. Figure out ways to communicate the very heady technical aspects of security within the AI stack to the common person,” emphasizing public education as a catalyst for policy action [357-358].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Emphasizing clear communication of technical risks to the public to generate political will is supported by the Policymaker’s Guide that calls for prioritizing human welfare over technical advancement [S41].
MAJOR DISCUSSION POINT
Policy, regulation, and institutional capacity
Jeanna Matthews
1 argument, 145 words per minute, 277 words, 113 seconds
Argument 1
Historical lessons show that voluntary good intentions are insufficient; mandatory safeguards are needed (Jeanna Matthews)
EXPLANATION
Jeanna asserts that history demonstrates reliance on goodwill does not guarantee safety, implying that enforceable regulations are required to protect against AI harms.
EVIDENCE
She states that history does not show safety will happen automatically and that powerful narratives must change, emphasizing the need for mandatory safeguards [266-270][359-363].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UNGA’s call for universal guardrails and mandatory standards underscores the need for enforceable safeguards rather than reliance on goodwill [S36].
MAJOR DISCUSSION POINT
Policy, regulation, and institutional capacity
AGREED WITH
Yannis Ioannidis, Tom Romanoff, Merve Hickok, Participant
Participant
2 arguments, 126 words per minute, 141 words, 67 seconds
Argument 1
Extend model‑card and dataset‑card frameworks to cover multiple languages and cultures (Participant)
EXPLANATION
The participant suggests that regulatory artifacts such as model cards and dataset cards should be adapted to reflect multilingual and multicultural contexts, ensuring AI safety across diverse populations.
EVIDENCE
He asks how regulatory artifacts can be extended to multiple languages, contexts, and cultures, emphasizing the need for dynamic, language-aware tools [364].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The push for multilingual, inclusive model and dataset documentation aligns with the equality and inclusion lightning talk that calls for broader representation in AI artifacts [S45] and the AI measurement institute’s work on multilingual standards [S46].
MAJOR DISCUSSION POINT
Diversity, inclusion, and representation in AI governance
AGREED WITH
Yannis Ioannidis, Tom Romanoff, Jeanna Matthews, Merve Hickok
Argument 2
Suggest dynamic, multilingual regulatory artifacts (model cards, dataset cards) for broader accountability (Participant)
EXPLANATION
The participant argues that model‑card and dataset‑card evaluations must be dynamic and reflect language diversity, and that governments could require such multilingual disclosures from model providers.
EVIDENCE
He notes that current artifacts perform well in English but not in other languages, calls for dynamic, multilingual standards, and mentions that governments could mandate these disclosures, citing California as an example [366-371].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The recommendation for dynamic, multilingual regulatory artifacts is echoed in the equality-focused discussion on extending AI documentation to diverse languages [S45] and the development of AI metrology tools for systematic measurement [S46].
MAJOR DISCUSSION POINT
Measurement, transparency, and accountability
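The participant’s proposal can be made concrete with machine-readable disclosures. The following is a minimal, hypothetical sketch of how an auditor or regulator might flag language gaps in a model card; the field names (`safety_evaluations`, `coverage`) and the sample data are illustrative assumptions, not an existing model-card standard.

```python
# Hypothetical multilingual model-card check (all field names are
# illustrative assumptions, not part of any existing standard).

REQUIRED_LANGUAGES = {"en", "hi", "te"}  # e.g., languages a jurisdiction mandates

model_card = {
    "model_name": "example-model",  # hypothetical model
    "safety_evaluations": [
        {"language": "en", "benchmark": "toxicity-suite", "coverage": "full"},
        {"language": "hi", "benchmark": "toxicity-suite", "coverage": "partial"},
    ],
}

def missing_safety_coverage(card, required):
    """Return required languages with no safety evaluation reported at all."""
    evaluated = {e["language"] for e in card.get("safety_evaluations", [])}
    return sorted(required - evaluated)

def partial_safety_coverage(card):
    """Return languages whose evaluations are explicitly marked as partial."""
    return sorted(
        e["language"]
        for e in card.get("safety_evaluations", [])
        if e.get("coverage") != "full"
    )

print(missing_safety_coverage(model_card, REQUIRED_LANGUAGES))  # ['te']
print(partial_safety_coverage(model_card))  # ['hi']
```

Making such disclosures a required, structured part of model documentation is one way the “dynamic, multilingual regulatory artifacts” suggested here could be checked automatically rather than by manual review.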
Agreements
Agreement Points
AI safety must extend beyond technical robustness to include governance, deployment context, and societal impact.
Speakers: Virginia Dignum, Lourino Chemane, Yannis Ioannidis, Dame Wendy Hall, Sara Hooker, Merve Hickok, Neha Kumar, Jibu Elias
AI safety must incorporate governance, deployment context, and societal impact, not just technical robustness (Virginia Dignum)
Safety is the protection of people and requires multidisciplinary governance, human oversight, and ethical standards (Lourino Chemane)
Distinguish safety of AI technology from safety of AI use; emphasize regulation of inputs and outputs (Yannis Ioannidis)
Propose “AI metrology” and the study of “social machines” to systematically measure AI impact (Dame Wendy Hall)
Safety discussions need precision, acknowledgment of trade‑offs, and transparent reporting of what is sacrificed (Sara Hooker)
Safety must protect human rights, democratic values, and be driven by an expanded governance narrative (Merve Hickok)
Human‑centred design and HCI research highlight the need for inclusive, context‑aware AI systems (Neha Kumar)
Language and cultural exclusion (e.g., tribal groups) illustrate extractive AI practices (Jibu Elias)
All speakers stress that focusing only on technical metrics (e.g., model alignment, robustness) is insufficient; AI safety requires multidisciplinary governance, attention to deployment contexts, inclusive design, and societal impact considerations [14-24][30-38][108-124][90-120][135-186][242-246][285-303][192-205].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with calls for multidisciplinary AI governance that incorporate societal impact, as emphasized by Virginia in S55 and the socio‑technical paradigm discussions in S67 and S68.
Inclusion and diversity are essential for ethical and safe AI outcomes.
Speakers: Dame Wendy Hall, Neha Kumar, Jibu Elias, Virginia Dignum, Merve Hickok
Lack of diversity undermines ethical AI; inclusive representation is essential for true safety (Dame Wendy Hall)
Question who decides, who benefits, and who is involved in design to avoid exclusion (Neha Kumar)
Language and cultural exclusion (e.g., tribal groups) illustrate extractive AI practices (Jibu Elias)
Even perfectly designed AI systems can cause harm if societal inclusion and imagination are lacking (Virginia Dignum)
Change the safety narrative to focus on rights, freedoms, and democratic participation (Merve Hickok)
Speakers highlight that gender, linguistic, and cultural representation gaps create blind spots and can lead to harm; inclusive decision-making and diverse participation are required for trustworthy AI [78-88][285-303][192-205][104-105][271-283].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of inclusive stakeholder participation and gender considerations is highlighted in S61 (gender inclusivity) and S62 (multistakeholder AI standards), and further expanded to address gender‑based violence in AI safety in S73.
Systematic measurement, monitoring, and documentation (e.g., AI metrology, model‑cards) are needed to assess AI safety over time.
Speakers: Dame Wendy Hall, Sara Hooker, Participant, Virginia Dignum, Tom Romanoff
Propose “AI metrology” and the study of “social machines” to systematically measure AI impact (Dame Wendy Hall)
Safety discussions need precision, acknowledgment of trade‑offs, and transparent reporting of what is sacrificed (Sara Hooker)
Extend model‑card and dataset‑card frameworks to cover multiple languages and cultures (Participant)
The panel should develop a concrete collaborative model for AI safety governance to be implemented in the next year (Virginia Dignum)
Highlight political‑will thresholds (51 % rule) and the need for advocacy to translate technical risks into policy (Tom Romanoff)
Across the panel there is consensus on creating concrete, transparent artefacts (AI metrology, model-cards, longitudinal studies) and on institutionalising them through standards or collaborative models to enable ongoing safety assessment [90-120][135-186][364-371][375-379][326-358].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions call for concrete technical standards and continuous evaluation mechanisms, such as model‑cards and AI metrology (S70), systematic assessment frameworks (S74), and large‑scale monitoring infrastructure (S75).
Regulation should focus on AI outputs and societal impacts rather than only on the underlying technology.
Speakers: Yannis Ioannidis, Tom Romanoff, Jeanna Matthews, Merve Hickok, Participant
Safety must be enforced through law on AI outputs, not just on the technology itself (Yannis Ioannidis)
Highlight political‑will thresholds (51 % rule) and the need for advocacy to translate technical risks into policy (Tom Romanoff)
Historical lessons show that voluntary good intentions are insufficient; mandatory safeguards are needed (Jeanna Matthews)
Change the safety narrative to focus on rights, freedoms, and democratic participation (Merve Hickok)
Extend model‑card and dataset‑card frameworks to cover multiple languages and cultures (Participant)
Speakers agree that legal and policy mechanisms must target the real-world consequences of AI (outputs, harms) and that voluntary measures are inadequate; concrete regulatory artefacts can be mandated to ensure compliance [108-124][326-358][266-270][359-363][364-371].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates distinguishing output‑oriented regulation from technology‑centric approaches appear in S56 and S60, while S55 stresses the need for legal frameworks that target societal outcomes of AI systems.
Long‑term foresight, scenario planning and longitudinal monitoring are crucial for AI safety.
Speakers: Dame Wendy Hall, Rasmus Andersen, Sara Hooker, Tom Romanoff
AI safety requires long‑term monitoring and longitudinal studies to understand delayed consequences (Dame Wendy Hall)
Leaders need foresight on long‑term AI effects to safeguard citizens by 2030‑35 (Rasmus Andersen)
Safety discussions need precision, acknowledgment of trade‑offs, and transparent reporting of what is sacrificed (Sara Hooker)
Highlight political‑will thresholds (51 % rule) and the need for advocacy to translate technical risks into policy (Tom Romanoff)
The panel stresses that AI impacts unfold over extended periods; therefore, policymakers need scenario-based foresight and continuous evidence-gathering to guide regulation and mitigation [89-103][248-256][151-156][326-358].
POLICY CONTEXT (KNOWLEDGE BASE)
Recommendations for precautionary principles, scenario building, and longitudinal monitoring are documented in S72, echoed in strategic foresight discussions in S57, and operationalized through continuous tracking initiatives in S75.
Similar Viewpoints
Both argue that the core safety challenge lies in how AI is used and governed, not merely in technical robustness of the models themselves [14-24][108-124].
Speakers: Virginia Dignum, Yannis Ioannidis
AI safety must incorporate governance, deployment context, and societal impact, not just technical robustness (Virginia Dignum)
Distinguish safety of AI technology from safety of AI use; emphasize regulation of inputs and outputs (Yannis Ioannidis)
Both call for precise, systematic measurement frameworks (AI metrology, model‑card reporting) to make safety discussions concrete and accountable [90-120][135-186].
Speakers: Dame Wendy Hall, Sara Hooker
Propose “AI metrology” and the study of “social machines” to systematically measure AI impact (Dame Wendy Hall)
Safety discussions need precision, acknowledgment of trade‑offs, and transparent reporting of what is sacrificed (Sara Hooker)
Both stress that AI safety must be grounded in multidisciplinary, human‑centred approaches that consider social, ethical, and contextual factors [30-38][285-303].
Speakers: Lourino Chemane, Neha Kumar
Safety is the protection of people and requires multidisciplinary governance, human oversight, and ethical standards (Lourino Chemane)
Human‑centred design and HCI research highlight the need for inclusive, context‑aware AI systems (Neha Kumar)
Both argue that relying on goodwill is inadequate; robust, rights‑based regulatory safeguards are required to prevent harm [242-246][266-270][359-363].
Speakers: Merve Hickok, Jeanna Matthews
Safety must protect human rights, democratic values, and be driven by an expanded governance narrative (Merve Hickok)
Historical lessons show that voluntary good intentions are insufficient; mandatory safeguards are needed (Jeanna Matthews)
Both focus on the political dimension: leaders must anticipate future AI impacts and actively mobilise political will to enact effective regulation [248-256][326-358].
Speakers: Rasmus Andersen, Tom Romanoff
Leaders need foresight on long‑term AI effects to safeguard citizens by 2030‑35 (Rasmus Andersen)
Highlight political‑will thresholds (51 % rule) and the need for advocacy to translate technical risks into policy (Tom Romanoff)
Unexpected Consensus
A technical expert (Yannis Ioannidis) aligns with policy‑oriented speakers on enforcing AI safety through law on outputs.
Speakers: Yannis Ioannidis, Tom Romanoff, Jeanna Matthews, Merve Hickok
Safety must be enforced through law on AI outputs, not just on the technology itself (Yannis Ioannidis)
Highlight political‑will thresholds (51 % rule) and the need for advocacy to translate technical risks into policy (Tom Romanoff)
Historical lessons show that voluntary good intentions are insufficient; mandatory safeguards are needed (Jeanna Matthews)
Change the safety narrative to focus on rights, freedoms, and democratic participation (Merve Hickok)
Yannis, who frames himself as a technical person, nevertheless calls for legal regulation of AI outputs, a stance that converges with the explicitly policy-driven arguments of Tom, Jeanna, and Merve, showing an unexpected cross-disciplinary agreement on outcome-based regulation [108-124][326-358][266-270][359-363][271-283].
Overall Assessment

The panel displayed strong consensus that AI safety cannot be reduced to technical robustness alone; it requires multidisciplinary governance, inclusive design, systematic measurement, and outcome‑oriented regulation. Participants from technical, policy, civil‑society, and regional backgrounds converged on these themes, while emphasizing the need for concrete tools (model‑cards, AI metrology) and long‑term foresight.

High consensus across most speakers, indicating a shared understanding that future AI governance must integrate technical, social, and legal dimensions. This broad agreement creates a solid foundation for developing collaborative frameworks, standards, and policy recommendations in the coming year.

Differences
Different Viewpoints
Scope of AI safety – technical robustness versus governance and societal context
Speakers: Virginia Dignum, Yannis Ioannidis, Tom Romanoff
AI safety must incorporate governance, deployment context, and societal impact, not just technical robustness (Virginia Dignum)
Distinguish safety of AI technology from safety of AI use; technology itself has no safety issue, focus on regulating inputs and outputs (Yannis Ioannidis)
Highlight political‑will thresholds (51 % rule) and the need for advocacy to translate technical risks into policy (Tom Romanoff)
Virginia argues that safety cannot be reduced to model alignment or robustness and must include governance, incentives and lived realities [14-24]. Yannis counters that the technology itself is not a safety problem and that regulation should target the data fed into models and the way outputs are used [108-124]. Tom adds that achieving safety depends on political mobilisation and thresholds rather than purely technical fixes [326-358].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between a purely technical safety focus and broader governance considerations is reflected in S55, the technical‑vs‑governance framing of S56, and the socio‑technical perspectives of S67 and S68.
Approach to measuring AI safety – long‑term longitudinal monitoring vs. immediate precise reporting of trade‑offs
Speakers: Dame Wendy Hall, Sara Hooker
Propose “AI metrology” and the study of “social machines” to systematically measure AI impact; stress need for longitudinal studies to capture delayed consequences (Dame Wendy Hall)
Safety discussions need precision, acknowledgment of trade‑offs and transparent reporting of what is sacrificed (Sara Hooker)
Wendy calls for a new discipline of AI metrology and long-term, longitudinal evidence-gathering to understand impacts over time [89-120]. Hooker argues for more precise, short-term accountability, demanding clear reporting of language coverage and safety gaps as a concrete step [135-186]. The two differ on whether safety measurement should prioritise long-term studies or immediate, granular transparency.
POLICY CONTEXT (KNOWLEDGE BASE)
The contrast between immediate risk reporting and longer‑term scenario planning is discussed in S72 (precautionary tools) and S73, which presents a temporal framework prioritizing observable risks.
Who should coordinate the convergence of technical, civil‑society and industry perspectives – government versus multi‑stakeholder bodies
Speakers: Rasmus Andersen, Tom Romanoff
Effective AI safety governance requires government to serve as the central hub where imperfect perspectives converge (Rasmus Andersen)
ACM’s role is to bridge technical recommendations with policymakers worldwide and to mobilise advocacy for regulatory change (Tom Romanoff)
Rasmus states that the only place where diverse viewpoints can be coordinated is within government structures [322-324]. Tom describes ACM as the conduit between researchers and policy makers and urges participants to become advocates to push political will [258-265][326-358]. The tension lies in whether the state or a multistakeholder professional association should lead the coordination effort.
POLICY CONTEXT (KNOWLEDGE BASE)
Multi‑stakeholder coordination is advocated in S62 and S64, while S66 argues for a clear governmental lead, and S63 highlights the need for global cooperation among diverse actors.
Unexpected Differences
Attitude toward technical safety of AI models
Speakers: Yannis Ioannidis, Dame Wendy Hall
Technology itself does not raise safety issues; focus should be on regulating inputs and outputs (Yannis Ioannidis)
Propose AI metrology to systematically measure AI impact, implying that technical aspects also need rigorous safety assessment (Dame Wendy Hall)
Yannis treats the AI model as inherently safe and shifts responsibility to use-case regulation, whereas Wendy argues that even the technical side requires a new measurement discipline (AI metrology) to ensure safety, revealing an unexpected clash over whether technical robustness itself warrants dedicated safety science [108-124][90-120].
POLICY CONTEXT (KNOWLEDGE BASE)
S55 notes that focusing solely on technical safety is insufficient, indicating divergent attitudes toward the role of technical safeguards in overall AI safety.
Overall Assessment

The panel displayed a broad consensus that AI safety must go beyond pure technical robustness and involve governance, inclusion, and human‑rights considerations. However, clear disagreements emerged around the primary locus of safety (technology vs. use), the preferred measurement horizon (long‑term longitudinal studies vs. immediate precise reporting), and the institutional arena best suited to coordinate diverse stakeholder inputs (government versus multistakeholder bodies such as ACM).

Moderate to high – while participants share the overarching goal of safer AI, they diverge on methodological and institutional pathways, which may impede the formulation of unified policy recommendations and could lead to fragmented governance approaches.

Partial Agreements
Both agree that AI safety cannot be limited to technical metrics and must involve multidisciplinary governance and protection of people, but Virginia stresses moving beyond technical framing while Lourino focuses on concrete policy instruments such as data‑centre regulation and cybersecurity [14-24][30-38].
Speakers: Virginia Dignum, Lourino Chemane
AI safety must incorporate governance, deployment context, and societal impact, not just technical robustness (Virginia Dignum)
Safety is the protection of people; AI governance must prioritise human, social and institutional impact, requiring multidisciplinary input (Lourino Chemane)
Both see history as a warning that safety cannot rely on goodwill alone and call for stronger safeguards. Merve frames this as a narrative shift toward rights and democracy, while Jeanna calls explicitly for enforceable regulations [271-283][266-270].
Speakers: Merve Hickok, Jeanna Matthews
Safety must protect human rights, democratic values and be driven by an expanded governance narrative (Merve Hickok)
Historical lessons show that voluntary good intentions are insufficient; mandatory safeguards are needed (Jeanna Matthews)
Takeaways
Key takeaways
AI safety must be understood as a socio‑technical challenge, not merely a set of technical robustness metrics.
Governance, deployment context, incentive structures, and institutional capacity shape whether AI creates value or harm.
Multidisciplinary input (law, ethics, social sciences, HCI, labor, education, affected communities) is essential for effective AI policy.
Human‑centred design, continuous oversight, and accountability mechanisms are required to protect people, especially women, children, youth, and marginalized groups.
Diversity and inclusive representation in decision‑making bodies are critical; lack of gender and cultural diversity undermines ethical outcomes.
Transparency through systematic reporting (model cards, dataset cards, AI metrology) and explicit articulation of trade‑offs is needed.
AI deployment can have exploitative socio‑political and environmental impacts (e.g., data‑center resource extraction, language exclusion).
Policy and regulatory frameworks must evolve to address real‑world harms, enforce safety on AI outputs, and align with democratic values and human rights.
The ACM plans to bridge technical research with policymakers and launch a dedicated AI measurement journal to foster a science of “social machines.”
Effective change requires active advocacy, shifting narratives from voluntary goodwill to mandatory safeguards, and mobilising political will (the “51 % rule”).
Resolutions and action items
Mozambique will finalize its national AI strategy, data policy, and cybersecurity regulations, incorporating UNESCO ethics principles and focusing on infrastructure sovereignty.
The ACM will launch a new journal on AI measurement/metrology to collect and share systematic evaluation data.
Panelists agreed to draft a post‑summit report/model that captures the discussed themes and recommendations for the next year.
Call for governments to require multilingual, culturally aware model‑card and dataset‑card disclosures for AI systems deployed within their jurisdictions.
Commitment from ACM policy office (Tom Romanoff) to continue translating technical safety recommendations into policy engagements worldwide.
Unresolved issues
How to operationalise inclusive governance structures that meaningfully involve women, children, indigenous and language‑minority communities.
Specific mechanisms for enforcing safety on AI outputs versus the technology itself remain undefined.
Concrete standards for multilingual and culturally contextualised regulatory artifacts (model cards, dataset cards) have not been established.
The balance between technical innovation speed and the time needed for longitudinal safety studies is still an open question.
How to ensure accountability for extractive practices (e.g., data‑center construction, labor exploitation) linked to AI deployment.
What legal or punitive measures should apply when AI systems cause serious harm (e.g., criminal liability, civil lawsuits).
Suggested compromises
- Acknowledge inevitable trade‑offs in AI design and require explicit public reporting of which safety aspects are being sacrificed.
- Combine technical robustness measures (alignment, red‑team testing) with governance tools (human oversight, institutional accountability) rather than treating them as mutually exclusive.
- Adopt a phased approach: start with mandatory disclosures and multilingual model‑card requirements, then progress to stronger regulatory enforcement as capacity builds.
- Leverage existing international frameworks (UNESCO ethics principles, national AI strategies) while tailoring them to local socio‑cultural contexts.
- Encourage moderate stakeholders to move from a “wait‑and‑see” stance to active participation by aligning incentives with long‑term societal benefits.
Thought Provoking Comments
AI safety needs to move beyond technical robustness and consider deployment context, governance capacity, incentive structures, and the lived reality of communities; harms arise because AI is embedded in institutional, economic and political systems.
Sets a foundational reframing of the entire discussion, shifting focus from model‑centric metrics to socio‑technical ecosystems.
Established the thematic lens for the panel, prompting subsequent speakers to address multidisciplinary governance, inclusion, and real‑world impact rather than purely technical fixes.
Speaker: Virginia Dignum
Safety is the protection of people, not just systems; AI governance must prioritize human, social, and institutional impact, involve law, ethics, labor, education, and affected communities, and ensure continuous human oversight and accountability.
Introduces a concrete, people‑first definition of safety and emphasizes the need for multidisciplinary input and continuous oversight.
Reinforced Dignum’s framing and broadened the conversation to include policy mechanisms such as data policies, cyber‑security, and digital government interoperability.
Speaker: Lourino Chemane
If it’s not diverse, it’s not ethical: the lack of women and other under‑represented groups at the summit shows that ethical AI cannot be achieved without inclusive decision‑making. Also proposes the concept of ‘AI metrology’ to study AI as socio‑technical ‘social machines’.
Combines a sharp critique of gender bias with a novel proposal for a new scientific discipline (AI measurement/metrology) to systematically study AI’s societal impact.
Shifted the tone from abstract policy talk to concrete calls for diversity and measurement, inspiring later speakers (e.g., Sara Hooker, Merve Hickok) to discuss accountability, metrics, and the need for new research infrastructures.
Speaker: Dame Wendy Hall
Separate safety of the AI technology from safety of AI use; the technology itself can be robust, but the inputs, outputs, and human choices determine real‑world safety, requiring involvement of humanities, law, ethics, and civic society.
Clarifies a nuanced distinction that many participants had conflated, highlighting where technical work ends and socio‑political governance begins.
Prompted other panelists to discuss the role of data, human agency, and regulatory frameworks, deepening the analysis of where responsibility lies.
Speaker: Yannis Ioannidis
The real signal of whether we care about safety is the prestige and power structures that allocate resources; we need precise, transparent reporting of what safety parameters models cover and what they omit, acknowledging inevitable trade‑offs.
Moves the discussion from high‑level ideals to actionable transparency, emphasizing that safety is a political and economic signal, not just a technical checkbox.
Steered the conversation toward concrete accountability mechanisms (model cards, dataset cards) and sparked agreement on the need for explicit trade‑off disclosures.
Speaker: Sara Hooker
AI is becoming an extractive, exploitative construct: data‑center construction harms local water supplies, language minorities are excluded from models, and AI tools can cause ‘AI psychosis’ among vulnerable users.
Provides vivid, ground‑level examples of how AI deployment can cause social and environmental harm, grounding abstract safety concerns in lived realities.
Shifted the panel from policy theory to tangible harms, prompting participants like Merve Hickok and Neha Kumar to stress rights, democracy, and inclusive design.
Speaker: Jibu Elias
History shows that without deliberate action, safety narratives are shaped by the powerful; we must change the narrative to protect rights, freedoms, and democratic participation, not just focus on existential or nuclear‑style risks.
Frames AI safety within a historical and political lens, arguing that current safety discussions repeat past power dynamics unless actively contested.
Reinforced calls for activist stances, influencing later remarks about moving beyond “moderate” positions and encouraging participants to demand concrete regulatory change.
Speaker: Merve Hickok
The 51 % rule: regulatory change only happens when a majority of political or corporate power aligns; we must move from being moderates to activists who educate and pressure decision‑makers.
Introduces a pragmatic political insight about how change actually occurs, coupled with a clear call to action for the audience.
Marked a turning point toward a more urgent, action‑oriented tone, culminating in Jeanna Matthews’ concluding appeal for insistence and collective responsibility.
Speaker: Tom Romanoff
Inclusivity must be examined not just at the level of rhetoric but by zooming in on who designs, who benefits, and who is left out; lessons from feminist studies and development studies can help ask the ‘who’ question concretely.
Bridges HCI and feminist scholarship to critique superficial diversity claims and propose concrete analytical lenses.
Deepened the discussion on inclusion, prompting reflections on design practices and the need for measurable outcomes rather than buzzwords.
Speaker: Neha Kumar
Overall Assessment

The discussion was shaped by a series of pivotal interventions that moved the conversation from a generic, technical framing of AI safety to a richly layered, socio‑political analysis. Virginia Dignum’s opening set the agenda, and each of the successive challenges (Lourino Chemane’s people‑first definition of safety, Dame Wendy Hall’s critique of gender exclusion and call for AI metrology, Sara Hooker’s exposure of power‑driven safety signals, Jibu Elias’s concrete examples of extractive harms, and Tom Romanoff’s 51 % rule) acted as a turning point that redirected focus, introduced new concepts, and heightened the urgency for actionable governance. Collectively, these comments deepened the panel’s understanding of safety as an interdisciplinary, inclusive, and politically contested issue, steering the dialogue toward concrete accountability mechanisms and a call for activist engagement.

Follow-up Questions
How can we shift the discourse from a purely technical AI safety focus to a broader inclusive societal and institutional approach?
Addressing this question is crucial to ensure that AI safety considerations incorporate governance, ethics, and real‑world impact rather than remaining confined to model robustness alone.
Speaker: Virginia Dignum
What specific trade‑offs are being made in AI models, and can providers transparently report which safety parameters are covered and which are omitted?
Transparency about omitted safety tests and trade‑offs would allow stakeholders to understand what risks are being accepted and to hold developers accountable.
Speaker: Sara Hooker
How can we mitigate the extractive and exploitative aspects of AI development, such as the labor conditions of data annotation workers and the environmental impacts of data‑center construction?
Investigating these socio‑economic and environmental harms is essential to prevent AI from deepening inequality and resource depletion.
Speaker: Jibu Elias
Does history indicate that AI safety will automatically benefit everyone, or are enforceable mandates (musts) required? Are we serious about AI safety?
Understanding whether voluntary measures suffice or whether binding regulations are needed informs policy design and prevents repeating past failures.
Speaker: Jeanna Matthews
How can regulatory artifacts like dataset cards, model cards, system cards, rigorous evaluations, and user‑feedback mechanisms be extended to cover multiple languages, contexts, and cultures?
Ensuring these tools work across linguistic and cultural boundaries is vital for equitable AI safety worldwide.
Speaker: Unnamed Participant (audience)
What longitudinal evidence is needed to assess the impact of age‑based social‑media bans, and how can studies be designed to capture unintended consequences?
Long‑term studies are required to determine whether bans protect youth or drive them to riskier, hidden platforms, informing better policy.
Speaker: Wendy Hall
How can we develop a science of AI measurement or AI metrology to study ‘social machines’ and their socio‑technical dynamics?
A systematic measurement framework would enable consistent evaluation of AI’s societal effects, supporting evidence‑based governance.
Speaker: Wendy Hall
What mechanisms can ensure the inclusion of women, children, and other vulnerable groups in AI governance and design processes?
Inclusive participation is necessary to avoid bias and to make AI systems safe and beneficial for all segments of society.
Speaker: Wendy Hall; Neha Kumar
How can national AI strategies (e.g., Mozambique’s) integrate data policy, cybersecurity, and digital government frameworks to ensure safety and sovereignty?
Research on policy integration can guide other nations in building coherent, safe AI ecosystems aligned with national interests.
Speaker: Lourino Chemane
What role should interdisciplinary collaboration (law, ethics, social sciences, humanities) play in AI safety governance and regulation?
Cross‑disciplinary input is needed to address the full spectrum of safety concerns beyond technical robustness.
Speaker: Yannis Ioannidis

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.