Who Watches the Watchers Building Trust in AI Governance

20 Feb 2026 18:00h - 19:00h

Who Watches the Watchers Building Trust in AI Governance

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel, introduced by Gregory C. Allen, featured Stephen Clare, co-lead author of the International AI Safety Report, Hiroki Hibuka, a Japanese AI policy expert, and Shana Mansbach of the think-tank Fathom, which convenes AI governance discussions [1-3][4-8][9-10]. Clare explained that the report, originating from the 2023 Bletchley Safety Summit, is meant to be an IPCC-style evidence base for AI governance and is backed by more than 30 countries and intergovernmental bodies [19-22].


He noted that many risks have moved from theoretical to observable, with billions of users and incidents such as deepfakes and AI-enabled cyber attacks prompting a surge in risk-management techniques [25-31]. Clare highlighted that model jailbreaks have become substantially harder, citing the UK Security Institute’s shift from minutes to several hours to find universal jailbreaks for the latest models [42-45]. Nevertheless, he warned that safeguards remain vulnerable to skilled actors, that implementation is uneven across companies, and that ensuring broad compliance is now a pressing governance challenge [51-57][58].


Hiroki contrasted hard-law and soft-law strategies, arguing that most jurisdictions already have sector-specific regulations (privacy, copyright, finance, etc.) and the key question is how to adapt them rather than create entirely new AI statutes [82-86]. He described the EU’s AI Act versus Japan’s and the US’s more sector-specific, “exempt” versus “exposed” approaches, noting Japan’s preference for pre-emptive rules and the need for more agile, multi-stakeholder soft-law mechanisms [87-95][96-100]. He emphasized the difficulty of evaluating values such as privacy or fairness and the lack of benchmark standards worldwide [98-100].


Mansbach argued that the rapid rise in AI capabilities has created a systemic trust deficit for the public, deployers, regulators and developers, which traditional command-and-control governance cannot address because of speed and technical-capacity gaps [105-113][114-118]. Fathom proposes a government-authorized marketplace of independent verification organizations (IVOs) that would assess outcomes such as child safety, data privacy, controllability and interpretability, providing a rebuttable presumption of a heightened standard of care [116-122][124-128][173-179]. She identified liability clarity, insurance eligibility and market advantage as three incentives for entities to seek verification, likening the model to UL or Underwriters Lab certifications [221-230][231-239].


Gregory highlighted that without insurance or liability frameworks AI adoption could be stifled, and that analogies such as AS9100 in aerospace or the NHTSA’s star-rating system illustrate how third-party standards can drive safety [207-214][330-334]. The panel agreed that current evaluation tools are narrow and quickly become outdated, underscoring the urgency of developing flexible, outcome-based standards and independent audits to keep pace with evolving AI systems [258-266][270-276][292-298]. Overall, they concluded that a layered, outcomes-focused verification ecosystem, supported by legal, insurance and market incentives, is essential to bridge the trust gap and enable effective AI governance [171-179][221-230][292-298].


Keypoints


Major discussion points


The International AI Safety Report as the new baseline for AI governance – The panel repeatedly cites the report as the “foundation” for current conversations, noting that AI risks have moved from theoretical to observable real-world impacts (e.g., deep-fakes, cyber-attacks) and that technical safeguards are becoming harder to bypass, yet still have vulnerabilities that raise urgent governance questions. [2-4][24-31][33-41][50-58]


Divergent global regulatory approaches – Participants compare the EU’s hard-law AI Act with Japan’s sector-specific, pre-emptive soft-law model and the United States’ high-level, principle-based regime, emphasizing that the real issue is how existing laws (privacy, copyright, sector regulations) are updated or supplemented rather than whether new AI-specific statutes are needed. [80-88][89-96]


The “trust problem” and the proposal of independent verification organizations (IVOs) – A central theme is the lack of trust for the public, deployers, regulators, and developers. The panel proposes a government-authorized marketplace of IVOs that issue outcomes-based certifications, which can clarify standards of care, unlock insurance, and create market incentives (e.g., “seal of approval” similar to UL). [106-112][117-124][125-130][171-178][221-230][231-239]


Practical challenges of auditing and evaluation – Audits are costly, lack clear economic incentives, and suffer from an “evaluation gap” because existing benchmarks are narrow and quickly become outdated. The discussion highlights the need for adaptable, incentive-aligned testing frameworks and more transparent, third-party evaluation capacity. [187-192][197-199][255-268][270-284]


Layered responsibility across the AI ecosystem – Rather than assigning safety to a single actor, the speakers argue for a “defense-in-depth” model that distributes duties among developers, downstream deployers, ecosystem monitors, and end-users-mirroring analogies to automotive and aerospace safety standards. [155-162][158-166][161-168]


Overall purpose / goal of the discussion


The panel’s aim was to take stock of where AI governance stands in 2026, using the International AI Safety Report as a common reference point, to compare how different jurisdictions are handling regulation, and to explore innovative governance mechanisms-particularly independent, outcomes-based verification-that can bridge the trust gap, align incentives, and support effective, scalable oversight of rapidly advancing AI systems.


Overall tone


The conversation began with a celebratory, appreciative tone toward the report and the progress made since the Bletchley Summit. As the dialogue progressed, the tone shifted to a more urgent and problem-focused stance, highlighting gaps in technical safeguards, regulatory inconsistencies, and incentive misalignments. By the end, the tone became constructive and forward-looking, emphasizing collaborative solutions (IVOs, market incentives, analogies to other safety regimes) while maintaining a realistic acknowledgment of the challenges ahead.


Speakers

Gregory C. Allen


Area of expertise: AI governance, policy discussion moderation


Role/Title: Moderator/Host of the panel discussion [S4]


Stephen Clare


Area of expertise: AI safety, technical risk management, AI governance


Role/Title: Co-lead author of the International AI Safety Report; co-lead writer of the report [S3]


Hiroki Hibuka


Area of expertise: AI policy, law, and governance, especially in Japan


Role/Title: Research Professor, Kyoto University Graduate School of Law; former Japanese government policymaker; non-resident senior associate at CSIS [S1]


Shana Mansbach


Area of expertise: AI governance, independent verification, policy innovation


Role/Title: Vice President of Strategy and Communications, Fathom [S5]


Additional speakers:


Karina Prunkle – Co-lead writer of the International AI Safety Report (mentioned in the discussion).


Full session reportComprehensive analysis and detailed insights

Gregory C. Allen opened the session by introducing the four panelists and noting Stephen Clare’s contribution to the International AI Safety Report as the “foundation” for AI-governance discussions in the coming year [1-4]. He also highlighted Hiroki Hibuka’s expertise on Japanese AI policy [5-8] and mentioned Shana Mansbach’s role at the young think-tank Fathom, a leading convenor of the ASHFE conference series [9-10].


Stephen Clare then outlined the origins and purpose of the International AI Safety Report. Drafted as the shared evidence base for the 2023 Bletchley Safety Summit and modelled on IPCC reports, the document is backed by more than thirty countries and intergovernmental organisations [18-22]. Its 2026 message is that “the rubber is really hitting the road”: risks once theoretical are now observable at scale, with a billion users worldwide and concrete harms such as deep-fake proliferation and AI-enabled cyber-attacks [24-31]. Clare reported that technical safeguards have improved markedly-modern models now require seven to ten hours for a universal jailbreak, compared with minutes for earlier systems [42-45]-and that twelve leading AI developers publish frontier safety frameworks, indicating greater transparency [48-49]. He cautioned, however, that safeguards remain vulnerable to skilled actors, implementation is uneven, and the key governance challenge is ensuring broad compliance and addressing non-adoption [51-58].


Hiroki Hibuka provided a comparative overview of global regulatory approaches. He emphasized that all jurisdictions already contain a mix of hard-law and soft-law instruments (privacy, copyright, sector-specific rules) [80-86] and argued that the policy task is to update these existing rules rather than create brand-new AI statutes. He contrasted the EU’s AI Act (hard-law, high-risk-focused) with Japan’s pre-emptive, sector-specific soft-law approach and the United States’ “exposed” principle-based regime that relies on high-level guidelines and post-hoc litigation [87-96]. Hibuka noted the difficulty of evaluating abstract values such as privacy, transparency and fairness, pointing to the current lack of benchmark standards worldwide [98-100]. He further observed that democratic debate is needed to decide acceptable safety levels (e.g., how many deaths are tolerable for autonomous vehicles) and that test-measure design-such as comparing accident rates on a straight highway versus a complex city-is itself a policy question [300-310]. Hibuka also highlighted public procurement as a powerful market pull: governments could require verified AI in contracts, creating a strong incentive for firms to seek certification [300-310].


Gregory then asked Shana Mansbach to explain Fathom’s perspective on the emerging “trust problem”. She described how the surge in model capabilities has generated uncertainty for the public, deployers, regulators and developers, producing a systemic lack of confidence that AI systems work safely, securely and as advertised [105-108]. She argued that traditional command-and-control governance cannot keep pace with AI’s speed or the scarcity of technical expertise outside frontier labs [111-114].


Mansbach proposed an outcomes-based marketplace of government-authorised independent verification organisations (IVOs). Regulators would define desired outcomes-such as child safety, data-privacy, controllability and interpretability-and IVOs would conduct up-to-date testing to certify that AI systems meet those outcomes [117-122]. She discussed the concept of a “standard of care” that verification could establish, providing a rebuttable presumption of heightened care and clarifying liability before any harm occurs [173-179]. Mansbach identified three primary incentives for organisations to seek verification: (i) liability clarity, (ii) eligibility for insurance coverage (insurers are currently refusing to underwrite AI-enabled products), and (iii) a market advantage akin to UL or Underwriters Lab seals, which could become decisive for buyers such as school superintendents [221-230][231-239]. She qualified these analogues as partial rather than perfect matches to existing safety-certification models [231-239].


Gregory linked these ideas to existing safety-standard mechanisms, noting that in aerospace the AS9100 certification is required for insurance and that insurers’ refusal to cover AI-driven activities could act as a de-facto regulatory lever [207-214][240-250]. He also drew an analogy to the U.S. National Highway Traffic Safety Administration’s star-rating system for vehicles, suggesting a similar rating could guide AI-system adoption [330-334].


Stephen elaborated on a “layered, defence-in-depth” responsibility model. He argued that no single actor can bear full responsibility: developers should embed training techniques to reduce dangerous outputs, downstream deployers should implement monitoring and classification systems, and ecosystem-wide monitors should track AI-generated content across borders. He stressed the need for societal-level resilience-hardening digital infrastructure against AI-enhanced cyber-attacks-rather than attempting to prevent every harmful use [155-168].


The panel then examined incentives for independent audits. Hibuka reiterated that without clear economic benefits corporate executives are unlikely to pursue verification, citing autonomous-vehicle certification as a strong market driver [187-192]. He reiterated that public procurement could provide a powerful pull if governments required verified AI for contracts [300-310], and noted that insurance could serve as another carrot, though current lack of AI-specific coverage limits this lever [197-199][318-328]. Stephen highlighted a significant “evaluation gap”: existing benchmarks are narrow, quickly become outdated, and fail to capture the breadth of real-world use cases, as many evaluations consist of static question sets that do not reflect the stochastic, multi-turn nature of modern models [255-267]. Shana agreed, adding that testing is intrinsically hard because model outputs vary across runs and downstream impacts can differ dramatically between users (e.g., a harmful suggestion that may be benign for most but catastrophic for a vulnerable individual) [270-277]. She argued that a competitive IVO marketplace would incentivise continual improvement of testing tools, creating a “race to the top” similar to how UL certification drives product safety in other sectors [285-290].


Gregory asked how consensus on risks could be turned into formal standards. Stephen responded that while the report provides a state-of-the-science baseline, there is still a lack of agreed-upon best practices, and any standards would need to evolve rapidly to keep pace with model capabilities [292-298][255-267].


Across the panel, the participants repeatedly referred to the International AI Safety Report as a foundational baseline for current AI-governance discussions [2-3][19-23]. They agreed that technical safeguards have improved yet remain vulnerable and unevenly applied [35-40][51-57]; organisational safety frameworks are inconsistent, creating a need for outcomes-based verification [48-57][111-130]; and insurance can serve as a powerful lever to drive adoption of verification standards [221-231][244-250]. Disagreements centred on the primary economic incentive for audits (public procurement versus insurance versus market pressure) [187-195][318-328][221-238] and on whether existing hard- and soft-law regimes are sufficient or new governance mechanisms are required [80-86][48-57][62-65].


Key take-aways


1. The International AI Safety Report is a foundational baseline confirming that AI risks are now material.


2. Technical safeguards are stronger but remain vulnerable and unevenly applied.


3. Global regulatory approaches differ, yet all must adapt existing hard- and soft-law rules to cover AI.


4. A trust deficit exists across stakeholders; an outcomes-based IVO marketplace could mitigate it by providing liability clarity, insurance eligibility, and market advantage.


5. Safety responsibility must be layered across developers, deployers and societal monitors.


6. Incentives such as insurance underwriting, public procurement and consumer-facing seals are essential to motivate audits.


7. Current evaluation benchmarks are narrow and outdated, necessitating dynamic, multi-turn testing tools.


8. Lessons from aerospace (AS9100), automotive safety ratings and UL certification can inform AI-safety standards.


Proposed actions


a. Establish a government-authorised IVO marketplace.


b. Encourage regulators and insurers to tie compliance with IVO verification to liability standards, insurance premiums and procurement contracts.


c. Develop sector-specific safety standards that combine hard law, soft law and voluntary frameworks.


d. Increase transparency from AI labs to reduce information asymmetry.


Unresolved issues include designing economically viable incentives, defining a universal standard of care, creating up-to-date evaluation methodologies that capture stochastic, multi-turn risks, and ensuring third-party auditors retain expertise as technology evolves. The panel suggested a hybrid approach that blends layered responsibility, flexible outcomes-based standards and market-driven incentives to achieve scalable, trustworthy AI governance.


Session transcriptComplete transcript of the session
Gregory C. Allen

Again, to my immediate right, we have Stephen Clare, who wrote the International AI Safety Report as the co -lead author, if I’m not mistaken. And he earned that applause, because that report is a remarkable document that I do think is the foundation upon which all conversations about AI governance now must rest for the next year. It’s the sort of minimum amount of knowledge that you must have to participate in the conversation, which I think is really a tribute to him. Then we have Hiroki Hibuka, who is currently a research professor at the Kyoto University Graduate School of Law, and was also deeply involved in drafting Japan’s first set of soft law regulations, and is an expert on all things AI, but also especially astute at what’s going on in Japan.

We also have a privilege of collaborating with him at CSIS, where he’s a non -resident senior associate. And I must say, he is probably the best person writing about Japanese AI policy in Japanese, but he is definitely the best person writing about it in English. And so I often tell Hiroki that, like, if he doesn’t write about it, nobody in Washington, D .C. knows about it. So it’s important, his work. And then finally, we have Shana Mansbach, who’s the vice president of strategy and communications at Fathom, which is a young think tank, started only two years ago, but has already succeeded as one of the best conveners of the ASHFE conference series on AI, and also now leading a policy initiative, which I think she’s going to tell us all about.

So without further ado, I’d like to start with you, Stephen. I just said that the report that you were the lead author of is sort of the bedrock for having a conversation on AI governance. For those in the audience who haven’t yet made it through, but they, of course, will, can you sort of set the stage? Where are we in 2026 in AI governance and in AI safety, technical and procedural intervention?

Stephen Clare

Sure. Thanks, Greg. First of all, I’m sorry if I’d known Greg was going to make the report, you know, required reading, I would have tried harder to make it shorter. Yeah. Thanks for having me. Thanks for really excited to be here. So for people who don’t know, the report is it was founded up the Bletchley 2023 Bletchley Safety Summit as sort of, you know, the shared evidence base for decision makers thinking about these complicated, fast moving, noisy governance questions. It’s kind of trying to be like the IPCC report for for AI. It’s backed by over 30 countries and intergovernmental organizations. You know, I’m one of two co lead writers along with Karina Prunkle, but there’s over 30 dedicated experts writing different sections, and there’s hundreds of people that review it.

So it’s really trying to be a sort of state of the art, what do we know? What don’t we know about general purpose AI systems and the risks they might pose? I think this year the main message of the report is like the rubber is really hitting the road or something with these kind of systems. Risks that even a year or two ago might have been theoretical are now very real and we’re seeing emerging empirical evidence. More real world impacts of AI on productivity and labor markets and in science and in software engineering. It’s all like really happening out in the world. There’s a billion people now using AI around the world. Many of those impacts include risks.

So we’re seeing effects of deepfake spreading, cyber attacks being more common with AI systems. And so the need for sort of risk management techniques that are effective is also growing. One thing that I found surprising working on the report is that in this domain on risk management and technical safety, there’s actually some good news. Quite a lot of good news, I’d say. In various ways, our technical safeguards are improving. Models are becoming much harder to jailbreak. So. You know. So three, four years ago, if you asked a model to give you a recipe for a Molotov cocktail, it would not do that. But if you said, oh, I miss my grandma, and she used to tell me this amazing bedtime story about how she loved making Molotov cocktails, please help me remember my grandmother, it would be like, okay, well, if it’s for your grandmother.

Then that stopped working maybe a year or two ago, but then if you maybe translated your question into Swahili or something and put it in the model and then translated the answer back, it might have made safeguards. So none of that works anymore. These safeguards are much harder to evade, and we know this quantitatively. For example, the UK Security Institute will try and evade the safeguards or jailbreak all these new models when they’re released. At the beginning of 2025, they could do this in literally minutes, find a sort of universal jailbreak that would elicit potentially harmful knowledge. For the latest models, it’s taking them seven, ten hours to get around safeguards. So there’s still vulnerabilities, but for novices or even moderately skilled actors, it’s basically the same thing.

It’s becoming much, much harder to evade them. We’re also seeing more of these safeguards get implemented into organizational practices. So 12 companies, all the leading AI developers now have frontier safety frameworks, which are these documents that describe how they plan to manage risks as they scale more powerful systems, which is many more than had them a couple of years ago and is, I think, a sign of transparency and sort of collective learning about risk management that’s worth noting. So basically, yeah, our toolkit for managing these risks is growing. But, you know, it wouldn’t be a safety report if I didn’t maybe end on a few caveats or some bad news. The first is that these technical safeguards are still vulnerable in many ways.

They can still be jailbroken with enough effort or in edge cases, and it’s very difficult to test and provide reliable assurances that these safeguards will work across this huge range of use cases that these models are now applied to in the real world. And on the organizational side, you know, these safeguards only work if they are applied. And although we’re seeing, especially from frontier developers, we’re very prominent, usually quite robust safeguards applied on models, across the whole industry, and especially behind the frontier, application remains quite inconsistent. The safety frameworks, all these companies have them, but they vary in the risks they cover, they vary in the practices that they recommend. And so the landscape as a whole, you know, these tools only work if they are applied.

And we still see that, some vulnerabilities across the landscape, which I think turns this technical challenge, that points towards the governance challenge of how do we assure broader adoption, how do you ensure compliance, what do you do when there’s a lack of compliance. We’re sort of facing these questions, and again, because these risks and the impacts are now not something that we can sort of push down the road anymore, I think, for future years, the governance questions are becoming a lot more urgent.

Gregory C. Allen

Terrific. And if I could contrast what you said with what we might have said if we were having this conversation back at the Bletchley Park AI Summit. But it’s almost like the only good news on AI safety, AI security, and AI governance at Bletchley was, well, at least we’re all here talking about it. And now, three years later, the good news is we’ve done a lot about it. We have techniques that can provide demonstrable increases in safety. We don’t know everything that we need to work, but we know a lot of stuff that does work. And really, a lot of the challenges, I think, as the report says, it’s now in the hands of policymakers to make sure that these safeguards get implemented robustly and diversely.

So with that, I now want to turn to Hiroki, who I hope can give us a state of where we are in the story of AI governance around the world. If the next steps are really in the hands of policymakers, where are we globally?

Hiroki Hibuka

Thank you, Greg. And again, congratulations. Stephen was the publisher of the great report. And I think, first of all, I feel very glad that now the discussion on AI governance is such advanced compared to three years ago. I’m a lawyer and I’m a former policymaker. I worked for the Japanese government for four years, designing the Japanese AI policies, mainly in terms of regulation and governance. And as a lawyer and policymaker, the question after reading the report is, where is the end? And to what extent stakeholders have to manage the risks? Because in the end, you can’t remove all the risks. AI is black box and the technology advances so fast. And even though there is advance and progress of Godwins, the next day you may find another risk.

So there is no end to the story of how regulators should design the regulations. That is the main question. All countries. Countries are facing and different nations, regions take different approaches. Maybe the most famous regulation is the EU AI Act. And in that context, a lot of people say, hey, EU takes a hard law regulatory approach on AIs while Japan or UK or United States takes a software approach. But I think it’s a completely wrong understanding of the regulatory framework because, as you know, there are already lots of regulations that can be applied to AI systems. Privacy protection laws, copyright laws, or sector -specific laws such as finance, automotive or healthcare. We already have a lot of regulations out there.

So the real question is not whether or not to regulate AIs, but the real question is how to update our existing regulations and whether or not we need additional regulations targeting AI systems. In addition to the existing regulatory framework, so in that sense all countries take the hard law approach and also all countries have soft laws because European Union there are a lot of technical standards to implement the EU AI Act that are now under discussion but anyways all countries have both hard laws and soft laws that is the start of the discussion and then when we compare EU approach and Japan approach the clear difference is whether to regulate AI holistically or not sector -specific and when I compare the Japanese policy and the US policy we are on the same position as to taking a sector -specific regulation the main difference I understand is whether you prioritize the exempt approach or exposed approach the US takes more exposed approach you can do whatever you want to do and the regulation is usually very high level the principle is very high But once you have a problem, if you damage others’ properties or lives, then you go to the court and you fight in the court.

The Japanese society is not like that. In Japan, actually the number of losses is very low. People prefer to set the rules in advance. Japanese companies are very, very good at complying with the given rules. But they are not very good at creating their own governance mechanisms or explaining to stakeholders why you are doing that. And now Japanese stakeholders are starting to realize that it doesn’t work. So we need to have more agile and multi -stakeholder approach. So we are trying to leverage the power of soft laws, negotiating among different stakeholders, and give the standards, guidances. But in the end, again, if you violate the existing hard laws, of course you will be sanctioned. So that’s the main differences in American approach and Japan approaches.

And in the end, all countries are facing difficult questions of how to deal with this cutting -edge technologies that are black box and there are unlimited risk scenarios. And sometimes we don’t know how to evaluate the values such as privacy or transparency or fairness. There has been no clear benchmark standards so far in the society. So how to design those benchmarks and regulation methods are the challenges all countries are facing.

Gregory C. Allen

Terrific, Hiroki. And Shaina, I know you have a unique perspective on this because your organization is now proposing sort of additional models of AI governance that are not really reflected in existing law, whether in the United States or Europe or Japan or India. So walk us through what you see as the important work we’re doing now.

Shana Mansbach

Sure. My panelists have set me up very well to say this. So I think as the International AI Safety Report shows, the capabilities around these models are surging. And as the capabilities surge, so too does the uncertainty around the risks, by which I mean, do these systems work safely, securely, and as advertised? That uncertainty creates a trust problem, a trust problem for the public, which doesn’t have a way of figuring out what is actually safe, a trust problem for deployers, by which I mean hospital systems, retail, banks, who want to and indeed need to use these systems, but have no idea what they can actually trust. So there’s a trust problem for the regulators, too.

They don’t know, how do you confer not just trust, but how do you confer earned trust? And I would say there’s a trust problem for the developers also, because if and as trust starts to grow, there’s a trust problem for So if the trust starts to decline, you’re going to see adoption decline as well, so this is something that developers should be focused on too. The current approach is just not the current approach to tech governance is not equipped to handle this trust problem very well. Traditional command and control governance says here are the rules, here are all the things you have to do, here are the procedures, here is what compliance actually here’s what compliance actually looks like.

There are a bunch of problems with this approach in the context of AI, but I’ll focus on two, which is the speed problem. AI moves really, really quickly, and even well -intentioned regulations are going to become outdated very, very quickly, and then there’s the technical capacity problem. Even with the rise of the AI safety institutes, which are doing amazing work, the talents, the expertise for understanding these systems and understanding their risks is largely concentrated in the frontier labs, which of course leads some people to say, well, let’s just go to the frontier labs. They can regulate themselves. I don’t think I have to spend too much time explaining why there are problems with that approach but it’s simple incentives I think all of us know people in the labs who are doing amazing, amazing work they are the people who make sure that I can because of them I sleep better at night but the incentives are just not there there are always going to be trade -offs between investing in safety testing and tooling and investing in development so we’re going to have problems with self -regulation in terms of addressing that trust gap so where does that lead us?

at Fathom, my organization, we’re very focused on coming up with new models that can solve this trust gap so we’re very focused on independent verification specifically the marketplace of independent verification organizations by which I mean a government -authorized and overseen marketplace of independent verifiers which are trying to be charged with creating testing and tooling to determine whether these AI systems are actually safe The difference here is that this is an outcomes -based approach. Instead of, as I said, having procedures, here are the rules, here are all the things you need to do, here are all the boxes you must check to be certified as being good, you have an outcomes -based approach where you have a government saying, here are the things that we care about.

We care about children’s safety. We care about data privacy and protection. We care about controllability and interpretability. And then you have independent verifiers that can actually go out, do the testing, have updated testing constantly to make sure that those outcomes are being met. We think that independent verification solves for a couple of these deficits in the trust context. First, they are independent. The labs are not grading their own homework. Second, democratic accountability. You have governments that are creating outcomes instead of the industry doing it itself. Third, flexibility. Under this system, the IVOs, independent verification organizations, are constantly updating their testing and criteria to make sure that they’re keeping up with the pace of technology and the pace of risks as well.

And I think the fourth thing, which is pretty interesting, is it creates a race to the top here. Right now, the only people working on safety testing and tooling are in the labs. What we’re envisioning is a marketplace that incentivizes ever better testing and tooling here. I could talk about IVOs for days and days, but let me just end on one point. I was talking to Greg about this earlier, and Greg asked, are there analogous systems or industries or sectors that we could talk about? And I said, yeah, sort of. I mean, in America, we have Underwriters Lab. There’s LEED certification. There are some analogies. But the honest answer is there’s not a perfect analogy.

We have had the same regulatory system for the last century. And I think that with the rise of AI, we’re seeing that system is no longer built for purpose. And when we try to use old systems, hard law, soft law, any of these things, we’re really struggling to make it work. So what I’m trying to do, what I’d encourage all of us to do is to say, you know, we do need to think a little bit differently. Because this is what this technology in this time calls for.

Gregory C. Allen

Well, that’s great. So there’s a few points I want to pull together there. The first is, you know, as Hiroki pointed out, in the U .S. system, liability law looms extremely large, right? The lawsuits at the end of this story when things go wrong. And when you have, as, for example, ChatGPT does, 800 million weekly average users, something’s going to go wrong every week, right? And the question is… How is that going to intersect with our existing body of regulation? How is that going to intersect with liability law? The second thing is this is going to, because we’re talking about these general purpose technologies, this is going to be adopted in so many different sectors of the economy.

And right now, as Shana pointed out, the number of people who have, you know, Steven’s expertise on what it takes to really make AI systems safe and well -governed and perform reliably as intended across the whole range of potential applications, that’s not a lot of humans on planet Earth who are good at that stuff. And because these AI models are going to be deployed in just about every sector of the economy, we need some level of those capabilities in every sector of the economy. And so the question is, you know, if I am a financier, if I am a finance company, if I am a health care company, you know, how am I going to know and how are my consumers going to know?

that when they use AI -related capabilities, it’s going to work reliably as intended over the full range of acceptable use cases. And so, Stephen, I want to come to you and ask, when it comes to governance, when it comes to oversight and verification, how do you see the balance of responsibilities in terms of what responsibilities need to fall upon the model developers, what responsibilities need to fall upon the users, what responsibilities need to fall on independent third parties, whether that’s the government, whether that’s auditors, whether that’s this marketplace of verification that Shana is talking about. So what do you see as the balance of responsibilities, and how might this go wrong, how might this go right?

In 30 seconds or less.

Stephen Clare

I mean, I’m sure it’s kind of the boring but true answer. It’s the boring part of it. depends and it’ll vary a lot across use cases and sectors. I think probably it’s not the case that it’s fair or helpful or true to allocate to one actor or another, but instead we need this layered approach of just many different policies and practices at different parts of the stack. Because none of our approaches are foolproof, they all have vulnerabilities, and so we have, instead of safety by design, we have this safety by degree situation where we want defense in depth. So for developers, there will be training techniques that they can implement to make models less likely to elicit dangerous knowledge in the first place.

If there are people building on top of those models and then deploying them, there will be monitoring systems they can put in place and classifiers that identify dangerous queries and stop models from answering them. and then probably for ecosystem monitoring bodies which could be deployers but could also be other institutions in the world there can be tracking how AI content is spreading across borders and around the world and then I think there’s this other aspect of we’re focusing a lot on sort of model or developer safety but as we are moving into this world where many people around the world are having access to powerful, helpful intelligent technologies and we also just need to adapt for that reality and think about resilience at the societal level too of how do we adapt to the beneficial use cases and the various use cases that these models will be used for so thinking about hardening digital systems against increased cyber attacks just sort of admitting the reality of the situation in many ways and adapting to it rather than trying to prevent all harmful uses in the first place I think we need a variety of approaches across all these different actors

Gregory C. Allen

Yeah. And just to use an analogy for how broad the group of stakeholders is, if you think about a ride hailing service, a taxi service like Uber, you have the automobile manufacturers who have to make sure that this is a solid car design that was manufactured safely and appropriately to specification. Then you have Uber, where in some countries Uber owns the car, and so they’re responsible for ensuring that it gets maintenance appropriately. And then you have the driver who’s responsible for ensuring that they are actually following the law and driving the car safely. And if you apply that analogy to AI, you have the model developer, then you might have the sort of business use case deployer, which could be a bank, a medical device company.

Who? A financial institution, whoever. And then you finally have the end customer who’s receiving those services and making sure that they’re using them appropriately. And so. If you think about that sort of different body of use cases, as I said before, the capabilities are not symmetric across all of those. But there are sort of obligations. And so, Shana, I want to come back to you and ask this model that you’re proposing, what exactly does it mean for the different stakeholders in the ecosystem? How does their life change if we adopt the system that you’re in favor of?

Shana Mansbach

Yeah, I mean, the overarching answer is we create trust throughout the system, which is the missing piece here. I think there are a couple of pieces that I would pull out. You had mentioned liability earlier, and let me talk about that a little bit. What this system does, it does not assign liability. It doesn’t say, you know, deployers, developer, it’s you, it’s you, it’s you. We’re seeing, at least in America, courts move their way through this. Sister. court cases move their ways through the court system and we’ll see where that is but where that ends up being but what is really missing is a standard of care and this is I think one of the real advantages that this system has so right now at least how it works in our current tort system is that if you’re Waymo kill someone someone can sue and then a judge and a jury has to figure out so again we’re not answering who should be sued but let’s say that the family of someone who got hurt or killed is suing Waymo what happens is that the jury has to decide whether whether the person who was sued did the right thing and if you are not technical that is the hardest thing even if you are technical and maybe even Waymo doesn’t know So what this system would do is confer, if you are verified, it would confer, the verification would confer a rebuttal presumption of having met a heightened standard of care.

So what we’re doing is clarifying and defining up front before an actual harm happens what a deployer or whoever is sued is actually supposed to do instead of having this very, very messy system where someone after the fact has to figure out what went wrong and who’s responsible for that. I can talk about other layers of this back here, but I think the liability piece is really key. I mean, we just see this. I think it’s a reflection of the trust problem here where when you’re a deployer, I mean, God, I think everyone that I talk to, you know, again, hospital systems, retail, banks, anyone who needs to be consumer facing is really worried about this problem.

I mean, when I get sued, what do I do? And maybe there’ll be. a populist backlash and everyone will hate everyone who’s using AI systems. And it’s much better to, ahead of something like that, ahead of that happening, have that standard of care defined up front and have that seal of approval conferred.

Gregory C. Allen

And Hiroki, as you think about the different stakeholders in the system and especially the idea of auditors, which now there are a number of organizations being founded, it seems like almost every day, who are proposing to provide external evaluation services that can help companies understand, as Shane has said, this product or this service or this company meets the seal of approval and we vouch for it as an independent entity. What kind of momentum do you see for this independent assessment part of the story across regulatory frameworks?

Hiroki Hibuka

Independent evaluation. Independent evaluation is essential given that we are all using AI systems for all different situations, starting from language models to healthcare systems to car driving. But it would be not easy to persuade corporate executives to use the independent audit without clear economic incentives. For example, if you get the certification for autonomous driving, then you can sell the car to the big market. Then, of course, you pay for the audit. But if you take this audit for this language model, then you can prove that this language model is relatively safer than the other models. But it doesn’t necessarily make enough incentive for model developers to conduct the audit or evaluation systems, independent evaluation, because there is no clear financial incentives.

Gregory C. Allen

Actually, could I? I can ask you to elaborate on that. So where might these financial incentives come from? You mentioned one, which is the regulators force you to do it. That’s one. Maybe insurance is one. another like where where might these incentives come from

Hiroki Hibuka

I think it should start from the regulated areas such as cars health care systems finance systems or infrastructures because everybody needs a strong requires strong trust on those systems if it doesn’t work well then somebody might have a baby kills that’s a big problem and maybe you could say hey but in the end if you are killed you can be compensated but it’s not the end of the story while if the damage could be compensated by money by the company and stakeholders are okay with that maybe companies like to just run the system go and and compensate to the victims for example if the language model says something discriminated the company can just say hey we’re very sorry we introduce better guardrails and we pay for that if you want compensation

Gregory C. Allen

in terms of what is possible, what interventions work, what the risks are. But I want to ask about how we go from that degree of consensus to something that might be more of like a standard around procedural implementation. You know, Shana’s term of art is standard of care, which matters a lot in the American legal system. I’m sure it matters a lot in other legal systems. I’m just ignorant about, you know, how and where. And so I’m curious, you know, what do you see as the gap? If these independent evaluators, these independent auditing organizations are emerging, how do they go from we think we’re good at this to, no, this is the accepted best practice?

You know, we have accepted consensus on the risks and the interventions, but, like, how do you turn that into a procedure? Just to give an example to the folks in the audience, I used to work at a rocket company, and the safety standard in the American aerospace industry is AS9100. And in the history, of our company, there’s kind of like a before AS9100 moment, and then there’s an after AS9100 moment. And everything changed for our company, you know, after we got that third -party audit evaluation. A lot of our customers, you know, just said, we do not sign checks for companies that are not AS9100 certified. So, you know, you are deeply steeped in where we are today on the consensus, but how far are we from converting that into standards and procedures for third -party evaluation?

Yeah. I’ll also say one follow -up to Hiroki’s point, too, about auditing. Not only is there sort of a lack of incentives to conduct audits voluntarily now, but there might even be disincentives where one is it’s costly, and it slows you down, and there’s very intense competitive pressures to release faster. And there’s also potentially… like, information or security risks to sharing. You spent hundreds of millions, maybe billions of dollars developing a model, and then you have to share it with an external party before deployment. Like, serious risks to, or perceived risks, at least, to having that information leak or… So I think, yeah, there’s some serious challenges there. I guess there’s one other potential part of the story, which is sometimes you see companies want to be willfully blind, right?

If they have a report that says my product is not safe, well, now they know they’re going to lose the lawsuit. Whereas if they never commission the report, maybe they’ll win the lawsuit. So, Shana, what do you see as meaningful interventions that can help address this problem, both the cost side that Stephen mentioned and the other parts of the incentive structure?

Shana Mansbach

Yeah, let me make a couple of points. I mean, I think we’re talking about the cost of audits, and I think this… this is a big issue that we think about a lot. This system will not work if everyone, if there’s a flat fee, everyone is paying a ton. I mean, we are really, we think that an unsuccessful, there are many ways that a system looks unsuccessful, and one of those ways is if it is just protecting incumbents. And we’re thinking, we envision the system as something that works for, you could verify a general purpose LLM, you could also have narrow AI, you could have a tiny little tool, a little chatbot that is used in schools.

Those three different products should not be audited, not only at the same cost, but in the same way. I mean, compliance isn’t just the check that you’re writing, it is how much of a pain in the butt is it? How many lawyers do you need? How long will this take? So the great thing about this being a marketplace is that the system is right -sized to risk type, to size of these products. and again instead of having just a one size fits all this is what you have to do to comply because I think that that is a real issue it really quickly I just want to go back to you know the question that you asked Hiroki about incentives I mean you can imagine a system where this is mandatory and maybe in some areas you can imagine that but I think that there are three real real carrots for wanting to get verified we talked a little bit about liability so obviously the liability clarity that this is a big carrot I think the insurance piece the insurance piece is real right now we are seeing the big insurers saying we’re not going to touch this we’re not going to insure any AI products because we have no idea what’s inside of them at least in America the way that life insurance works is if you want insurance you have to have a lot of money and a lot of money and a lot of money and a lot of money you have to jump on a scale and tell someone how healthy you are and what are the things that you do and the insurer decides okay are you worthy of being insured and at what premium I think that’s actually a pretty direct analog for what we’re trying to do here where the books are opened and an insurer can look at whether they don’t have to do the testing themselves, but they can look at whether the system has been verified and say, okay, we will actually insure you or we will insure you at a more affordable premium.

I think the third thing is just straight -up market competitive advantage. If I’m a school superintendent and I am choosing between two learning chatbots to put in my schools, I’m not going to choose the one that has not been verified. I want the one that has been verified, that is safest. Yes, because I’m worried about getting sued, but because I want my kids to be safe. And you can imagine a situation much like Underwriters Lab in the United States where basically all consumer products like light bulbs, toothbrushes, basic things that you buy in a store like Walmart, all have the UL seal of approval, and those are the ones that get sold in stores. They have a huge market advantage.

They pay a little bit, but not very much. And in exchange for doing that, they go to market in a way that, or they compete in a market in a way that the ones that don’t go through verification. do. I’m so sorry, Gary, you asked me an actual question and I just answered everyone else’s question and probably not my own.

Gregory C. Allen

It’s okay. You get out of jail free card because you mentioned insurance, which is something I’m deeply interested in right now. I mean, in that space orbital launch vehicle example that I just mentioned, you can’t get insurance for space launches of satellites until you’re AS9100 certified. And that is 10 % of the cost of getting a satellite into space is just the insurance on the rocket. And so basically companies that can’t get insurance can’t compete in the market. And as Shana mentioned, and I think this is a super undercovered story, there are now many of the major insurers in the United States at least are saying, for your enterprise risk policy, AI is not included. So if you are a major bank and you are doing big, important financial transactions, as soon as you start using AI, you’ve lost all your insurance.

And I think the Trump administration in the United States has a very light -touch regulatory approach. And my concern there is that, well, just because the government is not doing anything big and bold on regulation doesn’t mean there will be no regulation. The insurers will step in. And if the insurers exit the market, maybe not in legal terms, but in economic outcome terms, that could be very similar to draconian regulation. So, Shana, you’re mentioning the Underwriters Lab, which is an organization that writes standards that are relied upon by underwriters, the people who are issuing insurance. This is a huge part of the regulatory and governance ecosystem that I think is really important. And so now I’m hoping, Stephen, that you’re going to tell me, that you’ve been reached out to by a bunch of insurance companies, and they’re all reading your report eagerly and thinking about this.

But maybe, maybe not. What’s the case?

Stephen Clare

Not yet, but it’s a really long report. 312 pages, but it goes like that. Maybe I can come back to the best practices point a little bit, because I think we’re talking about auditing here, and at least I know there’s a lot of steps involved, I’m sure, but at least at the technical level, the main tool we have right now to audit the capabilities of the RISC -MD AI model are evaluations. And although in my opening I sort of talked about, oh, it’s great we have this toolkit that’s emerging and it’s strengthening, and that is true, I think on evaluations in particular, as far as like, okay, let’s say we have auditors that are looking at these companies, looking at models, what are they actually looking at to audit or evaluate the models?

I think we actually have a big gap here, a big evaluation gap in terms of, well, how are we actually assessing? So if we’re moving towards best practices, not only do I think we don’t have a sense of the best practices right now, but if we did, they’d be different in a year, because the capabilities are moving too quickly for these technical tools to be in date, for very long. So for example, you’ll have, you know, these evaluations often look like a set of questions related to a certain topic, and you ask the model, so you have a bunch of questions about biosecurity or a bunch of questions about cybersecurity. And if it’s above, if it scores high enough on the test, you say, whoa, this is a dangerous capability, and we need to implement more safeguards or something.

And as far as what’s best practice or safe risk management for a company, we evaluate in terms of, well, does it seem like the safeguards apply proportionately to the risk that you’ve assessed? But I think in many cases, these evaluations we’re using are already not super informative about real -world risk because they’re too narrow. Because you have to build a set of questions that gives you some information about the vast range of use cases in the real world. And as models have become more capable and general and adopted more widely, this has become much more difficult. And I don’t think there’s very many actors out there that are constantly thinking about new ways to evaluate the capabilities.

And so I think this… This is like an important gap in terms of our toolkit. that is, again, quite urgent because these models have been released and we’re using our current evaluations, which are already, in many cases, out of date and not super informative about real -world risk. Shannon, do you want to jump in here?

Shana Mansbach

Yeah. Stephen, I agree with you so much. I mean, all of us are obsessed with benchmarks because that’s kind of all we have, and they’re just so narrow. I spend a lot of time with organizations that we think will become these IVOs, and testing is so, so hard. I mean, think about this. We have a fundamentally stochastic system, so I can ask something 10 times, system 10 times, and I’m going to get 10 different answers. So what does that mean in a safety context? Another problem that we have, what a model outputs is not the same thing as what someone does with it. So think about in the context of mental health. Maybe the model says to 10 different people different versions of, I think you should kill yourself.

Nine times, maybe for nine of those users, that’s fine, they will laugh it out. But for one of those users, there’s going to be a real problem here. and also the multi -turn nature of AI. I mean, you build relationships with these systems and you ask long queries and the stuff just gets really complicated really quickly as technical minds could explain far better than I could. So what we’re trying to do here is incentivize better testing because right now the only people creating evals or eval organizations or doing God’s work, doing awesome stuff, but what does it mean? You’re the best meter out there. I mean, there’s not an incentive to go from good to the best.

And the other actor working, of course, are the labs. And I think many of the labs are actually attempting to be responsible actors here, but again, there’s an incentive gap. I think the only way you’re going to solve this is to have an ecosystem where all of the actors are competing to have the best services, to have the best evaluations, to have the best feedback, to have the best feedback, to have the best feedback, And we hope one day one of these IVOs says, I’ve developed a new type of testing that figures out this kid safety thing that no one has ever thought about. And then the next day someone says, well, we have to be better because then everyone will want to be verified from that organization.

So you are incentivizing ever better testing. And as Stephen says, I think that just given how quickly and dramatically the capabilities and the risks of these systems are increasing, we need really good testing and tooling that can keep up with that. And the only way to do that is to incentivize

Gregory C. Allen

So, Stephen, if I could come to you about what Shana just said. You pointed out how the state of the art in evaluations and assessment is constantly shifting as the capabilities are shifting. I sometimes hear the frontier labs say, yes, and that’s why we’re the only ones who can do the testing, because we’re the ones out there on the frontier. But Shana is making this point about misaligned incentives, which I think we saw. In a conversation you and I had a couple weeks ago in the XAI Grok undressing children kind of example, there’s perverse incentives sometimes at work here in terms of the companies evaluating themselves. So how do you reconcile that gap between the frontier AI labs often do have a unique perspective and a unique understanding, but also it’s really hard to see how we could ever be comfortable with them being the only ones assessing themselves?

Stephen Clare

Well, I can talk about a bit in the context of the report where we try to work with everybody to get the state of the science across the whole landscape. And there I think it is true that there’s this big information asymmetry between the people in the labs who both have the most technical capacity and also the most access to leading models and all of the information about testing and development and all of the information about the technology that’s being used in the lab. And if you don’t draw on that knowledge, you can’t really do anything about it. you’re not going to be able to understand what’s actually going on in the AI world but then I think we brought in a lot of perspectives from academia and society and government feedback to sort of get a full perspective of the landscape as far as what to do going forward to deal with this I think probably it looks something like this with partnerships that are aiming to draw on that knowledge but then aiming for transparency and information sharing that gives third parties and external actors a better understanding of what’s actually going on because it’s true like even writing the report we were reliant on these papers that labs will occasionally publish and drop with like very useful data on how people are using the models or adoption rates but we’re kind of reliant on these like ad hoc publications and then that leaves a lot of gaps across the landscape and different risks and so we you know constantly had the word uncertainty or unknowns in the report because we lack that data outside of the labs

Gregory C. Allen

And do you think that that’s likely to remain the case, or do you think that that could change over time? As we’ve seen, literally, the safety staff of some of these labs quit and start their own auditing companies. So are they likely to have their skills atrophy as they get farther from the development process, or do you think it’s credible that these third -party organizations can build, the word that comes to mind is like economies of scale that are relevant to be able to continue advancing the state -of -the -art of safety and governance, even as the technology keeps evolving?

Stephen Clare

I’m not sure, but I think what we can do is sort of look at the trend, and the trend is towards, I think, a stronger ecosystem around AI labs. As more people, as these problems of lack of data and lack of independent verification are identified more, there’s more people working on it. And then I think we’ve seen some movement towards greater transparency with AI labs as well. So frontier safety frameworks are now a governance mechanism that’s in the EU. AI are in the code of practice, and it’s become institutionalized. It started as a voluntary, anthropic, just published, a responsible scaling policy. And so you see these movements towards sharing more information in more structured ways.

I think also, yesterday, there were the new commitments from the companies at the summit, which were related to sharing data about usage. So I think as a broader set of actors in society are paying attention to AI, because, again, we’re feeling the effects more clearly. It’s becoming more of an economic priority. We’ll see more demand from outside the labs to share this information, and maybe that will lead to some changes.

Gregory C. Allen

Hiroki, you’ve written a ton about AI, but in your capacity as a lawyer, you also have a lot of understanding of many different industries. Are there any lessons learned from other industries here that have solved this sort of technical expertise exists here, but the need for independence exists here? What kind of precedents do you see that we can learn from?

Hiroki Hibuka

Okay, so before that, let me add one more incentive, which is public procurement. If the government says, we recognize this is a very important issue, and we recognize this is a very important issue, and we recognize this is a very important issue, and we recognize this is a very important issue, and we recognize this is a very important issue, and we recognize this is a very important issue, LLM or model is safe and then government procures this standard then it will be a big incentive for developers so that is one thing and when I try to answer your questions I think democratic debate is necessary as to what kind of risk level is acceptable and also what kind of test measures are good because there is any single specific answer as to this is acceptable level of perspective.

For example in Japan every year more than 2 ,000 people were killed by a human driven car and the question is what kind of safety would we require for the autonomous vehicles? Is it okay if the kill number is less than 2 ,000 or would we like to require more safety than human drivers? If so, what would be the level? There is no single answer to that kind of question so we need to debate. in a democratic manner as to what is our acceptable goal. And also about the test measures. For example, we can just simply compare the number of rates per kilometers, but if you test in a very safe straight highway, of course it’s easier to get to safety.

While if you try to drive in a pretty complex city, it’s gonna be very difficult. So how to measure how to define the test method is another question. And I don’t go into the details, but the thing that discussion has been done in a lot of industries, car industries, or finance industries, or aerospace industries, we can certainly do a lot of lessons learned from the existing.

Gregory C. Allen

Yeah, one analogy that as you were talking, you jogged my memory, is the National Highway Transportation Safety Administration in the United States, which actually industry begged for this organization to be created. They did. in the 60s and 70s because they said, look, all of us are going to claim that we have safe cars, but only some of us are making big investments in becoming safe, and we want to reward the people whose good behavior is making big safety investments. And so they created this new organization which would give cars a safety rating on one to five, five star or one star. And so now the companies can only get a five star rating if they’re actually doing what it takes to be safe.

And consumers, you know, they’re not always qualified to rip open their car’s engine and see what it looks like under the hood, what’s safe, but they can interpret that five star rating. And so my idea was to ask you, Shana, to elaborate on this in the context of your model, but I’m now scared of the beeper, which is quite loud and scary. So please join me in thanking our terrific panel. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (12)
Factual NotesClaims verified against the Diplo knowledge base (3)
Confirmedhigh

“The International AI Safety Report was drafted as the shared evidence base for the 2023 Bletchley Safety Summit and is backed by more than thirty countries and intergovernmental organisations.”

The AI Safety Summit was held at Bletchley Park with participation from China, the United States, the European Union and over 25 other nations, demonstrating broad multi-country backing, and the summit produced the Bletchley Declaration establishing a shared understanding of AI risks [S76] and [S77].

Confirmedhigh

“Risks once theoretical are now observable at scale, with concrete harms such as deep‑fake proliferation and AI‑enabled cyber‑attacks.”

Discussions of AI risk management explicitly cite the spread of deepfakes and the rise of AI-enabled cyber attacks as emerging threats [S1].

Additional Contextmedium

“The International AI Safety Report is modelled on IPCC reports.”

The IPCC is referenced as a successful example of an international, evidence-based report that creates a shared factual base for policy, providing context for why the AI Safety Report would adopt a similar structure [S75].

External Sources (86)
S1
Who Watches the Watchers Building Trust in AI Governance — – Hiroki Hibuka- Shana Mansbach – Stephen Clare- Hiroki Hibuka
S2
Lights, Camera, Deception? Sides of Generative AI | IGF 2023 WS #57 — Hiroki Habuka, Civil Society, Asia-Pacific Group
S3
Who Watches the Watchers Building Trust in AI Governance — – Stephen Clare- Hiroki Hibuka- Shana Mansbach – Stephen Clare- Shana Mansbach
S4
Who Watches the Watchers Building Trust in AI Governance — -Gregory C. Allen: Moderator/Host of the panel discussion
S5
Who Watches the Watchers Building Trust in AI Governance — – Hiroki Hibuka- Shana Mansbach – Shana Mansbach- Gregory C. Allen
S6
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S7
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S8
Global telecommunication and AI standards development for all — Bilel Jamoussi:Thank you, thank you LJ and good afternoon everyone. I’d like to invite a list of colleagues for a big an…
S9
Framework to Develop Gender-responsive Cybersecurity Policy | IGF 2023 WS #477 — Policymakers need to support collaboration between different sectors The analysis underscores the critical need for cyb…
S10
Risks and opportunities of a new UN cybercrime treaty | IGF 2023 WS #225 — Lastly, inclusive involvement of the technical community in the policy-making process is advocated. The technical commun…
S11
AI Safety at the Global Level Insights from Digital Ministers Of — So I think there’s that. I do think that it needs to be obviously multi -sector. It’s a fairly obvious point. How do you…
S12
Artificial intelligence (AI) – UN Security Council — During the9821st meetingof the Artificial Intelligence Security Council, a key discussion centered around whether existi…
S13
Why science metters in global AI governance — And also mentioned here. So this is where we are suggesting that this could be one way to look at. It’s not that everyth…
S14
AI Meets Cybersecurity Trust Governance & Global Security — “AI governance now faces very similar tensions.”[27]”AI may shape the balance of power, but it is the governance or AI t…
S15
https://dig.watch/event/india-ai-impact-summit-2026/who-watches-the-watchers-building-trust-in-ai-governance — I mean, I’m sure it’s kind of the boring but true answer. It’s the boring part of it. depends and it’ll vary a lot acros…
S16
AI That Empowers Safety Growth and Social Inclusion in Action — Despite sophisticated frameworks and governance structures, significant implementation challenges remain. The gap betwee…
S17
Advancing Scientific AI with Safety Ethics and Responsibility — And also create more awareness about the main fundamental thing is that they will be expected to document whatever testi…
S18
Networking Session #232 Bringing Safety Communities Together a Fishbowl Style Event — Legal frameworks exist but enforcement remains problematic due to lack of understanding within judiciary and law enforce…
S19
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — Current syllabuses are outdated and policy makers face pressure to update educational frameworks rapidly
S20
Main Session 2: The governance of artificial intelligence — Different sectors (financial services, agriculture, healthcare) require different regulatory approaches, but there’s a n…
S21
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — In conclusion, it is crucial for AI regulation to keep pace with the rapid advancements in technology. The perceived ina…
S22
Consumer data rights from Japan to the world | PART 1 | IGF 2023 — Minako Morita-Jaeger:Thank you, Javier. My PowerPoint. Yes, lovely. And then you can, yeah, and you can change it when I…
S23
Japan favours softer AI regulations — Japan is paving the way for a more lenient approach to AI regulation, as an official familiar with the deliberationsreve…
S24
AI as critical infrastructure for continuity in public services — “We shouldn’t be fixing things after the fact, but we should go on an input before the deployment.”[115]. “The second on…
S25
Open Forum #26 High-level review of AI governance from Inter-governmental P — Yoichi Iida: Thank you very much, Ambassador. You talked about a lot of various risks and challenges. In particular, y…
S26
WS #123 Responsible AI in Security Governance Risks and Innovation — These key comments transformed what could have been a technical policy discussion into a nuanced exploration of power, a…
S27
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Kyoko Yoshinaga:Thank you, Michael. Welcome to Japan. I’m Kyoko in Kyoto. Okay. So let me, first of all, give you a brie…
S28
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Virkkunen explains that the EU’s AI regulation is not as comprehensive as critics suggest, focusing primarily on high-ri…
S29
Global AI Governance: Reimagining IGF’s Role & Impact — Legal and regulatory | Privacy and data protection | Cybersecurity Regulatory approach – existing laws vs new framework…
S30
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Context-specific deployment focusing on appropriate use cases can unlock both productivity and trust simultaneously
S31
In brief — The scientific literature on the evaluation of humanitarian assistance is extensive. Approaches include the scient…
S32
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Kamesh Shekar:Thank you so much, Luca. And so, yeah, so I guess we have very few time to rush through the paper. But our…
S33
Opening — As new technologies emerge, there is a need to assess whether existing governance frameworks are sufficient or if new on…
S34
WS #134 Data governance for children: EdTech, NeuroTech and FinTech — Current laws and regulations may not provide sufficient coverage for emerging technologies like neurotechnology. Some co…
S35
New Technologies and the Impact on Human Rights — **Implementation Over Innovation**: There was consensus that established international frameworks provide adequate found…
S36
Global AI Policy Framework: International Cooperation and Historical Perspectives — Despite coming from different backgrounds (diplomatic/legal vs academic), both speakers advocate for patience and carefu…
S37
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — “But just thinking about closing the AI insurance divide, we released this paper, and in it we talk about around six cha…
S38
Who Watches the Watchers Building Trust in AI Governance — Actually, could I? I can ask you to elaborate on that. So where might these financial incentives come from? You mentione…
S39
Japan’s move toward active cyber defence: a strategic shift in national security — On 10 September, the Liberal Democratic Party (LDP)proposeda groundbreaking system of ‘active cyber defence’ (Nōdō-teki …
S40
The Protection of Children Online — National policies vary greatly in their reliance on technical measures. Countries such as Australia and Ja…
S41
Japan passes landmark cyber defence bill — Japan has passed theActive Cyber Defence Bill, which permits the country’s military and law enforcement agencies to unde…
S42
Harmonizing High-Tech: The role of AI standards as an implementation tool — Philippe Metzger:Thank you, Bilel. Maybe to be as succinct as possible, just would like to mention four areas, which I t…
S43
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The discussion highlighted the importance of policy interoperability rather than uniform global governance, recognizing …
S44
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S45
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S46
Driving Social Good with AI_ Evaluation and Open Source at Scale — This highlights the complexity of contextual safety requirements and the need for flexible evaluation frameworks
S47
Procuring modern security standards by governments&industry | IGF 2023 Open Forum #57 — An important observation highlighted by the coalition is the lack of recognition of open internet standards by governmen…
S48
Policies and platforms in support of learning: towards more coherence, coordination and convergence — – (a) Common standards for needs assessment and evaluation of learning programmes; – (b) Coordination and possibly int…
S49
AI Safety at the Global Level Insights from Digital Ministers Of — There’s a need to develop an independent evaluation ecosystem similar to accounting auditors, but the optimal structure …
S50
https://dig.watch/event/india-ai-impact-summit-2026/who-watches-the-watchers-building-trust-in-ai-governance — Independent evaluation. Independent evaluation is essential given that we are all using AI systems for all different sit…
S51
The Overlooked Peril: Cyber failures amidst AI hype — Developing and enforcing legal and policy instruments, such as the 11 UN cyber norms, is imperative. These norms provide…
S52
Risks and opportunities of a new UN cybercrime treaty | IGF 2023 WS #225 — Lastly, inclusive involvement of the technical community in the policy-making process is advocated. The technical commun…
S53
Leveraging AI4All_ Pathways to Inclusion — The report identified three interconnected pillars essential for inclusive AI: design, access, and investment. The desig…
S54
Table of Contents — Closely linked and o/ften a consequence of government internal policies is public procurement. Where standards and commo…
S55
https://dig.watch/event/india-ai-impact-summit-2026/leveraging-ai4all_-pathways-to-inclusion — This is where you have to make sure AI is usable in real world conditions. I know we’re in the AI Impact Summit, but som…
S56
Please cite this document as: — 8. Members should take greater account of environmental criteria in public procurement of ICT goods and services and inc…
S57
WS #123 Responsible AI in Security Governance Risks and Innovation — These key comments transformed what could have been a technical policy discussion into a nuanced exploration of power, a…
S58
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Galia, one of the speakers, emphasizes the mapping exercises conducted with the OECD regarding risk assessment. This sug…
S59
Who Watches the Watchers Building Trust in AI Governance — This comment exposes a fundamental technical limitation in current AI safety approaches: the evaluation methods themselv…
S60
How to make AI governance fit for purpose? — Shan emphasized international collaboration through the ITU and global standards development, expressing concern about p…
S61
From Technical Safety to Societal Impact Rethinking AI Governanc — This comment set the entire tone and direction of the discussion, establishing the framework that all subsequent panelis…
S62
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Kyoko Yoshinaga:Thank you, Michael. Welcome to Japan. I’m Kyoko in Kyoto. Okay. So let me, first of all, give you a brie…
S63
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Virkkunen explains that the EU’s AI regulation is not as comprehensive as critics suggest, focusing primarily on high-ri…
S64
Online trust: between competences and intentions — Trust (or the lack thereof) is a frequent theme in public debates. It is often seen as a monolithic concept. However, we…
S65
https://dig.watch/event/india-ai-impact-summit-2026/who-watches-the-watchers-building-trust-in-ai-governance — Sure. My panelists have set me up very well to say this. So I think as the International AI Safety Report shows, the cap…
S66
OECD DIGITAL ECONOMY PAPERS — These gaps result from misaligned incentives, a lack of awareness, externalities, a misperception of risks and informati…
S67
Advancing Scientific AI with Safety Ethics and Responsibility — Oversight should be distributed across multiple entities rather than relying on a single central authority, creating che…
S68
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Kamesh Shekar:Thank you so much, Luca. And so, yeah, so I guess we have very few time to rush through the paper. But our…
S69
Networking Session #60 Risk & impact assessment of AI on human rights & democracy — – Clara Neppel: Senior Director at IEEE David Leslie: Can everyone hear me? Samara, can you hear me? Hello? Hello? …
S70
Tech Transformed Cybersecurity: AI’s Role in Securing the Future — Moderator – Massimo Marioni:AI’s role in securing the future. Dr. Helmut Reisinger, Chief Executive Officer, EMEA and LA…
S71
Opening address of the co-chairs of the AI Governance Dialogue — ## Themes from Previous Year 3. Establishing international technical standards that allow policy and regulation to rema…
S72
Laying the foundations for AI governance — – **Lan Xue**: Dean (Dean Xue Lan), expertise in governance and policy Artemis Seaford: That is a great question. So th…
S73
Multi-stakeholder Discussion on issues about Generative AI — Furthermore, Andrade highlights the significance of dialogue and cooperation in the global AI landscape. He particularly…
S74
Report outlines security threats from malicious use of AI — The Universities of Cambridge and Oxford, the Future of Humanity Institute, Open AI, the Electronic Frontier Foundation …
S75
Will science diplomacy survive? — Science in diplomacyis about using scientific evidence and advice for foreign policy decision-making. In these cases, so…
S76
China, the US, EU, and 25+ countries have joined forces to manage the risks of AI — At the AI Safety Summit hosted at Bletchley Park in England, representatives from China, the United States, the European…
S77
AI Safety Summit adopts Bletchley Declaration — On the first day of theUK AI Safety Summit, the government of the UK introduced the ‘Bletchley Declaration’ on AI safety…
S78
Knowledge Café: WSIS+20 Consultation: Strenghtening Multistakeholderism — Anita Gurumurthy: Thanks. So suggestions at the international level, as well as national to local. So I will start with …
S79
A Global Compact for Digital Justice: Southern perspectives | IGF 2023 — Anita Gurumurthy:But of course, like everything that is political and has an opportunity in the horizon, we will deal wi…
S80
Global Risks 2025 / Davos 2025 — Gillian R. Tett: Well, as somebody who obviously benefited in the past as being part of the media from vertical trust,…
S81
Public-Private Partnerships in Online Content Moderation | IGF 2023 Open Forum #95 — Audience:I hope someone can hear me, I still can’t have my video on. We can hear you clearly. Thank you. So this has bee…
S82
Setting the Scene  — This insight is particularly thought-provoking because it identifies that while technology and protection methods are im…
S83
UK report quantifies rapid advances in frontier AI capabilities — For the first time, the UK has published adetailed, evidence-based assessmentof frontier AI capabilities. The Frontier A…
S84
Climate change and Technology implementation | IGF 2023 WS #570 — This would enable broader adoption of these solutions, fostering real progress in addressing climate change and achievin…
S85
Non-regulatory approaches to the digital public debate | IGF 2023 Open Forum #139 — The lack of compliance of private tech companies and states with human rights obligations online propels effects of onli…
S86
Global Enterprises Show How to Scale Responsible AI — -Regulatory Approaches and Global Alignment: The panel debated whether global regulatory alignment is necessary or feasi…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
G
Gregory C. Allen
3 arguments167 words per minute2623 words939 seconds
Argument 1
The International AI Safety Report is the essential foundation for any AI governance discussion.
EXPLANATION
Allen emphasizes that the report represents the minimum body of knowledge required to meaningfully participate in AI governance debates, positioning it as the bedrock for future policy work.
EVIDENCE
He praises Stephen’s report as a remarkable document that forms the foundation for all conversations about AI governance and describes it as the minimum amount of knowledge needed to join the discussion [2-3].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The report is described as an IPCC-style evidence base for AI governance and a shared scientific foundation for policy work [S12][S13].
MAJOR DISCUSSION POINT
Report as foundational knowledge for AI governance
AGREED WITH
Stephen Clare
Argument 2
Policymakers must translate existing technical safeguards into robust, diverse implementation across sectors.
EXPLANATION
While technical safety tools have improved, the remaining challenge lies in ensuring that policymakers adopt and enforce these safeguards consistently and at scale.
EVIDENCE
Allen notes that good news includes many techniques that demonstrably increase safety, but the real challenge now is that “the good news is we’ve done a lot about it… the challenges are now in the hands of policymakers to make sure that these safeguards get implemented robustly and diversely” [62-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for multi-sector collaboration and agile, sector-spanning regulation highlight the need for policymakers to operationalise technical safeguards [S9][S21].
MAJOR DISCUSSION POINT
Policy implementation of AI safety techniques
AGREED WITH
Stephen Clare, Shana Mansbach
Argument 3
The insurance market can serve as a powerful lever to drive AI safety standards, and the current lack of coverage creates market pressure.
EXPLANATION
Allen points out that major insurers are refusing to cover AI‑related risks, which could compel companies to adopt verification and safety standards in order to obtain necessary insurance and remain competitive.
EVIDENCE
He explains that many U.S. insurers are not including AI in enterprise risk policies, meaning banks and other firms lose insurance coverage when they use AI, and that this insurance gap can act as a lever for safety standards [244-250].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion of insurance as a lever for AI safety, including parallels with other high-risk domains where coverage depends on safety certification, supports this claim [S1].
MAJOR DISCUSSION POINT
Insurance as lever for AI safety compliance
AGREED WITH
Shana Mansbach
S
Stephen Clare
4 arguments189 words per minute2162 words685 seconds
Argument 1
The International AI Safety Report functions as an IPCC‑like evidence base for AI governance.
EXPLANATION
Clare describes the report as a shared, state‑of‑the‑art evidence base that helps decision‑makers understand what is known and unknown about general‑purpose AI risks.
EVIDENCE
He explains that the report was founded at the 2023 Bletchley Safety Summit, is backed by over 30 countries and intergovernmental organisations, and aims to be the “IPCC report for AI,” summarising what we know and don’t know about AI risks [19-23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The report is positioned as an IPCC-like, state-of-the-art evidence base for AI risk assessment in global governance debates [S12][S13].
MAJOR DISCUSSION POINT
Report as evidence base for AI governance
AGREED WITH
Gregory C. Allen
Argument 2
Technical safeguards have markedly improved, making model jailbreaks significantly harder.
EXPLANATION
Clare highlights that recent models resist jailbreak attempts far better than earlier versions, with the time required for successful evasion increasing from minutes to many hours.
EVIDENCE
He cites the UK Security Institute’s attempts, noting that at the start of 2025 a jailbreak could be achieved in minutes, whereas for the latest models it now takes seven to ten hours, demonstrating that “it’s becoming much, much harder to evade them” [42-45].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Observations that evading modern models now takes many hours, making jailbreaks substantially more difficult, are noted in recent safety assessments [S1].
MAJOR DISCUSSION POINT
Improvement of technical safeguards
AGREED WITH
Gregory C. Allen
Argument 3
Organisational safety frameworks are expanding but remain inconsistent, creating a governance challenge around compliance.
EXPLANATION
Clare observes that while more AI developers now publish frontier safety frameworks, the scope and rigor of these frameworks vary, and many companies still apply them unevenly.
EVIDENCE
He notes that twelve leading AI developers now have frontier safety frameworks, but the risks covered and recommended practices differ across firms, leading to inconsistent application and a need for broader compliance mechanisms [48-57].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of gaps between adopted safety principles and their practical implementation underline the inconsistency of organisational frameworks [S16].
MAJOR DISCUSSION POINT
Need for consistent application of safety frameworks
AGREED WITH
Gregory C. Allen, Shana Mansbach
Argument 4
Current AI evaluation methods are narrow and quickly become outdated, necessitating new dynamic assessment tools.
EXPLANATION
Clare argues that existing evaluations rely on limited question sets that fail to capture real‑world risk, and because AI capabilities evolve rapidly, these tools lose relevance fast.
EVIDENCE
He describes evaluations as “a set of questions related to a certain topic,” which are often too narrow to be informative about real-world risk, and stresses that the rapid evolution of models makes many of these evaluations obsolete within a short time frame [258-267].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Critiques of existing benchmarks as quickly becoming obsolete and insufficient for real-world risk assessment are highlighted in recent governance reviews [S1].
MAJOR DISCUSSION POINT
Gap in AI evaluation tools
AGREED WITH
Hiroki Hibuka, Shana Mansbach, Gregory C. Allen
H
Hiroki Hibuka
3 arguments149 words per minute1274 words509 seconds
Argument 1
All countries already possess both hard and soft AI regulations; the key difference lies in sector‑specific versus holistic regulatory approaches.
EXPLANATION
Hibuka contends that the debate should focus on how existing legal regimes are adapted for AI, noting that the EU, Japan, the UK and the US all employ a mix of hard and soft law, but differ in whether they regulate AI holistically or by sector.
EVIDENCE
He references the EU AI Act as a prominent regulation, then explains that privacy, copyright and sector-specific laws already apply to AI, asserting that “all countries have both hard laws and soft laws” and that the main distinction is between holistic and sector-specific regulation [80-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussions of sector-specific versus holistic AI regulatory models across jurisdictions illustrate this distinction [S20].
MAJOR DISCUSSION POINT
Existing legal frameworks and regulatory approaches
Argument 2
Japan’s pre‑emptive, soft‑law‑focused regulatory culture needs a more agile, multi‑stakeholder governance model.
EXPLANATION
Hibuka points out that Japanese companies excel at complying with prescribed rules but struggle to create their own governance mechanisms, suggesting that a flexible, stakeholder‑driven approach is required to keep pace with AI advances.
EVIDENCE
He notes that Japan experiences very low loss numbers, prefers setting rules in advance, and that Japanese firms are good at following given rules but not at creating their own governance, leading to a call for a more agile, multi-stakeholder approach [88-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports on Japan’s preference for softer AI regulation and calls for more agile, multi-stakeholder governance support this view [S23].
MAJOR DISCUSSION POINT
Need for agile, multi‑stakeholder AI governance in Japan
Argument 3
Independent AI audits are essential but lack clear economic incentives; public procurement can provide a strong motivator.
EXPLANATION
Hibuka argues that without financial incentives, corporations are reluctant to undergo independent evaluation, and that government procurement policies could create market demand for verified, safe AI systems.
EVIDENCE
He explains that executives need clear economic incentives to adopt audits, cites the difficulty of persuading them without such incentives, and proposes that government procurement of verified models would create a powerful incentive for developers [187-195][318-328].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses note the absence of clear financial incentives for independent audits and suggest government procurement as a potential driver [S1].
MAJOR DISCUSSION POINT
Incentivising independent AI audits
AGREED WITH
Stephen Clare, Shana Mansbach, Gregory C. Allen
S
Shana Mansbach
4 arguments175 words per minute2464 words843 seconds
Argument 1
The rapid surge in AI capabilities has created a pervasive trust deficit among the public, deployers, regulators and developers.
EXPLANATION
Mansbach describes how the accelerating performance of AI systems leaves every stakeholder uncertain about safety, security and reliability, undermining confidence in AI deployments.
EVIDENCE
She outlines trust problems for the public, for deployers such as hospitals and banks, for regulators, and for developers, noting that “the capabilities are surging, and so too does the uncertainty around the risks” and that this creates a “trust problem” across all groups [106-110].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust challenges arising from fast-moving AI capabilities are highlighted in discussions of AI governance and security trust deficits [S14].
MAJOR DISCUSSION POINT
Trust problem caused by AI capability surge
Argument 2
An outcomes‑based marketplace of independent verification organizations (IVOs) can overcome the speed and technical‑capacity gaps of traditional command‑and‑control AI governance.
EXPLANATION
Mansbach proposes a government‑authorized marketplace where independent verifiers test AI systems against outcome goals (e.g., child safety, privacy), providing flexibility and continuous updates that match the rapid evolution of AI.
EVIDENCE
She describes IVOs as “government-authorized and overseen marketplace of independent verifiers” that assess outcomes such as children’s safety, data privacy, controllability, and interpretability, emphasizing independence, democratic accountability, flexibility, and a “race to the top” for better testing [111-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Proposals for government-authorized verification marketplaces and independent oversight mechanisms align with this outcome-based model [S1][S24].
MAJOR DISCUSSION POINT
Outcomes‑based independent verification model
AGREED WITH
Hiroki Hibuka, Stephen Clare, Gregory C. Allen
Argument 3
Verification can establish a pre‑emptive standard of care, clarifying liability and reducing post‑incident legal uncertainty.
EXPLANATION
Mansbach argues that a verification seal would confer a rebuttal presumption that a developer or deployer has met a heightened standard of care, simplifying court decisions and providing clearer liability expectations before any harm occurs.
EVIDENCE
She explains that verification would “confer a rebuttal presumption of having met a heightened standard of care,” shifting the legal analysis from post-harm fact-finding to an upfront definition of required practices, thereby easing the burden on juries and courts [174-179].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of a verification seal that creates a rebuttal presumption of heightened care is discussed in recent governance literature [S1].
MAJOR DISCUSSION POINT
Verification as pre‑emptive liability standard
Argument 4
Audits must be risk‑proportionate; a marketplace can scale verification costs to product size and risk, avoiding a one‑size‑fits‑all approach.
EXPLANATION
Mansbach stresses that different AI products (LLMs, narrow AI, school chatbots) require tailored audit scopes and fees, and that a marketplace structure can align costs with the specific risk profile of each product.
EVIDENCE
She notes that “the system is right-sized to risk type, to size of these products,” and that a single uniform audit would be inappropriate for diverse AI offerings, emphasizing the need for proportionality in compliance costs [224-230].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for layered, risk-based regulatory approaches that avoid uniform treatment across AI products support this proportionality argument [S15].
MAJOR DISCUSSION POINT
Proportionate, risk‑based audit scaling
AGREED WITH
Hiroki Hibuka
Agreements
Agreement Points
The International AI Safety Report is the essential foundation/evidence base for AI governance discussions.
Speakers: Gregory C. Allen, Stephen Clare
The International AI Safety Report is the essential foundation for any AI governance discussion. The International AI Safety Report functions as an IPCC‑like evidence base for AI governance.
Both speakers emphasize that the Report provides the minimum knowledge required to participate in AI governance and serves as an IPCC-style shared evidence base for policymakers. [2-3][19-23]
POLICY CONTEXT (KNOWLEDGE BASE)
High-level AI governance forums have repeatedly emphasized the need for a shared evidence base, noting that reports such as the International AI Safety Report provide the empirical foundation for policy work (e.g., IGF 2025 high-level review) [S44] and echo calls for evidence-based AI policy across multiple stakeholder groups [S45].
Technical safeguards have improved, but policymakers must ensure their robust, widespread implementation.
Speakers: Gregory C. Allen, Stephen Clare
Policymakers must translate existing technical safeguards into robust, diverse implementation across sectors. Technical safeguards have markedly improved, making model jailbreaks significantly harder.
Both note that recent models are much harder to jailbreak, yet the remaining challenge is for policymakers to translate these technical gains into consistent, sector-wide safeguards. [62-65][35-40][51-57]
POLICY CONTEXT (KNOWLEDGE BASE)
Several jurisdictions, notably Japan and Australia, have pioneered the adoption of technical safeguards for online safety, illustrating the policy relevance of robust implementation [S40]; broader discussions stress that the main challenge lies in implementing existing safeguards rather than inventing new ones [S35].
Current organisational safety frameworks are inconsistent; a systematic, outcomes‑based verification mechanism is needed for consistent compliance.
Speakers: Gregory C. Allen, Stephen Clare, Shana Mansbach
Policymakers must translate existing technical safeguards into robust, diverse implementation across sectors. Organisational safety frameworks are expanding but remain inconsistent, creating a governance challenge around compliance. An outcomes‑based marketplace of independent verification organizations (IVOs) can overcome the speed and technical‑capacity gaps of traditional command‑and‑control AI governance.
All three agree that while many firms now publish safety frameworks, their scope and rigor vary, creating a compliance gap that could be closed by a government-authorized outcomes-based verification marketplace. [62-65][48-57][111-130]
POLICY CONTEXT (KNOWLEDGE BASE)
The need for an independent evaluation ecosystem comparable to accounting auditors has been highlighted as a priority for AI safety assurance [S49]; standards-as-implementation tools are also advocated to create systematic, outcomes-focused verification processes [S42].
The insurance market can serve as a powerful lever to drive AI safety verification and standards.
Speakers: Gregory C. Allen, Shana Mansbach
The insurance market can serve as a powerful lever to drive AI safety standards, and the current lack of coverage creates market pressure. Audits must be risk‑proportionate; a marketplace can scale verification costs to product size and risk, avoiding a one‑size‑fits‑all approach.
Both highlight that insurers are currently refusing AI coverage, creating market pressure, and that a verification seal could unlock insurance and lower premiums, providing a strong incentive for safety compliance. [244-250][221-231]
POLICY CONTEXT (KNOWLEDGE BASE)
Recent research on closing the AI insurance divide identifies insurance as a key market lever that can shape risk profiling and drive verification standards [S37]; discussions on financial incentives further underline insurance’s role in motivating compliance [S38].
Independent verification/audits are essential but require new incentive structures and a marketplace of IVOs to fill the evaluation gap.
Speakers: Hiroki Hibuka, Stephen Clare, Shana Mansbach, Gregory C. Allen
Independent AI audits are essential but lack clear economic incentives; public procurement can provide a strong motivator. Current AI evaluation methods are narrow and quickly become outdated, necessitating new dynamic assessment tools. An outcomes‑based marketplace of independent verification organizations (IVOs) can overcome the speed and technical‑capacity gaps of traditional command‑and‑control AI governance. Terrific. And if I could contrast what you said with what we might have said if we were having this conversation back at the Bletchley Park AI Summit.
All agree that third-party verification is needed; however, current incentives are weak. A marketplace of IVOs, supported by public procurement and clearer standards, can address the evaluation gap. [187-195][258-267][111-130][185-186]
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for a dedicated independent evaluation ecosystem for AI mirror the accounting-audit model and stress the need for new incentive structures [S49]; persuading corporate leaders without clear economic incentives remains a challenge, highlighting the importance of market-based mechanisms such as insurance or regulator-driven mandates [S50, S38].
Audit costs should be proportional to risk and product size, with mechanisms such as public procurement or market demand aligning incentives.
Speakers: Shana Mansbach, Hiroki Hibuka
Audits must be risk‑proportionate; a marketplace can scale verification costs to product size and risk, avoiding a one‑size‑fits‑all approach. Independent evaluation is essential … but it would not be easy to persuade corporate executives to use the independent audit without clear economic incentives.
Both stress that a one-size-fits-all audit is inappropriate; costs and rigor must match the specific risk profile, and incentives like public procurement can make audits attractive to firms. [224-230][187-195][318-328]
POLICY CONTEXT (KNOWLEDGE BASE)
Public procurement policies that reference technical standards are seen as a way to align incentives and ensure risk-based auditing [S54]; risk-based insurance pricing further supports proportional audit costs [S37]; broader incentive-design discussions emphasize regulator-driven and market-driven levers [S38].
Similar Viewpoints
Both recognize that a substantial body of existing regulations (hard and soft) already exists worldwide, but their application is uneven and sector‑specific, requiring updates rather than entirely new laws. [48-57][80-86]
Speakers: Stephen Clare, Hiroki Hibuka
Organisational safety frameworks are expanding but remain inconsistent, creating a governance challenge around compliance. All countries already possess both hard and soft AI regulations; the key difference lies in sector‑specific versus holistic regulatory approaches.
Both point to the inadequacy of current evaluation benchmarks and argue for new, continuously updated testing frameworks delivered by independent verifiers. [258-267][111-130]
Speakers: Stephen Clare, Shana Mansbach
Current AI evaluation methods are narrow and quickly become outdated, necessitating new dynamic assessment tools. An outcomes‑based marketplace of independent verification organizations (IVOs) can overcome the speed and technical‑capacity gaps of traditional command‑and‑control AI governance.
Unexpected Consensus
Insurance as a primary market lever for AI safety compliance.
Speakers: Gregory C. Allen, Shana Mansbach
The insurance market can serve as a powerful lever to drive AI safety standards, and the current lack of coverage creates market pressure. Audits must be risk‑proportionate; a marketplace can scale verification costs to product size and risk, avoiding a one‑size‑fits‑all approach.
While Gregory approaches the topic from a policy-maker’s perspective and Shana from a think-tank/market-design angle, both converge on the insight that insurers’ refusal to cover AI creates a strong incentive for verification and standards-an alignment not explicitly anticipated at the start of the discussion. [244-250][221-231]
POLICY CONTEXT (KNOWLEDGE BASE)
The AI insurance divide paper argues that insurance can serve as the primary market mechanism to enforce safety standards and drive compliance across AI providers [S37]; complementary analyses discuss how insurance-based incentives can be operationalised within regulatory frameworks [S38].
Overall Assessment

The panel shows strong convergence on four pillars: (1) the International AI Safety Report as the foundational evidence base; (2) technical safeguards have improved but need policy‑driven, sector‑wide enforcement; (3) independent, outcomes‑based verification is essential to bridge evaluation gaps; and (4) financial mechanisms—especially insurance and procurement incentives—can drive adoption of verification standards. There is high consensus on the need for a risk‑proportionate, market‑aligned verification ecosystem, and moderate consensus on the exact regulatory approach (sector‑specific vs holistic).

High consensus on the necessity of standards, verification marketplaces, and insurance‑driven incentives; moderate consensus on how existing regulatory regimes should be adapted. The alignment suggests momentum toward establishing a formal IVO marketplace linked to insurance and procurement requirements, which could shape future AI governance frameworks.

Differences
Different Viewpoints
What primary economic incentive should drive adoption of independent AI audits?
Speakers: Hiroki Hibuka, Shana Mansbach, Gregory C. Allen
Independent evaluation is essential but lacks clear economic incentives; public procurement could create demand (Hiroki) [187-195][318-328] Insurance can serve as a carrot, providing a rebuttal presumption of higher standard of care and market advantage for verified products (Shana) [221-238][250-251] The current lack of AI coverage by insurers creates market pressure; insurers stepping in could act like regulation (Gregory) [244-250]
All three speakers agree that independent audits are needed, but they diverge on the most effective lever: Hiroki stresses government procurement contracts, Shana highlights insurance coverage and competitive market signals, while Gregory points to the broader insurance gap as a de-facto regulatory driver [187-195][318-328][221-238][250-251][244-250].
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholders have identified regulator-mandated compliance, insurance-linked liability, and market demand as the main economic incentives that could spur adoption of independent AI audits [S38]; the difficulty of securing executive buy-in without clear financial benefits is also documented [S50].
Whether existing hard‑ and soft‑law frameworks are sufficient or new governance mechanisms are required.
Speakers: Hiroki Hibuka, Stephen Clare, Gregory C. Allen
All countries already have hard and soft AI regulations; the challenge is updating them rather than creating new rules (Hiroki) [80-86] Organisational safety frameworks are expanding but remain inconsistent, creating a governance gap that needs broader compliance mechanisms (Stephen) [48-57] Policymakers must translate technical safeguards into robust, diverse implementation across sectors (Gregory) [62-65]
Hiroki views the current legal mix as a sufficient foundation that merely needs refinement, whereas Stephen and Gregory argue that the present frameworks are fragmented and that new, possibly outcome-based, governance tools are required to ensure consistent safety and compliance [80-86][48-57][62-65].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on the adequacy of current governance frameworks are reflected in multiple sources: an assessment of whether 20-year-old mechanisms can address new technologies [S33]; recognition that existing laws may not cover emerging domains such as neuro-tech [S34]; consensus that international frameworks provide a solid foundation but implementation is the key challenge [S35]; calls for patience and careful evaluation before introducing new regulations [S36]; and discussions on policy interoperability versus uniform global governance [S43].
Unexpected Differences
Effectiveness of Japan’s pre‑emptive, low‑loss regulatory culture versus the need for stronger technical safeguards.
Speakers: Hiroki Hibuka, Stephen Clare
Japan’s low loss numbers and preference for setting rules in advance suggest its current approach works (Hiroki) [88-90] Technical safeguards remain vulnerable and inconsistently applied across the industry, indicating existing approaches are insufficient (Stephen) [51-57]
Hiroki presents Japan’s pre-emptive, soft-law-focused model as largely successful, whereas Stephen stresses ongoing technical vulnerabilities and uneven adoption, an unexpected contrast given both discuss safety but from opposite assessments of current effectiveness [88-90][51-57].
POLICY CONTEXT (KNOWLEDGE BASE)
Japan’s recent active cyber-defence legislation exemplifies a pre-emptive, low-loss regulatory approach, allowing proactive measures against cyber threats [S39, S41]; at the same time, Japan has been a leader in deploying technical safeguards for online safety, providing a historical contrast between pre-emptive policy and technical solutions [S40].
Overall Assessment

The panel shows moderate disagreement centered on how to create effective incentives for independent AI audits and whether existing regulatory regimes are adequate. While all participants agree on the necessity of stronger governance and verification, they diverge on the primary levers (public procurement, insurance, market competition) and on whether new outcome‑based mechanisms are needed beyond current hard/soft law structures.

The disagreements are substantive but not polarising; they reflect different policy‑design preferences rather than outright conflict, suggesting that a blended approach—combining regulatory updates, insurance‑driven standards, and procurement‑linked audits—could reconcile the viewpoints and advance AI safety governance.

Partial Agreements
They share the goal of establishing independent verification but differ on the mechanism: Stephen focuses on improving evaluation tools, Shana on creating a government‑authorized IVO marketplace, and Hiroki on coupling audits with public procurement incentives [258-267][111-130][187-195].
Speakers: Stephen Clare, Shana Mansbach, Hiroki Hibuka
All agree that independent verification/auditing is essential to close the trust and safety gap (Stephen notes evaluation gap; Shana proposes IVO marketplace; Hiroki stresses need for independent evaluation) [258-267][111-130][187-195]
They concur that existing evaluation methods are insufficient, but Stephen frames it as a technical gap needing new tools, while Shana emphasizes the need for a broader outcomes‑based verification ecosystem to keep pace with rapid capability growth [258-267][271-277].
Speakers: Stephen Clare, Shana Mansbach
Both highlight that current AI benchmarks/evaluations are narrow, quickly become outdated, and impede reliable risk assessment (Stephen) [258-267]; (Shana) [271-277]
Takeaways
Key takeaways
The International AI Safety Report (2023‑2026) provides a baseline knowledge set for AI governance and shows that real‑world risks from general‑purpose AI are now material. Technical safeguards (jailbreak resistance, safety frameworks) have improved markedly, but remain vulnerable to skilled attacks and are not uniformly applied across the industry. Governance is shifting from theoretical discussion to urgent implementation; policymakers must ensure that existing safeguards are adopted at scale. Regulatory approaches differ globally: the EU uses a hard‑law AI Act, Japan and the US rely on sector‑specific or principle‑based rules, but all need to update existing laws (privacy, copyright, sector regulations) to cover AI. Liability concerns are rising as AI is embedded in many sectors; current legal frameworks lack a clear standard of care for AI systems. A major trust deficit exists for the public, deployers, and regulators; an outcomes‑based marketplace of independent verification organizations (IVOs) is proposed to provide credible, up‑to‑date testing and certification. Responsibility for safety must be layered across developers (model training and safeguards), deployers (monitoring and risk assessment), and ecosystem monitors/independent auditors (verification and societal resilience). Incentives for independent audits are weak; potential carrots include insurance underwriting discounts, public‑procurement requirements, and market advantage (e.g., a “seal of approval” similar to UL or AS9100). Current evaluation benchmarks are narrow, quickly become outdated, and fail to capture stochastic, multi‑turn, real‑world risk; better, dynamic testing tools are needed. Lessons from other industries (automotive safety ratings, aerospace AS9100, insurance underwriting standards) can inform the development of AI safety standards and verification processes.
Resolutions and action items
Proposal to create a government‑authorized marketplace of Independent Verification Organizations (IVOs) that conduct outcomes‑based testing and issue verification seals. Encourage regulators and insurers to tie compliance with IVO verification to liability standards, insurance premiums, and eligibility for public procurement contracts. Call for the development of sector‑specific safety standards that combine hard law, soft law, and voluntary safety frameworks, with periodic updates to keep pace with AI advances. Suggest that AI labs increase transparency and share safety data with external auditors and the broader community to reduce information asymmetry.
Unresolved issues
How to design and fund economically viable incentives (insurance, procurement, regulatory mandates) that make independent audits attractive to AI developers. What concrete, industry‑wide procedural standards (analogous to AS9100) should look like for AI systems and how they will be enforced. How to define a universally accepted “standard of care” for AI deployments across diverse sectors and jurisdictions. Methods for creating and maintaining up‑to‑date evaluation benchmarks that capture stochastic, multi‑turn interactions and real‑world risk profiles. The balance between self‑regulation by frontier labs and external verification, especially given the rapid evolution of capabilities. How democratic debate will determine acceptable risk thresholds (e.g., safety levels for autonomous vehicles) and translate them into enforceable metrics.
Suggested compromises
Adopt a layered, defense‑in‑depth responsibility model that does not place the entire burden on any single actor but distributes duties among developers, deployers, and independent auditors. Combine hard‑law requirements with soft‑law standards and voluntary safety frameworks to allow flexibility while ensuring baseline safety. Use market mechanisms (insurance discounts, procurement preferences, consumer‑facing seals) as carrots to encourage voluntary verification, rather than relying solely on punitive regulation. Implement a hybrid approach where sector‑specific regulations are complemented by overarching outcomes‑based standards that can be adapted as technology evolves.
Thought Provoking Comments
Technical safeguards are getting much harder to evade – jailbreak attempts that used to take minutes now take seven to ten hours, and many models are becoming resistant to classic prompt‑jailbreak tricks.
Highlights concrete progress in AI safety that counters the dominant narrative of only bad news, showing that engineering advances can meaningfully reduce risk.
Shifted the conversation from a purely pessimistic view to a more balanced one, prompting Gregory to contrast the Bletchley‑Park optimism with current realities and setting up the later discussion on how to sustain and scale these gains.
Speaker: Stephen Clare
Even though safeguards are improving, they remain vulnerable and their adoption is uneven; the real governance challenge is how to assure broader compliance and what to do when there is a lack of compliance.
Identifies the critical gap between technical capability and policy implementation, moving the focus from technical fixes to systemic governance issues.
Served as a turning point that moved the dialogue toward policy‑level solutions, leading directly to Hiroki’s comparison of regulatory approaches and Shana’s proposal for independent verification.
Speaker: Stephen Clare
The key question isn’t whether to regulate AI at all, but how to update existing hard‑law frameworks (privacy, copyright, sector‑specific regulations) and whether additional AI‑specific rules are needed.
Reframes the regulatory debate by positioning AI within the broader legal ecosystem, challenging the simplistic EU‑vs‑US dichotomy.
Redirected the discussion from creating brand‑new AI statutes to integrating AI considerations into current laws, prompting deeper comparison of sector‑specific versus holistic regulation and influencing Shana’s focus on outcomes‑based standards.
Speaker: Hiroki Hibuka
Japan’s culture favors setting rules in advance and strong compliance, but companies struggle with self‑governance and explaining decisions; we need a more agile, multi‑stakeholder soft‑law approach.
Provides a nuanced cultural perspective that highlights why a single regulatory model may not fit all jurisdictions, emphasizing the need for flexible, collaborative governance.
Enriched the conversation with a concrete example of how national context shapes AI policy, leading Gregory to ask about incentives for auditors and prompting Shana to discuss market‑driven verification mechanisms.
Speaker: Hiroki Hibuka
The core problem is a trust gap among the public, deployers, regulators, and developers; we need an outcomes‑based marketplace of government‑authorized independent verification organizations (IVOs) to certify that AI meets defined safety, privacy, and controllability standards.
Introduces a novel governance model that moves beyond command‑and‑control to a dynamic, market‑driven certification system, addressing both technical and institutional challenges.
Opened a new line of inquiry about how such a marketplace could function, leading to a detailed discussion on liability, insurance, and economic incentives, and influencing later comments from Hiroki and Stephen about audit costs and evaluation gaps.
Speaker: Shana Mansbach
Verification would create a rebuttal presumption of having met a heightened standard of care, giving developers and deployers a clear legal shield before any harm occurs.
Connects the technical verification concept to concrete legal benefits, showing how it could reshape liability and risk management in practice.
Prompted Gregory to explore the interplay between liability law and insurance, and spurred Hiroki to discuss financial incentives, thereby deepening the conversation about practical implementation.
Speaker: Shana Mansbach
Current evaluations are narrow and quickly become outdated; we lack robust, real‑world risk assessments, which is a major gap in our safety toolkit.
Critically assesses the state of AI auditing tools, highlighting that even the best‑available benchmarks may not capture emerging risks, thus questioning the reliability of proposed verification schemes.
Triggered Shana and Stephen to discuss the need for continuous, incentive‑driven improvement of testing methods, reinforcing the argument for a competitive IVO marketplace.
Speaker: Stephen Clare
Insurance can be a powerful carrot: insurers will only underwrite AI‑enabled products that have been verified, similar to how UL certification drives market adoption in other industries.
Identifies a concrete economic lever that could drive widespread adoption of verification, linking governance to market dynamics.
Shifted the discussion toward real‑world enforcement mechanisms, leading Gregory to draw parallels with aerospace AS9100 certification and reinforcing the feasibility of the proposed model.
Speaker: Shana Mansbach
Public procurement can serve as an incentive: if governments only buy AI systems that have passed verification, developers will have a strong motivation to obtain certification.
Adds another practical policy tool that leverages government buying power to accelerate adoption of safety standards.
Expanded the set of suggested incentives beyond liability and insurance, reinforcing the multi‑pronged approach advocated by Shana and highlighting how different levers can work together.
Speaker: Hiroki Hibuka
Overall Assessment

The discussion pivoted around three core insights: (1) tangible technical progress in safeguards, (2) the persistent gap between those safeguards and their consistent, enforceable adoption, and (3) innovative governance proposals that blend legal, economic, and market mechanisms. Stephen’s acknowledgment of both advances and shortcomings set the stage for Hiroki’s reframing of regulation as an integration problem, while Shana’s introduction of an outcomes‑based verification marketplace offered a concrete solution that resonated with the panel. Subsequent comments about liability, insurance, and public procurement turned abstract ideas into actionable incentives, steering the conversation from diagnosis to potential implementation. Collectively, these thought‑provoking remarks reshaped the dialogue from a bleak outlook on AI risk to a nuanced roadmap for building trust and accountability across stakeholders.

Follow-up Questions
How can consensus on AI risks and interventions be transformed into accepted best‑practice standards and procedural certifications for independent evaluators?
Moving from informal agreement to formal standards (like AS9100) is needed for widespread industry adoption and to give customers confidence in AI safety.
Speaker: Gregory C. Allen
What financial incentives can be created to motivate companies to undergo independent AI audits and verification?
Without clear economic benefits, firms may view audits as costly and optional; incentives such as regulatory mandates, insurance discounts, or procurement requirements could drive participation.
Speaker: Gregory C. Allen, Hiroki Hibuka
How can liability and insurance frameworks be aligned with AI verification to establish a clear standard of care for developers and deployers?
A defined standard of care linked to verification could reduce legal uncertainty, lower insurance premiums, and encourage responsible AI deployment.
Speaker: Shana Mansbach
What robust, up‑to‑date evaluation methodologies can capture the stochastic, multi‑turn, and real‑world risk profile of AI systems?
Current benchmarks are narrow and quickly become obsolete, limiting the effectiveness of audits and risk assessments.
Speaker: Stephen Clare, Shana Mansbach
How can we design benchmark and regulatory methods for abstract values such as privacy, transparency, and fairness where no clear standards currently exist?
Absent benchmark standards, regulators struggle to assess compliance across jurisdictions, hindering consistent governance.
Speaker: Hiroki Hibuka
What mechanisms can ensure consistent adoption of technical safeguards across the entire AI industry, not just frontier developers?
Safeguards are unevenly applied, creating systemic risk; a strategy is needed to promote uniform implementation.
Speaker: Stephen Clare
Can third‑party verification organizations sustain expertise as AI technology evolves, or will skill atrophy undermine their effectiveness?
Ensuring that external auditors keep pace with rapid AI advances is crucial for long‑term credibility of independent verification.
Speaker: Stephen Clare
What lessons from other industries (automotive, aerospace, finance) can inform the creation of independent AI verification and safety‑rating systems?
Existing safety‑rating frameworks (e.g., NHTSA, UL) may provide models for structuring AI governance and certification.
Speaker: Hiroki Hibuka
How can public procurement be leveraged as an incentive for AI developers to obtain safety verification?
Government purchasing decisions that require verified AI could create market pressure for compliance.
Speaker: Hiroki Hibuka
How should societies define acceptable safety thresholds for autonomous AI systems (e.g., comparing AI‑induced fatalities to human‑driver rates)?
Establishing democratic, quantifiable safety targets is necessary for policy decisions on autonomous technologies.
Speaker: Hiroki Hibuka
How can a marketplace of independent verification organizations be designed to scale cost‑effectively across diverse AI product sizes and risk levels?
A right‑sized, tiered audit system would avoid one‑size‑fits‑all costs and make verification accessible to both large models and niche tools.
Speaker: Shana Mansbach
What concrete steps are needed to operationalize a layered ‘defense‑in‑depth’ approach that allocates safety responsibilities among developers, deployers, and societal actors?
Clarifying duties at each layer is essential to avoid gaps where safeguards fail or are not applied.
Speaker: Stephen Clare
How can we mitigate perverse incentives where companies might avoid audits to escape liability, ensuring they do not remain willfully blind to risks?
Mechanisms are needed to prevent firms from skipping verification to dodge legal exposure, preserving the integrity of the oversight system.
Speaker: Gregory C. Allen (referencing Shana)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.