Setting the Rules: Global AI Standards for Growth and Governance
20 Feb 2026 17:00h - 18:00h
Summary
The panel convened to discuss why AI standards are essential for aligning global AI development with safety, trust, and inclusive outcomes [4-5]. Participants defined standards variously as benchmarking methodologies that quantify risk uncertainty (ML Commons), safety guidelines that follow product release (Qualcomm), and normative governance frameworks that set global “good” baselines (Singapore government) [13-15][17-24][28-33]. Microsoft described its internal Responsible AI Standard as a tool to align product, engineering, and sales teams around common expectations, while urging external standards to create a shared language across the ecosystem [41-46]. OpenAI’s AI Standards Lead emphasized translating internal risk-management practices into a common language for customers and building interoperability to foster consumer trust [56-59]. The Bureau of Indian Standards highlighted standards as mechanisms for consumer confidence and quality assurance, linking national work to ISO’s SC42 efforts [61-64].
A recurring challenge identified was determining “what is good enough,” requiring consensus that includes industry, regulators, and broader stakeholders rather than a single perspective [96-103][108-114]. Panelists agreed that standards must be open and inclusive so smaller firms can adopt them without building proprietary processes, a point underscored by Qualcomm’s call for open governance models [169-185]. Measuring AI performance was described as developing taxonomies, datasets, and evaluators that estimate uncertainty under defined assumptions, recognizing that different sectors may accept different risk thresholds [251-259]. The group noted that standards should complement, not replace, regulation, providing technical expectations that regulators can reference even when formal rules are absent [77-90][214-223].
Looking ahead, participants expect a rise in certification schemes that signal consensus on “good enough” and modular, interoperable standards that can evolve with advancing models [336-340][388-392]. Future-proofing will rely on process-oriented standards that remain applicable as AI capabilities change, while specific evaluation methods will be updated over time [346-354]. Accelerated development of testing methodologies within ISO processes was cited as a priority to keep pace with rapid AI innovation [378-382]. The panel concluded that despite the nascent state of AI standardisation, collective action across industry, policy, and standards bodies is vital to build trust and enable responsible AI deployment [462-470].
Keypoints
Major discussion points
– Standards are seen as essential for building trust and aligning “what good looks like” across the AI ecosystem.
The moderator frames the need to demystify standard-setting and stresses global cooperation and inclusion [4-5]. Panelists echo this: Rebecca describes benchmarking as a way to measure risk, the inability to measure risk being a major adoption barrier [13-15]; Amanda explains Microsoft’s internal responsible-AI standard that aligns product, engineering and sales teams and calls for a common external language [41-46]; Chris notes that standards solve a collective-action problem and give legitimacy that pure industry or government actions lack [108-114]; Esther adds that standards translate risk-management practices into a language of consumer trust and interoperability [57-59].
– Defining and measuring standards is technically difficult and requires consensus on “good enough.”
Rebecca points out the recurring question of what constitutes “good enough” and stresses the need for a broad, multi-stakeholder consensus [97-102]. Lee lists concrete focus areas (testing, transparency disclosures, and incident reporting) as early priorities for standardisation [80-90]. Rebecca further explains that benchmarking must provide a methodology, taxonomy and reference implementations, yet the core challenge is estimating uncertainty under defined assumptions [250-259]. Chris expands on this by distinguishing high-level process standards from technical benchmark standards that must evolve with model capabilities [291-298].
– Inclusive, global cooperation among industry, policy makers, and standards bodies is crucial.
Bhushan highlights the mix of “standard setters and measurers” from industry and policy [34-36]. Kshitij describes India’s AI governance framework and the interconnectedness of ISO, ML Commons, IEEE and other bodies, stressing the need to adapt global standards to local use cases [207-212]. Etienne stresses that open, inclusive governance (e.g., ML Commons, ISO) lets smaller firms participate and rely on standards without building their own risk-management systems [176-185]. Lee notes that regulators can reference technical standards to define expectations, and even without regulation standards help differentiate trustworthy providers [214-218].
– Future outlook: faster development, certification, interoperable modular standards, and addressing concrete challenges such as language bias.
Bhushan envisions certification that signals “good enough” and a move toward consensus-based benchmarks within two years [337-340]. Chris argues that process standards are relatively future-proof, while specific evaluations must be updated as models advance [347-354]. Lee reports ongoing work on testing methodologies that she hopes to push through ISO within a year [378-382]. Amanda calls for a modular, interoperable standards ecosystem that avoids reinventing the wheel for each new use-case [388-393]. Audience concerns about language bias are addressed by Esther (multilingual evaluation suites) and Etienne (need for reusable safety tests across languages) [441-447][452-460].
Overall purpose / goal of the discussion
The panel was convened to demystify AI standard-setting, explain why standards matter for safety, trust and market adoption, identify the technical and governance challenges in creating and measuring those standards, and outline a coordinated path forward that brings together industry, regulators, standards organisations and civil-society stakeholders.
Overall tone
The conversation maintains a collaborative and solution-focused tone throughout. Early remarks are introductory and aspirational, quickly moving to constructive exchanges about concrete challenges (testing, transparency, measurement). When discussing obstacles, such as the “good enough” dilemma, the speed of standard development, and skill gaps, the tone becomes more urgent but remains collegial. The closing segment retains optimism, emphasizing shared commitment to faster, interoperable standards and collective action. No major shifts to conflict or negativity are observed; the tone stays professional, forward-looking, and inclusive.
Speakers
Speakers (from the provided list)
– Bhushan Sethi – AI transformation consultant; moderator of the panel.
– Rebecca Weiss – Executive Director of ML Commons, an AI benchmarking organization and engineering consortium. [S1]
– Etienne Chaponniere – Vice President of Technical Standards at Qualcomm. [S4]
– Lee Wan Sie – Singapore government official working on AI governance and policy; focuses on setting global AI norms. [S8]
– Amanda Craig – Leader of Microsoft’s Public Policy team for AI and the Office of Responsible AI. [S2]
– Joslyn Barnhart – Works at Google DeepMind on AI standards, governance, and policy. [S10]
– Chris Meserole – Executive Director of the Frontier Model Forum, advancing Frontier AI safety and security. [S12]
– Esther Tetruashvily – AI Standards Lead at OpenAI. [S6]
– Kshitij Bathla – Representative of the Bureau of Indian Standards (BIS), National Standards Body of India; represents ISO/IEC JTC 1/SC 42. [S17]
– Audience – Various audience members asking questions (e.g., on language bias, auditability, privacy governance). [S19][S20][S21]
Opening & Goal – Bhushan Sethi, an AI-transformation consultant, opened the session by stating that the panel’s aim was to demystify AI standard-setting, explore global cooperation, and define “what good looks like” for AI development [2-3][4-5][8-10].
Speaker definitions (in speaking order)
– Rebecca Weiss (ML Commons) – Standards are benchmarking methodologies that define risk-measurement and provide technical artefacts for integration into development pipelines [13-15][250-261].
– Etienne Chaponnière (Qualcomm) – Unlike telecom standards, which are mandatory before a product can ship, AI safety standards typically trail product releases [16-24].
– Lee Wan Sie (Singapore) – Standards set global norms and common technical processes for AI governance, aligning “what good looks like” across jurisdictions [26-33][80-90].
– Amanda Craig (Microsoft) – Microsoft’s internal Responsible AI Standard aligns product, engineering and sales functions; external standards are needed to create a shared market language [41-46].
– Joslyn Barnhart (Google DeepMind) – Regulation is already referencing standards that have not yet been created, creating an urgent need for industry-driven standardisation [48-51].
– Chris Meserole (Frontier Model Forum) – Standards solve a collective-action problem by providing an open, credible process that levels the playing field [52-55][108-110].
– Esther Tetruashvily (OpenAI) – Standards translate internal risk-management practices into a common language for customers and enable ecosystem interoperability [56-59].
– Kshitij Bathla (Bureau of Indian Standards) – Standards are tools that build consumer trust, assure quality, and must be adaptable to India-specific use cases while aligning with ISO [61-64][206-212].
Core themes
– Trust & “good enough” – The panel repeatedly stressed the need for credible, non-subjective reporting and a consensus on what constitutes “good enough” for different sectors [96-102][112-114][134-136][215-224].
– Measurement & benchmarking – Rebecca detailed the benchmark components (taxonomy, dataset, evaluator) and highlighted uncertainty estimation as the main technical challenge [252-261][255-259]; Chris distinguished high-level process standards from the scientific benchmarks needed to operationalise them [291-298]; Amanda emphasized that shared metrics are essential to assess progress beyond the “nascent” stage [274-277].
– Inclusivity & open governance – Etienne, Kshitij and Lee emphasized that open governance models (ML Commons, ISO, IEEE) enable smaller firms to adopt standards without building bespoke risk-management systems [176-185][207-212][215-224].
– Regulation vs. market – Joslyn noted that regulators cite yet-to-exist standards; Chris explained that regulators often off-load risk-management requirements to the standards process, making standards a de-facto regulatory tool [48-51][194-196]; Lee argued that standards can serve as market differentiators even without legal mandates [215-224].
Regional perspectives
– India (Manav mission & BIS) – Kshitij described the “Manav” human-centric vision, India’s AI Governance Guidelines, and BIS’s work to align national standards with ISO/IEC JTC1/SC42 outputs while incorporating India-specific risk considerations [206-212].
– Singapore – Lee reported ongoing work on testing methodologies that she aims to submit to ISO within the next year, underscoring the panel’s consensus on the need for accelerated timelines [376-382].
Audience Q&A
– Skill-gap & auditability – An audience member asked how governments can audit industry-driven assurance programmes given technical skill gaps [398-405]; Chris replied that the openness and legitimacy of formal standard-setting bodies mitigate this risk [112-114]; Lee added that certification can provide an independent assurance mechanism [215-224].
– Language bias – A participant queried multilingual bias in India; Esther explained OpenAI’s use of multilingual evaluation suites (MMLU and Indian-dialect tests) and called for broader community participation [441-447]; Etienne noted that reusable safety-test frameworks are needed for many languages [452-460].
– Minimum-consensus vs. absolute requirements – Joslyn answered that regulators will likely accept standards that provide a concrete “minimum bar” rather than overly abstract criteria [436-438].
Two-year outlook & action items
– Bhushan envisaged a rise in certification schemes that codify consensus on “good enough” within the next two years [336-342].
– Chris advocated for future-proof, process-oriented standards with evaluation methods that evolve as model capabilities advance [347-354].
– Lee aims to accelerate ISO-level testing work within a year [376-382].
– Amanda called for a modular, interoperable standards ecosystem to avoid reinventing the wheel for each new use case [388-393].
– Etienne reiterated the importance of open standards that keep costs manageable for smaller companies [176-185].
Conclusion – The panel concluded that AI standards are essential for translating high-level norms into verifiable practices, building trust across consumers, enterprises and regulators, and addressing the collective-action problem of rapid AI innovation [462-470]. Unresolved challenges include auditability, defining “good enough” for diverse risk tolerances, and developing comprehensive multilingual evaluation frameworks. The discussion underscored that coordinated, multistakeholder effort is vital for standards to become a durable foundation for responsible AI deployment.
Transcript
I’m going to provide a brief introduction, and then I’ll have my panelists introduce themselves and we’ll get into the discussion. So: I’m a consultant around AI transformation. I help companies implement AI and drive the return on investment with AI in a responsible way. What’s really important about this discussion is that we need to demystify what we mean by standard setting. There’s been a whole lot of discussion at this week’s summit around the importance of global cooperation and the importance of inclusion around AI, driving solutions that meet everybody’s needs. The tech CEOs spoke about it yesterday. World leaders have spoken about it. We’re here in India, where it’s about planet and people and prosperity. So that’s what the discussion is going to be about.
And we are going to have time for Q&A at the end. But first I’m going to have my panelists introduce themselves, in the order that they’re sitting, and also talk about what standards mean for them: what lens are they looking at from a standards perspective around AI?
Hello, my name is Rebecca Weiss. I’m the executive director of ML Commons. We are an AI benchmarking organization, an engineering consortium that focuses on that problem. And so for us, as a technical standards organization around benchmarking, what that means is two things: one, we want to define the methodology for measurement, and two, we want to create the technical artifacts that allow engineers to integrate this methodology into their development life cycle. So for us, when we see what’s happening in the world today, the ability to measure risk is a big barrier to adoption, and that ability to understand and estimate the uncertainty around the behavior of an AI system is something where we think benchmarking can help.
So, actually, since we have a large panel, I’m going to let everyone else have a chance to talk, and I’m sure more will come out in our dialogue.
My name is Etienne Chaponniere. I work for Qualcomm; I’m a vice president of technical standards. And so what we do within that role is, effectively, we have a team working on technical standards for AI, and we try to coordinate where it is that we need to go, and how we make sure that we understand what it means to be compliant. I come from a world of telecom, as Qualcomm can evoke to some folks. And for us, it’s a very different thing, right? In the telecom world, you cannot ship a product unless you comply with a standard, because you need it for interoperability. In the world of AI standards, it’s a bit different.
So we’re talking more about safety standards, and those typically tend to trail the products. The products are out there, and then they’re going to comply with standards at some point when the standards are available. What matters, however, and what is common in all of this, is that the standards need to be available at scale for everyone, and in a way that engineering teams can work with easily, at least from the product side. So I think I’ll leave it at that, and, yeah, that’s it.
I’m Wan Sie from the Singapore government. I work in AI governance and policy. So many things, but specifically for standards, what it means to us is setting norms. That means alignment globally on what good looks like. And specifically in the area of AI governance, a lot of it at this stage has to do with common methodologies and processes that we have to follow. So it’s still technical, it’s not a checkbox, but hopefully that helps us all align on what good looks like. Thanks.
And maybe before the next introduction, just so you can get a flavor, we have standard setters and measurers. We have people in industry and we have people who play in the policy and the regulatory environment. And that’s the importance around this topic.
Thank you. Hi, everyone. I’m Amanda from Microsoft. I lead the public policy team for AI and the Office of Responsible AI at Microsoft. I think Wan Sie said it well when she described standards as really, like, aligning around what good looks like. And I would offer, you know, at Microsoft in our office, we define something called our responsible AI standard that applies to all of our internal product groups, our engineering function, our sales function. And if you think about the role of that internal standard, it is to align all of the internal stakeholders we have around what good looks like. Externally, we need the same sort of mechanism, right? And that’s the role that standards can play in the broader ecosystem.
So we want to partner with our industry colleagues, and we want to partner with governments and others around the world to be able to define what good looks like, so we can all have that common language and set of expectations.
Hello. Joslyn, Google DeepMind, where I also work on issues of AI standards, governance, and policy. Building on what’s been said: I think that was an interesting point, that often technical standards come first and process and safety standards come later. In the space of AI at the moment, regulation has actually gone ahead and essentially made reference to standards that do not yet exist. So for places like Google DeepMind, which have not invested heavily in the standards space in the past, this is now of utmost priority, because we actually need this to assist with implementation and compliance. So that is a primary goal on our side.
I’m Chris Meserole. I’m the executive director of the Frontier Model Forum. Our mission is to advance frontier AI safety and security, and we work with many of the leading frontier AI developers and deployers, including several colleagues on the stage today, to advance best practices for risk management. For frontier AI in particular, there’s a set of unique and novel risks, and over the last couple of years the community has really started to converge around a set of best practices that now, I think, need to start to graduate into actual formal standards. And I think that’s why we’re here; that’s why we’re very interested in the standard-setting space.
Hi, everyone. My name is Esther Tetruashvily, and I’m the AI Standards Lead at OpenAI. Echoing many of the things that have already been said, standards for us, especially as a frontier AI lab, are about translating some of our practices for risk management into the language of risk management for customers across the supply chain, and about creating a language for consumer trust and assurance. It’s also about, in the age of agents, thinking about interoperability and helping everyone benefit from this ecosystem that we’re developing here. So I’m really excited to be here and to talk about these issues with you all. Thank you.
Hello, everyone. I’m Kshitij Bathla from the Bureau of Indian Standards, the National Standards Body of India, and I’m here representing ISO/IEC JTC 1/SC 42, because BIS is a part of SC 42. And for us, I would say standards are the tools which enable consumer trust in whatever ecosystem they are developed for, as well as enable the industry to ensure quality and consumer trust. That’s the main focus area for us. Thank you.
So let’s start with why we need standards. Why are we even here? Because there’s a lot of confusion between standards, regulation, legislation. Are we going to get global cooperation around these things? Maybe start from a standard-setting perspective and then a regulatory perspective. Why are we here? What’s the problem we’re solving, and for whom?
So I would say there are multiple problems in the standards domain. It always starts with what we are tackling: what is AI? That was the primary focus of JTC 1/SC 42 when it started. So it defined what is AI, then what is generative AI; now they are talking about what is agentic AI. So I think the most specific point that needs to be taken care of is what is coming next, and keeping pace with that. And apart from that, once we have defined what it is all about, then how do we verify and validate whatever is being claimed, that this is a system which has AI? For example, say someone claims an equipment, call it a washing machine, is equipped with AI. But is it actually equipped with AI, or is it just a normal logic system? This is something that we are trying to standardize.
So it’s about trust, it’s about verifying. The tech firms represented here are moving very fast with model development, so we need standards there. From a regulatory perspective, what would you add?
I wouldn’t say from a regulatory perspective; maybe in terms of why, from an AI policy perspective, we think standards are helpful. Like I said, it’s about defining alignment on what should be in, let’s say, transparency. So if you ask what would be the top three things today that we want to think about setting standards for, one would be testing. How do you do testing for AI, whether it’s AI models or AI applications? I think that’s one area, because then it defines what good testing can look like. Two, perhaps in transparency: what would disclosure look like? Everyone has their own way of sharing the information that they want to share.
One way is to standardize it so it’s easier for the readers, the people who are consuming this information, to understand. And I’m saying this in very, very broad terms; it depends on which reader you’re talking about, who’s going to consume it. But in broad terms, that’s perhaps one area to standardize. Maybe the third could be how you’re reporting or monitoring incidents. It’s still very, very early days, but that’s where standards, again in terms of alignment, might be useful: to find alignment in these areas.
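As a concrete illustration of the disclosure point above, here is a minimal sketch of a standardised, machine-readable disclosure record, so that every provider reports the same fields in the same shape. All field names and values here are illustrative assumptions, not any published schema.

```python
from dataclasses import dataclass, asdict
import json

# Illustrative sketch only: a common disclosure shape so readers can
# compare providers instead of parsing bespoke reports. Every field
# name below is hypothetical.

@dataclass
class ModelDisclosure:
    provider: str
    model_name: str
    intended_uses: list[str]
    known_limitations: list[str]
    evaluations: dict[str, float]   # benchmark name -> reported score
    incident_contact: str           # where to report incidents

disclosure = ModelDisclosure(
    provider="ExampleCo",
    model_name="example-model-v1",
    intended_uses=["document summarization"],
    known_limitations=["may be unreliable outside English"],
    evaluations={"hypothetical_safety_benchmark": 0.97},
    incident_contact="safety@example.com",
)

# Emit the record in a machine-readable format for downstream readers.
print(json.dumps(asdict(disclosure), indent=2))
```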
So, how do we report? How do we disclose? How do we make it credible, so it’s not a subjective tick-the-box exercise? Chris and Rebecca, from a standard-setting perspective, what would you add to that before we get the industry view?
I’m happy to add to this. So I think there’s been a theme that has come across on this panel a couple of times, which is: what is good enough? And in order to define that, a standard represents a consensus about what is good enough. The problem that we have is who contributes to that consensus. It probably shouldn’t be exclusively an industry perspective; you need more stakeholders, more constituencies, represented in that definition. And then, on top of what is good enough, as I think Joslyn mentioned earlier when we were talking before this panel, there’s a scientific element to that: how do you define the characteristics of a system such that you can actually create the kind of uncertainty estimation that lives up to a statistical guarantee? But then there’s also the political element, which represents a whole set of issues that I’m actually not qualified to talk about, so I will pass it to Chris.
I think it’s worth backing up from this a bit. One of the original questions was: what are standards for? (Is Chris’s mic working?) I was just saying, one of the things we should maybe do is back up a little bit to this question of what standards are for, and I think a big part of what standards are for is to try and solve this collective action problem. There’s a kind of unique set of risks that we are worried about. We want to make sure everyone’s on the same page so that no one actor is disadvantaged or advantaged compared to others. Having standards for how we’re going to manage risks across an ecosystem is extremely useful for that, so there’s a policy dimension to it.
There’s also an adoption dimension to it, right, because people want to know that there’s kind of a common way across industry of handling a certain class of risk. And being able to set standards and have a formal standard-setting body matters: to one of the points made earlier, by definition a standard-setting body is open, right? So there’s a legitimacy and a credibility to standard-setting bodies that you don’t have if it’s just industry or just government in many cases. And I think all of those factors coming together are exactly why we’re so keen on pushing forward the standards discussions.
Yep. So maybe from a hyperscaler perspective, Esther first and then Joslyn, and we can make the difference clear: how is this showing up at your firms, and how are you thinking about this?
Yeah, no, that’s a great question. I think from a market adoption perspective, a lot of our technology, like general purpose AI models or foundation models, is being integrated into existing ecosystems or built on top of existing stacks. And there’s a lot of confusion in terms of risk controls and risk management about what that means. We have our own risk management processes; they have their own risk management processes. And one of the barriers to adoption is having a common language to talk about how you map those controls onto one another. There’s a separate challenge, I think, of who is best positioned to control a particular risk. What are the risks? What are the net new risks?
What are the risks that already exist, where we don’t need to create something net new? And so for us, it’s both an imperative in some ways to translate what we’re doing in terms of managing risks into the language of upstream and downstream customers, so that they can understand and map those same practices onto their controls. And then we can create a universal language that can ease trust and assurance in an easy, workable way across the market. There’s also just space for, as several people have talked about, regulations moving ahead of the standards, where we are still developing methodologies and what is standardizable in what we’re doing, recognizing where the science has not caught up yet and where we maybe are in a place of more maturity.
And maybe just to bring it to life for the audience, given the huge amount of subscribers you have in India, around the world, growing every day, what’s changed in the standard vernacular at OpenAI?
In terms of our adoption, or in terms of how we’re distributing it?
Yeah, the prominence of it, how people are thinking about it, the importance of the topic.
So I think there’s both an aspect of it that’s like, what already exists that we can use that can reassure customers that we are following the best practices for the industry, say for privacy or cybersecurity. There’s an existing risk management standard, ISO 42001, that OpenAI just got certified in, and that definitely signals something to the market and to customers. Then there’s also a transparency element, right? We have our safety frameworks, we update them, and we disclose information in our model cards about performance on a variety of metrics. And then there are certain things we do to kind of elevate and help stakeholders across the spectrum in terms of how to build evaluations. We currently publish a safety hub that gets updated regularly and shows how we’re performing on a variety of metrics, what the best methodologies are, and how to work with this.
Great. So Joslyn, can you bring to life how Google DeepMind is thinking about standard setting in that context?
Yes. I’ll take it back to what Chris was talking about in terms of collective action problems. Some of the mitigations we’re talking about, associated with some of the more extreme risks that frontier AI poses, can be quite costly. And so I do think that there is just a strong industry incentive to work together to resolve this collective action problem. Again, as Chris said, doing this through standards, through an open, legitimate process, seems to be incredibly impactful. And the worst thing for adoption would be a safety incident. So again, we have a collective incentive as an industry to make sure that we raise the floor to avoid that, on all of our behalves.
So I do think that that is seen, you know, I think standards at this point are seen as a very clear and important strategic play for making, you know, essentially clearing the path for rapid adoption.
Amanda, how do these standards show up at Microsoft right now?
Thank you. Yeah, I was going to start by just noting that, at Microsoft, at Google, at other places, it’s not a totally new kind of process that we’re going through, right, in terms of thinking about standards and the importance of standards for adoption of this technology: sufficient trust in order to have adoption, and in order to really enable compliance. I think Esther made a really good point in acknowledging that, especially as we are deploying this technology, we are working with customers that have their own set of standards and regulation. And part of the challenge that we find ourselves facing right now in AI governance is that we have a lot of high-level norms and expectations that, again, are not so different from the patterns we’ve seen before.
Basically, we want to know how AI providers are managing risk, but we are in the early days of defining what that really means in practice in a detailed way, especially across the AI value chain. So what are model developers really responsible for doing for risk management? What are application developers really responsible for doing? How does that dock into what deployers of those applications, who are oftentimes implementing existing standards and meeting existing regulatory requirements, are doing? How does all that fit together? Again, we’ve done this with other digital technologies as well, like software, like cloud services, where we’re ultimately trying to define in practice what everyone is responsible for doing. How do we have a common language to be able to talk to each other, among providers across the supply chain of technology and those that are ultimately deploying it? We actually really do need the standards to support that, right? Because otherwise we are stuck at the high-level conversation about norms (we want to evaluate risk, we want to figure out what the right transparency practices are), or we find ourselves deep in the technical weeds. Having a place in between, at the level of technical standards, really helps drive that common set of expectations so that you can have trust.
So we need them, they’re important, we’ve got to drive adoption, and there’s collective-action agreement here. From a Qualcomm perspective, Etienne, bring to life the business model: how do you use this in engineering your products?
Yeah, so there’s one thing that I’d like to note here. As Qualcomm, we basically provide chipsets, right? We’re not building big models. What matters to us, and the reason why we’re engaged in those standards, whether it’s ISO, CEN-CENELEC for Europe, ML Commons, or other types of standards, is effectively the fact that it provides scale, in the sense of providing scale not only across the globe but also allowing all different types of companies to benefit from it. I mean, let’s be clear, right? If you look at the companies who have the type of resources to set up their own standards and risk management systems internally, they’re typically pretty big companies.
Now, the thing with AI is that there’s a huge number of companies being created every day, and they don’t have the resources to put this together. And so there are two conditions for making sure that the type of standards being put together are inclusive. One is that they’re open, as Rebecca, you were alluding to before. So whether it’s ML Commons, which has a very open governance model, or ISO, or CEN-CENELEC in Europe, there needs to be an opportunity for everyone to participate. That’s the first step. However, we know, and that’s the reality, that not everyone has the means to participate, because they’re super focused; they need to bring up their own LLM for that particular use case, or maybe a very general use case, and they just don’t have the resources to do this.
So from that standpoint, having the standard as effectively a mechanism for them to go directly to product, and know that they’re going to comply with what the world, or the community, has effectively set up, is really important. So from Qualcomm’s side, the reason why we want to participate is to enable this type of accessibility for companies which are not always the biggest ones.
Yep. So agreement that we need them. Before we go into how we set standards, how we measure and benchmark them, and Rebecca will bring that to life, a wildcard question is, there could be a lot of people listening to this to say, the world is not connected and cooperating around this. We don’t have global regulations on AI. But yet we have… industry leaders, standard setters, vehemently agreeing. How should the audience think about that? Is there a disconnect there or would anyone like to comment on that?
So part of the reason why I think we’re all so interested in standards is that one of the things you’re seeing is multiple jurisdictions saying some version of: we think that there are new risks with frontier AI, and we as the government are concerned, on behalf of our citizens, that those risks are attended to across industry. Those risks, and how to manage them, are probably best developed or managed through the standard-setting process, but those jurisdictions aren’t always setting the standards. So in the United States, for example, there are a couple of different states that have passed requirements for frontier AI developers
to have a frontier AI framework, but they don’t specify what should actually be in the framework. They kind of offload some of that to the standards process, which is why I think it’s so important to have these standards in place. There’s a clear policy and regulatory interest in there being mechanisms by which some of the risks that may come with frontier AI are managed, but we need to color in the lines a little bit on exactly how we’re all going to do that.
And before we go to Rebecca, just from an India perspective, PM Modiji talked about Manav yesterday and the AI vision. Through there, there was a lot of focus on validity and governance, so standards were implied there. Do you want to just bring to life kind of how India thinks about this before we go to Rebecca and talk about measurement?
So I would say the Manav mission is welfare-focused and human-centric; all those aspects are there. And from the governance perspective, the government is not mandating everything as of now; the India AI Governance Guidelines are there. They provide you a framework, these are the things that you should look into, just as a reference. So that is the direction the Indian government is moving in as of now. Coming to the perspective of standardization, at the national level as well as the ISO level, and adding to the question that you asked previously: standards bodies are interconnected with each other.
In ISO there is a liaison mechanism: we have ML Commons as a liaison there, IEEE is there, all the bodies are there. So they are all interconnected, and whatever comes out of these bodies is an outcome based on studies done by various forums, not just, I would say, the ISO body alone. So the Indian standards that we are working on and developing are also in that direction, because this is something which is global; we can’t have silos specifically for India. There could be risks, there could be use cases that are India-specific, and for those we need some specific guidance, but more or less it is the global picture that we are trying to look into, and then adapt to the specific use cases that we need.
Right, so we need global standards, and we need to adapt them to local conditions and use cases. So let’s get a bit more technical. Rebecca, why is this hard? How do we measure it? How does it compare to benchmarking? Maybe Rebecca first, and then, from a regulatory perspective, did you want to make
Do you want to make a quick comment, a response to everything we’re getting at? Sorry, Rebecca. Please.
I just want to respond to Chris’ comment and your question about, you know, if there are no regulations, then why do we care about standards, right? I mean, sure, I think there will be regulators who will say, yes, turn to the technical standards to define the expectations, which I think is the fair point that Chris made. But even when there’s no regulation, I think the standards are still useful. I mean, Esther just mentioned that OpenAI is certified for 42001. You didn’t need to do that, but why did you do it, right? And Anthropic has done that as well. And I think the idea is that perhaps there’s also a way to differentiate for organizations, for enterprises. And it doesn’t have to be the frontier model labs only.
It could be app developers and so on. A way to differentiate themselves and say, look, I’m adhering to a global standard; I’m demonstrating that I have actually implemented something that’s good enough; I’ve addressed a risk in this way. I think that’s one good reason for standards, even if there’s no regulatory cover. So the certification assurance part is helpful. Yeah, I just wanted to add that as a little bit of colour, just to give some benefits to the standards community that is still kind of very…
Thank you for bringing the regulatory perspective and the Singapore experience. So let’s get into measurement. And fellow panellists, if you want to respond to anything, just give me the signal; we’re going to make this an interactive conversation. So Rebecca, how do we measure this?
Well, solve all the problems in one definition. No, I’m kidding. But as I said earlier, benchmarking consists of two things, at least from our perspective: a measurement methodology, and reference builds, implementations of that methodology, so that engineers can use them. And the definition of a benchmark, as we’ve been trying to operationalize it in places like ISO and others, is a taxonomy, a data set, and an evaluator system. And the point of that whole construct, as Etienne pointed out, is that it allows you to scale this kind of approach to the types of deployments we’re expecting to see in these AI settings.
The challenge behind all of this is that what you’re really trying to do is estimate uncertainty. You’re trying to provide a sense of: I’m not going to tell you that your system is, quote-unquote, safe or not. What I’m going to tell you is, under these considerations, under these conditions, under these assumptions, the estimated likelihood of a particular risky behavior is X. And then it is up to you, as a risk management professional, a deployer, a developer, to decide: is that enough? Is that good enough for your needs? And I don’t think it’s going to be the same for different sectors. I think some sectors will have a much higher bar for the amount of uncertainty
that needs to be estimated, and other sectors will probably say, that’s good enough for me; I don’t necessarily need to get much further than what you are offering right off the bat. So we can go into all of the different questions that remain open, but those particular areas, developing that taxonomy, developing those data sets, developing those evaluators, and the best practices and the standards to make clear that this is the best in the industry, that’s what we need to get better at.
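To illustrate the construct Rebecca describes (a taxonomy, a data set, an evaluator, and an uncertainty estimate rather than a safe/unsafe verdict), here is a minimal sketch. The hazard names, the toy evaluator, and the normal-approximation interval are all assumptions for illustration, not ML Commons’ actual methodology.

```python
import math

# Hypothetical taxonomy of hazard categories.
HAZARD_TAXONOMY = ["dangerous_advice", "privacy_leak", "hate_speech"]

def is_risky(response: str) -> bool:
    # Stand-in evaluator; in practice a trained classifier or rubric judge.
    return "UNSAFE" in response

def run_benchmark(model, dataset):
    # Per hazard category, estimate the rate of risky behaviour and a
    # 95% normal-approximation margin of error: an uncertainty estimate
    # valid only under this benchmark's prompts, evaluator, and assumptions.
    report = {}
    for category in HAZARD_TAXONOMY:
        prompts = [p for p, c in dataset if c == category]
        if not prompts:
            continue
        n = len(prompts)
        failures = sum(is_risky(model(p)) for p in prompts)
        rate = failures / n
        margin = 1.96 * math.sqrt(rate * (1 - rate) / n)
        report[category] = (rate, margin)
    return report

# Toy usage: a stub "model" and a two-prompt dataset. The report reads as
# "estimated likelihood of risky behaviour is rate, plus or minus margin";
# the deployer decides whether that is good enough for their sector.
toy_dataset = [("prompt A", "privacy_leak"), ("prompt B", "privacy_leak")]
print(run_benchmark(lambda prompt: "SAFE", toy_dataset))
```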
Yeah, so what I’m hearing is we need clarity. Clarity of the taxonomy, clarity of what we’re measuring, and it needs to be verifiable and credible. From an industry perspective, would anyone like to pick up, like, how’s that going to work? What’s in place now? What some of the challenges might be? How do you get organizational buy -in? Anything to add from an industry? Amanda, do you want to start us off?
Sure. I mean, I think there’s work to do across all the elements that Rebecca just laid out, and it’s really a reason why we are so invested in working with ML Commons, because I think we need places that bring industry and civil society and stakeholders together to actually work through these problems and resolve these hard questions in ways that are really going to be valid and reliable broadly. And so I think that’s really the work still ahead, but I think we are also making good progress, right? And thanks to ML Commons for helping to facilitate that. My thought on this is that we’ve been talking for years now about how nascent this field is, and actually, judging whether we are making progress, that too could be standardized, right?
Like we don’t have common ways of assessing: are we still in a nascent stage? What levels of uncertainty do we have? So to Rebecca’s point, I think this is absolutely essential so we can all align exactly on: have we made some progress? Have we made sufficient progress to start relying on these things? To what degree can we rely on them for important decision-making around deployments?
Yeah, I think I’ll just add: if we take this back down to the basics, whether you’re an enterprise customer or a consumer of our products, you just want to know: is this thing going to be accurate? Can I rely on it? Is this going to get me into trouble if I incorporate it in my workflows? Am I going to carry some sort of liability? And at the core of standards is figuring out a way to have a common mechanism to provide an answer of reassurance: you can trust us; here’s a measurement, certified by somebody else, that this thing is reliable, that this thing is accurate, that I can rely on it and use it. And I think we’re in this moment where we’re still trying to figure out, as an industry and as a community, what that’s going to look like. So part of it is advancing the measurement science, because we currently don’t have enough of that to give an estimate of what is accurate, what is reliable, what is safe for specific risks. And on the other side: what are the risks that we care about?
I think some countries, some jurisdictions might have one list of risks; other countries might have a different list of risks. And then there’s going to be a question of how you control for that, right? And that’s kind of what Rebecca, ML Commons, and many others are working on: how do you provide some sort of mechanism of credibility that says, we’ve measured this, this thing is safe, that can then be certified and understood in the same way by everyone. So at the end of the day, that is how we really unlock the value of this new, transformative technology; I think many of us who are here today for the AI Impact Summit recognize that potential.
We all also need to kind of answer those questions, and standards are the way you facilitate it.
Yeah, and so there’s a theme of trust running through this. So maybe, Chris, add to that, and then I’ll add to that with a comment.
Yeah, just briefly, I also want to situate how benchmarking standards and some of the scientific questions we’ve been talking about fit in. We’ve been talking a lot about different types of standards, so I just want to clarify that there’s a kind of broader, high-level set of process standards, where you say: all right, for this class of risk, what we’re going to do is identify what the risk is, then evaluate what that risk might actually be, and then put in place certain kinds of mitigations and controls. That is a process for how you’re going to walk through risk management for something.
That absolutely needs to be standardized. But then even within that, once we have agreed on what the risk is that we’re trying to evaluate, how do we actually do that? And that’s where the standards come in for the benchmarks that we want to see developed. And that’s where some of these scientific questions really come into play, because we need those credible scientific evaluations and tests for the whole broader risk management effort to hang together. And it’s, again, critical, I think, for this whole process.
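A rough sketch of the two layers Chris distinguishes, with hypothetical names throughout: the process standard fixes the identify/evaluate/mitigate loop, while the specific evaluations plug in underneath and can be swapped out as model capabilities advance.

```python
from dataclasses import dataclass
from typing import Callable

# Sketch only. The process loop below is the stable, standardizable
# part; the per-risk evaluations (Callable fields) are the pluggable,
# benchmark-level pieces that get updated over time.

@dataclass
class RiskEntry:
    name: str
    evaluate: Callable[[], float]   # e.g. a benchmark-backed eval score
    threshold: float                # agreed trigger level for this risk
    mitigations: list[str]

def risk_management_process(register: list[RiskEntry]) -> list[str]:
    # Identify (the register), evaluate, and mitigate when an
    # evaluation crosses its threshold.
    actions = []
    for risk in register:
        score = risk.evaluate()     # swap in newer evals as models advance
        if score >= risk.threshold:
            actions.extend(f"{risk.name}: {m}" for m in risk.mitigations)
    return actions

# Toy usage with a stubbed evaluation result.
register = [RiskEntry("cyber-uplift", lambda: 0.42, 0.30,
                      ["restrict API access", "notify security team"])]
print(risk_management_process(register))
```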
Yes, this has got to live next to the risk management, identification, and mitigation strategy in any company. Go ahead, Joslyn.
Just briefly: I think the possibility for comparison across models is also something that’s super important here. There’s an important safety dimension there. If we are all measuring the same thing and can give consumers some relative assessment of safety, of quality, this is actually going to potentially contribute to a race to the top as opposed to the bottom. And so that’s who we’re solving for.
So that’s the question of who we’re solving for. Two of the panelists have mentioned consumers. It’s not just about enterprise, it’s not just about government; it’s all about consumer trust. Etienne, what would you add?
What I wanted to add is that when we’re talking in general about trying to create standards to resolve the type of safety risks we’re going to see, it’s also to reassure the audience that it’s not that we’re trying to solve every single risk that exists. There is a huge number of existing standards bodies, whether it’s ISO and CEN-CENELEC and other places, which have already identified risks for their particular verticals, or their particular, not silos, but particular industries. Those are already at work, right? How they’re going to use AI, how AI safety is going to be translated into their own processes.
Those things are already happening, right? So it’s not only the people on this panel who are working on this; the entire community of standards, whether it’s automotive, the radio equipment directive, everybody is already looking at that. In the end, the difficult part is going to be making sure that there is commonality in the type of techniques we’re using, whenever there’s an automated technique we can use. Because from an industry standpoint, what is really useful, in particular if you’re a smaller company, is to make sure that you can run something efficiently and that it addresses as many of the use cases you run as possible. So that is an important thing that we need to keep in mind when we’re doing this.
So that’s why, from Qualcomm’s side, obviously we don’t address every single thing, but we want to make sure that, at least in the areas we’re involved in, there’s going to be as much commonality as possible in the measurement techniques that we’re going to use.
So: consensus around the need to do it, consensus around the fact that it’s hard but important for consumers and business and investors. But Joslyn made the point that we’ve been talking about how this is a nascent topic, et cetera. I want to look forward. What does this look like over the next two years? What have we got to get right? The models are changing. There could be regulation that changes. There could be changes around China and the U.S. operating in different ways. What does this topic look like? How do we make sure we stay the course on this topic? Anyone want to offer a perspective as we look forward? And then we’ll start wrapping up.
And be thinking about questions so we can get some from the audience. I’ll take a crack at it. So at least from my perspective, there are a couple of things that I hope to see over the next couple of years. One is this idea of benchmarks and other standards representing consensus: we should be seeing more things like certification that represent more types of consensus. If benchmarking represents consensus around how to estimate and measure a thing, certification could end up representing agreement on a definition of what is good enough. I don’t know necessarily what that’s going to look like today, but I have to imagine that those certifications would represent truces, temporary agreements that this is good enough for my industry, this is good enough for my deployment, this is good enough for my use case.
So that’s what I’m hoping we start to see over the next two years. Anyone else want to add to that? Because, I mean, Chris, jump in, but we’ve seen some of these disclosures in the past: people commit to environmental goals or DEI goals or other sets of standards or disclosures. Stakeholder capitalism was a big deal, and now it’s more about shareholders. So I’d love to understand your perspective on how we stay the course.
Yeah, I might distinguish a little bit between how do we future-proof these standards and then how do we kind of ensure that they’re implemented over time. And I think the way that we future-proof them is to some extent to go back to the point I was making earlier about process standards, right? The process is somewhat agnostic to the actual kind of, you know, AI system itself and the capabilities it has. If you have a good process for identifying risks, evaluating risks, that process can kind of be a bit future-proofed. The specific evals you run are probably going to have to be updated over time to account for the greater capabilities of models as they advance, right?
And I think it’s similar with some of the controls that might need to be used to manage some of the risks, if there are certain thresholds, or if the evaluations indicate a certain level of risk, right? So the subcomponents of it might need to be evaluated; the overarching framework hopefully can have some legs behind it over time in terms of future-proofing it. So we must commit to a process. We can’t future-proof because we can’t predict the future, but the process is so important. Even a good example of this would be something like, I think, 42001, which has come up a few times. There’s a certain class of AI that 42001 is very much tailored to, but even that AI has changed over time.
But 42001 is still a very good standard for managing those kinds of risks, for those kinds of applications of AI, across a broad array of machine learning algorithms. But the other point that I would make: you alluded to the implementation of standards over time and making sure that they keep the same currency. There, I think we can rely on some of the incentives and the need, again, for there to be collective action on this, which we’ve talked about before. Some of the incentive to make sure the collective action problem is addressed is going to rest with policymakers, which is why you’ve seen some regulatory activity.
Even in areas where there’s not, to Wan Sie’s point, there’s a clear market need for these standards to be developed and implemented over time, because consumers want to see, whether it’s individual consumers or enterprises, they want to trust that the model is actually safe and secure to use. And so I don’t see the importance of standards diminishing over time. In fact, if anything, as the capabilities advance, consumers and enterprises are going to be more and more interested in making sure that they
Yes, it’s going to be consumer-driven. Wan Sie, just from a regulatory perspective, any thoughts? Chris mentioned implementation, which is the hard stuff, where lots of this gets stuck. Any perspective on implementation, or from your experience as a regulator, to add here?
Implementation of standards? Yes. I mean, Chris put it very well, right? One, regulators could say: I expect you to comply with certain requirements, and this is how you do it; that’s where the standards set out how you do it. Or regulators don’t provide certain requirements or expectations, and the market sets out these requirements and expectations: if you do it, then we will buy your product, for example. So from an implementation point of view, I think there will be some momentum, either from the market or from regulations, to move standards. But back to your original question of what’s going to happen in two years: I hope we can actually move faster on standards, in terms of the definition of standards.
I think that would be super useful. We’re leading some work on testing, well, benchmarking and red teaming, primarily methodology definition. We hope that in the next one year that can be done, sorted, and accepted within the ISO process. But experience has shown us that it takes a while. So in the next few years, hopefully we will find a way in which we can move to standards faster.
So we need to move with speed from a regulatory perspective. Amanda is going to have the last word and then we’re going to go to questions. So please prepare them. Amanda?
I didn’t realize that. No, the one thing I wanted to add, in terms of a goal for where we can find ourselves two years from now, is thinking about a system of standards that are interoperable, where we have a sort of modular approach, right? Where across general-purpose technology and, for example, different deployment scenarios, different use cases, different sectors, we actually can get some efficiency, because these standards are all going to need to continuously evolve and improve, and we're going to learn from the science. And we're going to keep evolving the benchmarks and the methodology around the evaluations. But we don't want to keep starting from scratch with every piece of that puzzle.
And so we need to figure out a way to actually ensure that where we are making progress on the evaluation science, and how we are doing this in the context of evaluating AI models or systems, and then how we are evaluating AI in deployment in critical sectors, for example, we actually have some synergy built into the standards ecosystem, so that we are making more dynamic progress across everything at the same time.
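[Editor's note: one way to picture the modular, interoperable system described above is as composition: a shared, versioned base suite of model-level evaluation modules that sector profiles extend rather than rebuild. A minimal sketch with hypothetical module names, not any real standards schema.]

```python
# Shared, versioned model-level evaluation modules, defined once (hypothetical).
GENERAL_PURPOSE_SUITE = {
    "robustness": "v3",
    "multilingual": "v2",
    "safety_refusals": "v5",
}

def sector_profile(extras: dict[str, str]) -> dict[str, str]:
    """Compose a sector-specific standard from the shared base suite
    instead of starting from scratch for every use case. Because the
    profile is composed at evaluation time, improvements to the shared
    suite flow into every sector automatically."""
    return {**GENERAL_PURPOSE_SUITE, **extras}

HEALTHCARE_EXTRAS = {"clinical_accuracy": "v1", "phi_leakage": "v1"}
FINANCE_EXTRAS = {"fair_lending": "v2"}

GENERAL_PURPOSE_SUITE["robustness"] = "v4"   # the science improved
print(sector_profile(HEALTHCARE_EXTRAS))     # picks up robustness v4
print(sector_profile(FINANCE_EXTRAS))
```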
Yeah, so it needs to be interoperable and we can’t keep reinventing the wheel. So audience, questions? I’m going to collect questions, maybe three to five. So the gentleman at the front, the gentleman at the back, and then the lady with the hand up.
Hi there. Thanks for taking my question. Maybe I have a bit of a tricky question for you. You know, on the panel, obviously, we have a lot of commercial interests. My question is this: how do we know, in your assurance program or whatever you're proposing, that it's going to be done, since it's driven primarily by industry? How do we know that you're not just going to create something that cheaply satisfies the industry in front of us versus what the public actually needs? And assuming you do have a program that you're going to talk about, how does a government or external agency audit such a program, given the skill gap to create such a very sophisticated compliance program? How can world governments cope?
Because I've been on a lot of panels this week. The fear, uncertainty, and doubt is not just the policy gap. It's actually the technical gap, the inability of world governments to properly audit whatever you have. Thank you.
Thank you. So keep the questions brief. Thank you for that. So that’s about, like, how do we make it real? How do we make it not performative? I’m going to collect two other questions, and then we’ll throw them to the panelists. So keep your hands raised. We have a gentleman at the back. And I think there was a lady or a gentleman with a tie. Yeah, hi.
So… As a recent computer science student, I'm interested in building AI for India. Specifically, with such a distinguished panel, I thought I'd shoot my shot. I'm a little nervous, so I apologize about that. I want to talk specifically about language bias. Being in India, there are 22 official languages, and I'm constantly thinking in two to three different languages. And when I utilize tools, such amazing tools built by everybody here, I'm wondering how you would go about tackling language bias and building guardrails around it, to ensure that, you know, a small model that a student like me is making does not go haywire.
Yeah, great question about language. Thank you, sir. And then, the gentleman with a tie. Which doesn't mean, like, more gentlemen wear ties, but, yes, please.
Hi, Jules Polonetsky at the Future of Privacy Forum and our AI Governance Center. Standards always seem to be an easier path when they are more technical than challenging social policy, and AI governance seems to capture the broadest potential collection of social policy. And given that there's a lot of disagreement, and some debate over whether one should even measure certain areas, do you imagine that we're talking about minimum viable consensus with the broadest number of stakeholders, or is there a path to in some way address some issues that some stakeholders see as absolutely necessary and others don't want on the table?
Yep. All right. Soundbite responses, panel. Like, how do we make it real? How do we deal with the skills gap? How do we deal with the MVP? Anyone? Go on, Jocelyn.
On the performative question, I think now that standards have been referred to within actual regulation, to the extent that we want to use these standards as evidence of conformity with those particular regulations, that sets up a lot of the work that we're doing. That's a kind of minimum bar at the very least, because I think if we make these things too high-level, too abstract, or essentially lowest common denominator, I don't think regulators are going to look at those standards as evidence of conformity. So I think there is that kind of interlocking pressure, created by the regulation itself, for some degree of quality. Thank you.
And Esther, do you want to comment on the language perspective and how you’re thinking about that at OpenAI? Thank you.
Yes, we do a series of evaluations, like MMLU, for determining how well our models perform across a variety of languages. We also have a specific test in QA that we test our models on, which covers a variety of dialects within India. So I think the short answer is that this is an area where we need more participants. And I believe ML Commons is playing an active role in helping further our capacity building, and in working with local ecosystems to help clean and collect good data so that we can do this appropriately. This is another area, right, just like we've been saying, where we need to work in partnership to figure out how we collect the type of information, how we measure this stuff, how we build the evaluations, and then how we build an industry standard where all of the actors are held to that standard.
And it’s going to have to be a collective effort. Yeah. Okay.
Just to add a little bit on the question regarding language. In the end, I don't think there's a silver bullet solution, right? There's going to be a need for these types of safety tests or safety prompts, which are required for different types of languages. And you're not going to be able to address every single thing, because there's just a huge amount of diversity. I mean, take me. I'm French from a cultural background. I speak English and think in French and English all the time. There's weird stuff that I say that will not be captured by a model that's built only for American English, right? So there's going to be a need for more than one language to be captured, and probably a lot of them, but this is where the community, basically everybody, needs to come and say, hey, this is what I want to capture for my type of language.
What matters, to make sure that there is scale and that it still remains efficient, is that hopefully the tool and the software framework around it can be reused. And that's really a big advantage. Thank you.
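[Editor's note: the reuse point lends itself to a short sketch: a language-agnostic test harness where the code is shared and each language community contributes only its own prompt set. `model_fn`, `judge_fn`, and the file layout are assumptions for illustration, not an actual ML Commons or OpenAI tool.]

```python
import json
from pathlib import Path
from typing import Callable

def run_safety_prompts(model_fn: Callable[[str], str],
                       prompts: list[str],
                       judge_fn: Callable[[str], bool]) -> float:
    """Generic harness: send prompts, judge each response as safe or unsafe,
    return the failure rate. The same code runs for any language or dialect;
    only the prompt data changes."""
    failures = sum(1 for p in prompts if not judge_fn(model_fn(p)))
    return failures / len(prompts)

def load_prompt_set(path: Path) -> list[str]:
    # Communities contribute files such as prompts/hi-IN.json or
    # prompts/fr-FR.json: one JSON list of strings per language/dialect.
    return json.loads(path.read_text(encoding="utf-8"))

# Example usage (assuming my_model and my_judge exist):
# for lang_file in sorted(Path("prompts").glob("*.json")):
#     rate = run_safety_prompts(my_model, load_prompt_set(lang_file), my_judge)
#     print(lang_file.stem, rate)
```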
So in summary, and thank you, dear panelists, for the great discussion. You heard today that standards are important. This is a fast-moving world. We've got to be designing for consumers, for business people. There's a commitment here around measurement; it's both art and science. We need to have a process that's consistent, and a consistent understanding across regulators, standard-setters, policymakers, and the business and tech community. So it's going to be an emerging topic, which I know we'll continue to discuss. Thank you, panelists, and thank you to the audience. Thank you.
“Standards solve a collective‑action problem by providing an open, credible process that levels the playing field”
The knowledge base notes that open standards act as a common technical language that can level the playing field for small companies and promote fairness, confirming the panel’s description of standards as a collective-action solution [S30] and highlights the role of standards in governance and risk-management coordination [S38].
“Kshitij Bathla said standards are tools that build consumer trust, assure quality, must be adaptable to Indian‑specific use‑cases while aligning with ISO”
Kshitij Bathla’s participation is recorded in the transcript (introductory remark) and the discussion references ISO-based frameworks such as ISO 42001 that provide a common set of requirements for national bodies, adding context to his emphasis on Indian-specific adaptation and ISO alignment [S71] and [S75].
“Open governance models (ML Commons, ISO, IEEE) enable smaller firms to adopt standards without building bespoke risk‑management systems”
Several knowledge-base entries describe how open, inclusive standards lower barriers for smaller actors, promote participation from diverse stakeholders, and are promoted through multistakeholder collaborations, supporting the panel’s point about open governance models [S30] and [S67] and the broader multistakeholder cooperation described in [S24].
“Regulators are already referencing standards that have not yet been created, creating an urgent need for industry‑driven standardisation”
The knowledge base discusses how regulators rely on industry standards as part of AI governance and often look to standards processes to fill regulatory gaps, providing context for the claim that regulators cite yet-to-be-finalised standards, though it does not explicitly confirm the non-existence of those standards [S38] and [S79].
The panel displayed strong consensus that AI standards are critical for building trust, defining “good enough”, ensuring inclusivity, and providing measurable benchmarks, with agreement across industry, policy, and technical perspectives. There is also shared recognition of the need for open, multistakeholder processes and future‑proof, modular designs.
High consensus across most thematic areas, indicating a unified stance that standards will be a cornerstone for responsible AI deployment and that coordinated, inclusive efforts are essential for their success.
The panel largely converged on the importance of AI standards for trust, risk management, and global cooperation, but diverged on the relationship between standards and regulation, the pace of standard development, and the risk of performative, industry‑driven processes. These disagreements highlight the need for clearer governance frameworks, faster yet inclusive standard‑setting mechanisms, and stronger auditability to ensure standards serve public interests.
Moderate – while there is broad consensus on goals, the differing views on implementation pathways and regulatory interplay suggest potential friction that could affect the timely and effective adoption of AI standards.
The discussion was driven forward by a handful of pivotal remarks that moved it from a generic endorsement of standards to a concrete, problem‑oriented dialogue. Jocelyn’s observation about regulators demanding non‑existent standards created urgency; Chris’s framing of standards as a collective‑action solution gave the conversation a governance backbone; Rebecca’s ‘good enough’ question forced the panel to confront the scientific‑political trade‑offs and stakeholder inclusion; Etienne’s telecom analogy exposed the timing mismatch between product rollout and safety standards; Lee’s point about market‑driven differentiation showed standards’ value even without regulation; Chris’s future‑proofing insight introduced a strategic design principle; Amanda’s call for modular, interoperable standards offered a practical roadmap; and Jocelyn’s warning against performative standards reminded everyone of the need for rigor. Together, these comments shifted the tone from abstract optimism to a nuanced, action‑oriented plan, shaping the panel’s consensus around speed, openness, legitimacy, and the balance between accessibility and regulatory credibility.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.