U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence
20 Feb 2026 18:00h - 19:00h
Summary
The panel, convened by the White House OSTP and featuring senior officials from the U.S. government and leaders from Anthropic, Google DeepMind, OpenAI, and xAI, focused on how open standards and protocols can make AI agents interoperable and secure [1-3][5-11][15-22]. Sihao Huang noted that billions of dollars are being invested in AI infrastructure and that competing firms are racing to make models cheaper and more powerful, underscoring the need for common interfaces [13-15]. He introduced the emerging ecosystem of agent protocols, including Anthropic’s Model Context Protocol (MCP), DeepMind’s A2A, OpenAI’s Agentic Commerce Protocol, and xAI’s Macrohard project, as the basis for the discussion [17-21][23-24].
Michael Sellitto explained that MCP is a universal open standard that lets models discover and use enterprise or government data sources and tools through simple descriptions, eliminating bespoke integrations [28-34][36-38]. He added that the companion Skills protocol lets developers encode repeatable task instructions that can be transferred across vendors, further enhancing data portability and competition [46-48]. Owen Lauder described DeepMind’s agent-to-agent standard as a digitized “clipboard” sharing identity, capabilities and security requirements, and its Universal Commerce Protocol (UCP) as a way for agents to interact with websites and payment systems [63-71][74-76]. Michael Brown highlighted that shared commerce protocols enable agents from different companies to coordinate tasks such as booking travel, illustrating how common standards can democratize AI-driven services worldwide [94-102].
Austin Marin announced the new Agent Standards Initiative, housed within NIST’s voluntary-consensus framework, and a request for information on AI-agent security that closes in March [130-138][155-162]. He outlined upcoming sector-specific listening sessions on education, healthcare and finance to identify challenges such as handling personally identifiable information and to develop metrology, benchmarks and best-practice documents [165-172]. The initiative also builds on existing drafts for AI-agent identity and authorization, aiming to create interoperable security layers analogous to the historic development of SSL and HTTPS for e-commerce [163-168][206-207]. Sihao Huang linked this effort to the open-internet legacy, arguing that decentralized protocols like TCP/IP and HTTPS spurred global prosperity and that similar open AI standards are essential for worldwide adoption and secure commerce [186-198][199-202].
Participants used analogies from the automobile and electrical industries to stress that standardized metrics and safety certifications can give users confidence in AI agents, while open standards preserve sovereignty and allow switching between providers [211-230][232-250]. The discussion concluded that government can facilitate cross-industry dialogue, but industry must lead the technical work, and international collaborations such as the INAEMS network are already shaping measurement and evaluation consensus for AI agents [252-254][276-281].
Keypoints
Major discussion points
– Emergence of AI agent protocols to enable interoperability and competition – The panel highlighted several open standards such as the Anthropic Model Context Protocol (MCP), Google DeepMind’s agent-to-agent protocol, OpenAI’s commerce protocol, and xAI’s Macrohard project, all aimed at letting agents “talk” to data sources, each other, and commerce systems [17-21][28-38][63-76][98-101].
– U.S. government’s coordinating role in standards development – OSTP and the newly rebranded Center for AI Standards and Innovation (within the Department of Commerce and NIST) act as the “front door” for industry, avoiding duplicated agency requests, issuing requests for information on agent security, and convening sector-specific listening sessions [132-146][155-164][165-172].
– Security, trust, and evaluation as prerequisites for adoption – Speakers stressed that without robust security, identity, and authorization standards, builders cannot safely grant agents access to sensitive data or real-world actions; analogies to SSL/HTTPS and automotive safety metrics were used to illustrate the need for measurable, trustworthy standards [206-207][211-230][158-162][163-164].
– International collaboration and a global “AI Internet” vision – The discussion repeatedly referenced builders in India, Kenya, and other regions, noting that open protocols should let any developer plug into AI services worldwide; the U.S. engages with ten-country networks and shares best-practice measurements to foster a truly global ecosystem [186-190][52-56][276-280][108-115].
– Learning from historical standards (Internet, electrical, automotive) to shape AI standards – Examples such as the 802.11 Wi-Fi standard, NIST’s taillight-color standard, early HTTPS adoption, and automotive safety ratings were invoked to argue that open, consensus-based standards drive widespread, secure adoption [124-126][147-152][233-238][211-218].
Overall purpose / goal of the discussion
The panel was convened to explain and promote a coordinated effort, led by both industry leaders and U.S. government agencies, to create open, interoperable, and secure AI agent standards. By establishing common protocols, testing frameworks, and security guidelines, the participants aim to lower barriers for global developers, accelerate innovation, and ensure that AI systems can be safely integrated into commerce, public services, and everyday applications.
Tone of the conversation
The tone remained constructive and collaborative throughout. It began with a formal introduction and factual overview, moved into enthusiastic descriptions of technical progress, incorporated light-hearted remarks (e.g., Michael Brown’s “red means stop” analogy), and shifted into reflective, historical analogies that underscored the importance of standards. No adversarial moments appeared; the dialogue stayed optimistic about the potential of open standards to “grow the pie” for all stakeholders.
Speakers
– Sihao Huang – Senior Policy Advisor for AI, Emerging Tech, White House [S1][S2]
– Austin Marin – Acting Director, U.S. Center for AI Standards and Innovation, Department of Commerce [S4]
– Wifredo Fernandez – Director for Global Government Affairs, xAI [S5][S6]
– Owen Lauder – Senior Director and Head of Frontier Policy and Public Affairs, Google DeepMind [S7][S8]
– Michael Sellitto – Head of Global Affairs, Anthropic [S9]
– Michael Brown – Head of Growth and Operations, OpenAI [S10][S11][S12]
Additional speakers:
– Michael Kratsios – Director, Office of Science and Technology Policy (OSTP) (mentioned as OSTP director)
– Craig Burkhart – Acting Director, National Institute of Standards and Technology (NIST) (mentioned as Acting Director of NIST)
– Howard Lutnick – U.S. Secretary of Commerce (Commerce Secretary)
– George Osborne – Colleague of Michael Brown, name on placard (referenced in discussion)
– “Casey” – phonetic rendering of CAISI, the U.S. Center for AI Standards and Innovation, referenced in connection with the NIST Agent Standards Initiative
Opening & Context – Sihao Huang, Senior Policy Advisor for AI at the White House OSTP, opened the session, noting that U.S. firms are investing roughly $700 billion in AI infrastructure this year and are competing fiercely to deliver cheaper, more powerful models, making common interfaces urgent [13-15]. He introduced the panel: Austin Marin, Acting Director of the Center for AI Standards and Innovation at the Department of Commerce, and senior representatives from Anthropic, Google DeepMind, OpenAI and xAI [3-12].
Company-Specific Protocol Overviews
– Anthropic – Model Context Protocol (MCP) & Skills – Michael Sellitto described MCP as a universal open standard that lets models describe a data source and its tools, enabling automatic discovery and retrieval of enterprise information such as payroll or revenue [28-38]. He contrasted this with the prior landscape of bespoke, vendor-locked integrations and highlighted the companion Skills protocol, which encodes repeatable task instructions that can be taught once and transferred across models, enhancing data portability and reducing lock-in [39-48].
– Google DeepMind – Agent-to-Agent (A2A) & Universal Commerce Protocol (UCP) – Owen Lauder explained A2A as a “digitized clipboard” that conveys an agent’s identity, capabilities, intent, data requirements and security constraints to another agent, removing the need for custom code [63-73]. He also outlined UCP, which standardizes how agents interact with websites and payment systems, with pilot partners ranging from Walmart and Target in the United States to Flipkart and Infosys in India [74-77].
– OpenAI – Commerce Protocol – Michael Brown noted that OpenAI’s commerce protocol enables agents to plan a family vacation, book flights and hotels, demonstrating how shared commerce standards allow agents from different companies to cooperate on real-world tasks [94-102].
– xAI – Macrohard & “parallel Internet” – Wifredo Fernandez positioned xAI’s Macrohard project as part of a “parallel Internet” that will sit alongside the existing web, accelerating AI development while raising regulatory questions such as governance of agent-driven social-media platforms [119-123].
Historical Analogies & Security Emphasis – Sihao likened the need for secure AI-agent interfaces to the historic development of SSL and HTTPS, which unlocked e-commerce on the open web [206-207]. Sellitto reinforced the security argument with an automobile-industry analogy, suggesting that standardized crash-test-style safety metrics would give users confidence in AI agents [212-218].
U.S. Government Initiatives – Austin Marin clarified the Center’s “front-door” role: it coordinates agency requests, avoids duplication, and ensures companies engage with advisers who understand frontier and agentic AI [138-152]. The Center follows NIST’s long-standing voluntary-consensus approach, exemplified by the historic taillight-color standard that defined the exact shade of red for vehicle lights [146-152]. Marin announced a Request for Information on AI-agent security (deadline in March) and referenced a draft NIST document on agent identity and authorization that is open for comment [155-165]. He also outlined upcoming sector-specific listening sessions (education, healthcare, finance) in April to surface challenges such as handling personally identifiable information, with the aim of producing metrology, benchmarks and best-practice guidance [165-172].
International Collaboration – The discussion highlighted the International Network for Advanced AI Measurement, Evaluation and Science (INAEMS), a ten-country consortium that meets regularly to share best practices and develop consensus on measurement methodologies [276-280]. Sihao stressed that standards should enable builders in India, Kenya and elsewhere to use and switch between U.S. AI products without lock-in [186-190].
Next Steps (as stated in the transcript) –
1. Submit comments to the AI-agent security RFI before the March deadline [155-165].
2. Review and comment on the NIST draft on agent identity and authorization [163-165].
3. Participate in the April sector-specific listening sessions [166-172].
4. Continue engagement with the Center for AI Standards and Innovation and the broader INAEMS network to shape forthcoming voluntary standards [276-280].
Closing Observation – Participants expressed broad agreement that open, interoperable AI-agent standards, covering data access (MCP), task encoding (Skills), inter-agent communication (A2A) and commerce (UCP), are essential to prevent vendor lock-in, foster global innovation and create a “parallel Internet” for AI [28-48][63-77][94-102][119-123]. They invoked historical precedents such as TCP/IP, HTTPS, electrical-plug standards and automotive safety metrics to argue that voluntary, consensus-based standards can drive secure, widespread adoption while avoiding fragmentation [198-201][233-242][248-251].
of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable and open to the rest of the world to sort of build on that for your own businesses, for your own benefits. And so we have an amazing panel here today. We have, so first of all, I’m Sihao Huang. I’m Senior Policy Advisor for AI and Emerging Tech at the White House. We’re joined with Austin Marin, who’s the Director for the Center for AI Standards and Innovation at the Department of Commerce, which really is the center for a lot of AI activity within the U.S. government, setting standards, driving innovation, measuring AI systems, improving metrology, and a lot of the smartest people in the U.S.
government are within Austin’s organization. And then we have the four frontier AI companies from the United States. So we’re very happy to be joined by Mike Sellitto, who is the Head of Global Affairs at Anthropic. We have Owen Lauder at Google DeepMind, who’s the Senior Director and Head of Frontier Policy and Public Affairs. We have Mike Brown, who is Head of Growth and Operations for OpenAI for Countries. And, of course, we have Weefy Fernandez, who is the Director for Global Government Affairs at xAI. So really an amazing lineup of U.S. industry. I said this in a previous panel, but American companies are spending $700 billion on infrastructure this year, just this year alone. And they probably won’t like it that I say this, but they’re competing very hard against each other to make AI models cheaper and more powerful for you guys to build on and to drive those applications.
And so this is going to be a panel on how we make that happen, how we standardize interfaces with those AI systems. And so first I’m just going to ask a question to the AI companies that are sat here. So over the past few months, I think, we’ve seen the emergence of an ecosystem of standards to support the deployment of AI agents. I think one of the most notable ones is Anthropic’s Model Context Protocol, which a lot of other companies are building off of right now and is sort of becoming the industry standard. Of course, you have Google DeepMind’s A2A agent-to-agent protocol, OpenAI’s Agentic Commerce Protocol, and then xAI, of course, has been working on its highly secretive and famous Macrohard agent project.
And so all the companies here are very much involved in sort of this agent discussion. And so maybe open it up to the companies here to tell us a little bit about what these agent protocols actually do and what they have unlocked, for the builders who are sat here in the audience. What do they enable a software engineer or an AI engineer in India or other countries to create?
Okay. Well, first I want to start off by thanking Sihao and OSTP for organizing this panel and all the people who are here. Thank you. So it’s great to be here with Austin. I think Anthropic has really had a really strong partnership with the Trump administration and appreciated the leadership of Secretary Lutnick in expanding and enhancing the Center for AI Standards and Innovation, which is really critical to making this technology work for everybody in a manner that’s safe, responsible, and open. MCP is a universal open standard for connecting AI systems to the tools and data sources that people already use. So imagine the knowledge bases inside of an enterprise. You can imagine government data sources.
The Indian government, of course, is a real leader in, why am I forgetting the acronym right now, DPI, sorry, and just has massive amounts of data that are already digitized. And so MCP is a way that you can connect your AI models and agents to those data sets and also tools. And it really works in a, you know, simple, intuitive way. You just need to give the model a rough description of what’s in the data source and what kind of tools or how can it access it. And then the model will intuitively know how it can use those data sources the same way that somebody in your enterprise or your organization would know if I want to get payroll data, I need to go to this human resources system.
If I want to get data about, you know, our revenue, I need to go into HEX or whatever your particular tools are. You know, before MCP, you really had to build all these systems in a very bespoke manner, which meant that if you built them with one model or one vendor, you were kind of stuck because you’d have to rewrite everything if you wanted to switch. MCP being this open source protocol that’s supported by all of the major AI companies means that you really have this degree of interoperability, which just enables the whole system to be much more open and competitive. We also recently built Skills, which is a set of instructions that teach agents how to perform specific tasks. The way that I describe this or think about it is, you know, imagine a new person joins your team. You spend a little bit of time teaching them, you know, how to do work the way that your organization does it. And then you expect them to just be able to follow those instructions all the time. So you kind of teach once and then they’re able to do that. It’s the same thing with skills, which also is another open protocol where you can build these skills. And then if you decide that, you know, you want to switch from Anthropic to any of the other fine companies here on the panel, you can move those skills over.
And so that interoperability and data portability is really a critical piece of making this an open and competitive environment.
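The mechanism Sellitto describes, a short description of a data source plus its tools that a model can then discover and call, can be sketched in miniature. The manifest below is a hypothetical, simplified illustration of the idea; the field names are invented for this example and are not the official MCP schema.

```python
# Hypothetical sketch of an MCP-style data-source description: the server
# advertises what it holds and which tools it offers, so a model can
# discover those tools without bespoke integration code. Field names are
# illustrative only, not the official MCP schema.

hr_server_manifest = {
    "name": "hr-data",
    "description": "Human-resources system: payroll and employee records.",
    "tools": [
        {
            "name": "get_payroll",
            "description": "Return payroll records for a given month.",
            "input_schema": {
                "type": "object",
                "properties": {"month": {"type": "string"}},
                "required": ["month"],
            },
        }
    ],
}

def list_tool_names(manifest: dict) -> list[str]:
    """The discovery step: which operations does this server offer?"""
    return [tool["name"] for tool in manifest["tools"]]

print(list_tool_names(hr_server_manifest))  # → ['get_payroll']
```

The point of the shared shape is that any model from any vendor can read the same manifest, which is what removes the lock-in Sellitto contrasts MCP against.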
Amazing. Thank you, Mike. And, yeah, thank you to Sihao. Thank you to OSTP and the U.S. government for the event and all the partnership. And a big thank you and congrats to our Indian hosts on a fantastic summit week. If you take a step back, it has been, I think, a really exciting week, a demonstration of how advanced AI is now being used around the world to do incredible things. It’s been really exciting seeing the way that people are using Gemini right across India, really exciting to see the way that everyone in India, from world-class scientists using AlphaFold to teachers and students using AI in the classroom, is putting it to work. And I think with all of the progress that we’ve seen in the last few years, it’s easy to forget sometimes that this is still relatively new technology.
We’re still in the relatively early innings of working out how to develop this technology and use it for good. And one of the things that we need to do, I think Sihao covered this very well in his opening gambit, is build out this ecosystem of technical standards to make sure that we can continue using this technology in the right ways. There’s a couple of ways that we’re thinking about these standards. One is technical standards, interoperable standards, and then also standards for testing these systems, making sure that we can use them in a reliable and secure way. We really want to contribute right across the piece here, so we’re excited. We have various standards that we have contributed to the ecosystem.
Our agent-to-agent standard that Sihao mentioned. This is basically a standard for how agentic systems talk to each other. At the moment, it’s a little bit tricky for agents to converse with each other. You have to often write bits of bespoke code for an agent to talk to an agent, or they have to be running on the same walled-garden code base. So what we do with agent-to-agent is essentially have a sort of digitized clipboard of information that an agent will share with another agent. What’s my ID as an agent? What are my capabilities? What am I trying to do? How do I take data? What are my security requirements? This is going to be absolutely fundamental to sort of greasing the wheels of the agentic economy.
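The “digitized clipboard” Lauder describes can be pictured as a small structured card one agent hands to another. The sketch below is purely illustrative; the field names are invented for this example and are not the actual A2A specification.

```python
# Hypothetical sketch of the "digitized clipboard" one agent shares with
# another under an A2A-style protocol: identity, capabilities, intent,
# accepted data formats, and security requirements. Field names are
# invented for illustration, not the actual A2A specification.

agent_card = {
    "agent_id": "travel-planner-01",
    "capabilities": ["search_flights", "book_hotels"],
    "intent": "book a round trip to Goa",
    "data_formats": ["application/json"],
    "security": {"auth": "oauth2", "encryption_required": True},
}

def can_delegate(card: dict, needed_capability: str) -> bool:
    """Check whether a peer agent advertises the capability we need."""
    return needed_capability in card["capabilities"]

print(can_delegate(agent_card, "book_hotels"))    # → True
print(can_delegate(agent_card, "charter_boats"))  # → False
```

Because the card’s shape is agreed in advance, neither agent needs bespoke code for the other, which is the interoperability point made above.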
UCP, another standard that we’re working on, so we have our Universal Commerce Protocol at Google. This essentially does the same thing, but it’s for how agents talk to websites and payment systems. This is going to be transformative for business. It’s great to be able to partner with companies right around the world, whether it’s Walmart and Target in the U.S. or Flipkart and Infosys in India that we’re working with across these agents. Excited to see what everyone is going to do with the technology that we can enable with this.
Thanks for the tip. Hi, everyone. My name is Michael Brown. My name placard says George Osborne, who’s a colleague. He got tied up in another panel, so I’m here. George and I work extremely closely together, but he has a much nicer accent because he’s from the U.K. I’m doing my best here. You’re doing very well, I might say, very well. For me, this is a fun panel because it feels like a very collaborative and cooperative opportunity to grow the pie, and the companies that are on either side of us are extraordinary companies with extraordinary humans, and it’s fun to just work with them in some of these areas. If I were going to kind of explain why we’re here in this particular panel to my kids, who are 9 to 11, I would sort of say, look, are there countries out there in the world where when you get to a stoplight, red means go?
I don’t think so. I think mostly red means stop and green means go. I mean, if I’m wrong, I apologize. I’m not an expert. But, you know, having sort of shared understanding in countries, rich and poor, advanced and still developing, around how things work, I think grows the pie because it allows builders to build in a way that everyone can kind of know that what they’re building is going to be both secure and is going to be accessible and hopefully enjoyable or useful to people anywhere in the world. And I think each of the companies up here is contributing something great to that. You know, I’ve joined OpenAI relatively recently, but like MCP to me is something like I just knew it’s like that’s really important.
And like, well, Anthropic introduced it. Hopefully, Anthropic would agree with this, that now it’s just like the thing, right? And I think that’s terrific that it’s the thing. You know, Owen also mentioned in commerce, I don’t know if these standards compete or if it’s cooperative, but at OpenAI, we have a commerce protocol as well for the same thing, because there’s a world where these agents are going to be out shopping for us, which is kind of fun, right? So, you know, if the agent knows that you’re planning on taking a family vacation and it knows that you want to visit Goa and the agent can go actually secure your travel flights and your hotel, these commerce protocols can do that.
So agents of different companies, potentially in different countries, can all partner and work well together because they understand how they’re supposed to be looking for shared information and how that information should be shared. There’s kind of a shared understanding there. And so I think all of us are working to build these protocols to grow the pie, to create more democratization, more commerce, more benefit for everyone by having these common protocols in place.
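The shared-understanding point Brown makes, that any vendor’s agent can send a merchant a checkout request in an agreed shape, can be sketched as follows. The payload fields here are hypothetical, not the actual schema of OpenAI’s Agentic Commerce Protocol or Google’s UCP.

```python
# Hypothetical sketch of a checkout request an agent might send a merchant
# under a shared commerce protocol. Because the shape is agreed in advance,
# agents from different vendors (and different countries) can all talk to
# the same merchant endpoint. Field names are illustrative only.

checkout_request = {
    "agent_id": "assistant-demo",
    "items": [
        {"sku": "FLIGHT-DEL-GOI", "quantity": 2},  # two flight tickets
        {"sku": "HOTEL-GOA-3N", "quantity": 1},    # one hotel booking
    ],
    "payment_token": "tok_placeholder",  # issued by the user's wallet
    "confirmation_required": True,       # a human approves before purchase
}

def total_units(request: dict) -> int:
    """A merchant-side helper: count the units in the order."""
    return sum(item["quantity"] for item in request["items"])

print(total_units(checkout_request))  # → 3
```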
Thank you, Sihao. Great to be with you all here, and thank you to the government for having us. What an exciting week, frenetic and kinetic and chaotic, as I was saying earlier. So it’s just an honor to be here and to feel the energy and all the innovation and to meet a bunch of different builders across India. So, Wifredo Fernandez, folks call me Weefy for short. It’s a nickname I got in the 90s before wireless Internet was a thing, so my name became relevant later. But, yeah, this is certainly a topic that brings us all together, which is wonderful. You know, xAI is only two and a half years old. So we’re all in this together.
So the foundational work done by these peer companies has enabled us to accelerate our development. We’re better because of those, and we’re better because we can all build on top of those. And these standards and protocols that folks have built and that we sort of lay out and sort of agree to as an industry and as governments really make sure that not just us four compete, right? This enables a ton of innovation. So, you know, on the X side, and, you know, xAI and X sort of operate in tandem, it’s been really neat to see the AI community sort of build and test and discuss and debate in public.
So, like, when Moltbook was taking off, I think you likely found out about it on X. And so it’s just neat to see the ecosystem sort of converge in that discussion space. And just in thinking about this panel and thinking about Moltbook in particular, it’s like, well, do we regulate social media platforms that are agent-driven? It just brings all these really novel questions about how we regulate. But I think at the end of the day, we all agree that these open standards that are creating sort of this, call it a layer, call it a new ecosystem, call it a parallel Internet, are just really crucial for our development of the Internet writ large.
And so, yeah, excited about the panel and the discussion here today.
Thank you so much. Your name is formalized in the 802.11 protocol, which is what allows my phone to connect to the Internet in D.C. and here in India. So it’s extremely relevant. I’m going to use that. That’s awesome. So I think we’ve heard a little bit from our companies, who are engaged in a lot of dynamic activity, pushing out agent protocols of all kinds. And I think there’s a lot of industry excitement over agents right now. One of the big announcements that we’re here to make, which Director Kratsios also made earlier on the main stage, is the Agent Standards Initiative, and that is something that is led out of CAISI in NIST. So I’ll turn to Austin to introduce that.
Absolutely, and thanks, Sihao, and thank you to OSTP for convening this event and to my fellow panelists. I’ll start with a brief introduction of my organization. So I am the Acting Director for the U.S. Center for AI Standards and Innovation. Our background, we were founded about two years ago as the U.S. AI Safety Institute. In June of last year, Commerce Secretary Howard Lutnick refounded us as the Center for AI Standards and Innovation, which signaled a shift from sort of safety concepts to standards and innovation. And our remit is to be the front door to industry for working with the U.S. government. There are, I think, two aspects of our organization that bear note. The first is that we’re located within the Department of Commerce.
We are commerce-focused. We are industry-focused. We work with all of the companies on this panel. Some of them we have formal research or pre-deployment evaluation agreements with so that we can work with them on their models and the research questions they’re tackling. We also do take seriously our role trying to serve as a front door to the U.S. government for industry. We want to make sure that when industry is trying to navigate government that they’re speaking to the right people, that the people in government they’re speaking to have advisors who understand frontier AI and agentic AI, and also that the industry isn’t being overwhelmed by duplicative requests from different aspects of government.
You don’t want 10 different agencies asking the same company basically the same thing and creating unnecessary work, and so we try to act in sort of a coordinating role to make sure that industry is being heard and they’re navigating the U.S. government. The other aspect of our organization that bears note is we’re located within NIST, the National Institute of Standards and Technology, and NIST has an over-century-long track record of not regulating but helping industry, through consensus, develop voluntary standards and best practices. Acting Director of NIST, Craig Burkhart, he likes to talk about taillights, brake lights on the back of a car. I’m sure you all see them in India. It’s the same color red as it is in the U.S.
That’s because it was a NIST standard of exactly what color red is going to be on the taillights. But another important aspect of that anecdote is it wasn’t government that said this is the color red that you all must use. It was industry that came together, and with the help of NIST experts through a convening, they agreed on what the color should be. And so now when we look at what the future brings and where NIST can bring its industry-driven, consensus-based voluntary standards work into the new AI world, we’re looking to AI agent standards. So as Sihao said, we announced this week an AI agent standards initiative, which is looking at all facets of AI and AI agents.
There’s a couple aspects of it that have already been announced that we’re working on, and I’ll tick through those relatively quickly. The first is we have a request for information out in the field. It closes in March and we encourage you to engage with us and provide comments on AI agent security. AI agents obviously bring a whole host of new security challenges and we’d love to hear from you and your organizations about what challenges you are facing. Learning and identifying those challenges is a first step. Once we identify those challenges we can then take the next step of seeing where NIST’s approach of voluntary standards and best-practices documents can help address and mitigate those challenges.
Another aspect, our colleagues at NIST, the Information Technology Laboratory or ITL, they have a draft out for comment on AI agent identity and authorization. Again, encourage you to engage and interact with them. A third initiative that we recently announced is we’re going to hold sector-specific listening sessions, hopefully in April, in the sectors of education, healthcare, and finance, where we’re going to convene various members of industry and say to them, look, there’s this great technology out there called AI agents, have you heard of it, why aren’t you adopting it? What challenges are you facing? And we may not be able to solve those challenges, but maybe we can. And so one example I give, and I don’t know that it’s going to be something we find out, but for instance, in the education and healthcare sector, there’s business concerns and existing regulatory concerns about PII, personally identifiable information.
And perhaps what we’ll learn through these listening sessions is that hospitals or schools aren’t deploying AI because they can’t reliably evaluate how AI agents are handling the PII. And so that’s something that CAISI, my organization, could develop metrology, benchmarks, evaluations, and best-practices documents for, that could give confidence to those types of institutions that the agents are performing as desired. And maybe that’s a step that we could take through voluntary, consensus-driven best practices and standards that unlocks adoption. So we’re very focused on that. We’re looking forward to learning what those challenges are. I don’t know if the challenge I mentioned is actually a challenge facing industry. And that’s part of NIST’s approach, which is, in D.C., we only see a small slice of what’s going on in industry.
We only have a tiny window into the world. And so it comes from a place of humility. We don’t know what the challenges people are facing. The companies that are on this panel, they’re doing an incredible job coming up with protocols for some of the challenges that they’re facing. We talked about agent -to -agent for how agents communicate. We talked about MCP for how agents navigate databases. We talked about UCP and OpenAI’s commerce protocol for engaging in e -commerce. And I’m sure through these conversations, we’re going to identify other areas where open source protocols, where standards, best practices could help unlock adoption and implementation. And we’re really excited to work with both you and all your institutions and companies on stage to identify those opportunities and see how we can leverage NIST’s convening authority to help.
Thank you so much for that, Austin. To reemphasize: this standards initiative is really about making sure the products we build on top of these protocols can connect with each other, such that if there's a builder in India or a builder in Kenya building on top of our AI products, American companies can use them and buy from them as well. And similarly, if you want to switch to a different model, nothing is locked in. I think this ties back to a perspective that the U.S. government, and in particular the Trump administration, has about AI and AI products. We think back a lot on the history of the Internet and what it enabled for the world, but also what it enabled for America.
I think there was a perspective in the U.S. from a previous administration that technology had to be strictly locked down, and we think that's a mistake. We want to share the best AI technologies with the rest of the world, and that's also a leading message our delegation brings here to the India AI Summit. When we think back on the success of the Internet, what enabled it? There were actually a number of companies and countries that tried to create their own closed versions of the Internet, centralized and tied to particular nations and their own telecom networks, and they saw a little bit of success. A lot of them were state-subsidized, but none of them really scaled to the global level of the World Wide Web.
And the World Wide Web became so successful precisely because of the protocols the U.S. government had supported. The U.S. government made a very intentional effort to keep the Internet a decentralized system, funding the independent development of protocols like TCP/IP and HTTPS, the Internet protocol suite, which enabled the rest of the world to build on top of it. What you had is really a win-win situation: the entire world now benefits from access to the Internet and the ability to build applications and companies on top of it, which has driven so much prosperity for countries around the world, but also made Silicon Valley one of the wealthiest places in human history.
And it is because of this open commerce. That's what we really want to create for a world of AI in the future as well. Just to add a bit to what Austin said about the agent security piece: why is agent security so important to us? It's precisely because of adoption; you need security-driven adoption. If you look back again at the history of the Internet, the development of the Secure Sockets Layer, SSL, and eventually HTTPS, was what enabled e-commerce. So, again, we're going to work with industry to make sure there is this standards ecosystem, that there are interoperable interfaces everyone can build on and trust, to create the AI economy we're all looking forward to.
So I'll stop ranting and turn to the companies here. I'll ask you all: how do you see the future of AI standards and agent development? And how can AI agent standards reflect the same principles that enabled the open Internet, including interoperability and security?
I feel like I need to somehow fit an automobile analogy in here, since there has been a theme of them. Maybe I'll use my favorite one: right now, if you go down to the car dealership to buy a car, those cars come with a bunch of independently determined metrics you can use to understand the characteristics of the vehicle. It will tell you what the fuel economy is, how far you can drive on a gallon or liter of gas, and how it performs in various types of crash tests. These are all metrics produced in a standardized way, often by third parties, so you can have trust and confidence in them, and you can know what kind of car you want to buy.
Maybe I'm a single person who likes to drive fast, so I'm mostly worried about head-on collisions, because I'm going to be driving as fast as the car can possibly go and that's the biggest danger for me. Or maybe I have a family and I'm worried about what happens if we get hit from the side with kids in the back seats. A piece of what this standardization can get us to is that same kind of confidence, for customers, governments, and the public, in knowing what you're purchasing. I think another real benefit, and it's aligned with things Michael Kratsios, the OSTP director, talked about today and in an op-ed he had in the Financial Times, is around exporting the American AI stack.
There are a lot of concerns today about sovereignty, about having control over your systems and your data and so on. A way you can both use the best technology in the world, which sometimes comes from American companies, and also have confidence that there's resilience in the system, is having things built to open standards. That gives you the ability to decide to make changes. If today Anthropic is producing the best technology and tomorrow it's X or OpenAI or someone else, you can change. Or maybe an open-source model gets good enough at your use case and you want to switch over from a proprietary model to an open-source one.
So I think that's what this can enable; that's the opportunity we have ahead of us. And I think the vision of the AI security standards work that CAISI is going to be doing is this: if you're going to entrust these systems with access to your personal data or your financial data, or with the ability to do things in the real world on behalf of your enterprise, you need some assurance that there's security, that there's authentication, and that there's an ability to come back and check with the user before making certain significant decisions or taking certain significant actions. And you can test, evaluate, and report that information in a way that is intelligible to customers, so they know what they're buying, when to trust, and when not to trust.
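The "check back with the user before significant actions" safeguard described here can be sketched as a simple policy gate. This is an illustrative sketch only, assuming a risk list and callback interface of our own invention; the action names and policy are hypothetical and not drawn from any published standard.

```python
# Minimal human-in-the-loop gate for agent actions.
# SIGNIFICANT_ACTIONS and the confirm callback are hypothetical illustrations.

SIGNIFICANT_ACTIONS = {"transfer_funds", "delete_records", "sign_contract"}

def execute(action: str, params: dict, confirm) -> str:
    """Run low-risk actions directly; route significant ones through an
    explicit user-confirmation callback before executing anything."""
    if action in SIGNIFICANT_ACTIONS and not confirm(action, params):
        return "declined"
    return f"executed {action}"

# Usage: an auto-declining confirmer blocks the risky action.
result = execute("transfer_funds", {"amount": 500}, confirm=lambda a, p: False)
print(result)  # declined
```

The design choice is that the gate sits outside the model: even a fully autonomous agent cannot reach the significant action without the confirmation hook firing, which is the kind of testable property a standard could specify.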
Yeah, well said, and I endorse a lot of what Mike mentioned there, and Austin and Sihao as well. I do think there's a lot we can learn from the history of standards in various industries and apply to AI. Sihao mentioned some of the early Internet standards. I'm just about old enough to remember people in the early 90s saying they would never, ever put credit card information on the Internet; that would be absolutely insane. And it sort of was, when information was being shared in plain text in a totally unencrypted way. Then you get the secure layer that Sihao mentioned, HTTPS, and it completely unlocked the modern Internet economy as we know it.
There's the history of electrical standards as well. This was something that drove the adoption of electrical products in the late 19th and early 20th century. You had a scientific approach to standardizing units of measurement like ohms, volts, and amperes, which allowed power supplies to connect their energy to the grid. It also meant you could invent things like fuses, which could be rated to a certain amperage and would shut off if the current exceeded it. So I think we need to continue learning from history, and there are a few principles we should take forward as we do. Open standards, as we've been discussing, is the right way to go.
You need technically robust standards that are genuinely informed by an understanding of the technology and how it works, and we should prioritize interoperability as well. Maybe a final thought for this piece is also learning from standards that were not done well. There are many industries that have not quite gotten this right. A lot of us have traveled here from around the world having to bring adapters with us because our electrical products won't plug into the wall. It's really annoying, and it's also a massive hindrance on commerce, because if you're producing a computer or another electronic appliance, you have to ship a different plug for every country you're developing your product for.
So there are things to avoid as well that we need to be mindful of.
automobile industry or something, two humongous but separate industries, and how they’re going to have to come together to set up norms for how agentic systems work and how data is shared, I think government can probably play an important role in bringing together industries to establish those dialogues. But the industries certainly still need to be front and center in establishing what works for them because they are the practitioners and the experts on what their customers need, what their colleagues need. And so I think we’re all going to have to kind of navigate that world together and figure out what is the role for the research labs, how does government support, and then how does industry play a leadership role in both governing and building for itself industry -specific standards for the future of AI.
Yeah, I think this conversation has been a bit of a history lesson, and I appreciate that. Thank you. It made me think about how I used to get music when I was a kid, which some of the panelists may appreciate. There were these music catalogs that would come to your house. You'd select however many compact discs, CDs, you wanted, put cash or a check in an envelope, and send it away. Some weeks later, magically, CDs would appear on your doorstep. So when I think about instructing an agent to go acquire music on my behalf, I'd much rather have that; I don't know how we used to put so much trust in a system without standards, or in a process that could not be audited.
So I think sort of the guiding principles that have developed the Internet still apply. We want privacy -preserving technology. We want technology that allows us to audit. We want technology that considers authenticity. We want technology that considers means of consent. And to Michael’s point, I think ultimately agents serve the user and agents serve organizations. And so if we view it through that lens, it should guide us right. They don’t serve us as the model developers.
Great. Thank you all so much for that. That was a bit of a nerdy discussion on standards, a bit of a history lesson, and I love that. But we're also here at the India AI Impact Summit talking to a country of builders and to the developing world, home to some of the most dynamic AI markets anywhere. So it would also be great to hear from the panelists, including Austin, how you are engaging with the rest of the world on these standards, how your organizations are engaging with other countries on AI, and what some of the most exciting applications are that you've seen developed on top of your standards and products.
I guess I'll lead off. One of the main forums through which CAISI engages internationally is the International Network for Advanced AI Measurement, Evaluation, and Science. It's a bit of a mouthful of a name, but it's ten countries that have established AI security institutes or, as we have, a Center for AI Standards and Innovation, and we meet a couple of times a year. We also engage in informal technical and scientific exchanges and share best practices in measurement and evaluation science. In December, we met in San Diego on the sidelines of the NeurIPS conference, sat down to discuss open questions in measurement science and the challenges we're facing, and about a week ago published a blog post summarizing some of the areas of consensus and the open questions.
And the work we're doing there, I think, is very important, because when we talk about the evaluation of AI systems, particular capabilities, particular security vulnerabilities, and so on, it's important for us to have consensus on the methodologies.
The panel displayed a strong, multi‑speaker consensus that open, interoperable standards—paired with robust security and trust frameworks—are the cornerstone for a globally inclusive AI ecosystem. Government is seen as a facilitator rather than a regulator, and international collaboration, especially with emerging markets, is deemed essential.
High consensus: the convergence across industry and government representatives on open standards, security, and global collaboration suggests a solid foundation for coordinated policy and technical work, likely accelerating the development and adoption of AI agent standards worldwide.
The panel shows strong overall consensus on the need for open, interoperable AI standards, security, and global inclusion. Disagreements are limited to the preferred locus of leadership (government‑coordinated versus industry‑driven), the framing of international collaboration versus a U.S.–centric export model, and the methodological path to security assurance (policy‑driven drafts versus metric‑based testing).
Low to moderate disagreement; the differences are largely about implementation pathways rather than fundamental goals, suggesting that progress on AI standards can continue with coordinated effort, though alignment on governance mechanisms will be required.
The discussion pivoted around the central theme of open, interoperable standards for AI agents. Early technical explanations (MCP, agent‑to‑agent, commerce protocols) established a shared vocabulary, while analogies from traffic lights to automobile metrics translated complex ideas into everyday terms, making the need for standards feel universal. Historical references to the Internet’s open protocols and failed standards (electrical plugs) framed the conversation within a broader policy and economic context, prompting the government representatives to announce concrete initiatives (RFI, sector listening sessions). Together, these comments moved the panel from abstract enthusiasm to a concrete, collaborative roadmap, aligning industry innovation with governmental facilitation and highlighting both technical and regulatory dimensions of the emerging AI ecosystem.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.