U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence
20 Feb 2026 18:00h - 19:00h
Session at a glance
Summary
This panel discussion at the India AI Impact Summit focused on developing standards and protocols for AI agents to ensure interoperability and open collaboration in the global AI ecosystem. The panel included senior officials from the White House and Department of Commerce, along with representatives from four major US AI companies: Anthropic, Google DeepMind, OpenAI, and XAI.
The conversation centered around emerging AI agent protocols that enable different AI systems to work together seamlessly. Anthropic’s Model Context Protocol (MCP) was highlighted as a universal standard for connecting AI systems to existing tools and data sources, while Google DeepMind discussed their Agent-to-Agent Protocol for inter-agent communication. OpenAI and XAI representatives emphasized similar commerce-focused protocols that allow agents to interact with payment systems and websites on behalf of users.
Austin Marin from the Department of Commerce announced the new Agent Standards Initiative, led by the Center for AI Standards and Innovation, which aims to develop voluntary, consensus-based standards for AI agent security, identity, and authorization. The initiative includes sector-specific listening sessions in education, healthcare, and finance to identify adoption challenges and develop appropriate standards.
The panelists drew parallels between current AI standardization efforts and the historical success of internet protocols like TCP/IP and HTTPS, which enabled global commerce and innovation. They emphasized that open standards prevent vendor lock-in, promote competition, and allow builders worldwide to create interoperable applications. The discussion concluded with recognition that proper standardization, similar to early internet protocols, could unlock the full potential of AI agents while ensuring security and trust in the global AI economy.
Key points
Major Discussion Points:
– AI Agent Protocol Standards and Interoperability: The panelists discussed various emerging protocols like Anthropic’s Model Context Protocol (MCP), Google DeepMind’s Agent-to-Agent Protocol, and OpenAI’s Agentic Commerce Protocol. These standards enable AI systems to connect with data sources, communicate with each other, and conduct commerce while maintaining interoperability across different AI platforms.
– The U.S. Agent Standards Initiative: Austin Marin introduced the new initiative led by the Center for AI Standards and Innovation at NIST, which focuses on developing voluntary, consensus-based standards for AI agent security, identity, authorization, and sector-specific applications in education, healthcare, and finance.
– Open Standards Philosophy and Historical Parallels: The discussion emphasized learning from internet history, particularly how open protocols like TCP/IP and HTTPS enabled global commerce and innovation. Panelists drew analogies to automotive safety standards and electrical grid standardization to illustrate how proper standards can unlock adoption and trust.
– Security and Trust in AI Agent Deployment: A major focus was on developing security standards that would enable organizations to confidently deploy AI agents with access to sensitive data and real-world capabilities, similar to how SSL/HTTPS enabled e-commerce by providing security guarantees.
– International Collaboration and Global Market Access: The conversation covered how these standards initiatives engage with international partners and developing markets, particularly emphasizing the U.S. approach of sharing AI technology openly rather than restricting it, to create win-win scenarios for global AI adoption.
Overall Purpose:
The discussion aimed to explore how standardized protocols and frameworks can enable the safe, secure, and interoperable deployment of AI agents across different platforms and organizations, while fostering international collaboration and market growth in the AI ecosystem.
Overall Tone:
The tone was consistently collaborative, optimistic, and technically focused throughout the conversation. Panelists from competing AI companies demonstrated a cooperative spirit, emphasizing shared benefits rather than competitive advantages. The discussion maintained a forward-looking, solution-oriented approach with frequent historical analogies to illustrate successful precedents for technology standardization. There was notable enthusiasm about the potential for open standards to democratize AI access globally while ensuring security and interoperability.
Speakers
Speakers from the provided list:
– Sihao Huang – Senior Policy Advisor for AI and Emerging Tech at the White House
– Austin Marin – Acting Director for the U.S. Center for AI Standards and Innovation at the Department of Commerce
– Michael Sellitto – Head of Global Affairs at Anthropic
– Owen Lauder – Senior Director and Head of Frontier Policy and Public Affairs at Google DeepMind
– Michael Brown – Head of Growth and Operations for OpenAI for Countries
– Wifredo Fernandez – Director for Global Government Affairs at XAI
Additional speakers:
None – all speakers mentioned in the transcript are included in the provided speakers list.
Full session report
This panel discussion at the India AI Impact Summit brought together senior US government officials and representatives from four major American AI companies to address the development of standardised protocols for AI agents. The conversation, moderated by Sihao Huang from the White House Office of Science and Technology Policy, explored how emerging AI agent standards can enable global interoperability whilst fostering innovation and maintaining security.
The Current Landscape of AI Agent Protocols
The discussion began with an examination of the rapidly evolving ecosystem of AI agent standards. Michael Sellitto from Anthropic highlighted their Model Context Protocol (MCP), which has gained significant industry adoption as a universal open standard for connecting AI systems to existing tools and data sources. He explained that MCP functions intuitively, allowing AI models to understand and access enterprise knowledge bases and government data sources in the same way humans navigate organisational systems.
The significance of MCP extends beyond technical functionality to address vendor lock-in. As Sellitto noted, “Before MCP, you really had to build all these systems in a very bespoke manner, which meant that if you built them with one model or one vendor, you were kind of stuck because you’d have to rewrite everything if you wanted to switch.”
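The interoperability idea Sellitto describes can be made concrete with a small sketch: a "server" publishes a machine-readable catalogue of its tools, and any model or client can discover and call them by name, with no vendor-specific glue code. This is an illustrative toy in plain Python, not the real MCP SDK or wire format; the tool name, schema, and handler are invented for the example.

```python
import json

# Toy catalogue of tools a server might expose. In the real protocol the
# descriptions and schemas are what a connected model reads to decide
# which tool to call; the handler stays on the server side.
TOOLS = {
    "get_payroll_record": {
        "description": "Fetch a payroll record from the HR system by employee id.",
        "input_schema": {"employee_id": "string"},
        "handler": lambda args: {"employee_id": args["employee_id"], "salary_band": "B2"},
    },
}

def list_tools() -> str:
    """What an MCP-style discovery call returns: descriptions, not code."""
    return json.dumps(
        {name: {"description": t["description"], "input_schema": t["input_schema"]}
         for name, t in TOOLS.items()}
    )

def call_tool(name: str, args: dict) -> dict:
    """A client that has read list_tools() can invoke any tool by name."""
    return TOOLS[name]["handler"](args)

result = call_tool("get_payroll_record", {"employee_id": "e-42"})
```

Because the catalogue is self-describing, switching the model behind the client changes nothing on the server side, which is the anti-lock-in property the quote emphasises.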
Owen Lauder from Google DeepMind presented their Agent-to-Agent Protocol, which addresses the challenge of enabling different AI agents to communicate effectively with each other. The A2A protocol creates a standardised “digitised clipboard” that allows agents to share essential information including their identity, capabilities, objectives, data requirements, and security parameters. Lauder also briefly mentioned Google’s work on a Universal Commerce Protocol that enables agents to interact with websites and payment systems.
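The "digitised clipboard" Lauder describes can be sketched as a simple agent card: a structured record an agent hands to a peer before they collaborate. The field names below are illustrative guesses at the categories he lists (identity, capabilities, objective, data formats, security), not the actual A2A wire format.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AgentCard:
    """Hypothetical card one agent shares with another before collaborating."""
    agent_id: str                                               # who am I?
    capabilities: list = field(default_factory=list)            # what can I do?
    objective: str = ""                                         # what am I trying to do?
    accepted_data_formats: list = field(default_factory=list)   # how do I take data?
    security_requirements: list = field(default_factory=list)   # what auth do I require?

card = AgentCard(
    agent_id="travel-booker-01",
    capabilities=["search_flights", "book_hotel"],
    objective="assemble a family holiday itinerary",
    accepted_data_formats=["application/json"],
    security_requirements=["oauth2"],
)

# Serialised form: what would actually be exchanged between agents.
wire_form = json.dumps(asdict(card))
```

The point of standardising the card is that a receiving agent can parse it without knowing which vendor built the sender, replacing the bespoke per-pair integration code Lauder mentions.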
Michael Brown from OpenAI, substituting for George Osborne and noting his relative newness to the company, discussed their Agentic Commerce Protocol designed to enable agents to conduct transactions on behalf of users – from booking family holidays to making routine purchases.
Wifredo Fernandez from XAI, representing the newest company in the group at just two and a half years old, emphasised how these foundational protocols developed by peer companies have accelerated their own development. He highlighted the collaborative nature of the AI community, particularly noting how discussions often unfold publicly on the X platform, creating transparency in the innovation process. Fernandez also observed the “frenetic and kinetic and chaotic” nature of the week’s events.
The US Government’s Agent Standards Initiative
Austin Marin, Acting Director of the US Center for AI Standards and Innovation, introduced a major new government initiative aimed at supporting the development of AI agent standards. The Centre, recently refounded under Commerce Secretary Howard Lutnick, represents a strategic shift from purely safety-focused approaches to emphasising standards and innovation whilst maintaining security priorities.
The Agent Standards Initiative encompasses several key components. A Request for Information on AI agent security, closing in March, seeks to identify specific security challenges facing organisations considering AI agent adoption. The initiative also includes draft guidance from NIST’s Information Technology Laboratory on AI agent identity and authorisation, addressing fundamental questions about how AI agents authenticate themselves and gain appropriate access to systems and data.
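One way to picture the identity-and-authorisation question the NIST draft addresses: the deploying organisation mints a signed token naming an agent and the scopes it may use, and any service can verify both who minted it and what it permits. This is a toy scheme using Python's standard library for illustration only; it is not drawn from the NIST guidance itself.

```python
import base64
import hashlib
import hmac
import json

# Illustrative only: in practice the signing key would be a managed
# secret, never a hard-coded constant.
SECRET = b"org-signing-key"

def mint_token(agent_id: str, scopes: list) -> str:
    """Issue a token binding an agent identity to an explicit scope list."""
    payload = json.dumps({"agent_id": agent_id, "scopes": scopes}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + sig

def authorize(token: str, required_scope: str) -> bool:
    """Check the token's signature (identity) and scope (authorisation)."""
    payload_b64, sig = token.rsplit(".", 1)
    payload = base64.b64decode(payload_b64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # identity check failed: not minted with our key
    return required_scope in json.loads(payload)["scopes"]

token = mint_token("records-agent", ["read:schedule"])
```

The design choice worth noting is the separation of the two questions: a tampered token fails the identity check outright, while a genuine token can still be refused for a scope it was never granted.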
Perhaps most significantly, the Centre plans to conduct sector-specific listening sessions in April, focusing on education, healthcare, and finance – sectors that face unique regulatory and operational challenges in AI adoption. Marin provided a concrete example: “Perhaps what we’ll learn through these listening sessions is that hospitals or schools aren’t deploying AI because they can’t reliably evaluate how AI agents are handling the PII [personally identifiable information].”
The government’s approach deliberately follows NIST’s tradition of facilitating industry-driven, consensus-based voluntary standards rather than imposing regulatory mandates. Marin illustrated this philosophy with the example of automotive brake lights, which are universally red not because of government mandate, but because industry came together through NIST’s convening authority to agree on a standard.
Historical Precedents and Strategic Vision
A significant portion of the discussion focused on drawing lessons from the historical development of internet protocols. Sihao Huang provided a comprehensive articulation of this strategic vision, arguing that the success of the World Wide Web offers a blueprint for AI development.
Huang contrasted the current administration’s approach with previous policies, advocating for sharing the best AI technologies globally. He explained: “When we think back at the success of the Internet, what enabled that? There’s actually a number of companies and countries that tried to create their own closed version of the Internet that were centralised, that were tied to particular nations… but none of them really scaled to the global level of the World Wide Web.”
The World Wide Web succeeded because of open protocols like TCP/IP and HTTPS that the US government supported and made freely available. This created what Huang described as “a win-win situation where the entire world now benefits from sort of the access of the Internet… but also made Silicon Valley one of the most wealthy places in human history.”
Owen Lauder reinforced this perspective with personal recollections of early internet adoption, noting how people in the early 1990s considered putting credit card information online “absolutely insane” until the development of HTTPS enabled trusted e-commerce. He also pointed to electrical plug incompatibility across countries as an example of how poor standardisation creates ongoing friction.
Security, Trust, and Adoption Challenges
The discussion extensively addressed how security standards enable AI adoption. Michael Sellitto drew an analogy to automotive safety standards, explaining how standardised crash test ratings allow consumers to make informed decisions. Similarly, AI systems need standardised, independently verified metrics that allow customers and governments to understand when to trust these systems.
The security challenge is particularly acute for AI agents because they may be granted access to sensitive data or the ability to take real-world actions. As Sellitto noted, “If you’re going to entrust these systems with access to your personal data or your financial data or the ability to do things in the real world on behalf of your enterprise… you need to have some sense that there’s security, there’s authentication for things.”
Wifredo Fernandez provided a personal perspective, contrasting modern AI agents with historical systems that lacked transparency. He recalled ordering music through mail-order catalogues, noting: “When I think about instructing an agent to go download music or acquire music on my behalf, I’d much rather have that than I don’t know how we used to put so much trust in a system without standards or a process that could not be audited.”
International Collaboration and Industry Consensus
Austin Marin described the Centre’s participation in the International Network for Advanced AI Measurement, Evaluation, and Science, which brings together ten countries with established AI safety institutes. These regular meetings facilitate technical exchanges and consensus development on evaluation methodologies.
The discussion revealed extraordinary consensus among representatives of competing AI companies on the importance of open standards. Despite being direct market competitors, all four companies advocated for interoperable protocols that would prevent vendor lock-in and enable customers to switch between services.
Michael Brown from OpenAI explicitly framed this as a collaborative opportunity to “grow the pie” rather than compete in a zero-sum manner. He used the analogy of universal traffic signals to illustrate how shared standards enable builders worldwide to create applications that work securely everywhere.
Sihao Huang emphasised the global vision, noting that standards should ensure “if there’s a builder in India, a builder in Kenya, building on top of our AI products, American companies can use them as well.” He specifically mentioned how AlphaFold is being used by scientists in India as an example of beneficial global AI adoption.
Practical Applications and Future Challenges
The panellists provided concrete examples of how these standards enable practical applications. Commerce protocols allow AI agents to handle complex multi-step transactions, while agent-to-agent protocols enable sophisticated workflows where different specialised AI systems collaborate on complex tasks.
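What a commerce protocol standardises in such multi-step transactions is, at heart, an agreed sequence of states that both the agent and the merchant can verify. The states and transitions below are invented for illustration and are not taken from any of the protocols named above.

```python
# Hypothetical checkout flow: each side only accepts moves the shared
# protocol allows, so a misbehaving counterparty is caught immediately.
VALID_TRANSITIONS = {
    "cart": {"quote"},           # agent submits items, merchant prices them
    "quote": {"authorized"},     # user's payment credential authorises the total
    "authorized": {"confirmed"}, # merchant confirms fulfilment
}

def advance(state: str, next_state: str) -> str:
    """Move to the next state, rejecting any transition the protocol forbids."""
    if next_state not in VALID_TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {next_state}")
    return next_state

state = "cart"
for step in ("quote", "authorized", "confirmed"):
    state = advance(state, step)
```

Because every implementation enforces the same transition table, an agent built against one merchant works unchanged against any other that speaks the protocol.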
Despite the consensus, several challenges remain. The relationship between competing standards from different companies needs further development. The regulatory implications of agent-driven platforms require attention, as Fernandez noted when raising questions about regulating social media platforms that become agent-driven.
The sector-specific challenges in education, healthcare, and finance represent ongoing work areas, as these industries face unique regulatory requirements and operational constraints that may require tailored approaches whilst maintaining overall interoperability.
Conclusion
The panel discussion revealed remarkable alignment between US government policy and industry perspectives on AI agent standards development. The consensus around open, interoperable standards reflects both technical necessity and strategic vision – recognising that the success of American AI companies depends on enabling global innovation rather than creating closed systems.
The Agent Standards Initiative represents a significant commitment to facilitating this development through voluntary, consensus-based approaches. The ultimate vision articulated by the panellists is of an AI ecosystem that mirrors the internet’s success – open, interoperable, secure, and enabling innovation by builders worldwide whilst creating substantial value for the companies and countries that contribute foundational technologies and standards.
Sellitto concluded by expressing Anthropic’s appreciation for “Secretary Lutnick’s leadership” and their partnership with the Trump administration, while Lauder offered congratulations to the Indian hosts on the summit, reflecting the collaborative international spirit that characterised the entire discussion.
Session transcript
<strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable and open to the rest of the world to sort of build on that for your own businesses, for your own benefits. And so we have an amazing panel here today. So first of all, I’m Sihao Huang. I’m Senior Policy Advisor for AI and Emerging Tech at the White House. We’re joined by Austin Marin, who’s the Director for the Center for AI Standards and Innovation at the Department of Commerce, which really is the center for a lot of AI activity within the U.S. government, setting standards, driving innovation, measuring AI systems, improving metrology, and a lot of the smartest people in the U.S. government are within Austin’s organization. And then we have the four frontier AI companies from the United States. So we’re very happy to be joined by Michael Sellitto, who is the Head of Global Affairs at Anthropic. We have Owen Lauder at Google DeepMind, who’s the Senior Director and Head of Frontier Policy and Public Affairs. We have Michael Brown, who is Head of Growth and Operations for OpenAI for Countries. And, of course, we have Wifredo Fernandez, who is the Director for Global Government Affairs at XAI. So really an amazing lineup of U.S. industry. I said this in a previous panel, but American companies are investing $700 billion in infrastructure this year, just this year alone. And they probably won’t like it that I say this, but they’re competing very hard against each other to make AI models cheaper and more powerful for you guys to build on and to drive those applications. And so this is going to be a panel on how we make that happen, how we standardize interfaces with those AI systems. And so first I’m just going to ask a question to the AI companies that are sat here.
So over the past few months, I think, we’ve seen the emergence of an ecosystem of standards to support the deployment of AI agents. I think one of the most notable ones is Anthropic’s Model Context Protocol, which a lot of other companies are building off of right now and is sort of becoming the industry standard. Of course, you have Google DeepMind’s A2A Agent-to-Agent Protocol, OpenAI’s Agentic Commerce Protocol, and then XAI, of course, has been working on its highly secretive and famous Macrohard agent project. And so all the companies here are very much involved in sort of this agent discussion. And so maybe I’ll open it up to the companies here to tell us a little bit about what these agent protocols actually do and what they have unlocked for the builders who are sat here in the audience. What do they enable a software engineer or an AI engineer in India or other countries to create? <strong>Michael Sellitto:</strong> Okay. Well, first I want to start off by thanking Sihao and OSTP for organizing this panel and all the people who are here. Thank you. So it’s great to be here with Austin. I think Anthropic has had a really strong partnership with the Trump administration and appreciated the leadership of Secretary Lutnick in expanding and enhancing the Center for AI Standards and Innovation, which is really critical to making this technology work for everybody in a manner that’s safe, responsible, and open. MCP is a universal open standard for connecting AI systems to the tools and data sources that people already use. So imagine the knowledge bases inside of an enterprise. You can imagine government data sources. The Indian government, of course, is a real leader in, why am I forgetting the acronym right now, DPI, sorry, and just has massive amounts of data that are already digitized. And so MCP is a way that you can connect your AI models and agents to those data sets and also tools. And it really works in a simple,
intuitive way. You just need to give the model a rough description of what’s in the data source and what kind of tools it has or how it can access it. And then the model will intuitively know how it can use those data sources the same way that somebody in your enterprise or your organization would know: if I want to get payroll data, I need to go to this human resources system. If I want to get data about, you know, our revenue, I need to go into HEX or whatever your particular tools are. You know, before MCP, you really had to build all these systems in a very bespoke manner, which meant that if you built them with one model or one vendor, you were kind of stuck because you’d have to rewrite everything if you wanted to switch. MCP being this open source protocol that’s supported by all of the major AI companies means that you really have this degree of interoperability, which just enables the whole system to be much more open and competitive. We also recently built Skills, which are sets of instructions that teach agents how to perform specific tasks. The way that I describe this or think about it is, you know, imagine a new person joins your team. You spend a little bit of time teaching them, you know, how to do work the way that your organization does it. And then you expect them to just be able to follow those instructions all the time. So you kind of teach once and then they’re able to do that. It’s the same thing with Skills, which also is another open protocol where you can build these skills. And then if you decide that, you know, you want to switch from Anthropic to any of the other fine companies here on the panel, you can move those skills over. And so that interoperability and data portability is really a critical piece of making this an open and competitive environment. <strong>Owen Lauder:</strong> Amazing. Thank you, Mike. And, yeah, thank you to Sihao.
Thank you to OSTP and the U.S. government for the event and all the partnership. And a big thank you and congrats to our Indian hosts on a fantastic summit week. If you take a step back, it has been, I think, a really exciting week, a demonstration of how advanced AI is now being used around the world to do incredible things. It’s been really exciting, I think, seeing the way that people are using Gemini right across India, really exciting to see the way that everyone in India is using AI, from world-class scientists using AlphaFold to teachers and students using AI in the classroom. And I think with all of the progress that we’ve seen in the last few years, it’s easy to forget sometimes that this is still relatively new technology. We’re still in the relatively early innings of working out how to develop this technology and use it for good. And one of the things that we need to do, I think Sihao covered this very well in his opening gambit, is build out this ecosystem of technical standards to make sure that we can continue using this technology in the right ways. There’s a couple of ways that we’re thinking about these standards. One is technical standards, interoperable standards, and then also standards for testing these systems, making sure that we can use them in a reliable and secure way. We really want to contribute right across the piece here, so we’re excited. We have various standards that we have contributed to the ecosystem. Our agent-to-agent standard that Sihao mentioned: this is basically a standard for how agentic systems talk to each other. At the moment, it’s a little bit tricky for agents to converse with each other. You often have to write bits of bespoke code for an agent to talk to an agent, or they have to be running on the same walled-garden code base. So what we do with agent-to-agent is essentially have a sort of digitized clipboard of information that an agent will share with another agent. What’s my ID as an agent? What are my capabilities?
What am I trying to do? How do I take data? What are my security requirements? This is going to be absolutely fundamental to sort of greasing the wheels of the agentic economy. UCP, another standard that we’re working on, so we have our Universal Commerce Protocol at Google. This essentially does the same thing, but it’s for how agents talk to websites and payment systems. This is going to be transformative for business. It’s great to be able to partner with companies right around the world, whether it’s Walmart and Target in the U.S. or Flipkart and Infosys in India that we’re working with across these agents. Excited to see what everyone is going to do with the technology that we can enable with this. <strong>Michael Brown:</strong> Thanks for that. Hi, everyone. My name is Michael Brown. My name placard says George Osborne, who’s a colleague. He got tied up in another panel, so I’m here. George and I work extremely closely together, but he has a much nicer accent because he’s from the U.K. I’m doing my best here. You’re doing very well, I might say, very well. For me, this is a fun panel because it feels like a very collaborative and cooperative opportunity to grow the pie, and the companies that are on either side of us are extraordinary companies with extraordinary humans, and it’s fun to just work with them in some of these areas. If I were going to explain why we’re here in this particular panel to my kids, who are 9 and 11, I would sort of say, look, are there countries out there in the world where when you get to a stoplight, red means go? I don’t think so. I think mostly red means stop and green means go. I mean, if I’m wrong, I apologize. I’m not an expert.
But, you know, having sort of shared understanding in countries, rich and poor, advanced and still developing, around how things work, I think, grows the pie because it allows builders to build in a way that everyone can kind of know that what they’re building is going to be both secure and is going to be accessible and hopefully enjoyable or useful to people anywhere in the world. And I think each of the companies up here is contributing something great to that. You know, I joined OpenAI relatively recently, but MCP to me is something where I just knew, that’s really important. And while Anthropic introduced it, hopefully Anthropic would agree with this, now it’s just like the thing, right? And I think that’s terrific that it’s the thing. You know, Owen also mentioned commerce. I don’t know if these standards compete or if it’s cooperative, but at OpenAI, we have a commerce protocol as well for the same thing, because there’s a world where these agents are going to be out shopping for us, which is kind of fun, right? So, you know, if the agent knows that you’re planning on taking a family vacation and it knows that you want to visit Goa, and the agent can go actually secure your travel flights and your hotel, these commerce protocols can do that. So agents of different companies, potentially in different countries, can all partner and work well together because they understand how they’re supposed to be looking for shared information and how that information should be shared. There’s kind of a shared understanding there. And so I think all of us are working to build these protocols to grow the pie, to create more democratization, more commerce, more benefit for everyone by having these common protocols in place. <strong>Wifredo Fernandez:</strong> Thank you, Sihao. Great to be with you all here, and thank you to the government for having us. What an exciting week, frenetic and kinetic and chaotic, as I was saying earlier.
So it’s just an honor to be here and to feel the energy and all the innovation and to meet a bunch of different builders across India. So, Wifredo Fernandez, folks call me Weefy for short. It’s a nickname I got in the 90s before wireless Internet was a thing, so my name became relevant later. But, yeah, this is certainly a topic that brings us all together, which is wonderful. You know, XAI is only two and a half years old, so we’re all in this together. The foundational work done by these peer companies has enabled us to accelerate our development. We’re better because of those, and we’re better because we can all build on top of those. And these standards and protocols that folks have built and that we sort of lay out and sort of agree to as an industry and as governments really make sure that not just us four compete, right? This enables a ton of innovation. So, you know, on the X side, and, you know, XAI and X sort of operate in tandem, it’s been really neat to see the AI community sort of build and test and discuss and debate in public. So, like, when Moltbook was taking off, I think you likely found out about it on X. And so it’s just neat to see the ecosystem sort of converge in that discussion space. And just in thinking about this panel and thinking about Moltbook in particular, it’s like, well, do we regulate social media platforms that are agent-driven? It just brings all these really novel questions about how we regulate. But I think at the end of the day, we all agree that these open standards that are creating sort of this, call it a layer, call it a new ecosystem, call it a parallel Internet, are just really crucial for our development of the Internet writ large. And so, yeah, excited about the panel and the discussion here today. <strong>Sihao Huang:</strong> Thank you so much.
Your name is formalized in the 802.11 protocol, which is what allows my phone to connect to the Internet in D.C. and here in India. So it’s extremely relevant. I’m going to use that. That’s awesome. So I think we’ve heard a little bit from our companies, who are engaged in a lot of dynamic activity, pushing out agent protocols of all kinds. And I think there’s a lot of industry excitement over agents right now. One of the big announcements that we’re here to make, which Director Kratsios also made earlier on the main stage, is the Agent Standards Initiative, and that is something that is led out of CAISI in NIST. So I’ll turn to Austin to introduce that. <strong>Austin Marin:</strong> Absolutely, and thanks, Sihao, and thank you to OSTP for convening this event and to my fellow panelists. I’ll start with a brief introduction of my organization. So I am the Acting Director for the U.S. Center for AI Standards and Innovation. Our background: we were founded about two years ago as the U.S. AI Safety Institute. In June of last year, Commerce Secretary Howard Lutnick refounded us as the Center for AI Standards and Innovation, which signaled a shift from sort of safety concepts to standards and innovation. And our remit is to be the front door for industry to working with the U.S. government. There are, I think, two aspects of our organization that bear note. First, we’re located within the Department of Commerce. We are commerce-focused. We are industry-focused. We work with all of the companies on this panel. Some of them we have formal research or pre-deployment evaluation agreements with, so that we can work with them on their models and the research questions they’re tackling. We also do take seriously our role trying to serve as a front door to the U.S. government for industry.
We want to make sure that when industry is trying to navigate government, they’re speaking to the right people, that the people in government they’re speaking to have advisors who understand frontier AI and agentic AI, and also that industry isn’t being overwhelmed by duplicative requests from different aspects of government. You don’t want 10 different agencies asking the same company basically the same thing and creating unnecessary work, and so we try to act in sort of a coordinating role to make sure that industry is being heard and they’re navigating the U.S. government. The other aspect of our organization that bears note is we’re located within NIST, the National Institute of Standards and Technology, and NIST has an over-century-long track record of not regulating but helping industry, through consensus, develop voluntary standards and best practices. The Acting Director of NIST, Craig Burkhart, likes to talk about taillights, brake lights on the back of a car. I’m sure you all see them in India. It’s the same color red as it is in the U.S. That’s because it was a NIST standard of exactly what color red is going to be on the taillights. But another important aspect of that anecdote is it wasn’t government that said this is the color red that you all must use. It was industry that came together, and with the help of NIST experts through a convening, they agreed on what the color should be. And so now when we look at what the future brings and where NIST can bring its industry-driven, consensus-based voluntary standards work into the new AI world, we’re looking to AI agent standards. So as Sihao said, we announced this week an AI agent standards initiative, which is looking at all facets of AI and AI agents. There are a couple aspects of it that have already been announced that we’re working on, and I’ll tick through those relatively quickly. The first is we have a request for information.
The request for information is out in the field now. It closes in March, and we encourage you to engage with us and provide comments on AI agent security. AI agents obviously bring a whole host of new security challenges, and we’d love to hear from you and your organizations about what challenges you are facing. Learning and identifying those challenges is a first step. Once we identify them, we can then take the next step of seeing where NIST’s approach of voluntary standards and best-practices documents can help address and mitigate those challenges. Another aspect: our colleagues at NIST, in the Information Technology Laboratory, or ITL, have a draft out for comment on AI agent identity and authorization. Again, we encourage you to engage and interact with them. A third initiative that we recently announced is that we’re going to hold sector-specific listening sessions, hopefully in April, in the sectors of education, healthcare, and finance, where we’re going to convene various members of industry and say to them: look, there’s this great technology out there called AI agents. Have you heard of it? Why aren’t you adopting it? What challenges are you facing? We may not be able to solve those challenges, but maybe we can. One example I give, and I don’t know that it’s going to be something we find out, but for instance, in the education and healthcare sectors there are business concerns and existing regulatory concerns about PII, personally identifiable information. Perhaps what we’ll learn through these listening sessions is that hospitals or schools aren’t deploying AI because they can’t reliably evaluate how AI agents are handling PII. That’s something that CAISI, my organization, could address by developing metrology, benchmarks, evaluations, and best-practices documents that could give confidence to those types of institutions that the agents are performing as desired.
And maybe that’s a step that we could take, through voluntary, consensus-driven best practices and standards, that unlocks adoption. So we’re very focused on that. We’re looking forward to learning what those challenges are. I don’t know if the challenge I mentioned is actually a challenge facing industry, and that’s part of NIST’s approach. In D.C., we only see a small slice of what’s going on in industry. We only have a tiny window into the world. And so it comes from a place of humility: we don’t know the challenges people are facing. The companies on this panel are doing an incredible job coming up with protocols for some of the challenges that they’re facing. We talked about Agent-to-Agent for how agents communicate. We talked about MCP for how agents navigate databases. We talked about UCP and OpenAI’s commerce protocol for engaging in e-commerce. And I’m sure through these conversations we’re going to identify other areas where open source protocols, standards, and best practices could help unlock adoption and implementation. We’re really excited to work with you and all the institutions and companies on stage to identify those opportunities and see how we can leverage NIST’s convening authority to help. <strong>Sihao Huang:</strong> Thank you so much for that, Austin. To reemphasize, this standards initiative is really about making sure that the products we build, and what is built on top of them, are able to connect with each other, such that if there’s a builder in India or a builder in Kenya building on top of our AI products, American companies can use them as well, and American companies can buy from them as well. And similarly, if you want to switch to a different model, nothing is locked in. I think this really ties back to a perspective that we as the U.S. government, and in particular the Trump administration, have about AI and AI products.
We think back a lot on the history of the Internet and what that enabled for the world, but also what that enabled for America. There was a perspective in the U.S. from a previous administration that technology had to be strictly locked down, and we think that’s a mistake. We want to share the best AI technologies with the rest of the world, and that’s also a leading message that our delegation has here at the India AI Summit. When we think back on the success of the Internet, what enabled it? There were actually a number of companies and countries that tried to create their own closed versions of the Internet that were centralized, that were tied to particular nations and their own telecom networks, and they saw a little bit of success. A lot of them were state-subsidized, but none of them really scaled to the global level of the World Wide Web. And the World Wide Web became so successful precisely because of the protocols that the U.S. government had supported. The U.S. government made a very intentional effort to make sure that the Internet was a decentralized system, supporting protocols like TCP/IP and HTTPS, the Internet suite whose independent development was actually funded by the U.S. government back then, which enabled the rest of the world to build on top of it. And what you had is really this win-win situation where the entire world now benefits from access to the Internet and the ability to build applications and companies on top of it, which has driven so much prosperity for countries around the world, but also made Silicon Valley one of the wealthiest places in human history. And it is because of this open commerce. That’s what we really want to create with the world of AI in the future as well. Just to add a bit onto what Austin said about the agent security piece: why is agent security so important to us? It’s precisely because of adoption.
You need security to drive adoption. If you look back again at the history of the Internet, the development of the Secure Sockets Layer, SSL, and then eventually HTTPS, was what enabled e-commerce. And so, again, I think it’s a lot about the efforts that we’re going to be making, working with industry together, to make sure that there is this standards ecosystem, that there are these interoperable interfaces that everyone can build on and trust, to create the AI economy that we’re all looking forward to. So I’ll stop ranting and turn to the companies here. I’ll ask you all: how do you see the future of AI standards and agent development? And how can AI agent standards reflect the same principles that enabled the open Internet, including interoperability and security? <strong>Michael Sellitto:</strong> I feel like I need to somehow fit an automobile analogy in here, since there’s been a theme. Maybe I’ll use my favorite one, which is that right now if you go to buy a car and you go down to the car dealership, those cars are going to have a bunch of metrics that you can use, that have been independently determined, to understand the characteristics of that vehicle. It will tell you what the fuel economy is, how far you can drive on a gallon or liter of gas, and how it performs in various types of crash tests. These are all metrics that are produced in a standardized way, oftentimes by third parties, so you can have trust and confidence in them, and you can know what kind of car you want to buy. Maybe I’m a single person and I like to drive fast, so I’m just worried about head-on collisions, because I’m going to be driving as fast as the car can possibly go, and that’s the biggest danger for me. Or maybe I have a family and I’m worried about, you know, what happens if we get hit from the side when I’ve got kids in the back seats.
You know, a piece of what this standardization can help us get to is having that same kind of confidence, for customers and governments and the public, in knowing what you’re purchasing. I think another real benefit, really aligned with some things that Michael Kratsios, the OSTP director, talked about today and also in an op-ed that he had in the Financial Times about exporting the American AI stack, is this: there are a lot of concerns today about sovereignty, about having control over your systems and your data and so on. A way that you can both use the best technology in the world, which sometimes comes from American companies, and also have confidence that there’s resilience in the system, is to have things built to open standards, right? And that gives you the ability to decide to make changes. If today Anthropic is producing the best technology and tomorrow it’s X or it’s OpenAI or someone else, you can change. Or maybe an open source model gets good enough at the use case that you want, and you want to switch over from a proprietary model to an open source model. So I think that’s what this can enable. I think that’s the opportunity that we have ahead of us. And I think that the vision of the AI security standards work that CAISI is going to be doing is: if you’re going to entrust these systems with access to your personal data or your financial data, or the ability to do things in the real world on behalf of your enterprise or what have you, you need to have some sense that there’s security, that there’s authentication, and that there’s an ability to come back and check with the user before making certain significant decisions or taking certain significant actions.
And you can test and evaluate and report that information in a way that is intelligible to the customer, so that they know what they’re buying, they know when to trust, and they know when not to trust. <strong>Owen Lauder:</strong> Yeah, well said, and I endorse a lot of what Mike mentioned there, and Austin and Sihao as well. I do think there’s a lot you can learn from the history of standards in various different industries that we can apply to AI. Sihao mentioned some of the early Internet standards. I mean, I’m just about old enough to remember people in the early 90s talking about how they would never, ever, ever put credit card information on the Internet. That would be absolutely insane. And it sort of was, when you had information being shared in plain text in a totally unencrypted way. Then you have the secure layer that Sihao mentioned, HTTPS, and it has completely unlocked the modern Internet economy as we know it. There’s the history of electrical standards as well. This was something that drove the adoption of electrical products in the late 19th and early 20th century. You had a scientific approach to standardizing units of measurement like ohms and volts and amperes, which allowed power suppliers to connect their energy to the grid. It also meant that you could invent things like fuses, which could be rated to a certain amperage, so that if you had an electrical current above that, the circuit would shut itself off. So I think we need to continue learning from history, and I think there are a few principles that we should take forward as we do that. Open standards, as we’ve been discussing, is the right way to go. You need technically robust standards that are really informed by an understanding of the technology and how it works, and we should be looking to prioritize interoperability as well. Maybe a final thought for this piece is also learning from standards that are not done well.
There are many industries that have not quite gotten this right. A lot of us have traveled here from around the world having to bring adapters with us because our electrical products won’t plug into the wall. It’s really, really annoying. It’s actually also a massive hindrance on commerce, because it means that if you’re producing a computer or another electronic appliance, you have to have a different plug for every country you’re developing your product for. So there are things to avoid as well that we need to be mindful of. <strong>Michael Brown:</strong> … the automobile industry or something: two humongous but separate industries that are going to have to come together to set up norms for how agentic systems work and how data is shared. I think government can probably play an important role in bringing industries together to establish those dialogues, but the industries certainly still need to be front and center in establishing what works for them, because they are the practitioners and the experts on what their customers and their colleagues need. And so I think we’re all going to have to navigate that world together and figure out what the role is for the research labs, how government supports, and how industry plays a leadership role in both governing and building for itself industry-specific standards for the future of AI. <strong>Wifredo Fernandez:</strong> Yeah, I think this conversation has been a bit of a history lesson. I appreciate that. Thank you. And it made me think about how I used to get music when I was a kid, which some of the panelists may appreciate. You know, there were these music catalogs that would come to your house. You’d select however many compact discs, CDs, you wanted. You’d put cash or a check in an envelope and send it away. And some weeks later, magically, some CDs would appear on your doorstep.
So when I think about instructing an agent to go download or acquire music on my behalf, I’d much rather have that. I don’t know how we used to put so much trust in a system without standards, or in a process that could not be audited. So I think the guiding principles that have developed the Internet still apply. We want privacy-preserving technology. We want technology that allows us to audit. We want technology that considers authenticity. We want technology that considers means of consent. And to Michael’s point, I think ultimately agents serve the user and agents serve organizations. If we view it through that lens, it should guide us right. They don’t serve us as the model developers. <strong>Sihao Huang:</strong> Great. Thank you all so much for that. That was a bit of a nerdy discussion on standards, a bit of a history lesson. I love that. But we’re also here right now at the India AI Impact Summit talking to a country of builders, talking to the developing world, home to some of the most dynamic AI markets in the world. So I think it would also be amazing to hear from the panelists here, including Austin, how you all are engaging with the rest of the world on these standards, how your organizations are engaging with other countries on AI, and what some of the most exciting applications are that you’ve seen developed on top of your standards and products. <strong>Austin Marin:</strong> I guess I’ll lead off. One of the main forums through which CAISI engages internationally is the International Network for Advanced AI Measurement, Evaluation, and Science. It’s a bit of a mouthful of a name, but it’s ten countries that have established AI security institutes or, like us, a Center for AI Standards and Innovation, and we meet a couple of times a year. We also engage in informal technical and scientific exchanges, and we share best practices in measurement and evaluation science.
In December, we met in San Diego on the sidelines of the NeurIPS conference, and we sat down and discussed open questions in measurement science and the challenges that we’re facing, and we published a blog post, I think about a week ago, that summarizes some of the areas of consensus and the open questions. The work we’re doing there is very important, because when we talk about the evaluation of AI systems for particular capabilities, particular security vulnerabilities, and so on, it’s important for us to have consensus on the methodologies.
Michael Sellitto
Speech speed
183 words per minute
Speech length
1123 words
Speech time
366 seconds
Model Context Protocol (MCP) enables universal connection between AI systems and existing data sources/tools with interoperability across vendors
Explanation
MCP is a universal open standard that allows AI systems to connect to existing knowledge bases, government data sources, and tools in enterprises. It enables models to intuitively understand how to access different data sources, similar to how humans know where to find specific information within an organization.
Evidence
Examples include enterprise knowledge bases, government data sources, and India’s leadership in DPI (Digital Public Infrastructure). Before MCP, systems had to be built in a bespoke manner, locking users into specific vendors.
Major discussion point
AI Agent Protocols and Standards Development
Topics
Artificial intelligence | The enabling environment for digital development | Data governance
Agreed with
– Owen Lauder
– Michael Brown
– Wifredo Fernandez
– Austin Marin
– Sihao Huang
Agreed on
Open standards and interoperability are essential for AI agent development
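The pattern MCP standardizes, one uniform discover-and-call interface in front of every data source, so any compliant client can use any compliant server, can be sketched in miniature. This is a toy illustration only: it does not use the real MCP SDK or wire format, and all names (`ToolServer`, `list_tools`, `call_tool`) are hypothetical stand-ins for the protocol's actual discovery and invocation messages.

```python
import json

class ToolServer:
    """Toy server exposing tools through one shared discover/call shape."""

    def __init__(self, name):
        self.name = name
        self._tools = {}

    def tool(self, description):
        # Register a plain function as a named, described tool.
        def register(fn):
            self._tools[fn.__name__] = {"fn": fn, "description": description}
            return fn
        return register

    def list_tools(self):
        # Discovery: any client can ask what this server offers.
        return [{"name": n, "description": t["description"]}
                for n, t in sorted(self._tools.items())]

    def call_tool(self, name, arguments):
        # Invocation: the same call shape regardless of server or tool.
        return self._tools[name]["fn"](**arguments)

server = ToolServer("hr-knowledge-base")

@server.tool("Look up an employee's department")
def get_department(employee: str) -> str:
    directory = {"asha": "Finance", "ravi": "Engineering"}
    return directory.get(employee.lower(), "unknown")

print(json.dumps(server.list_tools()))
print(server.call_tool("get_department", {"employee": "Asha"}))
```

Because discovery and invocation have one shape, a client written once can talk to an HR server, a ticketing server, or a government data source without bespoke integration code, which is the anti-lock-in point made above.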
Standardized metrics and third-party evaluations (like automotive crash tests) build customer confidence in AI systems
Explanation
Just as cars have standardized fuel economy ratings and crash test results that help consumers make informed decisions, AI systems need similar standardized metrics. These metrics should be independently determined and done by third parties to build trust and confidence.
Evidence
Automotive industry analogy where cars have standardized metrics for fuel economy and crash test performance that help consumers choose based on their specific needs (single person vs. family with children).
Major discussion point
Interoperability and Open Internet Principles
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Monitoring and measurement
Agreed with
– Owen Lauder
– Wifredo Fernandez
– Austin Marin
Agreed on
Security and trust are fundamental requirements for AI agent adoption
Open standards provide sovereignty and resilience by enabling switching between vendors and technologies
Explanation
Open standards address sovereignty concerns by allowing users to maintain control over their systems and data while using the best available technology. This approach enables flexibility to switch between different AI providers or from proprietary to open source models as needs change.
Evidence
Reference to Michael Kratsios’s op-ed in Financial Times about exporting the American AI stack, and the ability to switch from Anthropic to other companies or to open source models when they become suitable for specific use cases.
Major discussion point
Interoperability and Open Internet Principles
Topics
Artificial intelligence | The enabling environment for digital development | Internet governance
Agreed with
– Owen Lauder
– Michael Brown
– Austin Marin
Agreed on
Industry-led, consensus-based approach to standards development is preferred over government regulation
Owen Lauder
Speech speed
212 words per minute
Speech length
892 words
Speech time
251 seconds
Agent-to-Agent Protocol creates standardized communication between agentic systems through shared information formats
Explanation
The protocol addresses the current difficulty of agents communicating with each other by creating a standardized ‘digitized clipboard’ of information. This includes agent ID, capabilities, objectives, data handling methods, and security requirements, eliminating the need for bespoke code or walled garden systems.
Evidence
Currently agents need bespoke code to communicate or must run on the same code base. The protocol will be fundamental to ‘greasing the wheels of the agentic economy.’
Major discussion point
AI Agent Protocols and Standards Development
Topics
Artificial intelligence | The enabling environment for digital development
Agreed with
– Michael Sellitto
– Michael Brown
– Wifredo Fernandez
– Austin Marin
– Sihao Huang
Agreed on
Open standards and interoperability are essential for AI agent development
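The "digitized clipboard" described above can be sketched as a small self-describing card that one agent reads before deciding whether to interact with another. The field names below are illustrative assumptions in the spirit of the description (ID, capabilities, objectives, data handling, security), not the published Agent-to-Agent schema.

```python
import json

# Hypothetical agent card: everything a peer needs to know up front.
agent_card = {
    "id": "travel-planner-001",
    "capabilities": ["flight_search", "hotel_booking"],
    "objectives": "Book end-to-end trips on behalf of an authenticated user",
    "data_handling": {"stores_pii": False, "retention_days": 0},
    "security": {"auth": "oauth2", "transport": "https"},
}

def can_collaborate(card: dict, required_capability: str,
                    require_https: bool = True) -> bool:
    """Decide from the card alone whether to talk to another agent."""
    if required_capability not in card.get("capabilities", []):
        return False
    if require_https and card.get("security", {}).get("transport") != "https":
        return False
    return True

print(json.dumps(agent_card, indent=2))
print(can_collaborate(agent_card, "hotel_booking"))
```

The point of the standard shape is that this gatekeeping logic is written once and works against any agent's card, rather than requiring bespoke code or a shared code base for every pairing.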
Historical examples like HTTPS enabling e-commerce and electrical standards driving adoption show the power of good standards
Explanation
Historical precedents demonstrate how proper standards unlock entire industries and economies. HTTPS transformed internet commerce from a situation where people would never put credit card information online to enabling the modern internet economy. Similarly, electrical standards enabled widespread adoption of electrical products.
Evidence
People in the early 1990s considered putting credit card information on the internet ‘absolutely insane’ until HTTPS provided secure encryption. Electrical standards in the late 20th/early 21st century included scientific standardization of units like ohms, volts, and amperes, enabling power grid connections and safety devices like fuses.
Major discussion point
Interoperability and Open Internet Principles
Topics
The digital economy | Building confidence and security in the use of ICTs | The enabling environment for digital development
Agreed with
– Michael Sellitto
– Wifredo Fernandez
– Austin Marin
Agreed on
Security and trust are fundamental requirements for AI agent adoption
India represents a dynamic AI market with world-class scientists using AlphaFold and widespread Gemini adoption
Explanation
India demonstrates significant AI adoption across multiple sectors, from advanced scientific research to educational applications. This represents the kind of global engagement and dynamic market development that open standards can enable and support.
Evidence
World-class scientists in India are using AlphaFold, and there is widespread adoption of Gemini across India, including teachers and students using AI in classrooms.
Major discussion point
International Collaboration and Global Engagement
Topics
Artificial intelligence | Social and economic development | Capacity development
Michael Brown
Speech speed
163 words per minute
Speech length
631 words
Speech time
232 seconds
Commerce protocols enable agents to interact with websites and payment systems for automated transactions
Explanation
Commerce protocols allow AI agents to perform complex tasks like booking travel arrangements, including securing flights and hotels for family vacations. These protocols enable agents from different companies and potentially different countries to work together seamlessly.
Evidence
Example of an agent knowing about a planned family vacation to Goa and being able to automatically secure travel flights and hotel bookings through standardized commerce protocols.
Major discussion point
AI Agent Protocols and Standards Development
Topics
Artificial intelligence | The digital economy
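The booking example above can be sketched as a structured order plus a user-confirmation step before any payment executes. This is a hedged illustration of the general agent-commerce flow, not any published commerce-protocol schema; the `Order` fields and the spending-cap policy are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Order:
    """Hypothetical structured order an agent assembles for the user."""
    merchant: str
    items: list
    total_usd: float
    confirmed: bool = False

def request_user_confirmation(order: Order, approve) -> Order:
    # Significant actions (payments) go back to the user, or the user's
    # stated policy, before execution.
    order.confirmed = bool(approve(order))
    return order

order = Order(
    merchant="goa-travel.example",
    items=["flight DEL-GOI", "hotel, 3 nights"],
    total_usd=640.0,
)

# Policy: auto-approve anything under a $1000 spending cap.
approved = request_user_confirmation(order, lambda o: o.total_usd <= 1000)
print(approved.confirmed)
```

Keeping the order structured and the confirmation step explicit is what lets agents from different companies participate in the same transaction, and it is the same check-with-the-user safeguard raised elsewhere on the panel.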
Open standards create shared understanding that enables global builders to create secure and accessible applications
Explanation
Using the analogy of traffic lights where red universally means stop and green means go, open standards create shared understanding across countries. This enables builders worldwide to create applications that work securely and accessibly anywhere, growing opportunities for everyone.
Evidence
Traffic light analogy where red means stop and green means go universally across countries. This shared understanding allows builders to create applications that work globally and are both secure and accessible.
Major discussion point
AI Agent Protocols and Standards Development
Topics
Artificial intelligence | The enabling environment for digital development | Building confidence and security in the use of ICTs
Agreed with
– Michael Sellitto
– Owen Lauder
– Austin Marin
Agreed on
Industry-led, consensus-based approach to standards development is preferred over government regulation
Government can facilitate dialogue between separate industries that need to collaborate on agentic systems
Explanation
When large, separate industries need to work together to establish norms for agentic systems and data sharing, government can play an important convening role. However, industries must remain central to establishing standards since they are the practitioners and experts on customer needs.
Evidence
Reference to ‘two humongous but separate industries’ that need to collaborate on agentic systems and data sharing norms.
Major discussion point
International Collaboration and Global Engagement
Topics
Artificial intelligence | The enabling environment for digital development
Agreed with
– Michael Sellitto
– Owen Lauder
– Austin Marin
Agreed on
Industry-led, consensus-based approach to standards development is preferred over government regulation
Wifredo Fernandez
Speech speed
156 words per minute
Speech length
603 words
Speech time
231 seconds
Open source protocols built by peer companies accelerate development and enable broader innovation beyond just the four major companies
Explanation
As a relatively new company (2.5 years old), XAI has benefited from foundational work done by peer companies, which has accelerated their development. These open standards and protocols enable innovation across the entire industry, not just among the four major AI companies represented on the panel.
Evidence
XAI is only 2.5 years old and has been able to build on foundational work from peer companies. The AI community builds, tests, and discusses developments publicly on platforms like X, as seen with the Moltbook phenomenon.
Major discussion point
AI Agent Protocols and Standards Development
Topics
Artificial intelligence | The enabling environment for digital development
Agreed with
– Michael Sellitto
– Owen Lauder
– Michael Brown
– Austin Marin
– Sihao Huang
Agreed on
Open standards and interoperability are essential for AI agent development
Privacy-preserving, auditable technology with authenticity and consent considerations should guide agent development
Explanation
Drawing from the principles that developed the internet, AI agent development should prioritize privacy preservation, auditability, authenticity, and consent mechanisms. Agents should ultimately serve users and organizations rather than the model developers themselves.
Evidence
Personal anecdote about ordering music CDs by mail with cash/check in an envelope, highlighting how much trust was placed in systems without standards or audit capabilities, contrasting with the need for better systems for AI agents.
Major discussion point
Interoperability and Open Internet Principles
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs
Agreed with
– Michael Sellitto
– Owen Lauder
– Austin Marin
Agreed on
Security and trust are fundamental requirements for AI agent adoption
Austin Marin
Speech speed
191 words per minute
Speech length
1263 words
Speech time
395 seconds
Agent Standards Initiative focuses on industry-driven, consensus-based voluntary standards rather than regulation
Explanation
The initiative follows NIST’s century-long approach of helping industry develop voluntary standards through consensus rather than government mandates. This approach has proven successful in areas like automotive brake light color standards, where industry came together to agree on specifications rather than having government impose requirements.
Evidence
NIST’s brake light color standard example where industry agreed on the specific shade of red rather than government mandating it. NIST has over a century of experience in consensus-based voluntary standards development.
Major discussion point
Government AI Standards Initiative
Topics
Artificial intelligence | The enabling environment for digital development
Agreed with
– Michael Sellitto
– Owen Lauder
– Michael Brown
Agreed on
Industry-led, consensus-based approach to standards development is preferred over government regulation
Request for information on AI agent security challenges closes in March to identify areas where standards can help
Explanation
NIST is actively seeking input from industry and organizations about AI agent security challenges they are facing. This information gathering is the first step in identifying where voluntary standards and best practices documents can help address and mitigate these challenges.
Evidence
Formal request for information (RFI) process with March deadline for comments on AI agent security challenges.
Major discussion point
Government AI Standards Initiative
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Agreed with
– Michael Sellitto
– Owen Lauder
– Wifredo Fernandez
Agreed on
Security and trust are fundamental requirements for AI agent adoption
Sector-specific listening sessions in education, healthcare, and finance will identify adoption barriers that standards might address
Explanation
NIST plans to hold listening sessions in April for specific sectors to understand why they aren’t adopting AI agents and what challenges they face. The goal is to identify problems that NIST might be able to solve through standards, best practices, or evaluation methodologies.
Evidence
Example of potential challenge: hospitals or schools may not deploy AI because they can’t reliably evaluate how AI agents handle personally identifiable information (PII), which could be addressed through NIST metrology, benchmarks, and evaluations.
Major discussion point
Government AI Standards Initiative
Topics
Artificial intelligence | Social and economic development | Building confidence and security in the use of ICTs
NIST’s century-long track record demonstrates successful industry collaboration on voluntary standards development
Explanation
NIST has over a century of experience in helping industry develop voluntary standards and best practices through consensus-based approaches rather than regulation. This proven track record provides confidence in applying the same methodology to AI agent standards.
Evidence
Brake light color standardization example where industry came together with NIST experts to agree on the specific shade of red, rather than government mandating the color choice.
Major discussion point
Government AI Standards Initiative
Topics
The enabling environment for digital development | Monitoring and measurement
Agreed with
– Michael Sellitto
– Owen Lauder
– Michael Brown
– Sihao Huang
Agreed on
Historical precedents demonstrate the importance of standards for technology adoption
International Network for Advanced AI Measurement brings together ten countries’ AI institutes for technical exchanges
Explanation
NIST engages internationally through a network of ten countries that have established AI security institutes or similar organizations. They meet regularly to share best practices in measurement and evaluation science and work toward consensus on methodologies for evaluating AI systems.
Evidence
December meeting in San Diego during the NeurIPS conference where they discussed open questions in measurement science and published a blog post summarizing areas of consensus and open questions.
Major discussion point
International Collaboration and Global Engagement
Topics
Artificial intelligence | Monitoring and measurement
Sihao Huang
Speech speed
196 words per minute
Speech length
1363 words
Speech time
415 seconds
Trump administration believes in sharing best AI technologies globally rather than strict lockdown approaches
Explanation
The current U.S. administration rejects the previous administration’s approach of strictly locking down technology and instead wants to share the best AI technologies with the rest of the world. This philosophy is a leading message of the U.S. delegation at the India AI Summit.
Evidence
Reference to previous administration’s approach of strict technology lockdown being viewed as a mistake, and the delegation’s message at the India AI Summit about sharing AI technologies globally.
Major discussion point
Government AI Standards Initiative
Topics
Artificial intelligence | The enabling environment for digital development
Internet’s success came from decentralized systems and open protocols like TCP/IP and HTTPS funded by US government
Explanation
The World Wide Web succeeded globally because the U.S. government intentionally supported decentralized systems and open protocols, unlike other countries and companies that tried to create closed, centralized versions. This created a win-win situation where the world benefits from internet access while Silicon Valley became extremely wealthy.
Evidence
Historical comparison with other countries and companies that tried closed, centralized internet systems that were often state-subsidized but never scaled globally. The U.S. government funded protocols like TCP/IP and HTTPS that enabled independent development and global adoption.
Major discussion point
Interoperability and Open Internet Principles
Topics
Internet governance | The enabling environment for digital development | Information and communication technologies for development
Agreed with
– Michael Sellitto
– Owen Lauder
– Michael Brown
– Austin Marin
Agreed on
Historical precedents demonstrate the importance of standards for technology adoption
American companies are investing $700 billion in infrastructure and competing to make AI models cheaper and more powerful globally
Explanation
U.S. companies are making massive infrastructure investments this year alone and are competing intensively with each other to reduce costs and increase power of AI models. This competition benefits global builders and application developers who can build on these increasingly accessible AI systems.
Evidence
$700 billion in infrastructure investment by American companies in the current year, with companies competing to make AI models cheaper and more powerful for global builders.
Major discussion point
International Collaboration and Global Engagement
Topics
Artificial intelligence | The digital economy | Financial mechanisms
Agreements
Agreement points
Open standards and interoperability are essential for AI agent development
Speakers
– Michael Sellitto
– Owen Lauder
– Michael Brown
– Wifredo Fernandez
– Austin Marin
– Sihao Huang
Arguments
Model Context Protocol (MCP) enables universal connection between AI systems and existing data sources/tools with interoperability across vendors
Agent-to-Agent Protocol creates standardized communication between agentic systems through shared information formats
Open standards create shared understanding that enables global builders to create secure and accessible applications
Open source protocols built by peer companies accelerate development and enable broader innovation beyond just the four major companies
Agent Standards Initiative focuses on industry-driven, consensus-based voluntary standards rather than regulation
Internet’s success came from decentralized systems and open protocols like TCP/IP and HTTPS funded by US government
Summary
All speakers strongly advocate for open, interoperable standards that enable cross-vendor compatibility and prevent vendor lock-in, drawing parallels to successful internet protocols
Topics
Artificial intelligence | The enabling environment for digital development | Internet governance
Historical precedents demonstrate the importance of standards for technology adoption
Speakers
– Michael Sellitto
– Owen Lauder
– Michael Brown
– Austin Marin
– Sihao Huang
Arguments
Standardized metrics and third-party evaluations (like automotive crash tests) build customer confidence in AI systems
Historical examples like HTTPS enabling e-commerce and electrical standards driving adoption show the power of good standards
Open standards create shared understanding that enables global builders to create secure and accessible applications
NIST’s century-long track record demonstrates successful industry collaboration on voluntary standards development
Internet’s success came from decentralized systems and open protocols like TCP/IP and HTTPS funded by US government
Summary
Speakers consistently reference successful historical examples (automotive standards, HTTPS, electrical standards, traffic lights) to demonstrate how standards enable widespread adoption and economic growth
Topics
The enabling environment for digital development | Building confidence and security in the use of ICTs | The digital economy
Security and trust are fundamental requirements for AI agent adoption
Speakers
– Michael Sellitto
– Owen Lauder
– Wifredo Fernandez
– Austin Marin
Arguments
Standardized metrics and third-party evaluations (like automotive crash tests) build customer confidence in AI systems
Historical examples like HTTPS enabling e-commerce and electrical standards driving adoption show the power of good standards
Privacy-preserving, auditable technology with authenticity and consent considerations should guide agent development
Request for information on AI agent security challenges closes in March to identify areas where standards can help
Summary
All speakers emphasize that security, authentication, auditability, and user trust are prerequisites for widespread AI agent deployment, particularly when handling sensitive data
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Industry-led, consensus-based approach to standards development is preferred over government regulation
Speakers
– Michael Sellitto
– Owen Lauder
– Michael Brown
– Austin Marin
Arguments
Open standards provide sovereignty and resilience by enabling switching between vendors and technologies
Open standards create shared understanding that enables global builders to create secure and accessible applications
Government can facilitate dialogue between separate industries that need to collaborate on agentic systems
Agent Standards Initiative focuses on industry-driven, consensus-based voluntary standards rather than regulation
Summary
Speakers agree that industry should lead standards development with government playing a convening and facilitating role rather than imposing regulations
Topics
The enabling environment for digital development | Artificial intelligence
Similar viewpoints
Both speakers from Anthropic and Google DeepMind emphasize their companies’ specific protocol contributions (MCP and A2A) while highlighting how these enable broader ecosystem interoperability
Speakers
– Michael Sellitto
– Owen Lauder
Arguments
Model Context Protocol (MCP) enables universal connection between AI systems and existing data sources/tools with interoperability across vendors
Agent-to-Agent Protocol creates standardized communication between agentic systems through shared information formats
Topics
Artificial intelligence | The enabling environment for digital development
Both speakers from OpenAI and XAI focus on collaborative aspects of standards development and how protocols enable practical applications like commerce
Speakers
– Michael Brown
– Wifredo Fernandez
Arguments
Commerce protocols enable agents to interact with websites and payment systems for automated transactions
Open source protocols built by peer companies accelerate development and enable broader innovation beyond just the four major companies
Topics
Artificial intelligence | The digital economy
Both government representatives emphasize the administration’s philosophy of open, collaborative approaches to AI development and standards, contrasting with more restrictive approaches
Speakers
– Austin Marin
– Sihao Huang
Arguments
Agent Standards Initiative focuses on industry-driven, consensus-based voluntary standards rather than regulation
Trump administration believes in sharing best AI technologies globally rather than strict lockdown approaches
Topics
Artificial intelligence | The enabling environment for digital development
Unexpected consensus
Complete alignment between competing AI companies on open standards
Speakers
– Michael Sellitto
– Owen Lauder
– Michael Brown
– Wifredo Fernandez
Arguments
Model Context Protocol (MCP) enables universal connection between AI systems and existing data sources/tools with interoperability across vendors
Agent-to-Agent Protocol creates standardized communication between agentic systems through shared information formats
Commerce protocols enable agents to interact with websites and payment systems for automated transactions
Open source protocols built by peer companies accelerate development and enable broader innovation beyond just the four major companies
Explanation
Despite being direct competitors in the AI market, all four major AI companies (Anthropic, Google DeepMind, OpenAI, XAI) show remarkable consensus on supporting open, interoperable standards rather than pursuing proprietary, closed approaches that might give them competitive advantages
Topics
Artificial intelligence | The enabling environment for digital development
Government and industry alignment on voluntary standards approach
Speakers
– Austin Marin
– Michael Sellitto
– Owen Lauder
– Michael Brown
Arguments
Agent Standards Initiative focuses on industry-driven, consensus-based voluntary standards rather than regulation
Open standards provide sovereignty and resilience by enabling switching between vendors and technologies
Historical examples like HTTPS enabling e-commerce and electrical standards driving adoption show the power of good standards
Government can facilitate dialogue between separate industries that need to collaborate on agentic systems
Explanation
There is unexpected harmony between government officials and industry representatives on avoiding regulatory mandates in favor of collaborative, voluntary standards development, which is notable given typical tensions between regulators and industry
Topics
The enabling environment for digital development | Artificial intelligence
Overall assessment
Summary
The discussion reveals extraordinary consensus among all speakers on the fundamental principles of AI agent standards development: open interoperability, industry-led consensus building, security-first design, and learning from historical precedents. All parties agree on avoiding vendor lock-in, enabling global collaboration, and prioritizing voluntary standards over regulatory mandates.
Consensus level
Very high consensus with no significant disagreements identified. This strong alignment suggests favorable conditions for successful AI agent standards development and implementation, potentially accelerating the creation of a robust, interoperable AI ecosystem that benefits global builders and users.
Differences
Different viewpoints
Unexpected differences
Overall assessment
Summary
The discussion showed remarkably high consensus among all speakers on fundamental principles of AI agent standards development, with no direct disagreements identified
Disagreement level
Very low disagreement level. All speakers aligned on core principles of open standards, interoperability, security, and industry-government collaboration. The few partial agreements identified relate to different implementation approaches rather than fundamental disagreements. This high level of consensus suggests strong industry-government alignment on AI standards development, which could facilitate rapid progress in establishing unified protocols and frameworks for AI agent deployment.
Partial agreements
Both speakers agree on the need for commerce protocols to enable agent transactions, but they represent different approaches – OpenAI has its own commerce protocol while Google has the Universal Commerce Protocol (UCP). They agree on the goal of enabling automated commerce but have developed separate standards.
Speakers
– Michael Brown
– Owen Lauder
Arguments
Commerce protocols enable agents to interact with websites and payment systems for automated transactions
Agent-to-Agent Protocol creates standardized communication between agentic systems through shared information formats
Topics
Artificial intelligence | The digital economy
Both agree on the importance of standards for building confidence in AI systems, but Austin emphasizes government’s convening role through voluntary consensus-based approaches, while Michael focuses more on industry-driven standardized metrics and third-party evaluations. They share the goal of building trust but emphasize different mechanisms.
Speakers
– Austin Marin
– Michael Sellitto
Arguments
Agent Standards Initiative focuses on industry-driven, consensus-based voluntary standards rather than regulation
Standardized metrics and third-party evaluations (like automotive crash tests) build customer confidence in AI systems
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | The enabling environment for digital development
Takeaways
Key takeaways
Open AI agent standards and protocols (MCP, Agent-to-Agent, Commerce protocols) are essential for creating interoperable systems that allow builders globally to switch between AI vendors without being locked into proprietary solutions
The US government’s Agent Standards Initiative will focus on industry-driven, consensus-based voluntary standards rather than regulation, following NIST’s century-long successful approach
Security standards are critical for AI agent adoption – similar to how HTTPS enabled e-commerce by providing trust and security for online transactions
The Trump administration advocates for sharing best AI technologies globally through open standards, drawing parallels to the Internet’s success through decentralized, open protocols
Standardized evaluation metrics and third-party assessments (like automotive crash tests) will build customer confidence in AI systems and enable informed decision-making
International collaboration through forums like the International Network for Advanced AI Measurement is essential for developing consensus on AI evaluation methodologies
Historical examples from Internet protocols, electrical standards, and other industries demonstrate that open, technically robust standards drive adoption and economic growth
Resolutions and action items
Request for Information on AI agent security challenges closes in March – stakeholders encouraged to provide comments
Sector-specific listening sessions in education, healthcare, and finance will be held in April to identify adoption barriers
NIST will develop metrology, benchmarks, evaluations, and best practices documents based on identified industry challenges
Continued engagement through the International Network for Advanced AI Measurement for technical exchanges and methodology consensus
Industry participants committed to continuing development of open source protocols and standards collaboration
Unresolved issues
Specific technical details of how different commerce protocols (OpenAI’s vs Google’s UCP) will compete or cooperate remain unclear
The regulatory approach for social media platforms that are agent-driven needs to be determined
How to balance AI sovereignty concerns with open standards and global interoperability
Specific methodologies for evaluating AI agent security vulnerabilities and capabilities are still being developed
The role division between research labs, government, and industry in establishing industry-specific standards needs clarification
Suggested compromises
Open standards approach allows countries to use best global AI technology while maintaining sovereignty through ability to switch vendors
Voluntary, consensus-based standards rather than regulatory mandates to balance innovation with safety and security needs
Industry-led standards development with government convening and coordination support to avoid duplicative requests and ensure proper expertise
Sector-specific approaches that address unique challenges in education, healthcare, and finance while maintaining overall interoperability
Thought provoking comments
Before MCP, you really had to build all these systems in a very bespoke manner, which meant that if you built them with one model or one vendor, you were kind of stuck because you’d have to rewrite everything if you wanted to switch. MCP being this open source protocol that’s supported by all of the major AI companies means that you really have this degree of interoperability, which just enables the whole system to be much more open and competitive.
Speaker
Michael Sellitto
Reason
This comment crystallizes the fundamental problem that AI agent protocols solve – vendor lock-in – and positions interoperability as essential for competition and innovation. It moves beyond technical details to explain the strategic business implications.
Impact
This framing established the core theme that all subsequent speakers built upon – the importance of open standards for preventing monopolization and enabling competition. It set the stage for the government’s perspective on why these standards matter for global AI adoption.
We think back a lot on the history of the Internet and what that enabled for the world, but also what that enabled for America… The World Wide Web became so successful precisely because of the protocols that the U.S. government had supported… what you had is really this win-win situation where the entire world now benefits from sort of the access of the Internet, the ability to build applications, companies on top of that that’s driven so much prosperity for countries around the world, but also made Silicon Valley one of the most wealthy places in human history.
Speaker
Sihao Huang
Reason
This historical analogy is profound because it reframes AI standards not just as technical necessities, but as geopolitical and economic strategy. It suggests that openness, rather than protectionism, creates more value for the originating country while benefiting the world.
Impact
This comment fundamentally shifted the discussion from technical implementation to strategic vision. It provided the intellectual framework that justified why the U.S. government supports open AI standards, and influenced subsequent speakers to think about historical precedents and long-term economic implications.
Right now if you go to buy a car and you go down to the car dealership, those cars are going to have a bunch of metrics that you can use that have been independently determined to understand the characteristics of that vehicle… a piece that this standardization can help us get to is having that same kind of confidence in knowing what you’re purchasing that customers and governments and the public can have.
Speaker
Michael Sellitto
Reason
This analogy brilliantly translates complex AI evaluation concepts into something universally understood. It highlights that standards aren’t just about technical compatibility, but about creating trust and informed decision-making in the marketplace.
Impact
This metaphor provided a concrete framework that other speakers adopted and extended. It shifted the conversation toward the consumer/user perspective and the importance of transparency, influencing how others discussed the practical benefits of standardization.
I’m just about old enough to remember people in the early 90s talking about how they would never, ever, ever put credit card information on the Internet. That would be absolutely insane… Then you have the secure layer that Sihao mentioned, HTTPS, and it’s completely unlocked the modern Internet economy as we know it to be.
Speaker
Owen Lauder
Reason
This observation powerfully illustrates how security standards can transform adoption patterns. It shows that what seems impossible or dangerous can become routine once proper standards are established, directly addressing concerns about AI agent security.
Impact
This comment reinforced and deepened Huang’s historical framework while making it more personal and relatable. It helped establish security standards as enablers rather than barriers, influencing the discussion toward viewing current AI security concerns as solvable challenges rather than permanent obstacles.
There are many industries that have not quite gotten this right. A lot of us have traveled here from around the world having to bring adapters with us because our electrical products won’t plug into the wall. It’s really, really annoying. It’s actually also a massive hindrance on commerce as well, because it means if you’re producing a computer or another electronic application, you have to have a different plug socket in every single country around the world.
Speaker
Owen Lauder
Reason
This everyday example brilliantly illustrates the real costs of failed standardization – both in terms of user experience and economic efficiency. It makes abstract concepts tangible and shows why getting standards right matters.
Impact
This practical example grounded the entire discussion in lived experience that every participant could relate to. It served as a cautionary tale that influenced the conversation toward emphasizing the importance of getting AI standards right the first time, rather than ending up with fragmented, incompatible systems.
When I think about instructing an agent to go download music or acquire music on my behalf, I’d much rather have that than… I don’t know how we used to put so much trust in a system without standards, or a process that could not be audited.
Speaker
Wifredo Fernandez
Reason
This personal anecdote effectively illustrates how standards and auditability actually increase trust and usability compared to opaque systems. It reframes standards not as constraints but as enablers of confidence in automated systems.
Impact
This comment brought the discussion full circle by connecting historical examples to future AI agent applications in a personal, relatable way. It reinforced the theme that standards enable trust and adoption, while also introducing the important concept of auditability in AI systems.
Overall assessment
These key comments transformed what could have been a dry technical discussion into a compelling narrative about the strategic importance of open AI standards. The historical analogies (Internet, automobiles, electrical standards, e-commerce security) provided a shared framework for understanding both the opportunities and risks of AI standardization. The progression from technical explanations to historical context to practical examples created a multi-layered argument for why open, interoperable AI agent standards are not just technically desirable but economically and strategically essential. The comments built upon each other to establish that successful standards enable trust, competition, and global adoption – while failed standardization creates fragmentation and limits growth. This framing elevated the discussion from implementation details to vision and strategy, making a compelling case for the collaborative approach to AI standards development.
Follow-up questions
Do we regulate social media platforms that are agent driven?
Speaker
Wifredo Fernandez
Explanation
This question emerged when discussing the Moltbook phenomenon and highlights novel regulatory questions about how to govern platforms when they become agent-driven, representing a new frontier in AI governance
How can NIST’s approach of voluntary standards and best practices documents help address and mitigate AI agent security challenges?
Speaker
Austin Marin
Explanation
This represents a key area for further research as NIST seeks to understand how their traditional consensus-based approach can be applied to the new challenges posed by AI agents
What challenges are education, healthcare, and finance sectors facing in adopting AI agents?
Speaker
Austin Marin
Explanation
Austin mentioned upcoming listening sessions to understand adoption barriers in these sectors, indicating this as a priority research area to unlock broader AI deployment
How can AI agents reliably handle personally identifiable information (PII) in regulated sectors like healthcare and education?
Speaker
Austin Marin
Explanation
This was identified as a potential barrier to AI adoption in regulated sectors, requiring development of metrology, benchmarks, and evaluations to give institutions confidence
How do commerce protocols from different companies (OpenAI vs others) compete or cooperate?
Speaker
Michael Brown
Explanation
Michael Brown expressed uncertainty about whether different commerce standards compete or work together, indicating need for clarity on interoperability between competing standards
What is the role of research labs, government support, and industry leadership in establishing industry-specific AI standards?
Speaker
Michael Brown
Explanation
This represents a fundamental governance question about how different stakeholders should collaborate in developing AI standards across various industries
What are the open questions in AI measurement science and evaluation methodologies?
Speaker
Austin Marin
Explanation
Austin referenced ongoing work through international collaboration to establish consensus on AI evaluation methodologies, indicating this as an active area of research
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.