U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence

20 Feb 2026 18:00h - 19:00h


Session at a glance: summary, key points, and speakers overview

Summary

The panel, convened by the White House OSTP and featuring senior officials from the U.S. government and leaders from Anthropic, Google DeepMind, OpenAI and XAI, focused on how open standards and protocols can make AI agents interoperable and secure [1-3][5-11][15-22]. Sihao Huang noted that billions of dollars are being invested in AI infrastructure and that competing firms are racing to make models cheaper and more powerful, underscoring the need for common interfaces [13-15]. He introduced the emerging ecosystem of agent protocols, including Anthropic’s Model Context Protocol (MCP), DeepMind’s A2A, OpenAI’s Agentic Commerce Protocol and XAI’s MacroHearts project, as the basis for the discussion [17-21][23-24].


Michael Sellitto explained that MCP is a universal open standard that lets models discover and use enterprise or government data sources and tools through simple descriptions, eliminating bespoke integrations [28-34][36-38]. He added that the companion Skills protocol lets developers encode repeatable task instructions that can be transferred across vendors, further enhancing data portability and competition [46-48]. Owen Lauder described DeepMind’s agent-to-agent standard as a digitized “clipboard” sharing identity, capabilities and security requirements, and its Universal Commerce Protocol (UCP) as a way for agents to interact with websites and payment systems [63-71][74-76]. Michael Brown highlighted that shared commerce protocols enable agents from different companies to coordinate tasks such as booking travel, illustrating how common standards can democratize AI-driven services worldwide [94-102].


Austin Marin announced the new Agent Standards Initiative, housed within NIST’s voluntary-consensus framework, and a request for information on AI-agent security that closes in March [130-138][155-162]. He outlined upcoming sector-specific listening sessions on education, healthcare and finance to identify challenges such as handling personally identifiable information and to develop metrology, benchmarks and best-practice documents [165-172]. The initiative also builds on existing drafts for AI-agent identity and authorization, aiming to create interoperable security layers analogous to the historic development of SSL and HTTPS for e-commerce [163-168][206-207]. Sihao Huang linked this effort to the open-internet legacy, arguing that decentralized protocols like TCP/IP and HTTPS spurred global prosperity and that similar open AI standards are essential for worldwide adoption and secure commerce [186-198][199-202].


Participants used analogies from the automobile and electrical industries to stress that standardized metrics and safety certifications can give users confidence in AI agents, while open standards preserve sovereignty and allow switching between providers [211-230][232-250]. The discussion concluded that government can facilitate cross-industry dialogue, but industry must lead the technical work, and international collaborations such as the INAEMS network are already shaping measurement and evaluation consensus for AI agents [252-254][276-281].


Key points


Major discussion points


Emergence of AI agent protocols to enable interoperability and competition – The panel highlighted several open standards such as the Anthropic Model Context Protocol (MCP), Google DeepMind’s agent-to-agent protocol, OpenAI’s commerce protocol, and XAI’s MacroHearts project, all aimed at letting agents “talk” to data sources, each other, and commerce systems [17-21][28-38][63-76][98-101].


U.S. government’s coordinating role in standards development – OSTP and the newly rebranded Center for AI Standards and Innovation (within the Department of Commerce and NIST) act as the “front door” for industry, avoiding duplicated agency requests, issuing requests for information on agent security, and convening sector-specific listening sessions [132-146][155-164][165-172].


Security, trust, and evaluation as prerequisites for adoption – Speakers stressed that without robust security, identity, and authorization standards, builders cannot safely grant agents access to sensitive data or real-world actions; analogies to SSL/HTTPS and automotive safety metrics were used to illustrate the need for measurable, trustworthy standards [206-207][211-230][158-162][163-164].


International collaboration and a global “AI Internet” vision – The discussion repeatedly referenced builders in India, Kenya, and other regions, noting that open protocols should let any developer plug into AI services worldwide; the U.S. engages with ten-country networks and shares best-practice measurements to foster a truly global ecosystem [186-190][52-56][276-280][108-115].


Learning from historical standards (Internet, electrical, automotive) to shape AI standards – Examples such as the 802.11 Wi-Fi standard, NIST’s taillight-color standard, early HTTPS adoption, and automotive safety ratings were invoked to argue that open, consensus-based standards drive widespread, secure adoption [124-126][147-152][233-238][211-218].


Overall purpose / goal of the discussion


The panel was convened to explain and promote a coordinated effort, led by both industry leaders and U.S. government agencies, to create open, interoperable, and secure AI agent standards. By establishing common protocols, testing frameworks, and security guidelines, the participants aim to lower barriers for global developers, accelerate innovation, and ensure that AI systems can be safely integrated into commerce, public services, and everyday applications.


Tone of the conversation


The tone remained constructive and collaborative throughout. It began with a formal introduction and factual overview, moved into enthusiastic descriptions of technical progress, incorporated light-hearted remarks (e.g., Michael Brown’s “red means stop” analogy), and shifted into reflective, historical analogies that underscored the importance of standards. No adversarial moments appeared; the dialogue stayed optimistic about the potential of open standards to “grow the pie” for all stakeholders.


Speakers

Sihao Huang – Senior Policy Advisor for AI and Emerging Tech, White House [S1][S2]


Austin Marin – Acting Director, U.S. Center for AI Standards and Innovation, Department of Commerce [S4]


Wifredo Fernandez – Director for Global Government Affairs, XAI [S5][S6]


Owen Lauder – Senior Director and Head of Frontier Policy and Public Affairs, Google DeepMind [S7][S8]


Michael Sellitto – Head of Global Affairs, Anthropic [S9]


Michael Brown – Head of Growth and Operations, OpenAI [S10][S11][S12]


Additional speakers:


Michael Kratsios – Director, Office of Science and Technology Policy (OSTP) (mentioned as OSTP director)


Craig Burkhart – Acting Director, National Institute of Standards and Technology (NIST) (mentioned as Acting Director of NIST)


Howard Lutnick – U.S. Secretary of Commerce (Commerce Secretary)


George Osborne – Colleague of Michael Brown, name on placard (referenced in discussion)


CAISI – Acronym for the U.S. Center for AI Standards and Innovation, rendered phonetically as “Casey” in the transcript (not an individual)


Full session report: comprehensive analysis and detailed insights

Opening & Context – Sihao Huang, Senior Policy Advisor for AI at the White House OSTP, opened the session, noting that U.S. firms are investing roughly $700 billion in AI infrastructure this year and are competing fiercely to deliver cheaper, more powerful models, making common interfaces urgent [13-15]. He introduced the panel: Austin Marin, Acting Director of the Center for AI Standards and Innovation at the Department of Commerce, and senior representatives from Anthropic, Google DeepMind, OpenAI and XAI [3-12].


Company-Specific Protocol Overviews


Anthropic – Model Context Protocol (MCP) & Skills – Michael Sellitto described MCP as a universal open standard that lets models describe a data source and its tools, enabling automatic discovery and retrieval of enterprise information such as payroll or revenue [28-38]. He contrasted this with the prior landscape of bespoke, vendor-locked integrations and highlighted the companion Skills protocol, which encodes repeatable task instructions that can be taught once and transferred across models, enhancing data portability and reducing lock-in [39-48].
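Sellitto’s point that a model only needs a rough description of a data source and its tools can be pictured with a small sketch. The server, tool names, schemas, and matching logic below are hypothetical illustrations of the MCP idea, not the actual MCP specification or any real SDK:

```python
# Hypothetical sketch of an MCP-style tool catalog: a server advertises its
# tools as name + description + input schema, and a model-side agent picks a
# tool by matching the description to the task at hand. All names here are
# invented for illustration.

PAYROLL_SERVER_TOOLS = [  # hypothetical enterprise data source
    {
        "name": "get_payroll_record",
        "description": "Look up an employee's payroll record by employee ID.",
        "inputSchema": {
            "type": "object",
            "properties": {"employee_id": {"type": "string"}},
            "required": ["employee_id"],
        },
    },
    {
        "name": "get_quarterly_revenue",
        "description": "Return company revenue for a given fiscal quarter.",
        "inputSchema": {
            "type": "object",
            "properties": {"quarter": {"type": "string"}},
            "required": ["quarter"],
        },
    },
]

def pick_tool(task: str, tools: list[dict]) -> dict:
    """Naive stand-in for the model's intuition: choose the tool whose
    description shares the most words with the task."""
    task_words = set(task.lower().split())
    def overlap(tool: dict) -> int:
        return len(task_words & set(tool["description"].lower().split()))
    return max(tools, key=overlap)

tool = pick_tool("fetch payroll record for employee 42", PAYROLL_SERVER_TOOLS)
print(tool["name"])  # -> get_payroll_record
```

In a real MCP deployment the model itself performs this matching from the published descriptions; the keyword overlap here is only a stand-in for that intuition, which is what lets the same catalog work across vendors without bespoke integration code.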


Google DeepMind – Agent-to-Agent (A2A) & Universal Commerce Protocol (UCP) – Owen Lauder explained A2A as a “digitized clipboard” that conveys an agent’s identity, capabilities, intent, data requirements and security constraints to another agent, removing the need for custom code [63-73]. He also outlined UCP, which standardizes how agents interact with websites and payment systems, with pilot partners ranging from Walmart and Target in the United States to Flipkart and Infosys in India [74-77].
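Lauder’s “digitized clipboard” can likewise be sketched as a small structured document one agent hands to another. The field names and values below are hypothetical, illustrating the idea rather than the actual A2A schema:

```python
import json

# Hypothetical "clipboard" an agent might share under an A2A-style handshake:
# who it is, what it can do, what it wants, and what security it requires.
# Field names and values are illustrative only.
agent_card = {
    "id": "agent://example.com/travel-booker",  # hypothetical identity
    "capabilities": ["search_flights", "book_hotels"],
    "intent": "book a two-night stay in Goa",
    "data_requirements": ["travel_dates", "budget"],
    "security": {"auth": "oauth2", "encryption": "tls1.3"},
}

def missing_fields(card: dict) -> list[str]:
    """Check a received card for the fields a counterpart agent expects
    before it agrees to cooperate."""
    required = ["id", "capabilities", "intent", "security"]
    return [f for f in required if f not in card]

print(json.dumps(agent_card, indent=2))
print(missing_fields(agent_card))  # [] -> card is complete
```

A receiving agent can decline to cooperate when required fields are missing, which is the kind of check a shared standard makes possible without writing bespoke code for every pair of agents.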


OpenAI – Commerce Protocol – Michael Brown noted that OpenAI’s commerce protocol enables agents to plan a family vacation, book flights and hotels, demonstrating how shared commerce standards allow agents from different companies to cooperate on real-world tasks [94-102].


XAI – MacroHearts & “parallel Internet” – Wifredo Fernandez positioned XAI’s MacroHearts project as part of a “parallel Internet” that will sit alongside the existing web, accelerating AI development while raising regulatory questions such as governance of agent-driven social-media platforms [119-123].


Historical Analogies & Security Emphasis – Sihao likened the need for secure AI-agent interfaces to the historic development of SSL and HTTPS, which unlocked e-commerce on the open web [206-207]. Sellitto reinforced the security argument with an automobile-industry analogy, suggesting that standardized crash-test-style safety metrics would give users confidence in AI agents [212-218].


U.S. Government Initiatives – Austin Marin clarified the Center’s “front-door” role: it coordinates agency requests, avoids duplication, and ensures companies engage with advisers who understand frontier and agentic AI [138-152]. The Center follows NIST’s long-standing voluntary-consensus approach, exemplified by the historic taillight-color standard that defined the exact shade of red for vehicle lights [146-152]. Marin announced a Request for Information on AI-agent security (deadline in March) and referenced a draft NIST document on agent identity and authorization that is open for comment [155-165]. He also outlined upcoming sector-specific listening sessions (education, healthcare, finance) in April to surface challenges such as handling personally identifiable information, with the aim of producing metrology, benchmarks and best-practice guidance [165-172].


International Collaboration – The discussion highlighted the International Network for Advanced AI Measurement, Evaluation and Science (INAEMS), a ten-country consortium that meets regularly to share best practices and develop consensus on measurement methodologies [276-280]. Sihao stressed that standards should enable builders in India, Kenya and elsewhere to use and switch between U.S. AI products without lock-in [186-190].


Next Steps (as stated in the transcript)


1. Submit comments to the AI-agent security RFI before the March deadline [155-165].


2. Review and comment on the NIST draft on agent identity and authorization [163-165].


3. Participate in the April sector-specific listening sessions [166-172].


4. Continue engagement with the Center for AI Standards and Innovation and the broader INAEMS network to shape forthcoming voluntary standards [276-280].


Closing Observation – Participants expressed broad agreement that open, interoperable AI-agent standards, covering data access (MCP), task encoding (Skills), inter-agent communication (A2A) and commerce (UCP), are essential to prevent vendor lock-in, foster global innovation and create a “parallel Internet” for AI [28-48][63-77][94-102][119-123]. They invoked historical precedents such as TCP/IP, HTTPS, electrical-plug standards and automotive safety metrics to argue that voluntary, consensus-based standards can drive secure, widespread adoption while avoiding fragmentation [198-201][233-242][248-251].


Session transcript: complete transcript of the session
Sihao Huang

of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable and open to the rest of the world to sort of build on that for your own businesses, for your own benefits. And so we have an amazing panel here today. We have, so first of all, I’m Sihao Huang. I’m Senior Policy Advisor for AI and Emerging Tech at the White House. We’re joined with Austin Marin, who’s the Director for the Center for AI Standards and Innovation at the Department of Commerce, which really is the center for a lot of AI activity within the U.S. government, setting standards, driving innovation, measuring AI systems, improving metrology, and a lot of the smartest people in the U.S.

government are within Austin’s organization. And then we have the four frontier AI companies from the United States. So we’re very happy to be joined by Mike Sellitto, who is the Head of Global Affairs at Anthropic. We have Owen Lauder at Google DeepMind, who’s the Senior Director and Head of Frontier Policy and Public Affairs. We have Mike Brown, who is head of growth and operations for OpenAI for our countries. And, of course, we have Wifredo Fernandez, who is the director for global government affairs at XAI. So really the amazing lineup of U.S. industry. I said this in a previous panel, but American companies are spending $700 billion on infrastructure this year, just this year alone. And they probably won’t like it that I say this, but they’re competing very hard against each other to make AI models cheaper and more powerful for you guys to build on and to drive those applications.

And so this is going to be a panel on how we make that happen, how we standardize interfaces with those AI systems. And so first I’m just going to ask a question to the AI companies that are sat here. So over the past few months, I think, we’ve seen the emergence of an ecosystem of standards to support the deployment of AI agents. I think one of the most notable ones is Anthropic’s Model Context Protocol, which a lot of other companies are building off of right now and is sort of becoming the industry standard. Of course, you have Google DeepMind’s A2A agent-to-agent protocol, OpenAI’s Agentic Commerce Protocol, and then XAI, of course, has been working on its highly secretive and famous MacroHearts agent project.

And so all the companies here are very much involved in sort of this agent discussion. And so maybe open it up to the companies here to tell us a little bit about what these agent protocols actually do and what they have unlocked for sort of the builders who are sat here in the audience. What do they enable a software engineer or an AI engineer in India or other countries to create?

Michael Sellitto

Okay. Well, first I want to start off by thanking Sihao and OSTP for organizing this panel and all the people who are here. Thank you. So it’s great to be here with Austin. I think Anthropic has really had a really strong partnership with the Trump administration and appreciated the leadership of Secretary Lutnick in expanding and enhancing the Center for AI Standards and Innovation, which is really critical to making this technology work for everybody in a manner that’s safe, responsible, and open. MCP is a universal open standard for connecting AI systems to the tools and data sources that people already use. So imagine the knowledge bases inside of an enterprise. You can imagine government data sources.

The Indian government, of course, is a real leader in, why am I forgetting the acronym right now, DPI, sorry, and just has massive amounts of data that are already digitized. And so MCP is a way that you can connect your AI models and agents to those data sets and also tools. And it really is, you know, a simple, intuitive way. You just need to give the model a rough description of what’s in the data source and what kind of tools or how can it access it. And then the model will intuitively know how it can use those data sources the same way that somebody in your enterprise or your organization would know if I want to get payroll data, I need to go to this human resources system.

If I want to get data about, you know, our revenue, I need to go into HEX or whatever your particular tools are. You know, before MCP, you really had to build all these systems in a very bespoke manner, which meant that if you built them with one model or one vendor, you were kind of stuck because you’d have to rewrite everything if you wanted to switch. MCP being this open source protocol that’s supported by all of the major AI companies means that you really have this degree of interoperability, which just enables the whole system to be much more open and competitive. We also recently built Skills.

It’s a set of instructions that teach agents how to perform specific tasks. The way that I describe this or think about it is, you know, imagine a new person joins your team. You spend a little bit of time teaching them, you know, how to do work the way that your organization does it. And then you expect them to just be able to follow those instructions all the time. So you kind of teach once and then they’re able to do that. It’s the same thing with Skills, which also is another open protocol where you can build these skills. And then if you decide that, you know, you want to switch from Anthropic to any of the other fine companies here on the panel, you can move those skills over.

And so that interoperability and data portability is really a critical piece of making this an open and competitive environment.

Owen Lauder

Amazing. Thank you, Mike. And, yeah, thank you to Sihao. Thank you to OSTP and the U.S. government for the event and all the partnership. And a big thank you and congrats to our Indian hosts on a fantastic summit week. If you take a step back, it has been, I think, a really exciting week, a demonstration of how advanced AI is now being used around the world to do incredible things. It’s been really exciting. I think seeing the way that people are using Gemini right across India, really exciting to see the way that everyone in India from world-class scientists using AlphaFold to teachers and students using AI in the classroom. And I think with all of the progress that we’ve seen in the last few years, it’s easy to forget sometimes that this is still relatively new technology.

We’re still in the relatively early innings of working out how to develop this technology and use it for good. And one of the things that we need to do, I think Sihao covered this very well in his opening gambit, is build out this ecosystem of technical standards to make sure that we can continue using this technology in the right ways. There’s a couple of ways that we’re thinking about these standards. One is technical standards, interoperable standards, and then also standards for testing these systems, making sure that we can use them in a reliable and secure way. We really want to contribute right across the piece here, so we’re excited. We have various standards that we have contributed to the ecosystem.

Our agent-to-agent standard that Sihao mentioned. This is basically a standard for how agentic systems talk to each other. At the moment, it’s a little bit tricky for agents to converse with each other. You have to often write bits of bespoke code for an agent to talk to an agent, or they have to be running on the same walled garden code base. So what we do with agent-to-agent is essentially have a sort of digitized clipboard of information that an agent will share with another agent. What’s my ID as an agent? What are my capabilities? What am I trying to do? How do I take data? What are my security requirements? This is going to be absolutely fundamental to sort of greasing the wheels of the agentic economy.

UCP, another standard that we’re working on, so we have our universal commerce protocol at Google. This essentially does the same thing, but it’s for how agents talk to websites and payment systems. This is going to be transformative for business. It’s great to be able to partner with companies right around the world, whether it’s Walmart and Target in the U.S. or Flipkart and Infosys in India that we’re working with across these agents. Excited to see what everyone is going to do with the technology that we can enable with this.

Michael Brown

Thanks for the tip. Hi, everyone. My name is Michael Brown. My name placard says George Osborne, who’s a colleague. He got tied up in another panel, so I’m here. George and I work extremely closely together, but he has a much nicer accent because he’s from the U.K. I’m doing my best here. You’re doing very well, I might say, very well. For me, this is a fun panel because it feels like a very collaborative and cooperative opportunity to grow the pie, and the companies that are on either of our side are extraordinary companies with extraordinary humans, and it’s fun to just work with them in some of these areas. If I were going to kind of explain why we’re here in this particular panel to my kids who are 9-11, I would sort of say, look, are there countries out there in the world where when you get to a stoplight, red means go?

I don’t think so. I think mostly red means stop and green means go. I mean, if I’m wrong, I apologize. I’m not an expert. But, you know, having sort of shared understanding in countries, rich and poor, advanced and still developing, around how things work, I think grows the pie because it allows builders to build in a way that everyone can kind of know that what they’re building is going to be both secure and is going to be accessible and hopefully enjoyable or useful to people anywhere in the world. And I think each of the companies up here is contributing something great to that. You know, I’ve joined OpenAI relatively recently, but like MCP to me is something like I just knew it’s like that’s really important.

And like, well, Anthropic introduced it. Hopefully, Anthropic would agree with this, that now it’s just like the thing, right? And I think that’s terrific that it’s the thing. You know, Owen also mentioned in commerce, I don’t know if these standards compete or if it’s cooperative, but at OpenAI, we have a commerce protocol as well for the same thing, because there’s a world where these agents are going to be out shopping for us, which is kind of fun, right? So, you know, if the agent knows that you’re planning on taking a family vacation and it knows that you want to visit Goa and the agent can go actually secure your travel flights and your hotel, these commerce protocols can do that.

So agents of different companies, potentially in different countries, can all partner and work well together because they understand how they’re supposed to be looking for shared information and how that information should be shared. There’s kind of a shared understanding there. And so I think all of us are working to build these protocols to grow the pie, to create more democratization, more commerce, more benefit for everyone by having these common protocols in place.

Wifredo Fernandez

Thank you, Sihao. Great to be with you all here, and thank you to the government for having us. What an exciting week, frenetic and kinetic and chaotic, as I was saying earlier. So it’s just an honor to be here and to feel the energy and all the innovation and to meet a bunch of different builders across India. So Wifredo Fernandez, folks call me Weefy for short. It’s a nickname I got in the 90s before wireless Internet was a thing, so my name became relevant later. But, yeah, this is certainly a topic that brings us all together, which is wonderful. You know, XAI is only two and a half years old. So we’re all in this together.

The foundational work done by these peer companies has enabled us to accelerate our development. We’re better because of those, and we’re better because we can all build on top of those. And these standards and protocols that folks have built and that we sort of lay out and sort of agree to as an industry and as governments really make sure that not just us four compete, right? This enables a ton of innovation. So, you know, on the X side, and, you know, XAI and X sort of operate in tandem, it’s been really neat to see the AI community sort of build and test and discuss and debate in public.

So, like, when Moltbook was taking off, I think you likely found out about it on X. And so it’s just neat to see the ecosystem sort of converge in that discussion space. And just in thinking about this panel and thinking about Moltbook in particular, it’s like, well, do we regulate social media platforms that are agent driven? It just brings like all these really novel questions about how we regulate. But I think at the end of the day, we all agree that these open standards that are creating sort of this, call it a layer, call it a new ecosystem, call it a parallel Internet, is just really crucial for our development of the Internet writ large.

And so, yeah, excited about the panel and the discussion here today.

Sihao Huang

Thank you so much. Your name is formalized in the 802.11 protocol, which is what allows my phone to connect to the Internet in D.C. and here in India. So it’s extremely relevant. I’m going to use that. That’s awesome. So I think we’ve heard a little bit from our companies who are engaging a lot of dynamic activity, pushing out agent protocols of all kinds. And I think there’s a lot of industry excitement over agents right now. One of the big announcements that we’re here to make, which Director Kratsios also made earlier on the main stage, is the Agent Standards Initiative, and that is something that is led out of CAISI in NIST. So I’ll turn to Austin to introduce that.

Austin Marin

Absolutely, and thanks, Sihao, and thank you to OSTP for convening this event and to my fellow panelists. I’ll start with a brief introduction of my organization. So I am the Acting Director for the U.S. Center for AI Standards and Innovation. Our background, we were founded about two years ago as the U.S. AI Safety Institute. In June of last year, Commerce Secretary Howard Lutnick refounded us as the Center for AI Standards and Innovation, which signaled a shift from sort of safety concepts to standards and innovation. And our remit is to be the front door to industry working with the U.S. government. There are, I think, two aspects of our organization that bear note: first, that we’re located within the Department of Commerce.

We are commerce-focused. We are industry-focused. We work with all of the companies on this panel. Some of them we have formal research or pre-deployment evaluation agreements with so that we can work with them on their models and the research questions they’re tackling. We also do take seriously our role trying to serve as a front door to the U.S. government for industry. We want to make sure that when industry is trying to navigate government that they’re speaking to the right people, that the people in government they’re speaking to have advisors who understand frontier AI and agentic AI, and also that the industry isn’t being overwhelmed by duplicative requests from different aspects of government.

You don’t want 10 different agencies asking the same company basically the same thing and creating unnecessary work, and so we try to act in sort of a coordinating role to make sure that industry is being heard and they’re navigating U.S. government. The other aspect of our organization that bears note is we’re located within NIST, the National Institute of Standards and Technology, and NIST has an over-century-long track record of not regulating but helping industry, through consensus, develop voluntary standards and best practices. Acting Director of NIST, Craig Burkhart, he likes to talk about taillights, brake lights on the back of a car. I’m sure you all see them in India. It’s the same color red as it is in the U.S.

That’s because it was a NIST standard of what exactly color red is going to be on the taillights. But another important aspect of that anecdote is it wasn’t government that said this is the color red that you all must use. It was industry came together, and with the help of NIST experts through a convening, they agreed on what the color should be. And so now when we look at what the future brings and where NIST can bring its industry-driven, consensus-based voluntary standards work into the new AI world, we’re looking to AI agent standards. So as Sihao said, we announced this week an AI agent standards initiative, which is looking at all facets of AI and AI agents.

There’s a couple aspects of it that have already been announced that we’re working on, and I’ll tick through those relatively quickly. The first is we have a request for information out in the field. It closes in March and we encourage you to engage with us and provide comments on AI agent security. AI agents obviously bring a whole host of new security challenges and we’d love to hear from you and your organizations about what challenges you are facing. Learning and identifying those challenges is a first step. Once we identify those challenges we can then take the next step of seeing where NIST’s approach of voluntary standards and best practices documents can help address and mitigate those challenges.

Another aspect, our colleagues at NIST, the Information Technology Laboratory or ITL, they have a draft out for comment on AI agent identity and authorization. Again, encourage you to engage and interact with them. A third initiative that we recently announced is we’re going to hold sector-specific listening sessions, hopefully in April, in the sectors of education, healthcare, and finance, where we’re going to convene various members of industry and say to them, look, there’s this great technology out there called AI agents, have you heard of it, why aren’t you adopting it? What challenges are you facing? And we may not be able to solve those challenges, but maybe we can. And so one example I give, and I don’t know that it’s going to be something we find out, but for instance, in the education and healthcare sector, there’s business concerns and existing regulatory concerns about PII, personally identifiable information.

And perhaps what we’ll learn through these listening sessions is that hospitals or schools aren’t deploying AI because they can’t reliably evaluate how AI agents are handling the PII. And so that’s something that CAISI, my organization, could develop metrology, benchmarks, evaluations, best practices, documents that could give confidence to those types of institutions that the agents are performing as desired. And maybe that’s a step that we could take through voluntary consensus-driven best practices and standards that unlocks adoption. So we’re very focused on that. We’re looking forward to learning what those challenges are. I don’t know if the challenge I mentioned is actually a challenge facing industry. And that’s part of NIST’s approach, which is, we in D.C. only see a small slice of what’s going on in industry.

We only have a tiny window into the world, and so it comes from a place of humility. We don’t know all the challenges people are facing. The companies that are on this panel are doing an incredible job coming up with protocols for some of the challenges that they’re facing. We talked about agent-to-agent for how agents communicate. We talked about MCP for how agents navigate databases. We talked about UCP and OpenAI’s commerce protocol for engaging in e-commerce. And I’m sure through these conversations we’re going to identify other areas where open-source protocols, standards, and best practices could help unlock adoption and implementation. We’re really excited to work with both you and all the institutions and companies on stage to identify those opportunities and see how we can leverage NIST’s convening authority to help.

Sihao Huang

Thank you so much for that, Austin. To reemphasize, this standards initiative is really about making sure that the products we build, and the things built on top of them, can connect with each other, such that if there’s a builder in India or a builder in Kenya building on top of our AI products, American companies can use them as well. American companies can buy from them as well. And similarly, if you want to switch to a different model, nothing is locked in. I think this really ties back to a perspective that we as the U.S. government, and in particular the Trump administration, have about AI and AI products. We think back a lot on the history of the Internet and what that enabled for the world, but also what that enabled for America.

I think there was a perspective in the U.S. from a previous administration that technology had to be strictly locked down, and we think that’s a mistake. We want to share the best AI technologies with the rest of the world, and that’s also a leading message that our delegation has here at the India AI Summit. And when we think back at the success of the Internet, what enabled that? There were actually a number of companies and countries that tried to create their own closed versions of the Internet, centralized, tied to particular nations and their own telecom networks, and they saw a little bit of success. A lot of them were state-subsidized, but none of them really scaled to the global level of the World Wide Web.

And the World Wide Web became so successful precisely because of the protocols that the U.S. government had supported. The U.S. government made a very intentional effort to make sure that the Internet was a decentralized system, and created protocols like TCP/IP and HTTPS, the Internet protocol suite, which was funded by the U.S. government back then to enable independent development of these protocols so the rest of the world could build on them. And what you had is really this win-win situation, where the entire world now benefits from access to the Internet and the ability to build applications and companies on top of it. That has driven so much prosperity for countries around the world, but also made Silicon Valley one of the wealthiest places in human history.

And it is because of this open commerce. And that’s what we really want to create with a world of AI in the future as well. Just to add a bit onto what Austin said about the agent security piece: why is agent security so important to us? It’s precisely because of adoption. Security drives adoption. If you look back again at the history of the Internet, the development of the Secure Sockets Layer, SSL, and then eventually HTTPS, was what enabled e-commerce. So again, I think it’s a lot about the efforts we’re going to be working on with industry together, to make sure that there is this standards ecosystem, that there are these interoperable interfaces that everyone can build on and trust, to create the AI economy that we’re all looking forward to.

So I’ll stop ranting, and I’ll turn to the companies here. I’ll ask you all: how do you see the future of AI standards and agent development? And how can AI agent standards reflect the same principles that enabled the open Internet, including interoperability and security?

Michael Sellitto

I feel like I need to somehow fit an automobile analogy in here, since there seems to be a theme. Maybe I’ll use my favorite one. Right now, if you go down to the car dealership to buy a car, those cars are going to have a bunch of metrics, independently determined, that you can use to understand the characteristics of that vehicle. It will tell you what the fuel economy is, how far you can drive on a gallon or liter of gas, how it performs in various types of crash tests. These are all metrics that are done in a standardized way, oftentimes by third parties, so you can have trust and confidence in them, and you can know what kind of car you want to buy.

Maybe I’m a single person and I like to drive fast, so I’m just worried about head-on collisions, because I’m going to be driving as fast as the car can possibly go and that’s the biggest danger for me. Or maybe I have a family and I’m worried about what happens if we get hit from the side with kids in the back seats. A piece that this standardization can help us get to is giving customers and governments and the public that same kind of confidence in knowing what they’re purchasing. I think another real benefit, and it’s really aligned with some things that Michael Kratsios, the OSTP director, talked about today and also in an op-ed he had in the Financial Times, is around exporting the American AI stack.

There are a lot of concerns today about sovereignty, about having control over systems and your data and so on. And a way that I think you can both use the best technology in the world, which sometimes comes from American companies, and also have confidence that there’s resilience in the system, is really having things be built to open standards. That gives you the ability to decide to make changes. If today Anthropic is producing the best technology and tomorrow it’s X or it’s OpenAI or someone else, you can change. Or maybe an open-source model gets good enough at the use case that you want, and you want to switch over from a proprietary model to an open-source model.

So I think that’s what this can enable; that’s the opportunity we have ahead of us. And I think the vision of the AI security standards work that CAISI is going to be doing is this: if you’re going to entrust these systems with access to your personal data or your financial data, or the ability to do things in the real world on behalf of your enterprise or what have you, you need some sense that there’s security, that there’s authentication, that there’s an ability to come back and check with the user before making certain significant decisions or taking certain significant actions. And you can test and evaluate and report that information in a way that is intelligible to the customer, so they know what they’re buying, they know when to trust, and they know when not to trust.

What’s up there?

Owen Lauder

Yeah, well said, and I endorse a lot of what Mike mentioned there, and Austin and Sihao as well. I do think there’s a lot you can learn from the history of standards in various industries that we can apply to AI. Sihao mentioned some of the early Internet standards. I’m just about old enough to remember people in the early 90s talking about how they would never, ever put credit card information on the Internet; that would be absolutely insane. And it sort of was, when you had information being shared in plain text in a totally unencrypted way. Then you have the secure layer that Sihao mentioned, HTTPS, and it has completely unlocked the modern Internet economy as we know it.

The history of electrical standards as well. This was actually something that drove the adoption of electrical products in the late 19th and early 20th century. You had a scientific approach to standardizing units of measurement like ohms and volts and amperes, which allowed power supplies to connect their energy to the grid. It also meant you could invent things like fuses, which could be set to a certain amperage; if you had an electrical current above that, the fuse would shut itself off. So I think we need to continue learning from history, and there are a few principles we should take forward as we do that. I think open standards, as we’ve been discussing, are the right way to go.

You need technically robust standards that are really informed by an understanding of the technology and how it works, and we should be looking to prioritize interoperability as well. Maybe a final thought for this piece is also learning from standards that are not done well. There are many industries that have not quite gotten this right. A lot of us have traveled here from around the world having to bring adapters with us because our electrical products won’t plug into the wall. It’s really, really annoying. It’s also actually a massive hindrance on commerce, because it means if you’re producing a computer or another electronic appliance, you have to have a different plug for every single country you’re developing your product for.

So there are things to avoid as well that we need to be mindful of.

Michael Brown

automobile industry or something, two humongous but separate industries, and how they’re going to have to come together to set up norms for how agentic systems work and how data is shared. I think government can probably play an important role in bringing industries together to establish those dialogues. But the industries certainly still need to be front and center in establishing what works for them, because they are the practitioners and the experts on what their customers and colleagues need. So I think we’re all going to have to navigate that world together and figure out what the role is for the research labs, how government supports, and then how industry plays a leadership role in both governing and building for itself industry-specific standards for the future of AI.

Wifredo Fernandez

Yeah, I think this conversation has been a bit of a history lesson. I appreciate that. Thank you. And it made me think about how I used to get music when I was a kid, which some of the panelists may appreciate. There were these music catalogs that would come to your house. You’d select however many compact discs, CDs, you wanted. You’d put cash or a check in an envelope and send it away. And some weeks later, magically, some CDs would appear on your doorstep. So when I think about instructing an agent to go acquire music on my behalf, I’d much rather have that. I don’t know how we used to put so much trust in a system without standards, or a process that could not be audited.

So I think the guiding principles that developed the Internet still apply. We want privacy-preserving technology. We want technology that allows us to audit. We want technology that considers authenticity. We want technology that considers means of consent. And to Michael’s point, I think ultimately agents serve the user and agents serve organizations. If we view it through that lens, it should guide us right. They don’t serve us as the model developers.

Sihao Huang

Great. Thank you all so much for that. That was a bit of a nerdy discussion on standards, a bit of a history lesson. I love that. But we’re also here right now at the India AI Impact Summit, talking to a country of builders, talking to the developing world, which includes some of the most dynamic AI markets anywhere. So I think it would also be amazing to hear from the panelists here, including Austin: how are you all engaging with the rest of the world on these standards, how are your organizations engaging with other countries on AI, and what is one of the most exciting applications you’ve seen develop on top of your standards and products?

Austin Marin

I guess I’ll lead off. One of the main forums through which CAISI engages internationally is the International Network for Advanced AI Measurement, Evaluation, and Science. It’s a bit of a mouthful of a name, but it’s ten countries that have established AI security institutes or, as we have, a Center for AI Standards and Innovation, and we meet a couple of times a year. We also engage in informal technical and scientific exchanges, and we share best practices in measurement and evaluation science. In December we met in San Diego, on the sidelines of the NeurIPS conference, and sat down and discussed open questions in measurement science and the challenges we’re facing. We published a blog post, I think about a week ago, that summarizes some of the areas of consensus and the open questions.

And the work we’re doing there, I think, is very important, because when we talk about the evaluation of AI systems, particular capabilities, particular security vulnerabilities, and so on, it’s important for us to have consensus on the methodologies.

Related Resources: Knowledge base sources related to the discussion topics (15)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Protocols are essential for agents to work together smoothly and enable interoperability across products and businesses.”

The knowledge base explicitly states that protocols are crucial for builders to interact with products and achieve interoperability, confirming the report’s emphasis on their importance [S1] and [S3].

Confirmed (high)

“Google DeepMind’s Agent‑to‑Agent (A2A) protocol acts as a “digitised clipboard” conveying an agent’s identity, capabilities, intent, data requirements and security constraints, removing the need for custom code.”

S15 notes that Google has launched an agents-to-agents protocol, which aligns with the report’s description of A2A as a standardized way to convey agent metadata and eliminate bespoke integrations [S15].

Confirmed (high)

“Google DeepMind’s Universal Commerce Protocol (UCP) standardises how agents interact with websites and payment systems.”

The knowledge base mentions a “universal” protocol for agent communication with web services, supporting the report’s claim that a Universal Commerce Protocol standardises website and payment interactions [S15].

Confirmed (medium)

“The panel discussion will cover the business case for agentic AI and the public‑policy implications of its use.”

S19 describes a panel that will discuss the business case for agentic AI followed by a second panel on public-policy implications, confirming the report’s outline of the session’s agenda [S19].

External Sources (71)
S1
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S2
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Great. Thank you all so much for that. So that was a bit of a nerdy discussion on standards, a bit of a history lesson. …
S3
https://app.faicon.ai/ai-impact-summit-2026/us-ai-standards_-shaping-the-future-of-trustworthy-artificial-intelligence — And it is because of this open commerce. And that’s what we really want to create with a world of AI in the future as we…
S5
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Wifredo Fernandez, Director for Global Government Affairs at XAI
S6
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Thank you, Sihal. Great to be with you all here, and thank you to the government for having us. What an exciting week, f…
S7
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S8
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with …
S9
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S10
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S11
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Thanks for the tip. Hi, everyone. My name is Michael Brown. My name placard says George Osborne, who’s a colleague. He g…
S12
https://app.faicon.ai/ai-impact-summit-2026/us-ai-standards_-shaping-the-future-of-trustworthy-artificial-intelligence — And like, well, Anthropic introduced it. Hopefully, Anthropic would agree with this, that now it’s just like the thing, …
S13
From Innovation to Impact_ Bringing AI to the Public — Yeah, this was going to be my question to him. But it’s always fun to answer. I think that models are just not what we k…
S14
Driving Enterprise Impact Through Scalable AI Adoption — But with AI, we’re able to create programs much faster. The models are infinitely scalable. They’re always awake 24 -7. …
S15
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — All right. Just speaking for myself, I can’t wait to use agents. I feel like it’s a lot of developer communities that ha…
S16
Challenging the status quo of AI security — Debora Comparin addressed the critical issue of identity management for AI agents, highlighting several open problems th…
S17
Digital standards — Besides developing standards for AI, SDOs can also rely on AI techniques tofacilitate and improve some of their activiti…
S18
WS #187 Bridging Internet AI Governance From Theory to Practice — Vint Cerf: Well, thank you so much for this opportunity. I want to remind everyone that I am not an expert on artificial…
S19
Agentic AI in Focus Opportunities Risks and Governance — Evidence: CAISI launched an AI agent standards initiative, issued an RFI on AI agent security, and announced sector-speci…
S20
How Trust and Safety Drive Innovation and Sustainable Growth — Summary: All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S21
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — Boutife Adisa: Yeah, so, Bullet Defe Adisa for the record. In my opinion, I think in addition to what Lily has already s…
S22
Open Forum #33 Building an International AI Cooperation Ecosystem — – **Multi-stakeholder Approach and Inclusive Development**: Drawing parallels to internet governance, speakers stressed …
S23
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — This discussion focused on governing AI development and ensuring safe, beneficial deployment while maintaining innovatio…
S24
Open Forum #30 High Level Review of AI Governance Including the Discussion — Abhishek Singh: That will really empower people globally. What do we expect from the Global Digital Compact to make this…
S25
Main Session on Artificial Intelligence | IGF 2023 — In today’s world, Artificial Intelligence (AI) plays a pivotal role in transforming industries and daily life. By emulat…
S26
Setting the Rules_ Global AI Standards for Growth and Governance — And it’s going to have to be a collective effort. Yeah. Okay. Key areas of convergence included the importance of proce…
S27
Setting the Rules_ Global AI Standards for Growth and Governance — Key areas of convergence included the importance of process-oriented standards that can adapt to evolving capabilities, …
S28
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Context is highlighted as a crucial element for effective engagement in standards development. Australia’s experts have …
S29
Global AI Policy Framework: International Cooperation and Historical Perspectives — But I think this is a global forum and I would like to talk about this classical debate which started in 1960s and 70s a…
S30
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Audience: Thank you so much, Dr. Ali Mahmood. I’m from Pakistan. I’m heading a provincial government entity that is invol…
S31
Harmonizing High-Tech: The role of AI standards as an implementation tool — Sergio Mujica:Very well, thank you, Bilel, and good afternoon to all of you. I think the most important point here is th…
S32
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Michael Sellitto, Owen Lauder, Austin Marin. Industry-led, consensus-based approach to standards development is prefer…
S33
Open Forum #26 High-level review of AI governance from Inter-governmental P — 1. Governments: Responsible for balancing innovation and security, and creating appropriate regulatory frameworks. Yoic…
S34
Challenging the status quo of AI security — Connection between observed security challenges and need for standards Given the new security challenges that emerge wh…
S35
WS #283 AI Agents: Ensuring Responsible Deployment — ### Government and Regulatory Approaches ### Introduction and Context ### Technical Infrastructure and Standards Anne…
S36
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — Patel outlines a three‑layer security approach: protect agents from malicious inputs, protect the world from rogue agent…
S37
Not Losing Sight of Soft Power — Thailand’s strategy is built on a 13-pillar approach to soft power, encompassing diverse areas such as food, film, touri…
S38
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Summary: Speakers agree that industry should lead standards development with government playing a convening and facilitat…
S39
Fast-tracking a digital economy future in developing countries (UNCTAD) — They have also released laws supporting small business start-ups and providing incentives for import and local market sh…
S40
Artificial intelligence — Despite their technical nature – or rather because of that – standards have an important role to play in bridging techno…
S41
How Trust and Safety Drive Innovation and Sustainable Growth — Summary: All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S42
Agentic AI in Focus Opportunities Risks and Governance — So understanding that risk picture is going to be critically important. And last, I think that really pivots into one of…
S43
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Disagreement level: Very low level of disagreement with high implications for successful AI implementation. The consensus…
S44
From Technical Safety to Societal Impact Rethinking AI Governanc — Disagreement level: Moderate disagreement with significant implications – while speakers generally agree on the need to m…
S45
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Very low level of disagreement with high implications for successful AI implementation. The consensus suggests strong al…
S46
AI in Mobility_ Accelerating the Next Era of Intelligent Transport — Disagreement level: Moderate disagreement level with significant implications for policy and investment decisions. The di…
S47
From Technical Safety to Societal Impact Rethinking AI Governanc — Moderate disagreement with significant implications – while speakers generally agree on the need to move beyond purely t…
S48
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Summary: All speakers strongly advocate for open, interoperable standards that enable cross-vendor compatibility and prev…
S49
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — The conversation centered around emerging AI agent protocols that enable different AI systems to work together seamlessl…
S50
WS #283 AI Agents: Ensuring Responsible Deployment — Carter describes specific technical developments including Google’s agent-to-agent protocol for vendor-agnostic interact…
S51
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Would you like me to go and take care of you and get some more toothpaste for you? You mentioned standards, which I thin…
S52
WS #187 Bridging Internet AI Governance From Theory to Practice — – Yik Chan Ching- Audience Explains A2A (agent-to-agent) and MCP (model context protocol) standards and uses the analog…
S53
Agentic AI in Focus Opportunities Risks and Governance — Absolutely. Thank you, Jason. Thank you to ITI, and thank you all for coming today. As Jason said, my name is Austin May…
S54
How Trust and Safety Drive Innovation and Sustainable Growth — Summary: All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S55
WS #204 Closing Digital Divides by Universal Access Acceptance — All speakers emphasize that trust and safety are prerequisites for meaningful internet adoption, particularly for vulner…
S56
Closing remarks — Standards are needed to help build trust, and trust isn’t a property of machines but how we handle uncertainty together
S57
Open Forum #33 Building an International AI Cooperation Ecosystem — ### Regional Perspectives **Dai Wei** from the Internet Society of China highlighted the organization’s cooperation wit…
S58
Artificial intelligence (AI) – UN Security Council — During the 9821st meeting of the Artificial Intelligence Security Council, a key discussion centered around whether existi…
S59
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — This discussion focused on governing AI development and ensuring safe, beneficial deployment while maintaining innovatio…
S60
Open Forum #30 High Level Review of AI Governance Including the Discussion — Abhishek Singh: That will really empower people globally. What do we expect from the Global Digital Compact to make this…
S61
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — The speaker explains the origins of the global AI capacity building network, crediting Saudi Arabia and Kenya with initi…
S62
Digital standards — ‘Standards can underpin regulatory frameworks and […] provide appropriate guardrails for responsible, safe and trustwo…
S63
US AI Safety Institute staff left out of Paris summit delegation — Vice President JD Vancewill lead the US delegationto a major AI summit in Paris next week, but technical staff from the …
S64
AI demand drives record power sector deals — The US power industry isexperiencinga surge in mergers and acquisitions (M&A) as record demand for electricity, particul…
S65
Data first in the AI era — – Someone from the Department of Commerce
S66
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Hello. Yeah. Thank you very much, Professor Karandika. This is a perfect question for me to talk about. This is why I’m …
S67
Building Population-Scale Digital Public Infrastructure for AI — Very interesting. And I’ll just try to kind of paint the picture by giving a context. Now, think about it. We’re talking…
S68
Leaders TalkX: ICT Applications Unlocking the Full Potential of Digital – Part II — Mr. Timothy Grosser from EY discussed leveraging technology for sustainable development goals, focusing on digital publi…
S69
AI for food systems — Dejan Jakovljevic argues that standardization and reference architectures serve as foundational elements that enable res…
S70
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — The discussion featured Giordano Albertazzi, CEO of Vertiv, addressing the critical physical infrastructure requirements…
S71
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — This discussion focused on AI assurance and the challenges of ensuring AI systems, particularly emerging agentic AI, are…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Michael Sellitto
4 arguments · 183 words per minute · 1123 words · 366 seconds
Argument 1
MCP as universal open standard for connecting AI models to enterprise data and tools (Michael Sellitto)
EXPLANATION
MCP is presented as a universal, open protocol that lets AI systems link directly to existing enterprise knowledge bases and governmental data sources. It simplifies integration by allowing models to understand and access data with a simple description.
EVIDENCE
Sellitto explains that MCP connects AI models to enterprise knowledge bases and government data sources, such as the Indian government’s digitized datasets, by providing a rough description of the data source and tools, enabling intuitive access similar to how a human would retrieve payroll or revenue data [28-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources describe MCP as a universal open standard that connects AI models to enterprise knowledge bases and government data, and note that prior to MCP, integrations were bespoke and vendor-locked [S1][S2].
MAJOR DISCUSSION POINT
Universal data connectivity
AGREED WITH
Owen Lauder, Michael Brown, Sihao Huang, Wifredo Fernandez
Argument 2
Skills protocol enables one‑time teaching of tasks and portable agent capabilities across models (Michael Sellitto)
EXPLANATION
The Skills protocol allows developers to encode task instructions once and attach them to agents, making the skills portable across different AI providers. This promotes reusability and reduces the need for repeated training.
EVIDENCE
Sellitto describes Skills as a set of instructions that teach agents how to perform specific tasks, likening it to onboarding a new employee who learns a process once and can then repeat it, and notes that these skills can be transferred when switching between AI companies [39-48].
MAJOR DISCUSSION POINT
Task portability
AGREED WITH
Owen Lauder, Michael Brown, Sihao Huang, Wifredo Fernandez
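The portability idea behind Skills, encoding task instructions once and attaching the same skill to agents from different vendors, can be sketched as follows. The skill format and the Agent class are hypothetical illustrations, not the actual Skills protocol:

```python
# Illustrative sketch only: a "skill" is a vendor-neutral bundle of task
# instructions that any agent can load, so the same definition works
# unchanged across providers.

EXPENSE_SKILL = {
    "name": "file-expense-report",
    "instructions": [
        "Collect receipts for the reporting period",
        "Categorize each receipt by expense type",
        "Submit totals to the finance system",
    ],
}

class Agent:
    """Minimal agent that can load vendor-neutral skills."""

    def __init__(self, vendor: str):
        self.vendor = vendor
        self.skills: dict[str, dict] = {}

    def load_skill(self, skill: dict) -> None:
        self.skills[skill["name"]] = skill

    def run(self, skill_name: str) -> list[str]:
        # Returns the steps the agent would execute for this skill.
        return self.skills[skill_name]["instructions"]

# The same skill definition is attached to agents from two vendors:
a = Agent("vendor-a")
b = Agent("vendor-b")
a.load_skill(EXPENSE_SKILL)
b.load_skill(EXPENSE_SKILL)
assert a.run("file-expense-report") == b.run("file-expense-report")
```

The design point is that the skill lives outside any one vendor's agent, which is what makes switching providers cheap: the encoded process moves with the user, not the vendor.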
Argument 3
Open standards prevent vendor lock‑in and allow seamless switching between AI models (Michael Sellitto)
EXPLANATION
By adopting open, interoperable protocols like MCP, organizations avoid being tied to a single vendor’s proprietary system. This flexibility encourages competition and innovation across the AI market.
EVIDENCE
Sellitto contrasts the pre-MCP era, where bespoke integrations locked users into a single model, with the current open-source protocol that supports all major AI companies, thereby preventing vendor lock-in [37-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sources highlight that open standards avoid vendor lock-in and enable switching between providers, contrasting pre-MCP bespoke solutions [S1][S2].
MAJOR DISCUSSION POINT
Avoiding vendor lock‑in
AGREED WITH
Owen Lauder, Michael Brown, Sihao Huang, Wifredo Fernandez
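The discovery mechanism described for MCP, where tools are advertised as plain descriptions so any client can find and call them without a bespoke integration, can be sketched as follows. This is a simplified shape, not the real protocol: the tool name, schema contents, and dispatch logic are invented for illustration, and the actual MCP specification defines the wire format.

```python
# Minimal sketch of description-based tool discovery in the spirit of MCP.
# Simplified and illustrative; consult the MCP spec for the real protocol.

TOOLS = [
    {
        "name": "search_records",
        "description": "Search the enterprise records system by keyword",
        "inputSchema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    }
]

def list_tools() -> list[dict]:
    """Discovery step: a client learns what the server offers."""
    return TOOLS

def call_tool(name: str, arguments: dict) -> dict:
    """Invocation step: dispatch a tool call by name.
    A real server would validate arguments against the advertised schema."""
    if name == "search_records":
        return {"results": ["record matching " + repr(arguments["query"])]}
    raise ValueError("unknown tool: " + name)

# Any compliant client, from any vendor, uses the same two calls,
# which is what removes the need for bespoke per-model integrations.
available = [tool["name"] for tool in list_tools()]
result = call_tool("search_records", {"query": "invoices"})
```

The key design point is that the server describes itself: the client needs no prior knowledge of the data source beyond the protocol, which is what lets one integration serve every model.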
Argument 4
Security metrics analogous to automotive crash‑test standards provide confidence in AI agents (Michael Sellitto)
EXPLANATION
Sellitto draws an analogy between standardized automotive safety metrics and the need for comparable AI security metrics. Standardized testing would give users confidence in the safety and performance of AI agents.
EVIDENCE
He compares buying a car, where standardized metrics such as fuel economy and crash-test results inform consumer confidence, to the need for similar standardized security metrics for AI agents, emphasizing trust and safety [212-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion references automotive crash-test analogies for AI security metrics, emphasizing the need for standardized, independently verified testing <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable ">[S1][S2]</a>.
MAJOR DISCUSSION POINT
Standardized security metrics
AGREED WITH
Sihao Huang, Austin Marin, Owen Lauder, Wifredo Fernandez
Austin Marin
6 arguments · 191 words per minute · 1263 words · 395 seconds
Argument 1
Request for information on AI agent security challenges to shape future standards (Austin Marin)
EXPLANATION
The Center issues an RFI to collect industry input on security challenges faced by AI agents. The feedback will guide the development of future voluntary standards.
EVIDENCE
Marin announces a request for information that closes in March, urging stakeholders to comment on AI agent security challenges as a first step toward developing standards [155-161].
MAJOR DISCUSSION POINT
RFI on agent security
AGREED WITH
Michael Sellitto, Sihao Huang, Owen Lauder, Wifredo Fernandez
Argument 2
Draft guidance on AI agent identity and authorization to ensure trustworthy interactions (Austin Marin)
EXPLANATION
NIST’s Information Technology Laboratory has released a draft document on agent identity and authorization, inviting public comment to shape trustworthy agent interactions.
EVIDENCE
Marin points to a draft for comment on AI agent identity and authorization prepared by NIST’s ITL, encouraging engagement from the community [163-165].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A draft on AI agent identity and authorization is cited, outlining open problems such as defining agent identity and verification mechanisms [S16].
MAJOR DISCUSSION POINT
Identity and authorization draft
AGREED WITH
Michael Sellitto, Sihao Huang, Owen Lauder, Wifredo Fernandez
Argument 3
Center for AI Standards and Innovation serves as industry “front door,” coordinating across agencies to avoid duplication (Austin Marin)
EXPLANATION
The Center acts as the primary liaison between industry and U.S. government, streamlining communication and preventing multiple agencies from issuing redundant requests to companies.
EVIDENCE
Marin describes the Center’s role as the “front door” for industry, coordinating with various agencies to avoid duplicate requests and ensuring companies speak to the right government advisors [138-145].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Center for AI Standards and Innovation is introduced as the industry “front door,” coordinating agency interactions and preventing redundant requests [S2].
MAJOR DISCUSSION POINT
Industry front‑door coordination
AGREED WITH
Sihao Huang, Michael Brown, Michael Sellitto
Argument 4
NIST‑led Agent Standards Initiative aims to develop voluntary, consensus‑based AI standards (Austin Marin)
EXPLANATION
NIST, through its long‑standing voluntary standards process, is launching an AI Agent Standards Initiative to create consensus‑driven standards for AI agents.
EVIDENCE
Marin explains that NIST’s historic role in voluntary standards is being extended to AI agents via the newly announced AI Agent Standards Initiative, emphasizing industry-driven consensus [146-154].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
NIST’s century-long voluntary standards process is described as the basis for the new AI Agent Standards Initiative [S2].
MAJOR DISCUSSION POINT
Voluntary consensus standards
AGREED WITH
Sihao Huang, Michael Brown, Michael Sellitto
Argument 5
Government seeks to provide guidance while letting industry lead technical standard development (Austin Marin)
EXPLANATION
The government’s approach is to offer high‑level guidance and coordination while allowing industry experts to define the technical details of standards.
EVIDENCE
Marin notes that the Center works to ensure industry receives appropriate guidance without being overwhelmed, positioning the government as a coordinator rather than a regulator [144-145].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The approach of government providing high-level guidance while industry leads technical standard development is noted as the preferred model [S2].
MAJOR DISCUSSION POINT
Guidance vs. industry leadership
Argument 6
Participation in the International Network for Advanced AI Measurement, Evaluation, and Science (INAEMS) to share best practices globally (Austin Marin)
EXPLANATION
Through INAEMS, the Center collaborates with ten other countries’ AI security institutes to exchange measurement and evaluation methodologies, fostering global consensus.
EVIDENCE
Marin outlines INAEMS as a ten-country network that meets regularly, conducts informal technical exchanges, and recently published a blog summarizing consensus and open questions in AI measurement [276-280].
MAJOR DISCUSSION POINT
Global measurement collaboration
AGREED WITH
Sihao Huang, Owen Lauder, Wifredo Fernandez, Michael Brown
Wifredo Fernandez
4 arguments · 156 words per minute · 603 words · 231 seconds
Argument 1
XAI builds on peer standards to create a “parallel Internet” for agent development (Wifredo Fernandez)
EXPLANATION
XAI leverages existing industry standards to construct a new, layered ecosystem—described as a parallel Internet—that supports rapid agent development and deployment.
EVIDENCE
Fernandez states that open standards are creating a new layer, a “parallel Internet,” which is crucial for broader internet development and for XAI’s own progress [121-123].
MAJOR DISCUSSION POINT
Parallel Internet concept
AGREED WITH
Michael Sellitto, Owen Lauder, Michael Brown, Sihao Huang
Argument 2
Privacy‑preserving, auditable, consent‑driven design is critical for agent‑driven services (Wifredo Fernandez)
EXPLANATION
He emphasizes that agents must incorporate privacy safeguards, auditability, authenticity, and consent mechanisms to be trustworthy for users and organizations.
EVIDENCE
Fernandez lists guiding principles (privacy-preserving technology, auditability, authenticity, and consent) as essential for trustworthy agent services [264-268].
MAJOR DISCUSSION POINT
Trustworthy design principles
AGREED WITH
Michael Sellitto, Sihao Huang, Austin Marin, Owen Lauder
Argument 3
Music‑catalog delivery analogy underscores the need for trust, auditability, and standards in agent services (Wifredo Fernandez)
EXPLANATION
He draws a parallel between old music‑catalog ordering systems and modern agent services, highlighting how standards and auditability build user trust.
EVIDENCE
Fernandez recounts how, in the past, ordering music CDs required trust in a system without standards, suggesting that modern agents need similar trust, auditability, and standards [257-263].
MAJOR DISCUSSION POINT
Trust through standards
Argument 4
XAI’s engagement with the global X platform illustrates cross‑border collaboration on standards (Wifredo Fernandez)
EXPLANATION
XAI collaborates with the broader X ecosystem, demonstrating how standards can be co‑developed across platforms and geographies.
EVIDENCE
Fernandez mentions that XAI and X operate in tandem, noting the visibility of initiatives like “Malt Book” on X and the collaborative discussion space it provides [118-119].
MAJOR DISCUSSION POINT
Cross‑platform collaboration
AGREED WITH
Sihao Huang, Austin Marin, Owen Lauder, Michael Brown
Owen Lauder
4 arguments · 212 words per minute · 892 words · 251 seconds
Argument 1
Agent‑to‑Agent standard defines shared identity, capabilities, and security requirements for inter‑agent communication (Owen Lauder)
EXPLANATION
The standard provides a structured “digital clipboard” that conveys an agent’s ID, capabilities, goals, data handling, and security needs, enabling seamless agent‑to‑agent interaction.
EVIDENCE
Lauder explains that the agent-to-agent protocol includes fields for ID, capabilities, intent, data handling, and security requirements, forming a digitized clipboard for agents to share information [63-73].
MAJOR DISCUSSION POINT
Shared agent identity
AGREED WITH
Michael Sellitto, Michael Brown, Sihao Huang, Wifredo Fernandez
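The "digitized clipboard" described above is, in effect, a small structured document that one agent hands to another. The sketch below models it with the fields named in the panel's description (ID, capabilities, intent, data handling, security requirements); the actual A2A schema may use different names and structure, and the policy-check helper is invented for illustration.

```python
# Illustrative model of an agent-to-agent "clipboard" exchange.
# Field names follow the panel's description; the real A2A schema may differ.
import json

agent_card = {
    "id": "travel-assistant-01",
    "capabilities": ["flight-search", "hotel-booking"],
    "intent": "book a round trip for the user",
    "data_handling": {"stores_pii": False, "retention_days": 0},
    "security_requirements": {"transport": "https", "auth": "oauth2"},
}

def meets_requirements(card: dict, policy: dict) -> bool:
    """Check a peer's advertised card against a local security policy
    before agreeing to interact (hypothetical check for illustration)."""
    security = card["security_requirements"]
    handling = card["data_handling"]
    return security["transport"] == policy["transport"] and not handling["stores_pii"]

wire_format = json.dumps(agent_card)   # what one agent shares
peer = json.loads(wire_format)         # what the receiving agent reads back
ok = meets_requirements(peer, {"transport": "https"})
```

Serializing the card to a common format is what lets agents built by different companies read each other's identity, capabilities, and security posture before cooperating.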
Argument 2
Universal Commerce Protocol (UCP) lets agents interact with websites and payment systems for business transactions (Owen Lauder)
EXPLANATION
UCP standardizes how agents communicate with e‑commerce sites and payment gateways, facilitating automated business transactions across platforms and retailers.
EVIDENCE
Lauder describes Google’s Universal Commerce Protocol as enabling agents to talk to websites and payment systems, citing collaborations with retailers like Walmart, Target, Flipkart, and Infosys [74-77].
MAJOR DISCUSSION POINT
Agent‑driven commerce
AGREED WITH
Michael Sellitto, Michael Brown, Sihao Huang, Wifredo Fernandez
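The commerce idea above, one standard checkout interface instead of per-retailer integration code, can be sketched briefly. Everything here is invented for illustration: UCP's real message formats, field names, and payment handling are defined by the protocol itself, not by this sketch.

```python
# Illustrative flow for a standardized agent-commerce interface.
# Function and field names are invented; they are not the real UCP API.

def checkout(merchant: dict, cart: list[dict], payment_token: str) -> dict:
    """One standard checkout call accepted by any participating merchant."""
    total = sum(item["price"] * item["qty"] for item in cart)
    return {
        "merchant": merchant["name"],
        "total": total,
        "paid_with": payment_token,
        "status": "confirmed",
    }

merchants = [{"name": "retailer-a"}, {"name": "retailer-b"}]
cart = [{"sku": "flight-seat", "price": 120.0, "qty": 2}]

# The same agent code works against every merchant that speaks the
# protocol, so adding a retailer requires no new integration work.
orders = [checkout(m, cart, payment_token="tok_demo") for m in merchants]
```

The competitive point the panel makes follows directly: once checkout is standardized, an agent can compare and transact across retailers with identical code.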
Argument 3
Technically robust, interoperable standards are essential for reliable AI ecosystems (Owen Lauder)
EXPLANATION
He stresses that standards must be technically sound and interoperable to ensure reliable, secure, and testable AI systems across the industry.
EVIDENCE
Lauder notes the need for technical, interoperable, and testing standards to guarantee reliability and security of AI systems [60-62].
MAJOR DISCUSSION POINT
Robust technical standards
AGREED WITH
Michael Sellitto, Sihao Huang, Austin Marin, Wifredo Fernandez
Argument 4
Electrical standards (volts, amps, fuses) demonstrate how common units enable safe, interoperable products (Owen Lauder)
EXPLANATION
He draws a parallel to historical electrical standards that allowed universal power connections and safety devices, illustrating the power of shared measurement units.
EVIDENCE
Lauder references the standardization of electrical units (ohms, volts, amperes) and safety devices like fuses, which enabled safe, interoperable electrical products [238-242].
MAJOR DISCUSSION POINT
Historical electrical standards
Michael Brown
4 arguments · 163 words per minute · 631 words · 232 seconds
Argument 1
OpenAI commerce protocol enables agents to book travel and handle e‑commerce actions (Michael Brown)
EXPLANATION
OpenAI’s protocol allows agents to autonomously arrange flights, hotels, and other travel logistics, showcasing practical e‑commerce capabilities of AI agents.
EVIDENCE
Brown explains that OpenAI’s commerce protocol lets an agent know a user wants a family vacation to Goa and can automatically secure flights and hotels, illustrating cross-company agent commerce [98-101].
MAJOR DISCUSSION POINT
AI‑driven travel booking
Argument 2
Shared global understanding (e.g., traffic‑light analogy) fosters secure, accessible agent development worldwide (Michael Brown)
EXPLANATION
He uses the universal traffic‑light system as an analogy for the need for shared conventions that enable secure and accessible AI development across nations.
EVIDENCE
Brown compares traffic-light meanings (red = stop, green = go) across countries to illustrate how shared understanding underpins secure, accessible agent development [86-92].
MAJOR DISCUSSION POINT
Universal conventions
AGREED WITH
Sihao Huang, Austin Marin, Owen Lauder, Wifredo Fernandez
Argument 3
Open standards drive democratization, competition, and broader AI “pie” growth (Michael Brown)
EXPLANATION
He argues that open standards expand the overall AI market, fostering competition and allowing more participants to benefit from AI advancements.
EVIDENCE
Brown describes the panel as a collaborative opportunity to “grow the pie,” emphasizing that open standards democratize AI and create broader benefits for all [85-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists stress that open standards prevent lock-in, promote competition and expand the AI market globally <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable ">[S1][S2]</a>.
MAJOR DISCUSSION POINT
Democratizing AI
AGREED WITH
Michael Sellitto, Owen Lauder, Sihao Huang, Wifredo Fernandez
Argument 4
Collaborative government‑industry partnership highlighted as key to advancing standards (Michael Brown)
EXPLANATION
Brown underscores the importance of close cooperation between government bodies and industry to develop and adopt AI standards effectively.
EVIDENCE
He thanks the government and notes the collaborative spirit of the panel, highlighting that such partnerships are essential for advancing standards [85-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel thanked the government and highlighted collaborative efforts between agencies and industry as essential for advancing standards [S2].
MAJOR DISCUSSION POINT
Gov‑industry collaboration
AGREED WITH
Austin Marin, Sihao Huang, Michael Sellitto
Sihao Huang
6 arguments · 196 words per minute · 1363 words · 415 seconds
Argument 1
Historical U.S. internet standards (TCP/IP, HTTPS) illustrate how open protocols generate global prosperity (Sihao Huang)
EXPLANATION
He points to the foundational U.S. protocols that enabled a decentralized, open internet, which in turn spurred worldwide economic growth and innovation.
EVIDENCE
Huang recounts how U.S. government-backed protocols like TCP/IP and HTTPS created a decentralized internet that drove global prosperity and Silicon Valley’s wealth [198-201].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists drew parallels to TCP/IP and HTTPS as open protocols that generated worldwide prosperity and innovation <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable ">[S1][S2]</a>.
MAJOR DISCUSSION POINT
Open internet foundations
AGREED WITH
Michael Sellitto, Owen Lauder, Michael Brown, Wifredo Fernandez
Argument 2
SSL/HTTPS analogy shows how security standards enable e‑commerce adoption for AI agents (Sihao Huang)
EXPLANATION
He draws a parallel between the historical adoption of SSL/HTTPS and the need for similar security standards to foster AI‑driven e‑commerce.
EVIDENCE
Huang explains that SSL and later HTTPS were pivotal in enabling e-commerce, suggesting analogous security standards are needed for AI agents [206-207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The analogy to SSL/HTTPS illustrates how security standards enable e-commerce, a point made for AI agents as well <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable ">[S1][S2]</a>.
MAJOR DISCUSSION POINT
Security enabling commerce
AGREED WITH
Michael Sellitto, Austin Marin, Owen Lauder, Wifredo Fernandez
Argument 3
U.S. policy promotes exporting an open AI stack, mirroring the open‑internet model (Sihao Huang)
EXPLANATION
He notes that U.S. policy aims to share open AI technologies globally, echoing the historic strategy of exporting an open internet architecture.
EVIDENCE
Huang references Michael Kratsios's op-ed about exporting the American AI stack, emphasizing openness to allow other nations to adopt and switch technologies [219-224].
MAJOR DISCUSSION POINT
Open AI export policy
AGREED WITH
Austin Marin, Michael Brown, Michael Sellitto
Argument 4
Early internet protocols (TCP/IP, HTTPS) enabled global connectivity and economic growth (Sihao Huang)
EXPLANATION
Reiterates that early open internet standards were critical for worldwide connectivity and the subsequent economic boom.
EVIDENCE
He again highlights the role of TCP/IP and HTTPS in creating a globally connected, prosperous internet ecosystem [198-201].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists highlighted how early open internet protocols like TCP/IP and HTTPS created global connectivity and spurred economic growth <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable ">[S1][S2]</a>.
MAJOR DISCUSSION POINT
Impact of early internet standards
Argument 5
Emphasis on building standards that work for builders in India, Kenya, and other emerging markets (Sihao Huang)
EXPLANATION
He stresses the need for standards that enable developers worldwide—including those in emerging economies—to build, switch, and purchase AI services without barriers.
EVIDENCE
Huang mentions that standards should allow builders in India and Kenya to develop on top of U.S. AI products and enable cross-border buying and switching [186-190].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
References to India’s Digital Public Infrastructure and the need for standards that support builders in emerging economies are provided <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable ">[S1]</a>.
MAJOR DISCUSSION POINT
Inclusive global standards
AGREED WITH
Austin Marin, Owen Lauder, Wifredo Fernandez, Michael Brown
Argument 6
Goal to create a globally interoperable AI ecosystem that benefits both developed and developing economies (Sihao Huang)
EXPLANATION
He envisions an AI ecosystem where interoperable standards allow seamless collaboration and commerce across nations, mirroring the open internet’s success.
EVIDENCE
Huang ties the vision of a globally interoperable AI ecosystem to the historical success of open internet protocols, emphasizing benefits for both developed and developing economies [187-194].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The vision of a globally interoperable AI ecosystem benefiting all economies is linked to the success of open internet standards <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable ">[S1][S2]</a>.
MAJOR DISCUSSION POINT
Global AI ecosystem vision
Agreements
Agreement Points
Open standards are essential for interoperability, avoiding vendor lock‑in and fostering a global AI ecosystem.
Speakers: Michael Sellitto, Owen Lauder, Michael Brown, Sihao Huang, Wifredo Fernandez
MCP as universal open standard for connecting AI models to enterprise data and tools (Michael Sellitto)
Open standards prevent vendor lock‑in and allow seamless switching between AI models (Michael Sellitto)
Skills protocol enables one‑time teaching of tasks and portable agent capabilities across models (Michael Sellitto)
Agent‑to‑Agent standard defines shared identity, capabilities, and security requirements for inter‑agent communication (Owen Lauder)
Universal Commerce Protocol (UCP) lets agents interact with websites and payment systems for business transactions (Owen Lauder)
Technically robust, interoperable standards are essential for reliable AI ecosystems (Owen Lauder)
Open standards drive democratization, competition, and broader AI “pie” growth (Michael Brown)
Historical U.S. internet standards (TCP/IP, HTTPS) illustrate how open protocols generate global prosperity (Sihao Huang)
SSL/HTTPS analogy shows how security standards enable e‑commerce adoption for AI agents (Sihao Huang)
XAI builds on peer standards to create a “parallel Internet” for agent development (Wifredo Fernandez)
All speakers highlighted that open, interoperable standards, whether MCP, agent-to-agent, UCP, or historical internet protocols, prevent vendor lock-in, enable cross-border collaboration, and expand the AI market, echoing the vision of a globally interoperable AI ecosystem [28-38][39-48][60-62][63-73][74-77][85-92][198-201][206-207][121-123].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with the consensus at IGF 2023 that process-oriented, open standards are needed to ensure interoperability and prevent vendor lock-in, as highlighted in the “Setting the Rules” reports and the call for international cooperation among standards bodies [S26][S27][S31][S40].
Security and trust standards are critical to enable safe adoption and commercial use of AI agents.
Speakers: Michael Sellitto, Sihao Huang, Austin Marin, Owen Lauder, Wifredo Fernandez
Security metrics analogous to automotive crash‑test standards provide confidence in AI agents (Michael Sellitto)
SSL/HTTPS analogy shows how security standards enable e‑commerce adoption for AI agents (Sihao Huang)
Request for information on AI agent security challenges to shape future standards (Austin Marin)
Draft guidance on AI agent identity and authorization to ensure trustworthy interactions (Austin Marin)
Technically robust, interoperable standards are essential for reliable AI ecosystems (Owen Lauder)
Privacy‑preserving, auditable, consent‑driven design is critical for agent‑driven services (Wifredo Fernandez)
Speakers converged on the need for security-focused standards, ranging from metric analogies, SSL/HTTPS precedents, formal RFIs, and identity drafts to privacy-preserving design, to build trust and enable AI agents to handle sensitive data and commerce safely [212-218][206-207][155-161][163-165][60-62][264-268].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for dedicated security standards for AI agents is emphasized in recent IGF discussions, which outline a three-layer security model and stress that standards reduce risk and build trust for commercial deployment [S34][S36][S41][S42].
Government should act as a coordinator and facilitator, providing high‑level guidance while allowing industry to lead technical standard development.
Speakers: Austin Marin, Sihao Huang, Michael Brown, Michael Sellitto
Center for AI Standards and Innovation serves as industry “front door,” coordinating across agencies to avoid duplication (Austin Marin)
NIST‑led Agent Standards Initiative aims to develop voluntary, consensus‑based AI standards (Austin Marin)
U.S. policy promotes exporting an open AI stack, mirroring the open‑internet model (Sihao Huang)
Collaborative government‑industry partnership highlighted as key to advancing standards (Michael Brown)
Anthropic partnership with the Trump administration underscores productive government‑industry collaboration (Michael Sellitto)
All four speakers emphasized a government role that coordinates, issues RFIs, and supports voluntary consensus standards, while leaving detailed technical work to industry, reflecting a collaborative, non-regulatory approach [138-145][146-154][219-224][85-92][27].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple U.S. and international forums advocate an industry-led, consensus-based approach with governments playing a convening role rather than imposing regulations, reflecting the stance of the U.S. AI Standards initiative and IGF panels [S32][S38][S33][S28][S30].
Global collaboration and inclusion of emerging economies are essential for developing AI standards that serve all builders.
Speakers: Sihao Huang, Austin Marin, Owen Lauder, Wifredo Fernandez, Michael Brown
Emphasis on building standards that work for builders in India, Kenya, and other emerging markets (Sihao Huang)
Participation in the International Network for Advanced AI Measurement, Evaluation, and Science (INAEMS) to share best practices globally (Austin Marin)
Partnering with companies worldwide such as Walmart, Target, Flipkart, and Infosys demonstrates global engagement (Owen Lauder)
XAI’s engagement with the global X platform illustrates cross‑border collaboration on standards (Wifredo Fernandez)
Shared global understanding (e.g., traffic‑light analogy) fosters secure, accessible agent development worldwide (Michael Brown)
Speakers agreed that standards must be co-created with global partners, especially builders in developing regions, and cited concrete mechanisms like INAEMS, multinational corporate partnerships, and analogies that underscore universal conventions [186-190][276-280][77][118-119][86-92].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of multistakeholder, globally inclusive processes is documented in IGF 2023 sessions and the Global AI Policy Framework, which stress participation of Global South actors to avoid a US-centric model [S26][S29][S31][S28].
Similar Viewpoints
Both emphasize that open, technically sound standards are the foundation for interoperability and avoiding vendor lock‑in across AI agents and data integrations [28-38][39-48][60-62][63-73].
Speakers: Michael Sellitto, Owen Lauder
MCP as universal open standard for connecting AI models to enterprise data and tools (Michael Sellitto)
Open standards prevent vendor lock‑in and allow seamless switching between AI models (Michael Sellitto)
Technically robust, interoperable standards are essential for reliable AI ecosystems (Owen Lauder)
Agent‑to‑Agent standard defines shared identity, capabilities, and security requirements for inter‑agent communication (Owen Lauder)
Both view U.S. government action as pivotal in fostering open, globally beneficial AI standards, drawing on historic internet protocol successes [138-145][219-224][198-201].
Speakers: Austin Marin, Sihao Huang
Center for AI Standards and Innovation serves as industry “front door,” coordinating across agencies (Austin Marin)
U.S. policy promotes exporting an open AI stack, mirroring the open‑internet model (Sihao Huang)
Historical U.S. internet standards (TCP/IP, HTTPS) illustrate how open protocols generate global prosperity (Sihao Huang)
Both use familiar analogies from other domains to argue that common standards are necessary for secure, interoperable AI services worldwide [86-92][206-207].
Speakers: Michael Brown, Sihao Huang
Shared global understanding (traffic‑light analogy) fosters secure, accessible agent development worldwide (Michael Brown)
SSL/HTTPS analogy shows how security standards enable e‑commerce adoption for AI agents (Sihao Huang)
Both stress that measurable security and trust mechanisms (auditability, privacy, standardized metrics) are required for agents to be adopted safely [264-268][212-218].
Speakers: Wifredo Fernandez, Michael Sellitto
Privacy‑preserving, auditable, consent‑driven design is critical for agent‑driven services (Wifredo Fernandez)
Security metrics analogous to automotive crash‑test standards provide confidence in AI agents (Michael Sellitto)
Unexpected Consensus
Alignment on the need for auditability and trust in AI agents despite differing primary framings (privacy vs. safety metrics).
Speakers: Wifredo Fernandez, Michael Sellitto, Sihao Huang
Privacy‑preserving, auditable, consent‑driven design is critical for agent‑driven services (Wifredo Fernandez)
Security metrics analogous to automotive crash‑test standards provide confidence in AI agents (Michael Sellitto)
SSL/HTTPS analogy shows how security standards enable e‑commerce adoption for AI agents (Sihao Huang)
While Fernandez focuses on privacy and auditability, Sellitto and Huang discuss safety metrics and security protocols; nevertheless, all three converge on the principle that transparent, measurable trust mechanisms are indispensable for agent deployment, a convergence not explicitly anticipated at the start of the panel [264-268][212-218][206-207].
Overall Assessment

The panel displayed a strong, multi‑speaker consensus that open, interoperable standards—paired with robust security and trust frameworks—are the cornerstone for a globally inclusive AI ecosystem. Government is seen as a facilitator rather than a regulator, and international collaboration, especially with emerging markets, is deemed essential.

High consensus: the convergence across industry and government representatives on open standards, security, and global collaboration suggests a solid foundation for coordinated policy and technical work, likely accelerating the development and adoption of AI agent standards worldwide.

Differences
Different Viewpoints
Role of government in driving AI standards versus industry‑led open standards
Speakers: Austin Marin, Michael Sellitto
Center for AI Standards and Innovation serves as industry “front door,” coordinating across agencies and using NIST-led voluntary consensus processes (Austin Marin) [138-145][146-154]
MCP as universal open standard for connecting AI models to enterprise data and tools, emphasizing an industry-driven open-source protocol without need for government solicitation (Michael Sellitto) [28-38]
Austin proposes a coordinated government role that issues RFIs and drafts to shape voluntary standards, while Michael stresses that market-driven open protocols like MCP already provide the needed interoperability, suggesting less direct government involvement [155-161][28-38].
POLICY CONTEXT (KNOWLEDGE BASE)
This tension is captured in the U.S. AI Standards discussions where industry-led consensus is preferred and governments are urged to facilitate rather than dictate, highlighting divergent views on regulatory involvement [S32][S38][S30][S33].
Global multilateral collaboration versus a U.S.–centric export model for AI standards
Speakers: Austin Marin, Sihao Huang
Austin Marin: Participation in the International Network for Advanced AI Measurement, Evaluation, and Science (INAEMS) to share best practices globally [276-280]
Sihao Huang: U.S. policy promotes exporting an open AI stack, echoing historic U.S. internet protocols as a model for worldwide adoption [219-224][190-200]
Austin emphasizes a ten-country network and multilateral exchanges to develop standards, whereas Sihao frames the U.S. approach as exporting an open AI stack rooted in historic U.S. internet standards, implying a more U.S.-led direction [276-280][219-224][190-200].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on a US-centric export approach versus broader multilateral cooperation have been raised in the Global AI Policy Framework and IGF panels, underscoring historical North-South dynamics in standards setting [S29][S26][S31].
Approach to security standards for AI agents
Speakers: Austin Marin, Michael Sellitto
Austin Marin: Draft guidance on AI agent identity and authorization; request for information to collect industry security challenges [163-165][155-161]
Michael Sellitto: Security metrics analogous to automotive crash-test standards provide confidence in AI agents [212-218]
Austin focuses on identity/authorization drafts and sector-specific RFIs to build security standards, while Michael advocates concrete, metric-based safety testing modeled on automotive standards, reflecting different pathways to achieve trustworthy agents [163-165][155-161][212-218].
POLICY CONTEXT (KNOWLEDGE BASE)
Divergent proposals for AI agent security-ranging from process-oriented uncertainty quantification to layered technical safeguards-are reflected in recent IGF workshops and security-focused briefs, indicating ongoing disagreement on the optimal framework [S34][S36][S44][S42].
Unexpected Differences
Misstatement of traffic‑light conventions indicating differing levels of technical precision
Speakers: Michael Brown, Sihao Huang, Owen Lauder
Michael Brown: Says “red means go” and then corrects himself, showing uncertainty about basic conventions [86-88]
Sihao Huang: References precise historical standards (TCP/IP, HTTPS) that underpin global interoperability [198-201]
Brown’s casual, inaccurate traffic-light analogy contrasts with other speakers’ emphasis on rigorously defined technical standards, revealing an unexpected gap in shared understanding of baseline conventions [86-88][198-201].
Overall Assessment

The panel shows strong overall consensus on the need for open, interoperable AI standards, security, and global inclusion. Disagreements are limited to the preferred locus of leadership (government‑coordinated versus industry‑driven), the framing of international collaboration versus a U.S.–centric export model, and the methodological path to security assurance (policy‑driven drafts versus metric‑based testing).

Low to moderate disagreement; the differences are largely about implementation pathways rather than fundamental goals, suggesting that progress on AI standards can continue with coordinated effort, though alignment on governance mechanisms will be required.

Partial Agreements
All agree that interoperable standards are essential to avoid vendor lock‑in and enable global builders, but they propose different technical mechanisms (MCP, agent‑to‑agent clipboard, commerce protocols) to achieve that goal [28-38][63-73][98-101][186-190].
Speakers: Michael Sellitto, Owen Lauder, Michael Brown, Sihao Huang
Michael Sellitto: MCP as universal open standard for data connectivity [28-38]
Owen Lauder: Agent-to-Agent standard defines shared identity, capabilities, and security requirements [63-73]
Michael Brown: OpenAI commerce protocol enables agents to book travel and handle e-commerce actions [98-101]
Sihao Huang: Standards should let builders in India, Kenya, etc., switch models and buy across borders [186-190]
They share the aim of inclusive global standards, yet differ on the method: Sihao stresses universal protocols, Austin proposes targeted listening sessions, and Owen draws lessons from historical technical standards [186-190][165-170][238-242].
Speakers: Sihao Huang, Austin Marin, Owen Lauder
Sihao Huang: Goal of standards that work for builders in emerging markets like India and Kenya [186-190]
Austin Marin: Sector-specific listening sessions to surface challenges in education, health, finance [165-170]
Owen Lauder: Historical electrical standards illustrate how common units enable safe, interoperable products [238-242]
All concur that security and shared conventions are vital, but propose different routes: policy‑driven RFIs, metric‑based testing, or simple universal analogies to foster trust [202-207][155-161][163-165][212-218][86-92].
Speakers: Sihao Huang, Austin Marin, Michael Sellitto, Michael Brown
Sihao Huang: Security is a prerequisite for adoption (SSL/HTTPS analogy) [202-207]
Austin Marin: RFI and draft identity/authorization to shape security standards [155-161][163-165]
Michael Sellitto: Automotive-style safety metrics to build confidence [212-218]
Michael Brown: Universal traffic-light analogy for shared conventions [86-92]
Takeaways
Key takeaways
Open, interoperable AI agent protocols (e.g., MCP, Skills, Agent‑to‑Agent, Universal Commerce Protocol) are essential to prevent vendor lock‑in and enable a global AI ecosystem.
Standardization efforts are being led by industry with coordinated support from the U.S. government (Center for AI Standards and Innovation, NIST) to create voluntary, consensus‑based specifications.
Security, authentication, privacy, and auditability are critical for the adoption of AI agents, analogous to SSL/HTTPS for the web and automotive safety standards.
Historical precedents (TCP/IP, HTTPS, electrical standards, automotive crash‑test metrics) illustrate how open standards drive widespread innovation and economic growth.
International collaboration (e.g., INAEMS, engagement with builders in India, Kenya, and other emerging markets) is necessary to ensure standards are globally applicable and inclusive.
Sector‑specific challenges (PII handling in education/healthcare, evaluation metrics, metrology) must be identified through listening sessions and stakeholder input.
Resolutions and action items
Release of a Request for Information (RFI) on AI agent security challenges, closing in March; stakeholders are invited to submit comments.
Draft guidance on AI agent identity and authorization is available for public comment via NIST’s Information Technology Laboratory.
Plan to hold sector‑specific listening sessions in April (education, healthcare, finance) to gather adoption barriers and security concerns.
Commitment from the Center for AI Standards and Innovation to act as a “front door” for industry, coordinating agency interactions and avoiding duplicate requests.
Encouragement for companies and international partners to engage with the ongoing standards initiatives and contribute to consensus documents.
Unresolved issues
Specific technical and policy solutions for AI agent security, especially around authentication, consent, and handling of personally identifiable information (PII).
How to create universally accepted metrics and benchmarks for agent performance and safety comparable to automotive crash‑test standards.
The exact mechanisms for ensuring interoperability across competing commercial protocols (e.g., Anthropic’s MCP vs. OpenAI’s commerce protocol).
Adoption hurdles in emerging markets and developing economies, including infrastructure constraints and regulatory environments.
Potential regulatory approaches for agent‑driven social media platforms and other novel use cases that were raised but not detailed.
Suggested compromises
Adopt open, voluntary standards while allowing industry to lead technical development, with government providing coordination and guidance rather than prescriptive regulation.
Balance security requirements with interoperability by developing shared identity/authorization frameworks that do not lock developers into a single vendor.
Leverage historical lessons by implementing standards that are robust yet flexible, avoiding overly rigid specifications that could hinder innovation.
Thought Provoking Comments
MCP is a universal open standard for connecting AI systems to the tools and data sources that people already use… you just need to give the model a rough description of what’s in the data source and what kind of tools or how can it access it. The model will intuitively know how it can use those data sources the same way that somebody in your enterprise would know which system to query for payroll or revenue data.
Introduces a concrete, vendor‑agnostic protocol that solves the interoperability bottleneck and explains how it lowers the cost of switching models, a core challenge for builders worldwide.
Set the technical baseline for the discussion, prompting other panelists to reference their own protocols (agent‑to‑agent, commerce) and leading the conversation toward the need for open, interchangeable standards.
Speaker: Michael Sellitto (Anthropic)
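To make the “rough description” idea concrete, here is a minimal illustrative sketch in the style of MCP tool definitions (name, description, JSON-Schema input). The tool itself (`query_payroll`) and its fields are hypothetical examples chosen to match the payroll scenario in the quote, not a real endpoint or an official MCP client.

```python
# Hypothetical tool descriptor in the shape MCP uses: a name, a plain-language
# description, and a JSON Schema for inputs. The model reads descriptions like
# this to decide which system to query, as the quote above describes.
payroll_tool = {
    "name": "query_payroll",
    "description": "Look up payroll records for an employee by ID and pay period.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "employee_id": {"type": "string", "description": "Internal employee ID"},
            "pay_period": {"type": "string", "description": "e.g. '2026-01'"},
        },
        "required": ["employee_id"],
    },
}

def describe_tools(tools):
    """Render the rough descriptions a model would be shown when choosing
    among available data sources and tools."""
    return [f"{t['name']}: {t['description']}" for t in tools]

print(describe_tools([payroll_tool]))
```

Because the descriptor is just structured data plus natural language, any model from any vendor can consume it, which is what removes the need for bespoke per-integration code.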
Our agent‑to‑agent standard is basically a digitized clipboard of information that an agent will share with another agent: ID, capabilities, intent, data requirements, security requirements. This is fundamental to greasing the wheels of the agentic economy.
Provides a clear, tangible description of how agents can communicate without bespoke code, highlighting a key technical hurdle and proposing a solution that can be widely adopted.
Expanded the scope from data‑access standards to inter‑agent communication, prompting further discussion on commerce protocols and reinforcing the theme of building a layered, interoperable ecosystem.
Speaker: Owen Lauder (Google DeepMind)
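The “digitized clipboard” Lauder describes can be sketched as a small structured card that one agent shares with another before delegating work. The field names below (agent_id, capabilities, intent, data_requirements, security_requirements) paraphrase the quote; real agent-to-agent cards differ in detail, and the values are hypothetical.

```python
# Illustrative "clipboard" an agent might share with a peer agent:
# identity, capabilities, intent, data requirements, security requirements.
agent_card = {
    "agent_id": "travel-booker-01",  # hypothetical identity
    "capabilities": ["search_flights", "book_hotel"],
    "intent": "Book a two-night trip for a conference",
    "data_requirements": ["traveler_name", "payment_token"],
    "security_requirements": {"auth": "oauth2", "transport": "https"},
}

def can_delegate(card, needed_capability, required_transport="https"):
    """Check whether a peer agent advertises a capability and meets a
    minimum security requirement before handing it a task."""
    return (needed_capability in card["capabilities"]
            and card["security_requirements"].get("transport") == required_transport)

print(can_delegate(agent_card, "book_hotel"))
```

Because both sides read the same card format, agents built by different companies can negotiate a task without bespoke integration code, which is the “greasing the wheels” point in the quote.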
Imagine a country where red means go at a stoplight… shared understanding—like traffic‑light rules—allows builders everywhere to know that what they’re building will be secure, accessible, and useful. That shared understanding grows the pie for everyone.
Uses a simple, relatable analogy to illustrate why common standards are essential for global interoperability and democratization of AI services.
Reframed the technical debate in terms of everyday experience, making the need for standards feel universal and prompting others (e.g., Sihao, Austin) to link it to historical internet standards.
Speaker: Michael Brown (OpenAI)
These open standards create a new layer—a parallel Internet—that is crucial for the development of the Internet writ large. They also raise novel regulatory questions, like whether we should regulate social‑media platforms that are agent‑driven.
Broadens the conversation from pure technical interoperability to governance, highlighting emerging policy challenges that accompany agent‑driven ecosystems.
Shifted the tone toward regulatory considerations, leading Sihao and Austin to discuss the role of government and NIST in shaping standards and security frameworks.
Speaker: Wifredo Fernandez (XAI)
The success of the World Wide Web came from open protocols like TCP/IP and HTTPS that the U.S. government helped fund. Those standards made the Internet decentralized and globally interoperable, and we need the same approach for AI.
Draws a powerful historical parallel, arguing that open, government‑backed standards were essential to past technological revolutions and should be replicated for AI.
Anchored the discussion in a policy narrative, reinforcing the call for voluntary, consensus‑based standards and influencing Austin’s description of the new Agent Standards Initiative.
Speaker: Sihao Huang (White House OSTP)
We have a Request for Information on AI‑agent security, a draft on agent identity and authorization, and we’ll hold sector‑specific listening sessions (education, healthcare, finance) to surface real‑world challenges and develop voluntary standards.
Moves the conversation from abstract ideas to concrete actions the U.S. government is taking, showing how industry input will shape forthcoming standards.
Provided a roadmap for collaboration, prompting panelists to reference their own protocols as contributions and encouraging participants to engage with the upcoming RFI.
Speaker: Austin Marin (Center for AI Standards and Innovation, Dept. of Commerce)
Just like car metrics (fuel economy, crash‑test ratings) give consumers confidence, AI standards should give us confidence in security, authentication, and the ability to audit decisions—especially when agents act on our behalf.
Uses an automobile analogy to explain why measurable, standardized security metrics are critical for trust and sovereignty in AI deployments.
Deepened the discussion on security, reinforcing Sihao’s point about SSL/HTTPS and prompting further emphasis on the need for evaluative benchmarks and metrology.
Speaker: Michael Sellitto (Anthropic)
We’ve all traveled with adapters because electrical plugs aren’t standardized globally—this is a massive hindrance to commerce. We must avoid repeating such failures in AI standards.
Provides a cautionary historical example of what happens when standards are fragmented, underscoring the urgency of global alignment.
Served as a warning that reinforced the urgency expressed by other speakers, adding a practical perspective that highlighted the economic cost of non‑standardization.
Speaker: Owen Lauder (Google DeepMind)
Overall Assessment

The discussion pivoted around the central theme of open, interoperable standards for AI agents. Early technical explanations (MCP, agent‑to‑agent, commerce protocols) established a shared vocabulary, while analogies from traffic lights to automobile metrics translated complex ideas into everyday terms, making the need for standards feel universal. Historical references to the Internet’s open protocols and failed standards (electrical plugs) framed the conversation within a broader policy and economic context, prompting the government representatives to announce concrete initiatives (RFI, sector listening sessions). Together, these comments moved the panel from abstract enthusiasm to a concrete, collaborative roadmap, aligning industry innovation with governmental facilitation and highlighting both technical and regulatory dimensions of the emerging AI ecosystem.

Follow-up Questions
How do you see the future of AI standards and agent development? And how can AI agent standards reflect the same principles that enable the open internet, including interoperability and security?
Guides the overall direction of standard‑setting efforts and ensures that new AI standards promote openness, interoperability, and security, mirroring the success of early Internet protocols.
Speaker: Sihao Huang
Should social media platforms that are agent‑driven be regulated, and if so, how?
Raises a policy gap concerning the governance of emerging agent‑driven social media, requiring legal and regulatory analysis to protect users and maintain trust.
Speaker: Wifredo Fernandez
What are the key security challenges faced by AI agents that should be addressed in forthcoming standards?
A request for information (RFI) aimed at gathering industry input on AI agent security risks, essential for shaping effective, risk‑based standards.
Speaker: Austin Marin
How should AI agent identity and authorization be defined and standardized?
Calls for comments on a draft NIST document, highlighting the need for clear identity and access‑control frameworks to ensure trustworthy agent interactions.
Speaker: Austin Marin
What specific challenges do sectors such as education, healthcare, and finance encounter when adopting AI agents, especially regarding PII handling?
Sector‑specific listening sessions are proposed to uncover practical barriers and data‑privacy concerns, informing targeted standards and best‑practice guidance.
Speaker: Austin Marin
What technical standards are needed for testing AI systems to ensure reliability and security?
Identifies a gap in robust testing methodologies, which is critical for validating agent behavior before widespread deployment.
Speaker: Owen Lauder
What metrics and evaluation methods should be developed for AI agents (e.g., performance, safety, security) analogous to automotive standards?
Suggests creating standardized measurement frameworks to build confidence among users, regulators, and purchasers of AI‑driven solutions.
Speaker: Michael Sellitto
What role should government play in convening industries to establish norms for agentic systems and data sharing?
Highlights the need for coordinated governance structures that balance industry expertise with public oversight to shape effective standards.
Speaker: Michael Brown
What lessons can be learned from failed or problematic standards (e.g., incompatible electrical plugs) to avoid similar pitfalls in AI agent standards?
Emphasizes the importance of designing universally compatible standards to prevent market fragmentation and hindered commerce.
Speaker: Owen Lauder
What are the open questions in AI measurement and evaluation science that need consensus and further research?
References ongoing discussions and a recent blog post, indicating unresolved methodological issues that must be addressed to support standard development.
Speaker: Austin Marin

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.