U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence

20 Feb 2026 18:00h - 19:00h


Session at a glance: summary, key points, and speakers overview

Summary

The panel at the India AI Impact Summit brought together U.S. government officials and leaders from four frontier AI companies to discuss the development of open standards for AI agents, emphasizing the need for interoperable interfaces that enable global builders to create applications on top of these systems [5-11][15-22]. Sihao Huang highlighted that American firms are investing $700 billion in AI infrastructure this year and are competing fiercely to make models cheaper and more powerful for developers worldwide [13-14].


Michael Sellitto described Anthropic’s Model Context Protocol (MCP) as a universal open standard that lets AI models connect to existing enterprise and government data sources and tools through simple descriptions, thereby eliminating bespoke integrations and fostering competition [28-38]. He also introduced the SKILLZ protocol, which encodes reusable task instructions that can be transferred across vendors, further enhancing portability and interoperability [46-48].


Owen Lauder explained Google DeepMind’s agent-to-agent standard, which provides a digitized “clipboard” of an agent’s identity, capabilities, goals, data handling, and security requirements to enable seamless communication between agents [63-71]. He added that Google’s Universal Commerce Protocol (UCP) lets agents interact with websites and payment systems, opening new business possibilities [74-76]. Michael Brown illustrated how OpenAI’s commerce protocol could allow an agent to arrange a family vacation by booking flights and hotels autonomously, showcasing the practical benefits of shared standards [99-101].


Wifredo Fernandez noted that open standards accelerate innovation, create a “parallel Internet” for agents, and raise novel regulatory questions about agent-driven platforms [115-122]. Austin Marin outlined the Center for AI Standards and Innovation’s role within the Department of Commerce and NIST, announcing the Agent Standards Initiative, a request for information on agent security, and a draft on agent identity and authorization [132-154][146-152][155-165]. He also described upcoming sector-specific listening sessions to address challenges such as PII handling in education and healthcare, aiming to produce voluntary best-practice documents that build confidence in AI deployments [165-172].


Sihao drew parallels to early Internet standards like TCP/IP and HTTPS, arguing that security-driven standards are essential for widespread adoption of AI agents, just as SSL enabled e-commerce [198-207]. Michael Sellitto used an automobile analogy to show how standardized performance metrics and open standards give users confidence and allow switching between vendors or to open-source models [211-218]. Owen reinforced these lessons by referencing historical electrical standards that enabled global plug compatibility, urging the AI community to adopt technically robust, interoperable standards while avoiding fragmented solutions [239-250][231-250]. Participants agreed that open, consensus-based standards are crucial for a global AI ecosystem and that international collaboration through networks such as the International Network for Advanced AI Measurement, Evaluation, and Science is already underway [276-280][231-250].


The discussion concluded with a shared commitment to develop voluntary, interoperable, and secure AI agent standards that will foster innovation, democratize access, and support worldwide adoption of AI technologies [185-188][209-210].


Key points


Major discussion points


Rapid emergence of AI-agent protocols and their functional benefits – The panel highlighted a growing ecosystem of standards such as the Anthropic Model Context Protocol (MCP), Google DeepMind’s A2A agent-to-agent protocol, OpenAI’s commerce protocol, and XAI’s Macrohard project, which are beginning to serve as de facto industry standards [17-21]. MCP is described as a “universal open standard for connecting AI systems to the tools and data sources that people already use,” enabling agents to discover and use enterprise or government data without bespoke integration [28-38]. Google’s A2A protocol provides a “digitized clipboard” that shares an agent’s identity, capabilities, intent, data handling, and security requirements to facilitate direct agent-to-agent communication [64-71]. The Universal Commerce Protocol (UCP) and OpenAI’s commerce protocol aim to let agents transact with websites and payment systems, opening a new “agentic economy” [74-76][98-101].


U.S. government’s coordinating role through the Center for AI Standards and Innovation (CASI) and NIST – CASI, housed in the Department of Commerce and partnered with NIST, positions itself as the “front door” for industry to engage with the government, avoiding duplicated requests and fostering consensus-based voluntary standards [132-140][146-152]. Recent actions include issuing a Request for Information on AI-agent security [155-161], supporting NIST’s draft on agent identity and authorization [163-165], and planning sector-specific listening sessions (education, healthcare, finance) to surface real-world challenges [165-173].


Security, trust, and evaluation as prerequisites for widespread adoption – Panelists repeatedly linked trustworthy standards to the historic rollout of internet security (SSL/HTTPS) and automotive safety metrics, arguing that standardized security assessments will give users confidence to “trust…when to trust, and when not to trust” AI agents [206-207][211-218][219-227]. Analogies to car safety ratings and fuel-economy metrics illustrate how third-party, consensus-driven benchmarks can enable informed purchasing decisions for AI-enabled services [211-218].


Open, interoperable standards to prevent lock-in and promote global collaboration – The discussion emphasized that open protocols allow builders in India, Kenya, or elsewhere to switch models or providers without re-engineering, mirroring how early internet protocols (TCP/IP, HTTPS) unlocked global innovation and economic growth [188-190][194-199][202-207][224-226]. This openness is presented as a strategic U.S. policy choice contrasting with “closed” national internet initiatives [191-197].


International engagement and future work – Beyond the U.S., the panel noted active participation in the International Network for Advanced AI Measurement, Evaluation, and Science (IN-AIMES) and upcoming sector-specific listening sessions to gather global input, especially from emerging markets like India [165-170][276-280][274-275].


Overall purpose / goal of the discussion


The panel was convened to showcase the nascent ecosystem of AI-agent standards, explain how these protocols unlock interoperability, security, and commerce, and to outline the U.S. government’s role (through OSTP, CASI, and NIST) in coordinating voluntary, consensus-based standards that will enable a globally accessible, lock-in-free AI economy.


Tone of the discussion


The conversation remained collaborative and forward-looking throughout, beginning with enthusiastic introductions and a celebratory tone about industry progress. As technical details emerged, the tone shifted to a more explanatory, “building-the-foundation” mode, using historical analogies (internet, automotive standards) to underscore seriousness. Interspersed moments of light humor (e.g., Michael Brown’s accent joke) kept the atmosphere informal yet constructive. Overall, the tone stayed optimistic, emphasizing partnership between industry and government and a shared commitment to open standards.


Speakers

Sihao Huang – Senior Policy Advisor for AI, Emerging Tech, White House [S1]


Austin Marin – Acting Director, U.S. Center for AI Standards and Innovation, Department of Commerce [S4]


Wifredo Fernandez – Director for Global Government Affairs, XAI [S5]


Owen Lauder – Senior Director and Head of Frontier Policy and Public Affairs, Google DeepMind [S7]


Michael Sellitto – Head of Global Affairs, Anthropic [S9]


Michael Brown – Head of Growth and Operations (International), OpenAI [S11]




Full session report: comprehensive analysis and detailed insights

The panel at the India AI Impact Summit brought together senior U.S. officials and leaders from four frontier AI companies (Anthropic, Google DeepMind, OpenAI and XAI) to examine how open standards can make AI agents interoperable and commercially viable. Sihao Huang opened by introducing himself as the White House senior policy advisor for AI and noting the presence of Austin Marin, director of the Department of Commerce’s Center for AI Standards and Innovation (CASI), together with the company representatives [3-6][7-12]. He reminded the audience that American firms are investing roughly $700 billion in AI infrastructure this year and are competing fiercely to deliver cheaper, more powerful models for developers worldwide [13-15]. He also cited the 802.11 Wi-Fi protocol as a concrete illustration of how government-backed standards enable global interoperability [190-193]. The session’s purpose, he explained, was to explore how standardized interfaces can enable a thriving “agentic economy” [16-22].


Panelists quickly identified a nascent ecosystem of agent-centric protocols that are already shaping the market. The most prominent is Anthropic’s Model Context Protocol (MCP), which many companies are adopting as a de facto industry standard [20-22][28-38]. Google DeepMind presented its agent-to-agent (A2A) protocol, OpenAI described its own commerce protocol, and XAI referenced its secretive Macrohard project [21][63-71][74-76][98-101]. Collectively, these efforts aim to replace bespoke, vendor-locked integrations with open, reusable specifications that developers in any country can leverage [23-24][46-48].


MCP is framed as a universal, open-source contract that lets an AI model discover and use existing data sources and tools simply by receiving a high-level description of the resource [28-36]. In practice, an agent can be told that “payroll data lives in the HR system” or that “revenue figures are stored in HEX”, and it will know how to retrieve the information just as a human employee would [34-36]. By eliminating the need to rewrite connectors for each new vendor, MCP creates a degree of interoperability that encourages competition, reduces lock-in, and enables data portability across vendors [37-38].
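The interoperability claim rests on a simple mechanism: each tool or data source is advertised to the model as a short, machine-readable description rather than wired in through vendor-specific connector code. A rough sketch follows; the field layout mirrors the general shape of MCP tool descriptions, but the tool name and contents here are invented for illustration:

```python
import json

# Illustrative sketch of an MCP-style tool descriptor. A server advertises
# each tool with a name, a human-readable description, and a JSON Schema
# for its inputs; any MCP-aware model can read this listing and decide how
# to call the tool, with no bespoke integration per vendor.
payroll_tool = {
    "name": "get_payroll_record",
    "description": "Look up payroll data for an employee in the HR system.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "employee_id": {
                "type": "string",
                "description": "Internal employee identifier",
            },
        },
        "required": ["employee_id"],
    },
}

# A client fetches the full tool listing and hands it to the model.
tool_listing = {"tools": [payroll_tool]}
print(json.dumps(tool_listing, indent=2))
```

Because the descriptor is plain data rather than connector code, switching vendors means handing the same listing to a different MCP-compatible client, which is the portability the panel emphasizes.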


A complementary initiative is the SKILLZ protocol, which encodes task-specific instructions that can be taught to an agent once and then reused across different providers [46-48]. This mirrors the way a new employee is trained on organisational procedures; once the skill set is captured, any compatible agent can execute the task, and the skill set can be ported if a user switches from Anthropic to another vendor [46-48].
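Because such a skill is just instructions plus a little metadata, it can be captured as a small, vendor-neutral artifact. The sketch below is hypothetical (it is not Anthropic’s actual file format); it only illustrates how a task taught once could be rendered into a prompt for any compatible agent:

```python
# Hypothetical, vendor-neutral representation of a reusable "skill":
# metadata identifying the task plus step-by-step instructions an agent
# follows. Because it is plain data, it could be exported from one
# vendor's agent and imported into another's.
skill = {
    "name": "expense-report",
    "description": "File a monthly expense report the way our org does it.",
    "instructions": [
        "Collect all receipts tagged with the current month.",
        "Convert amounts to USD using the finance team's rate sheet.",
        "Submit the report through the HR system and CC the team lead.",
    ],
}

def render_skill(s: dict) -> str:
    """Render the skill as the prompt text a compatible agent would be given."""
    steps = "\n".join(f"{i}. {step}" for i, step in enumerate(s["instructions"], 1))
    return f"# Skill: {s['name']}\n{s['description']}\n\n{steps}"

print(render_skill(skill))
```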


Google’s A2A protocol tackles the long-standing problem of agents communicating directly with one another. It defines a “digitised clipboard” that carries an agent’s identifier, capabilities, intent, data-access requirements and security constraints, thereby allowing two agents to exchange information without custom code or a shared code base [63-71]. Owen Lauder stressed that this metadata-rich exchange is fundamental to “greasing the wheels of the agentic economy” [72-73].
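Concretely, the “clipboard” can be pictured as a small card of metadata that one agent publishes and another reads before delegating work. The sketch below is simplified and its field names and values are illustrative; the A2A specification defines the actual schema:

```python
# Simplified sketch of an A2A-style agent card: the metadata one agent
# publishes so that another agent knows who it is, what it can do, and
# what security it requires -- before any custom integration is written.
agent_card = {
    "name": "travel-booking-agent",
    "description": "Books flights and hotels on behalf of a user.",
    "capabilities": {"streaming": True},
    "skills": [
        {"id": "book-flight", "description": "Search and reserve flights."},
        {"id": "book-hotel", "description": "Search and reserve hotel rooms."},
    ],
    "security": {"scheme": "oauth2", "userConfirmationRequired": True},
}

def can_handle(card: dict, skill_id: str) -> bool:
    """Check the published card before delegating a task to the agent."""
    return any(s["id"] == skill_id for s in card["skills"])

print(can_handle(agent_card, "book-flight"))  # prints True
```

A calling agent inspects the card first, so capability discovery and security requirements travel with the agent rather than being hard-coded into each pairwise integration.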


The commercial dimension is addressed by Google’s Universal Commerce Protocol (UCP) and OpenAI’s commerce protocol. Both enable agents to interact with websites, payment gateways and other e-commerce services, opening the possibility for agents to autonomously book flights, reserve hotels or purchase goods on behalf of users [74-76][98-101]. Michael Brown illustrated this with a scenario in which an agent arranges a family vacation, highlighting how shared standards allow agents from different companies and jurisdictions to cooperate safely [99-101].


Security and trust were repeatedly identified as prerequisites for mass adoption. Sihao Huang likened the need for AI-agent security standards to the historic introduction of SSL/HTTPS, which unlocked e-commerce by assuring users that their transactions were safe [206-207]. Michael Sellitto reinforced this analogy with an automobile-industry metaphor: just as independent crash-test ratings and fuel-economy figures give consumers confidence in a vehicle, third-party benchmarks for AI-agent performance and safety would enable informed purchasing decisions and geopolitical resilience [211-218]. Owen Lauder added that security fields, such as authentication, data-handling policies and user-confirmation requirements, should be baked into the agent-to-agent metadata itself [71-73][227-229].


Austin Marin outlined the U.S. government’s coordinating role, describing CASI as the “front door” for industry to engage with the federal apparatus, reducing duplicated agency requests and fostering consensus-based voluntary standards [132-145]. He noted that CASI already has formal research or pre-deployment evaluation agreements with the panel companies [132-138]. CASI operates within NIST, which has a century-long tradition of facilitating industry-driven standards rather than imposing regulation [146-152]. Recent actions include a Request for Information on AI-agent security (closing in March) [155-161], a draft NIST ITL document on agent identity and authorisation [163-165], and planned sector-specific listening sessions (education, healthcare, finance) to surface real-world challenges such as handling personally identifiable information [165-173].


Panelists repeatedly drew parallels with historic standard-setting successes. Sihao Huang argued that the open, decentralised protocols that underpinned the early Internet, TCP/IP and HTTPS, were deliberately supported by the U.S. government and generated global prosperity while keeping the system open to competition [188-199]. He warned against “closed” national versions of the Internet, noting that only the open suite achieved worldwide scale [194-199]. Owen Lauder recalled early scepticism about online credit-card use, underscoring how security standards transformed the digital economy [206-207].


International collaboration was presented as essential to avoid fragmented ecosystems. CASI participates in the International Network for Advanced AI Measurement, Evaluation and Science (IN-AIMES), a ten-country forum that shares best-practice measurement methods and aligns evaluation methodologies across borders [274-278]. Austin Marin also highlighted that CASI publishes a blog post summarising the consensus reached within IN-AIMES, ensuring that standards are globally relevant and that emerging markets such as India can both contribute to and benefit from the shared layer [279-280].


Across the discussion, there was strong consensus that open, interoperable standards are the cornerstone of a democratic AI future. All speakers agreed that such standards prevent vendor lock-in, enable builders in India, Kenya and elsewhere to switch models without re-engineering, and create a “parallel Internet” of AI services [186-190][121-123][138-145][146-152]. Points of tension emerged, however. Sihao Huang and Austin Marin advocated a prominent U.S. leadership role in exporting an open AI stack, whereas Michael Brown cautioned that industry should remain the primary driver of technical norm-setting, with government acting only as a convenor [191-199][138-145][252-254]. Security priorities also diverged: Sihao emphasized the historical SSL/HTTPS model, Owen focused on embedding security metadata in agent-to-agent exchanges, Michael Sellitto highlighted the need for third-party performance metrics, and Wifredo Fernandez called for broader privacy, auditability and consent mechanisms [206-207][71-73][219-227][264-268]. Fernandez raised the novel regulatory question of whether agent-driven social-media platforms should be regulated, linking it to his broader call for privacy, auditability and consent [119-122]. He also referenced the “Moltbook” phenomenon as an example of how AI-driven content spreads on X (formerly Twitter) [98-101].


Future of AI standards discussion (chronological order)


Sihao Huang opened the forward-looking segment by asking how the standards ecosystem should evolve to support an expanding agentic economy. Michael Sellitto responded with an automobile-industry analogy, arguing that independent safety ratings and fuel-efficiency metrics, analogous to third-party AI-agent benchmarks, will be essential for consumer confidence and geopolitical resilience [211-218]. Owen Lauder then reflected on historic lessons, noting that the introduction of SSL/HTTPS and early credit-card security standards were pivotal in unlocking digital commerce [206-207]. Michael Brown added that while government can facilitate coordination, the technical details of standards should be driven by industry expertise, with the state acting as a convenor rather than a regulator [191-199]. Finally, Wifredo Fernandez emphasized privacy-centric principles, calling for robust auditability, consent mechanisms, and explicit regulation of agent-driven social-media platforms [119-122][264-268].


In conclusion, the panel affirmed a shared vision: develop voluntary, consensus-based standards that are technically robust, security-focused and globally interoperable, thereby fostering innovation, competition and trust in AI-agent ecosystems. Immediate actions include submitting comments to the CASI RFI on agent security, reviewing NIST’s draft identity standards, and participating in the upcoming sector-specific listening sessions [155-165][165-173]. Longer-term goals involve harmonising measurement practices through IN-AIMES, expanding open protocols such as MCP, SKILLZ, A2A and commerce standards, and ensuring that these frameworks embed privacy, auditability and user consent to support responsible deployment worldwide [276-280][186-190][211-218].


Session transcript: complete transcript of the session
Sihao Huang

of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable and open to the rest of the world to sort of build on that for your own businesses, for your own benefits. And so we have an amazing panel here today. We have, so first of all, I’m Sihao Huang. I’m Senior Policy Advisor for AI and Emerging Tech at the White House. We’re joined by Austin Marin, who’s the Director for the Center for AI Standards and Innovation at the Department of Commerce, which really is the center for a lot of AI activity within the U.S. government, setting standards, driving innovation, measuring AI systems, improving metrology, and a lot of the smartest people in the U.S.

government are within Austin’s organization. And then we have the four frontier AI companies from the United States. So we’re very happy to be joined by Mike Sellitto, who is the Head of Global Affairs at Anthropic. We have Owen Lauder at Google DeepMind, who’s the Senior Director and Head of Frontier Policy and Public Affairs. We have Mike Brown, who is Head of Growth and Operations, International, at OpenAI. And, of course, we have Weefy Fernandez, who is the Director for Global Government Affairs at XAI. So really an amazing lineup of U.S. industry. I said this in a previous panel, but American companies are spending $700 billion on infrastructure this year, just this year alone. And they probably won’t like it that I say this, but they’re competing very hard against each other to make AI models cheaper and more powerful for you guys to build on and to drive those applications.

And so this is going to be a panel on how we make that happen, how we standardize interfaces with those AI systems. And so first I’m just going to ask a question to the AI companies that are sat here. So over the past few months, I think, we’ve seen the emergence of an ecosystem of standards to support the deployment of AI agents. I think one of the most notable ones is Anthropic’s Model Context Protocol, which a lot of other companies are building off of right now and is sort of becoming the industry standard. Of course, you have Google DeepMind’s A2A Agent-to-Agent Protocol, OpenAI’s Agentic Commerce Protocol, and then XAI, of course, has been working on its highly secretive and famous Macrohard agent project.

And so all the companies here are very much involved in sort of this agent discussion. And so maybe open it up to the companies here to tell us a little bit about what these agent protocols actually do and what they have unlocked. And for the builders who are sat here in the audience: what do they enable a software engineer or an AI engineer in India or other countries to create?

Michael Sellitto

Okay. Well, first I want to start off by thanking Sihao and OSTP for organizing this panel and all the people who are here. Thank you. So it’s great to be here with Austin. I think Anthropic has really had a really strong partnership with the Trump administration and appreciated the leadership of Secretary Lutnick in expanding and enhancing the Center for AI Standards and Innovation, which is really critical to making this technology work for everybody in a manner that’s safe, responsible, and open. MCP is a universal open standard for connecting AI systems to the tools and data sources that people already use. So imagine the knowledge bases inside of an enterprise. You can imagine government data sources.

The Indian government, of course, is a real leader in, why am I forgetting the acronym right now, DPI, sorry, and just has massive amounts of data that are already digitized. And so MCP is a way that you can connect your AI models and agents to those data sets and also tools. And it really is, you know, a simple, intuitive way. You just need to give the model a rough description of what’s in the data source and what kind of tools or how can it access it. And then the model will intuitively know how it can use those data sources the same way that somebody in your enterprise or your organization would know if I want to get payroll data, I need to go to this human resources system.

If I want to get data about, you know, our revenue, I need to go into HEX or whatever your particular tools are. You know, before MCP, you really had to build all these systems in a very bespoke manner, which meant that if you built them with one model or one vendor, you were kind of stuck because you’d have to rewrite everything if you wanted to switch. MCP being this open source protocol that’s supported by all of the major AI companies means that you really have this degree of interoperability, which just enables the whole system to be much more open and competitive. We also recently built SKILLZ.

It’s a set of instructions that teach agents how to perform specific tasks. The way that I describe this or think about it is, you know, imagine a new person joins your team. You spend a little bit of time teaching them, you know, how to do work the way that your organization does it. And then you expect them to just be able to follow those instructions all the time. So you kind of teach once and then they’re able to do that. It’s the same thing with skills, which also is another open protocol where you can build these skills. And then if you decide that, you know, you want to switch from Anthropic to any of the other fine companies here on the panel, you can move those skills over.

And so that interoperability and data portability is really a critical piece of making this an open and competitive environment.

Owen Lauder

Amazing. Thank you, Mike. And, yeah, thank you to Sihao. Thank you to OSTP and the U.S. government for the event and all the partnership. And a big thank you and congrats to our Indian hosts on a fantastic summit week. If you take a step back, it has been, I think, a really exciting week, a demonstration of how advanced AI is now being used around the world to do incredible things. It’s been really exciting. I think seeing the way that people are using Gemini right across India, really exciting to see the way that everyone in India from world-class scientists using AlphaFold to teachers and students using AI in the classroom. And I think with all of the progress that we’ve seen in the last few years, it’s easy to forget sometimes that this is still relatively new technology.

We’re still in the relatively early innings of working out how to develop this technology and use it for good. And one of the things that we need to do, I think Sihao covered this very well in his opening gambit, is build out this ecosystem of technical standards to make sure that we can continue using this technology in the right ways. There’s a couple of ways that we’re thinking about these standards. One is technical standards, interoperable standards, and then also standards for testing these systems, making sure that we can use them in a reliable and secure way. We really want to contribute right across the piece here, so we’re excited. We have various standards that we have contributed to the ecosystem.

Our agent-to-agent standard that Sihao mentioned. This is basically a standard for how agentic systems talk to each other. At the moment, it’s a little bit tricky for agents to converse with each other. You have to often write bits of bespoke code for an agent to talk to an agent, or they have to be running on the same walled garden code base. So what we do with agent-to-agent is essentially have a sort of digitized clipboard of information that an agent will share with another agent. What’s my ID as an agent? What are my capabilities? What am I trying to do? How do I take data? What are my security requirements? This is going to be absolutely fundamental to sort of greasing the wheels of the agentic economy.

UCP, another standard that we’re working on, so we have our universal commerce protocol at Google. This essentially does the same thing, but it’s for how agents talk to websites and payment systems. This is going to be transformative for business. It’s great to be able to partner with companies right around the world, whether it’s Walmart and Target in the U.S. or Flipkart and Infosys in India that we’re working with across these agents. Excited to see what everyone is going to do with the technology that we can enable with this.

Michael Brown

Thanks for the tip. Hi, everyone. My name is Michael Brown. My name placard says George Osborne, who’s a colleague. He got tied up in another panel, so I’m here. George and I work extremely closely together, but he has a much nicer accent because he’s from the U.K. I’m doing my best here. You’re doing very well, I might say, very well. For me, this is a fun panel because it feels like a very collaborative and cooperative opportunity to grow the pie, and the companies that are on either of our side are extraordinary companies with extraordinary humans, and it’s fun to just work with them in some of these areas. If I were going to kind of explain why we’re here in this particular panel to my kids, who are 9 to 11, I would sort of say, look, are there countries out there in the world where when you get to a stoplight, red means go?

I don’t think so. I think mostly red means stop and green means go. I mean, if I’m wrong, I apologize. I’m not an expert. But, you know, having sort of shared understanding in countries, rich and poor, advanced and still developing, around how things work, I think grows the pie because it allows builders to build in a way that everyone can kind of know that what they’re building is going to be both secure and is going to be accessible and hopefully enjoyable or useful to people anywhere in the world. And I think each of the companies up here is contributing something great to that. You know, I joined OpenAI relatively recently, but MCP to me is something where I just knew, like, that’s really important.

And like, well, Anthropic introduced it. Hopefully, Anthropic would agree with this, that now it’s just like the thing, right? And I think that’s terrific that it’s the thing. You know, Owen also mentioned in commerce, I don’t know if these standards compete or if it’s cooperative, but at OpenAI, we have a commerce protocol as well for the same thing, because there’s a world where these agents are going to be out shopping for us, which is kind of fun, right? So, you know, if the agent knows that you’re planning on taking a family vacation and it knows that you want to visit Goa and the agent can go actually secure your travel flights and your hotel, these commerce protocols can do that.

So agents of different companies, potentially in different countries, can all partner and work well together because they understand how they’re supposed to be looking for shared information and how that information should be shared. There’s kind of a shared understanding there. And so I think all of us are working to build these protocols to grow the pie, to create more democratization, more commerce, more benefit for everyone by having these common protocols in place.

Wifredo Fernandez

Thank you, Sihao. Great to be with you all here, and thank you to the government for having us. What an exciting week, frenetic and kinetic and chaotic, as I was saying earlier. So it’s just an honor to be here and to feel the energy and all the innovation and to meet a bunch of different builders across India. So Wifredo Fernandez, folks call me Weefy for short. It’s a nickname I got in the 90s before wireless Internet was a thing, so my name became relevant later. But, yeah, this is certainly a topic that brings us all together, which is wonderful. You know, XAI is only two and a half years old. So we’re all in this together.

So the foundational work done by these peer companies has enabled us to accelerate our development. We’re better because of those, and we’re better because we can all build on top of those. And these standards and protocols that folks have built and that we sort of lay out and sort of agree to as an industry and as governments really make sure that not just us four compete, right? This enables a ton of innovation. So, you know, on the X side, and, you know, XAI and X sort of operate in tandem, it’s been really neat to see the AI community sort of build and test and discuss and debate in public.

So, like, when Moltbook was taking off, I think you likely found out about it on X. And so it’s just neat to see the ecosystem sort of converge in that discussion space. And just in thinking about this panel and thinking about Moltbook in particular, it’s like, well, do we regulate social media platforms that are agent driven? It just brings up all these really novel questions about how we regulate. But I think at the end of the day, we all agree that these open standards that are creating sort of this, call it a layer, call it a new ecosystem, call it a parallel Internet, are just really crucial for our development of the Internet writ large.

And so, yeah, excited about the panel and the discussion here today.

Sihao Huang

Thank you so much. Your name is formalized in the 802.11 protocol, which is what allows my phone to connect to the Internet in D.C. and here in India. So it’s extremely relevant. I’m going to use that. That’s awesome. So I think we’ve heard a little bit from our companies who are engaging in a lot of dynamic activity, pushing out agent protocols of all kinds. And I think there’s a lot of industry excitement over agents right now. One of the big announcements that we’re here to make, which Director Kratsios also made earlier on the main stage, is the Agent Standards Initiative, and that is something that is led out of CASI and NIST. So I’ll turn to Austin to introduce that.

Austin Marin

Absolutely. Thanks, Sihao, and thank you to OSTP for convening this event, and to my fellow panelists. I’ll start with a brief introduction of my organization. I am the Acting Director of the U.S. Center for AI Standards and Innovation. We were founded about two years ago as the U.S. AI Safety Institute. In June of last year, Commerce Secretary Howard Lutnick refounded us as the Center for AI Standards and Innovation, which signaled a shift from safety concepts to standards and innovation. Our remit is to be the front door for industry to work with the U.S. government. There are two aspects of our organization that bear note. The first is that we’re located within the Department of Commerce.

We are commerce-focused. We are industry-focused. We work with all of the companies on this panel; with some of them we have formal research or pre-deployment evaluation agreements, so that we can work with them on their models and the research questions they’re tackling. We also take seriously our role as a front door to the U.S. government for industry. We want to make sure that when industry is trying to navigate government, they’re speaking to the right people, that the people in government they’re speaking to have advisors who understand frontier AI and agentic AI, and that industry isn’t being overwhelmed by duplicative requests from different parts of government.

You don’t want ten different agencies asking the same company basically the same thing and creating unnecessary work, and so we try to act in a coordinating role to make sure that industry is being heard as they navigate the U.S. government. The other aspect of our organization that bears note is that we’re located within NIST, the National Institute of Standards and Technology, and NIST has an over-century-long track record of not regulating, but of helping industry develop, through consensus, voluntary standards and best practices. The Acting Director of NIST, Craig Burkhart, likes to talk about the taillights, the brake lights, on the back of a car. I’m sure you all see them in India. They’re the same color red as in the U.S.

That’s because there was a NIST standard for exactly what color red the taillights were going to be. But another important aspect of that anecdote is that it wasn’t government that said this is the color red you all must use. Industry came together, and with the help of NIST experts through a convening, they agreed on what the color should be. So when we look at what the future brings and where NIST can bring its industry-driven, consensus-based voluntary standards work into the new AI world, we’re looking to AI agent standards. As Sihao said, we announced this week an AI Agent Standards Initiative, which is looking at all facets of AI and AI agents.

There are a couple of aspects of it that have already been announced and that we’re working on, and I’ll tick through those relatively quickly. The first is a request for information that’s out in the field. It closes in March, and we encourage you to engage with us and provide comments on AI agent security. AI agents obviously bring a whole host of new security challenges, and we’d love to hear from you and your organizations about what challenges you are facing. Identifying those challenges is a first step. Once we identify them, we can then take the next step of seeing where NIST’s approach of voluntary standards and best-practices documents can help address and mitigate those challenges.

Another aspect: our colleagues at NIST’s Information Technology Laboratory, or ITL, have a draft out for comment on AI agent identity and authorization. Again, we encourage you to engage and interact with them. A third initiative we recently announced is that we’re going to hold sector-specific listening sessions, hopefully in April, in education, healthcare, and finance, where we’re going to convene various members of industry and say to them: look, there’s this great technology out there called AI agents, have you heard of it? Why aren’t you adopting it? What challenges are you facing? We may not be able to solve those challenges, but maybe we can. One example I give, and I don’t know that it’s what we’ll find, is that in the education and healthcare sectors there are business concerns and existing regulatory concerns about PII, personally identifiable information.

And perhaps what we’ll learn through these listening sessions is that hospitals or schools aren’t deploying AI because they can’t reliably evaluate how AI agents are handling PII. That’s something that CAISI, my organization, could address by developing metrology, benchmarks, evaluations, and best-practices documents that give those types of institutions confidence that the agents are performing as desired. Maybe that’s a step we could take, through voluntary, consensus-driven best practices and standards, that unlocks adoption. So we’re very focused on that, and we’re looking forward to learning what those challenges are. I don’t know whether the challenge I mentioned is actually one facing industry, and that’s part of NIST’s approach: in D.C., we only see a small slice of what’s going on in industry.

We only have a tiny window into the world, and so we come from a place of humility; we don’t know all the challenges people are facing. The companies on this panel are doing an incredible job coming up with protocols for some of the challenges they face. We talked about agent-to-agent for how agents communicate. We talked about MCP for how agents navigate databases. We talked about UCP and OpenAI’s commerce protocol for engaging in e-commerce. And I’m sure through these conversations we’re going to identify other areas where open-source protocols, standards, and best practices could help unlock adoption and implementation. We’re really excited to work with you and all the institutions and companies on stage to identify those opportunities and see how we can leverage NIST’s convening authority to help.
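The protocols mentioned here work by exchanging machine-readable descriptions that any model or agent can discover. As a rough illustration of the idea only — the field names below are simplified assumptions, not the actual MCP schema — an MCP-style tool description might look like this:

```python
import json

# Illustrative sketch: a simplified, MCP-style tool description.
# Field names are hypothetical, not the official MCP specification.
tool_description = {
    "name": "query_customer_db",
    "description": "Run a read-only lookup against the customer database.",
    "input_schema": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string", "description": "Unique customer ID"},
        },
        "required": ["customer_id"],
    },
}

def advertise(tools):
    """Serialize tool descriptions so any client model can discover them."""
    return json.dumps({"tools": tools}, indent=2)

listing = advertise([tool_description])
print(listing)
```

The point of the sketch is that the integration burden collapses to publishing one description: any model that speaks the protocol can read the schema and call the tool, with no bespoke per-vendor glue code.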

Sihao Huang

Thank you so much for that, Austin. To reemphasize: this standards initiative really wants to make sure that the products built on top of our systems are able to connect with each other, such that if there’s a builder in India or a builder in Kenya building on top of our AI products, American companies can use them as well, and American companies can buy from them as well. And similarly, if you want to switch to a different model, nothing is locked in. I think this ties back to a perspective that the U.S. government, in particular the Trump administration, has about AI and AI products. We think back a lot on the history of the Internet and what it enabled for the world, but also what it enabled for America.

There was a perspective in the U.S., from a previous administration, that technology had to be strictly locked down, and we think that’s a mistake. We want to share the best AI technologies with the rest of the world, and that’s also the leading message our delegation has here at the India AI Summit. And when we think back on the success of the Internet, what enabled it? There were actually a number of companies and countries that tried to create their own closed versions of the Internet — centralized, tied to particular nations and their own telecom networks — and they saw a little bit of success. A lot of them were state-subsidized, but none of them really scaled to the global level of the World Wide Web.

And the World Wide Web became so successful precisely because of the protocols that the U.S. government had supported. The U.S. government made a very intentional effort to ensure the Internet was a decentralized system, and funded the creation of protocols like TCP/IP and HTTP — the Internet suite — enabling independent development of these protocols so that the rest of the world could build on them. What you had was really a win-win situation: the entire world now benefits from access to the Internet and the ability to build applications and companies on top of it, which has driven so much prosperity for countries around the world, but also made Silicon Valley one of the wealthiest places in human history.

And it is because of this open commerce. That’s what we really want to create with AI in the future as well. Just to add a bit to what Austin said on the agent security piece: why is agent security so important to us? It’s precisely because of adoption; you need security to drive adoption. If you look back at the history of the Internet, the development of the Secure Sockets Layer, SSL, and eventually HTTPS, was what enabled e-commerce. So, again, a lot of this is about the efforts we’re going to undertake together with industry, to make sure that there is a standards ecosystem, that there are interoperable interfaces that everyone can build on and trust, to create the AI economy that we’re all looking forward to.

So I’ll stop ranting and turn to the companies here. I’ll ask you all: how do you see the future of AI standards and agent development? And how can AI agent standards reflect the same principles that enabled the open Internet, including interoperability and security?

Michael Sellitto

I feel like I need to somehow fit an automobile analogy in here, since there’s been a theme. Maybe I’ll use my favorite one: right now, if you go down to the dealership to buy a car, those cars are going to have a bunch of independently determined metrics you can use to understand the characteristics of the vehicle. It will tell you the fuel economy — how far you can drive on a gallon or liter of gas — and how it performs in various types of crash tests. These are all metrics that are measured in a standardized way, oftentimes by third parties, so you can have trust and confidence in them, and you can know what kind of car you want to buy.

Maybe I’m a single person and I like to drive fast, so I’m mostly worried about head-on collisions, because I’m going to be driving as fast as the car can possibly go, and that’s the biggest danger for me. Or maybe I have a family and I’m worried about what happens if we get hit from the side with kids in the back seats. One place this standardization can help us get to is having that same kind of confidence — for customers and governments and the public — in knowing what you’re purchasing. I think another real benefit is aligned with things that Michael Kratsios, the OSTP director, talked about today, and also in an op-ed he had in the Financial Times, around exporting the American AI stack.

There are a lot of concerns today about sovereignty, about having control over your systems and your data and so on. A way that you can both use the best technology in the world, which sometimes comes from American companies, and also have confidence that there’s resilience in the system, is to have things built to open standards. That gives you the ability to decide to make changes. If today Anthropic is producing the best technology and tomorrow it’s X or OpenAI or someone else, you can change. Or maybe an open-source model gets good enough at your use case and you want to switch over from a proprietary model to an open-source model.

So I think that’s what this can enable; that’s the opportunity we have ahead of us. And I think the vision of the AI security standards work that CAISI is going to be doing is this: if you’re going to entrust these systems with access to your personal data or your financial data, or the ability to do things in the real world on behalf of your enterprise, you need to have some sense that there’s security, that there’s authentication, and that there’s an ability to come back and check with the user before making certain significant decisions or taking certain significant actions. And you can test and evaluate and report that information in a way that is intelligible to the customer, so they know what they’re buying, they know when to trust, and they know when not to trust.
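The check-back-with-the-user requirement described here can be sketched as a simple policy gate. Everything below — the action names, the risk set, the confirmation callback — is hypothetical, a minimal illustration rather than any company’s implementation:

```python
# Minimal sketch of a human-in-the-loop gate: actions classified as
# high risk require explicit user confirmation before they run.
# All action names and the risk set are hypothetical, for illustration.

HIGH_RISK = {"transfer_funds", "delete_records", "sign_contract"}

def execute(action, args, confirm):
    """Run an agent action, pausing for user confirmation when high risk.

    `confirm` is a callable (e.g. a UI prompt) returning True or False.
    """
    if action in HIGH_RISK and not confirm(action, args):
        return {"status": "blocked", "reason": "user declined"}
    return {"status": "done", "action": action}

# A user who approves everything except fund transfers:
auto_confirm = lambda action, args: action != "transfer_funds"

print(execute("send_email", {"to": "a@b.com"}, auto_confirm))    # runs
print(execute("transfer_funds", {"amount": 100}, auto_confirm))  # blocked
```

A standard would govern what counts as “significant,” how confirmation is presented, and how the decision is logged — the gate itself is the easy part.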


Owen Lauder

Yeah, well said, and I endorse a lot of what Mike mentioned there, and Austin and Sihao as well. I do think there’s a lot you can learn from the history of standards in various industries that we can apply to AI. Sihao mentioned some of the early Internet standards. I’m just about old enough to remember people in the early ’90s talking about how they would never, ever put credit card information on the Internet — that would be absolutely insane. And it sort of was, when you had information being shared in plain text in a totally unencrypted way. Then you got the secure layer that Sihao mentioned, HTTPS, and it completely unlocked the modern Internet economy as we know it.

There’s the history of electrical standards as well. This was something that drove the adoption of electrical products in the late 19th and early 20th century: you had a scientific approach to standardizing units of measurement like ohms and volts and amperes, which allowed power supplies to connect their energy to the grid. It also meant that you could invent things like fuses, which could be set to a certain amperage, so that if you had an electrical current above that, they would shut themselves off. So I think we need to continue learning from history, and there are a few principles we should take forward as we do. Open standards, as we’ve been discussing, is the right way to go.

You need technically robust standards that are genuinely informed by an understanding of the technology and how it works, and we should be prioritizing interoperability as well. Maybe a final thought for this piece is also learning from standards that have not been done well. There are many industries that haven’t quite gotten this right. A lot of us have traveled here from around the world having to bring adapters with us because our electrical products won’t plug into the wall. It’s really, really annoying, and it’s actually also a massive hindrance on commerce, because it means that if you’re producing a computer or another electronic appliance, you have to make a different plug for every country you’re developing your product for.

So there are things to avoid as well that we need to be mindful of.
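The fuse analogy above maps directly onto a common agent guardrail: a budget that “trips” once cumulative cost exceeds a configured limit, after which all further actions are refused. A minimal sketch, with a hypothetical limit and hypothetical cost units:

```python
# Sketch of a "fuse" for agent actions: once cumulative cost would
# exceed the configured limit, the fuse trips and stays off.
# The limit and the cost units are hypothetical, for illustration.

class Fuse:
    def __init__(self, limit):
        self.limit = limit
        self.used = 0.0
        self.tripped = False

    def allow(self, cost):
        """Return True if the action may proceed; trip the fuse otherwise."""
        if self.tripped or self.used + cost > self.limit:
            self.tripped = True
            return False
        self.used += cost
        return True

fuse = Fuse(limit=10.0)
print(fuse.allow(4.0))  # True: 4.0 of 10.0 used
print(fuse.allow(5.0))  # True: 9.0 of 10.0 used
print(fuse.allow(2.0))  # False: would exceed the limit, fuse trips
print(fuse.allow(0.5))  # False: once tripped, stays off
```

Like the electrical original, the design choice is that the fuse fails closed: after one overload it refuses everything until a human resets it, rather than letting a runaway agent keep spending.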

Michael Brown

…the automobile industry or something — two humongous but separate industries, and how they’re going to have to come together to set up norms for how agentic systems work and how data is shared. I think government can probably play an important role in bringing industries together to establish those dialogues, but the industries certainly still need to be front and center in establishing what works for them, because they are the practitioners and the experts on what their customers and colleagues need. So I think we’re all going to have to navigate that world together and figure out what the role is for the research labs, how government supports, and how industry plays a leadership role in both governing and building industry-specific standards for the future of AI.

Wifredo Fernandez

Yeah, I think this conversation has been a bit of a history lesson; I appreciate that, thank you. And it made me think about how I used to get music when I was a kid, which some of the panelists may appreciate. There were these music catalogs that would come to your house. You’d select however many compact discs you wanted, put cash or a check in an envelope, and send it away, and some weeks later, magically, some CDs would appear on your doorstep. So when I think about instructing an agent to go acquire music on my behalf — I’d much rather have that. I don’t know how we used to put so much trust in a system without standards, a process that could not be audited.

So I think the guiding principles that developed the Internet still apply. We want privacy-preserving technology. We want technology that allows us to audit. We want technology that considers authenticity, and that considers means of consent. And to Michael’s point, ultimately agents serve the user and agents serve organizations — they don’t serve us as the model developers. If we view it through that lens, it should guide us right.
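The auditability called for here can be made concrete with a tamper-evident log: each entry carries a hash of the previous one, so any later alteration breaks the chain and is detectable. A minimal sketch, with hypothetical record fields:

```python
import hashlib
import json

# Sketch of a tamper-evident log of agent actions: each entry hashes
# the previous entry, so edits after the fact are detectable.
# The record fields are hypothetical, for illustration only.

def append_entry(log, record):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    log.append({"record": record, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(log):
    """Recompute every hash; return False if any entry was altered."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps({"prev": prev, "record": entry["record"]},
                             sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"agent": "shopper-1", "action": "purchase", "amount": 12.99})
append_entry(log, {"agent": "shopper-1", "action": "refund", "amount": 12.99})
print(verify(log))           # True: chain intact
log[0]["record"]["amount"] = 0.0
print(verify(log))           # False: tampering detected
```

A standard would have to specify what gets logged and who holds the chain; the hashing itself is commodity cryptography.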

Sihao Huang

Great, thank you all so much for that. That was a bit of a nerdy discussion on standards, a bit of a history lesson; I love that. But we’re also here at the India AI Impact Summit, talking to a country of builders and to the developing world, which includes some of the most dynamic AI markets anywhere. So it would also be amazing to hear from the panelists, including Austin, how you are engaging with the rest of the world on these standards, how your organizations are engaging with other countries on AI, and one of the most exciting applications you’ve seen developed on top of your standards and products.

Austin Marin

I guess I’ll lead off. One of the main forums through which CAISI engages internationally is the International Network for Advanced AI Measurement, Evaluation, and Science. It’s a bit of a mouthful of a name, but it’s ten countries that have established AI security institutes or, as we have, a Center for AI Standards and Innovation, and we meet a couple of times a year. We also engage in informal technical and scientific exchanges and share best practices in measurement and evaluation science. In December, we met in San Diego on the sidelines of the NeurIPS conference, sat down to discuss open questions in measurement science and the challenges we’re facing, and published a blog post, I think about a week ago, that summarizes some of the areas of consensus and the open questions.

And the work we’re doing there is, I think, very important, because when we talk about the evaluation of AI systems — of particular capabilities, particular security vulnerabilities, and so on — it’s important for us to have consensus on the methodologies.

Related Resources — Knowledge base sources related to the discussion topics (12)
Factual Notes — Claims verified against the Diplo knowledge base (7)
Confirmed — high confidence

“The panel at the India AI Impact Summit brought together senior U.S. officials and leaders from four frontier AI companies—Anthropic, Google DeepMind, OpenAI and XAI.”

The transcript excerpt S3 lists four frontier AI companies from the United States (including Anthropic and Google DeepMind) participating in the summit, confirming the presence of senior U.S. officials and the four companies mentioned.

Confirmed — high confidence

“Austin Marin is the director of the Department of Commerce’s Center for AI Standards and Innovation (CASI).”

S21 identifies Austin Mayron (likely the same individual) as the Acting Director of the U.S. Center for AI Standards and Innovation, confirming his leadership role at the agency.

Correction — high confidence

“American firms are investing roughly $700 billion in AI infrastructure this year.”

The knowledge base entry S31 cites $500 billion invested in AI and frontier technologies, indicating that the $700 billion figure in the report is not supported and appears overstated.

Confirmed — medium confidence

“Anthropic’s Model Context Protocol (MCP) is a de‑facto industry standard for agent‑centric protocols.”

S1 highlights MCP as a universal standard for connecting AI systems to tools and data, and S68 notes that market adoption can create de‑facto industry standards, supporting the claim.

Confirmed — medium confidence

“Speakers emphasized that open, government‑backed standards (like the 802.11 Wi‑Fi protocol) enable global interoperability.”

S2 summarizes that all speakers strongly advocated for open, interoperable standards that enable cross‑vendor compatibility, echoing the report’s point about standards such as Wi‑Fi.

Confirmed — medium confidence

“Google DeepMind presented its agent‑to‑agent (A2A) protocol.”

S1 records that Google DeepMind discussed its A2A protocol during the summit, confirming the presentation.

Additional context — low confidence

“Google’s A2A protocol defines a “digitised clipboard” that carries an agent’s identifier, capabilities, intent, data‑access requirements and security constraints.”

While S1 confirms the existence of an A2A protocol, it does not provide the detailed “digitised clipboard” description; the report adds this specific technical detail.

External Sources (70)
S1
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that…
S2
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Great. Thank you all so much for that. So that was a bit of a nerdy discussion on standards, a bit of a history lesson. …
S3
https://app.faicon.ai/ai-impact-summit-2026/us-ai-standards_-shaping-the-future-of-trustworthy-artificial-intelligence — And it is because of this open commerce. And that’s what we really want to create with a world of AI in the future as we…
S5
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — -Wifredo Fernandez- Director for Global Government Affairs at XAI
S6
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Thank you, Sihal. Great to be with you all here, and thank you to the government for having us. What an exciting week, f…
S7
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that…
S8
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with …
S9
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that…
S10
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that…
S11
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Thanks for the tip. Hi, everyone. My name is Michael Brown. My name placard says George Osborne, who’s a colleague. He g…
S12
https://app.faicon.ai/ai-impact-summit-2026/us-ai-standards_-shaping-the-future-of-trustworthy-artificial-intelligence — And like, well, Anthropic introduced it. Hopefully, Anthropic would agree with this, that now it’s just like the thing, …
S13
WS #75 An Open and Democratic Internet in the Digitization Era — Open standards promote interoperability and prevent lock-in to proprietary systems
S14
WS #283 AI Agents: Ensuring Responsible Deployment — Carter describes specific technical developments including Google’s agent-to-agent protocol for vendor-agnostic interact…
S15
Building Population-Scale Digital Public Infrastructure for AI — Very interesting. And I’ll just try to kind of paint the picture by giving a context. Now, think about it. We’re talking…
S17
Challenging the status quo of AI security — Connection between observed security challenges and need for standards Given the new security challenges that emerge wh…
S18
Interdisciplinary approaches — With regard to standardisation, almost continuous efforts are made to replace public standards with private and propriet…
S19
Launch / Award Event #96 Empower the Global Internet Standards Testing Community — Alena Muravska: colleagues here in the room but also colleagues online and I’m very grateful for this opportunity to be …
S20
AI for Social Good Using Technology to Create Real-World Impact — This discussion at the India AI Impact Summit focused on how open networks and digital public infrastructure (DPI) can e…
S21
Agentic AI in Focus Opportunities Risks and Governance — Yeah, absolutely. So at CAISI , our focus right now is truly on unlocking innovation and adoption. And we work in the st…
S22
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Larter emphasised that the emerging agentic economy requires new technical protocols for agents to communicate with each…
S23
How Trust and Safety Drive Innovation and Sustainable Growth — Summary:All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S24
The Global Power Shift India’s Rise in AI & Semiconductors — “I think what we need to do is we need to go for a, you know, a strategic decision -making in the sense that what is it …
S25
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — And finally, there’s a global dimension where International Solar Alliance is involved. What are going to be the interop…
S26
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Michael Karanicolas encouraged interactive participation from the audience, inviting comments and engagement from all pr…
S27
Artificial intelligence (AI) – UN Security Council — Another session highlighted the need for transparency and accountability in AI algorithms. The speakers advocated for AI…
S29
Agentic AI in Focus Opportunities Risks and Governance — “These standards -setting organizations are now very, very deep into sort of developing these same standards on agentic….
S30
Challenging the status quo of AI security — This is necessary for establishing audit trails and accountability in agent systems
S31
Comprehensive Report: President Trump’s Address to the World Economic Forum in Davos — It’s a beautiful thing to see. The leadership of the country has been very good. They’ve been very, very smart. Number …
S32
Trump administration poised to boost crypto influence in US policy — The incomingTrump administrationis set to shape the future of cryptocurrency andblockchaintechnology in the United State…
S33
https://app.faicon.ai/ai-impact-summit-2026/how-trust-and-safety-drive-innovation-and-sustainable-growth — and then we’re going to dive right into my immediate left. I have Alex Reed -Gibbons, who is the CEO of the Center for D…
S34
AI analysis of an interview Musk-Trump — Further, Musk lends support to Trump as a political leader, endorsing his policies as ‘the right path’ for ensuring Amer…
S35
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — The regulatory approach for social media platforms that are agent-driven needs to be determined
S36
 WSIS Action Line C9: Milestones, Challenges and Emerging Trends in Freedom of Expression and Media Development — Educational initiatives for diverse age groups, such as the young and elderly, are vital in preventing the spread of mis…
S37
Young Brains and Screens — Regulation is seen as a necessary measure for social media platforms. Concerns about the rapid erosion of shared humanit…
S38
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — Le Fevre Cervini advocates for regulation to prevent deepfakes and hold social media platforms accountable for spreading…
S39
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — In conclusion, the analysis highlights the importance of collaboration and inclusivity in the development of AI standard…
S40
From Technical Safety to Societal Impact Rethinking AI Governanc — Explanation:Both speakers support government involvement but disagree on scope – Ioannidis wants to keep core technology…
S41
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Tomiwa Ilori:Thank you very much, Michael. And quickly to my presentation, I’ll be focusing more on the regional initiat…
S42
High-level AI Standards panel — Effective coordination requires mechanisms for standards development organizations to coordinate globally through strate…
S43
From principles to practice: Governing advanced AI in action — Chris emphasizes the importance of coordinating globally to standardize frontier AI risk management frameworks. He notes…
S44
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — – Michael Sellitto- Owen Lauder- Austin Marin Industry-led, consensus-based approach to standards development is prefer…
S45
Digital standards — ‘Standards can underpin regulatory frameworks and […] provide appropriate guardrails for responsible, safe and trustwo…
S46
Multistakeholder platform regulation and the Global South | IGF 2023 Town Hall #170 — In conclusion, platform regulation involves addressing various legal challenges, promoting competition, and addressing c…
S47
WS #283 AI Agents: Ensuring Responsible Deployment — Carter describes specific technical developments including Google’s agent-to-agent protocol for vendor-agnostic interact…
S48
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Evidence:Currently agents need bespoke code to communicate or must run on the same code base. The protocol will be funda…
S49
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Larter emphasised that the emerging agentic economy requires new technical protocols for agents to communicate with each…
S50
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — – Michael Sellitto- Michael Brown- Wifredo Fernandez- Austin Marin- Sihao Huang Currently agents need bespoke code to c…
S51
Agentic AI in Focus Opportunities Risks and Governance — Evidence:CAISI launched an AI agent standards initiative, issued an RFI on AI agent security, and announced sector-speci…
S52
How Trust and Safety Drive Innovation and Sustainable Growth — Summary:All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S53
International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109 — Matthew Liao:Thank you, Kyung. So hi, everybody. Sorry, I couldn’t be there in person, but I’m very honored and delighte…
S54
Launch / Award Event #96 Empower the Global Internet Standards Testing Community — Alena Muravska: colleagues here in the room but also colleagues online and I’m very grateful for this opportunity to be …
S55
Séance d’ouverture : « La gouvernance internationale du numérique et de l’IA : à la croisée des chemins ? » — Sally Wentworth Merci beaucoup. Je m’appelle Sally Wentworth. Je suis la présidente et le directrice général de l’Intern…
S56
WS #75 An Open and Democratic Internet in the Digitization Era — Open standards are foundational to the Internet and technological innovation, promoting interoperability and preventing …
S57
Lightning Talk #7 Privacy Redefined: equitable Access in the AI Age — Patricia Larasgita opened by explaining that the Safer Internet Lab is a multi-stakeholder partnership focused on disinf…
S58
The Global Power Shift India’s Rise in AI & Semiconductors — This panel discussion focused on India’s strategic positioning in artificial intelligence and semiconductor technologies…
S59
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The initial panel featured Ambassador Sergio Gor, Secretary S. Krishnan, and industry representatives discussing the fou…
S60
Fireside Chat Intel Tata Electronics CDAC & Asia Group | India AI Impact Summit — This panel discussion at a major technology conference examined India’s artificial intelligence ambitions through the le…
S61
Panel Discussion AI in Healthcare India AI Impact Summit — This comprehensive discussion on AI in healthcare brought together diverse perspectives from technology, clinical practi…
S62
Panel Discussion AI in Healthcare India AI Impact Summit — Thank you for having me. I’d say we think healthcare is certainly one of the areas where we’re going to be able to do a…
S63
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — This discussion focused on AI assurance and the challenges of ensuring AI systems, particularly emerging agentic AI, are…
S64
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Hello. Yeah. Thank you very much, Professor Karandika. This is a perfect question for me to talk about. This is why I’m …
S65
WS #204 Closing Digital Divides by Universal Access Acceptance — Allison O’Beirne: So Allison, please you have the floor for your first intervention. Thank you very much. Thanks so much…
S66
Internet Engineering Task Force Open Forum | IGF 2023 Town Hall #32 — The IETF is the premier Standards Development Organization for Internet protocols. Its mission is to make the Internet w…
S67
Widening Lens: A New Narrative for Media Coverage of Cyberspace — The event’s panelists agreed that ecosystem development plays a pivotal role in stimulating the cybersecurity market. Th…
S68
Global Standards for a Sustainable Digital Future — Market adoption sometimes overtakes formal standards development, creating de facto industry standards
S69
WS #187 Bridging Internet AI Governance From Theory to Practice — Vint Cerf: Well, thank you so much for this opportunity. I want to remind everyone that I am not an expert on artificial…
S70
Mistral AI unveils powerful API for autonomous agents — French AI startup Mistral AI has stepped into the agentic AI arena by launching a new Agents API. The move puts it in dire…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sihao Huang
6 arguments · 196 words per minute · 1363 words · 415 seconds
Argument 1
Open, Interoperable AI Agent Standards – Open standards prevent lock‑in and enable global interoperability (Sihao Huang)
EXPLANATION
Sihao argues that open, interoperable AI standards are essential to avoid vendor lock‑in and to allow AI agents to work together across different countries and platforms. By keeping standards open, builders worldwide can adopt, switch, and combine AI models without being tied to a single provider.
EVIDENCE
He explained that standards should let a builder in India or Kenya use American AI products and switch models without being locked in, emphasizing the need for global interoperability and open access to AI technologies [186-190]. He also referenced the broader goal of sharing the best AI technologies with the world, likening it to the historic open Internet model [191-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open, voluntary standards are highlighted as essential to avoid vendor lock-in and promote worldwide competition in [S1] and [S13].
MAJOR DISCUSSION POINT
Preventing lock‑in through open standards
AGREED WITH
Michael Sellitto, Owen Lauder, Michael Brown, Wifredo Fernandez, Austin Marin
Argument 2
Specific Protocols and Their Functions – Anthropic’s Model Context Protocol is becoming the de‑facto industry standard (Sihao Huang)
EXPLANATION
Sihao notes that the Anthropic Model Context Protocol (MCP) is emerging as the de‑facto standard that many companies are building upon for AI agent interactions. This protocol is shaping the industry’s approach to connecting models with external data and tools.
EVIDENCE
He highlighted MCP as one of the most notable emerging standards that many other companies are adopting, describing it as becoming an industry standard [19-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Anthropic’s Model Context Protocol is identified as an emerging industry standard in the discussion of early AI agent protocols [S14].
MAJOR DISCUSSION POINT
MCP as emerging industry standard
Argument 3
Security, Trust, and Auditability – Historical SSL/HTTPS security standards enabled e‑commerce; similar security standards are needed for AI agents (Sihao Huang)
EXPLANATION
Sihao draws a parallel between the historical development of SSL/HTTPS, which enabled secure e‑commerce, and the current need for comparable security standards for AI agents. He suggests that establishing such standards will build trust and facilitate AI‑driven commerce.
EVIDENCE
He referenced the development of SSL and HTTPS as foundational security standards that unlocked e-commerce, arguing that similar security standards are required for AI agents today [206-207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel draws a direct parallel between SSL/HTTPS enabling e-commerce and the need for comparable AI security standards [S1] and notes the broader requirement for trust in digital commerce [S16].
MAJOR DISCUSSION POINT
Need for AI security standards analogous to HTTPS
Argument 4
Historical Lessons on Standards – Internet protocols (TCP/IP, HTTPS) illustrate how open standards drive global adoption and prosperity (Sihao Huang)
EXPLANATION
Sihao explains that open Internet protocols such as TCP/IP and HTTPS were crucial in creating a decentralized, globally adopted network that spurred economic growth. He uses this history to argue for similar open AI standards.
EVIDENCE
He described how the U.S. government funded and supported protocols like TCP/IP and HTTPS, which enabled a decentralized Internet and generated worldwide prosperity, including the wealth of Silicon Valley [198-200].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Historical success of TCP/IP and HTTPS as open protocols that spurred global economic growth is emphasized as a model for AI standards [S1] and [S13].
MAJOR DISCUSSION POINT
Open Internet protocols as a model for AI standards
AGREED WITH
Owen Lauder, Michael Sellitto
Argument 5
Government and International Coordination – The U.S. government’s historic role in internet standards guides the push to export an open AI stack (Sihao Huang)
EXPLANATION
Sihao argues that the U.S. government’s past involvement in establishing open Internet standards should inform its current strategy to promote an open AI stack globally. Exporting such an open AI ecosystem can replicate the economic benefits seen with the Internet.
EVIDENCE
He referenced the U.S. government’s intentional effort to create decentralized Internet protocols and the resulting global benefits, suggesting a similar approach for AI to ensure worldwide access and prosperity [191-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The U.S. government’s past involvement in creating open Internet standards is cited as a template for promoting an open AI ecosystem [S1] and [S2].
MAJOR DISCUSSION POINT
Leveraging historic Internet standards for AI export
Argument 6
Global Impact and Democratization – Standards allow builders in India, Kenya, and elsewhere to use and switch between AI models freely (Sihao Huang)
EXPLANATION
Sihao emphasizes that open AI standards enable developers in diverse regions to adopt, integrate, and transition between different AI models without technical barriers. This democratizes AI access and supports global innovation.
EVIDENCE
He stated that standards should let builders in India, Kenya, and other countries use and switch AI models freely, ensuring no lock-in and fostering global collaboration [186-190].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Examples of builders in India and Kenya benefiting from open AI standards are provided in the discussion of global interoperability [S1] and [S2].
MAJOR DISCUSSION POINT
Enabling global AI model interoperability
Austin Marin
4 arguments · 191 words per minute · 1263 words · 395 seconds
Argument 1
Open, Interoperable AI Agent Standards – Voluntary, consensus‑based standards avoid duplicated government requests (Austin Marin)
EXPLANATION
Austin explains that voluntary, consensus‑based standards reduce the burden on industry by preventing multiple government agencies from issuing overlapping requests. This coordinated approach streamlines engagement between the private sector and the government.
EVIDENCE
He described the Center’s role in acting as a front door for industry, emphasizing the need to avoid ten different agencies asking the same company for similar information, and highlighted the importance of coordinated, voluntary standards [138-145].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Voluntary, consensus-based standards are presented as a way to reduce overlapping agency requests and streamline industry engagement [S1] and [S2].
MAJOR DISCUSSION POINT
Avoiding duplicate government requests through consensus standards
AGREED WITH
Sihao Huang, Michael Sellitto, Owen Lauder, Michael Brown, Wifredo Fernandez
Argument 2
Security, Trust, and Auditability – Request for information on AI agent security; draft standards on identity and authorization; sector listening sessions to address PII concerns (Austin Marin)
EXPLANATION
Austin outlines a series of initiatives: a Request for Information (RFI) on AI agent security, a draft NIST standard on agent identity and authorization, and sector‑specific listening sessions to identify challenges such as PII handling in education, healthcare, and finance.
EVIDENCE
He announced the RFI that closes in March and encourages comments on AI agent security [155-162]; mentioned a draft NIST document on identity and authorization [163-165]; and detailed upcoming listening sessions in education, healthcare, and finance to uncover adoption barriers and privacy concerns [165-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The RFI on AI agent security, the draft NIST identity/authorization standard, and sector-specific listening sessions are all described in the panel summary [S1].
MAJOR DISCUSSION POINT
Gathering stakeholder input on AI agent security and privacy
Argument 3
Government and International Coordination – The Center for AI Standards and Innovation serves as the industry front door; NIST provides voluntary consensus processes; participation in the International Network for Advanced AI Measurement, Evaluation, and Science (Austin Marin)
EXPLANATION
Austin describes the Center’s role as the primary liaison between industry and the U.S. government, its placement within NIST for voluntary standard development, and its participation in an international network of AI measurement and evaluation institutes.
EVIDENCE
He introduced the Center’s background, its location within the Department of Commerce and NIST, and its mission to coordinate industry engagement and avoid duplicated requests [132-145]; highlighted NIST’s century-long tradition of voluntary standards [146-152]; and noted involvement in the International Network for Advanced AI Measurement, Evaluation, and Science with ten member countries [276-280].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Center’s role as a front door, its placement within NIST’s voluntary standards framework, and its participation in an international AI measurement network are outlined in [S1] and further detailed in [S21].
MAJOR DISCUSSION POINT
Center’s coordinating role and international collaboration
Argument 4
Global Impact and Democratization – Sector‑specific listening sessions aim to uncover and address adoption barriers in education, healthcare, and finance globally (Austin Marin)
EXPLANATION
Austin explains that the Center will hold listening sessions in key sectors to identify challenges—especially around PII—and develop metrology, benchmarks, and best‑practice documents that can increase confidence in AI agent deployments.
EVIDENCE
He detailed plans for sector-specific listening sessions in education, healthcare, and finance, describing how they will gather challenges such as PII concerns and potentially lead to standards that enable adoption [165-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sector-focused listening sessions in education, healthcare, and finance are highlighted as mechanisms to identify and mitigate adoption challenges [S1].
MAJOR DISCUSSION POINT
Sector listening sessions to drive AI adoption
Wifredo Fernandez
5 arguments · 156 words per minute · 603 words · 231 seconds
Argument 1
Open, Interoperable AI Agent Standards – Open standards create a “parallel Internet” that fuels innovation and competition (Wifredo Fernandez)
EXPLANATION
Wifredo contends that open AI standards will form a new, parallel layer to the Internet, fostering competition and rapid innovation across the AI ecosystem. This “parallel Internet” will enable diverse builders to develop on shared protocols.
EVIDENCE
He described open standards as creating a “layer, call it a new ecosystem, call it a parallel Internet” that is crucial for the development of the broader Internet [121-123].
MAJOR DISCUSSION POINT
Parallel Internet concept for AI
AGREED WITH
Sihao Huang, Michael Sellitto, Owen Lauder, Michael Brown, Austin Marin
Argument 2
Specific Protocols and Their Functions – xAI’s MacroHearts project contributes to the emerging agent ecosystem (Wifredo Fernandez)
EXPLANATION
Wifredo notes that xAI’s secretive MacroHearts project is part of the broader movement toward agent‑centric AI, adding to the ecosystem of standards and capabilities being built by frontier AI companies.
MAJOR DISCUSSION POINT
MacroHearts as part of agent ecosystem
Argument 3
Security, Trust, and Auditability – Standards must embed privacy‑preserving technology, auditability, authenticity, and consent mechanisms (Wifredo Fernandez)
EXPLANATION
Wifredo argues that AI standards need to incorporate core privacy and security principles, including auditability, authenticity, and user consent, to ensure trustworthy AI deployments.
EVIDENCE
He listed guiding principles such as privacy-preserving technology, auditability, authenticity, and consent mechanisms as essential for AI standards [264-268].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for privacy-preserving, auditable, and consent-driven AI standards are echoed in discussions of emerging security needs for AI agents [S17] and the push for open, privacy-focused standards [S18].
MAJOR DISCUSSION POINT
Embedding privacy and consent in AI standards
AGREED WITH
Sihao Huang, Austin Marin, Michael Sellitto, Owen Lauder, Michael Brown
Argument 4
Government and International Coordination – Discussion of regulating agent‑driven social media highlights the need for policy alignment (Wifredo Fernandez)
EXPLANATION
Wifredo raises the question of whether social media platforms that are driven by AI agents should be regulated, underscoring the need for coordinated policy responses as AI agents become more pervasive.
EVIDENCE
He asked whether regulators should address social media platforms that are agent-driven, noting the novel regulatory questions this raises [119-121].
MAJOR DISCUSSION POINT
Regulating agent‑driven social media
Argument 5
Global Impact and Democratization – Open standards foster competition, innovation, and a “parallel Internet” that benefits all regions (Wifredo Fernandez)
EXPLANATION
Wifredo reiterates that open AI standards will generate competition and innovation, creating a parallel Internet that offers benefits globally, especially for emerging markets.
EVIDENCE
He emphasized that open standards create a “parallel Internet” that is crucial for the development of the broader Internet and benefits all regions [121-123].
MAJOR DISCUSSION POINT
Parallel Internet as a democratizing force
AGREED WITH
Sihao Huang, Michael Brown, Austin Marin
Owen Lauder
6 arguments · 212 words per minute · 892 words · 251 seconds
Argument 1
Open, Interoperable AI Agent Standards – Interoperability is essential for reliable agent‑to‑agent communication (Owen Lauder)
EXPLANATION
Owen stresses that for agents to work together effectively, a standardized way of sharing identity, capabilities, and security requirements is required. Interoperability is the foundation of a functional agentic economy.
EVIDENCE
He described the agent-to-agent standard that includes a digitized clipboard sharing ID, capabilities, goals, data handling, and security metadata, which he says is fundamental to greasing the wheels of the agentic economy [64-73].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of interoperable agent-to-agent communication is underscored by the description of Google’s agent-to-agent protocol as a foundational standard [S14] and by broader remarks on interoperability [S1].
MAJOR DISCUSSION POINT
Need for interoperable agent‑to‑agent communication
AGREED WITH
Sihao Huang, Michael Sellitto, Michael Brown, Wifredo Fernandez, Austin Marin
Argument 2
Specific Protocols and Their Functions – Agent‑to‑Agent protocol shares ID, capabilities, and security metadata; UCP enables agent‑website/payment interactions (Owen Lauder)
EXPLANATION
Owen outlines two protocols: the agent‑to‑agent protocol that conveys essential metadata for agent interaction, and the Universal Commerce Protocol (UCP) that lets agents interact with websites and payment systems, enabling commerce.
EVIDENCE
He explained the agent-to-agent protocol’s metadata fields (ID, capabilities, security) [68-73] and introduced the Universal Commerce Protocol (UCP) for agent-website and payment interactions, describing its transformative potential for business [74-76].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The agent-to-agent protocol that conveys ID, capabilities, and security metadata is identified as an early standard for vendor-agnostic interactions [S14]; the commerce-related aspects align with the broader discussion of AI commerce protocols.
MAJOR DISCUSSION POINT
Agent‑to‑agent and commerce protocols
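The “digitized clipboard” Lauder describes can be sketched as a small metadata document one agent publishes for others to read. This is a hedged illustration only, assuming a simplified agent-card format: every field name below (`capabilities`, `dataHandling`, `securityRequirements`, and so on) is a hypothetical stand-in, not the protocol’s actual schema.

```python
# Hedged sketch, not Google's actual schema: under an agent-to-agent style
# standard, an agent publishes a small metadata document ("digitized
# clipboard") covering identity, capabilities, data handling, and security
# requirements. All field names below are hypothetical illustrations.
agent_card = {
    "name": "travel-booking-agent",                    # identity
    "description": "Books flights and hotels on a user's behalf.",
    "capabilities": ["search_flights", "book_hotel"],  # what it can do
    "dataHandling": "No PII retained after task completion.",
    "securityRequirements": {"auth": "oauth2"},        # how callers authenticate
}

def can_delegate(card: dict, needed_capability: str) -> bool:
    """A counterparty agent checks the advertised capabilities
    before handing off a task."""
    return needed_capability in card.get("capabilities", [])

print(can_delegate(agent_card, "book_hotel"))  # True
print(can_delegate(agent_card, "rent_car"))    # False
```

Because the card is plain structured data, any vendor’s agent can read it before deciding whether, and how securely, to hand off a task.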
Argument 3
Government and International Coordination – Collaboration with global partners (e.g., Walmart, Flipkart, Infosys) through shared standards (Owen Lauder)
EXPLANATION
Owen highlights that Google DeepMind is partnering with global retailers and technology firms, demonstrating how shared standards enable cross‑border collaboration and commerce.
EVIDENCE
He mentioned partnerships with Walmart and Target in the U.S., as well as Flipkart and Infosys in India, facilitated by shared agent standards [77-78].
MAJOR DISCUSSION POINT
Global commercial partnerships via standards
Argument 4
Historical Lessons on Standards – Early skepticism about online credit‑card use was overcome by HTTPS, unlocking the modern digital economy (Owen Lauder)
EXPLANATION
Owen reflects on the early belief that putting credit‑card information online was unsafe, and how the adoption of HTTPS changed that perception, enabling secure e‑commerce.
EVIDENCE
He recalled the 1990s mindset that credit-card data should never be online and how the secure HTTPS layer later unlocked the modern digital economy [235-238].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The transformation of e-commerce through HTTPS is cited as a historic lesson for AI security standards [S1] and reinforced by the emphasis on trust for digital transactions [S16].
MAJOR DISCUSSION POINT
HTTPS as a turning point for e‑commerce
AGREED WITH
Sihao Huang, Michael Sellitto
Argument 5
Historical Lessons on Standards – Electrical standards (volts, amps) enabled safe, universal power grid integration (Owen Lauder)
EXPLANATION
Owen points out that standardizing electrical units allowed devices to safely connect to power grids worldwide, illustrating how technical standards facilitate global interoperability.
EVIDENCE
He described how standardizing units like volts, amps, and ohms enabled power supplies to connect to the grid and allowed inventions such as fuses to protect circuits [239-242].
MAJOR DISCUSSION POINT
Electrical standards as a model for AI standards
Argument 6
Security, Trust, and Auditability – Security requirements are embedded in agent‑to‑agent metadata and are fundamental to the agentic economy (Owen Lauder)
EXPLANATION
Owen asserts that security metadata—such as authentication and data handling requirements—must be part of the agent‑to‑agent exchange to ensure safe and trustworthy interactions within the agentic economy.
EVIDENCE
He listed security requirements as part of the agent-to-agent metadata (e.g., security requirements field) and emphasized its fundamental role for the agentic economy [71-73].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Embedding security metadata within agent-to-agent exchanges is highlighted as a core component of emerging AI agent standards [S14] and the broader security framework discussed in [S1].
MAJOR DISCUSSION POINT
Embedding security in agent metadata
AGREED WITH
Sihao Huang, Austin Marin, Michael Sellitto, Michael Brown, Wifredo Fernandez
Michael Sellitto
4 arguments · 183 words per minute · 1123 words · 366 seconds
Argument 1
Open, Interoperable AI Agent Standards – MCP provides a universal way for models to connect to data and tools (Michael Sellitto)
EXPLANATION
Michael describes the Model Context Protocol (MCP) as a universal, open standard that lets AI models access enterprise knowledge bases, government data, and other tools in a consistent manner, simplifying integration.
EVIDENCE
He explained that MCP connects AI models to enterprise knowledge bases and government data sources by providing a rough description of the data source and tools, enabling intuitive access similar to human users [28-36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Model Context Protocol (MCP) is presented as a universal, open standard for linking AI models to data and tools in the early AI standards landscape [S14].
MAJOR DISCUSSION POINT
MCP as universal data‑tool connector
AGREED WITH
Sihao Huang, Owen Lauder, Michael Brown, Wifredo Fernandez, Austin Marin
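The “rough description of the data source and tools” that Sellitto describes can be sketched as a tool descriptor of the kind an MCP server advertises. This is a hedged illustration, not Anthropic’s authoritative schema: the tool name `search_case_records` and its fields are hypothetical examples of a government data source.

```python
import json

# Hedged sketch, not Anthropic's authoritative schema: an MCP server
# advertises each tool as a JSON object with a name, a human-readable
# description, and a JSON-Schema input spec, so any compliant model can
# discover and call it. "search_case_records" is a hypothetical tool.
tool_descriptor = {
    "name": "search_case_records",
    "description": "Search an agency's case-record database by keyword.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search terms"},
            "limit": {"type": "integer", "description": "Maximum results"},
        },
        "required": ["query"],
    },
}

# MCP messages travel over JSON-RPC 2.0; a response to a tools/list
# request might wrap the descriptor like this:
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [tool_descriptor]},
}

# The whole exchange serializes to plain JSON, which is what removes the
# need for bespoke, per-vendor integrations:
wire_form = json.dumps(tools_list_response)
```

Any model that speaks the protocol can read the descriptor and call the tool, which is how one published description replaces many bespoke integrations.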
Argument 2
Specific Protocols and Their Functions – MCP links AI agents to enterprise data sources; Skills protocol enables reusable task instructions (Michael Sellitto)
EXPLANATION
Michael expands on MCP’s role in linking agents to data and introduces the Skills protocol, which encodes reusable task instructions that can be transferred across models and vendors.
EVIDENCE
He detailed MCP’s function of describing data sources and tools for model access [28-36] and described Skills as a set of instructions that teach agents tasks, allowing portability across providers [39-47].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
MCP’s role in describing data sources for model access is documented in the discussion of emerging protocols [S14]; the concept of reusable task instructions aligns with the broader push for portable AI capabilities.
MAJOR DISCUSSION POINT
MCP and Skills for data access and task portability
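The portability claim above rests on skills being plain text a model loads on demand. The sketch below is a simplified stand-in, not the actual Skills file format: a hypothetical `expense-report` bundle with a small metadata header and numbered instructions, which any vendor’s agent could parse the same way.

```python
# Hedged sketch: Skills package task instructions as plain-text bundles a
# model loads on demand; because the bundle is just text, it can move
# between models and vendors. The format below is a simplified stand-in,
# not the actual Skills file format.
skill_text = """\
name: expense-report
description: Fill out the quarterly expense report template.
---
1. Collect receipts from the shared drive.
2. Total each category and flag amounts over the approval limit.
3. Produce the report in the standard template.
"""

def parse_skill(text: str) -> dict:
    """Split the metadata header from the instruction body."""
    header, body = text.split("---", 1)
    meta = dict(line.split(":", 1) for line in header.strip().splitlines())
    return {k.strip(): v.strip() for k, v in meta.items()} | {
        "instructions": body.strip()
    }

skill = parse_skill(skill_text)
print(skill["name"])  # expense-report
```

Nothing in the bundle is tied to one provider, which is the property that lets the same task instructions transfer across vendors.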
Argument 3
Security, Trust, and Auditability – Authentication, auditability, and user confirmation are required before agents take significant actions (Michael Sellitto)
EXPLANATION
Michael stresses that agents must incorporate authentication, audit trails, and mechanisms for user confirmation before performing high‑impact actions, ensuring accountability and trust.
EVIDENCE
He stated that agents need security, authentication, and the ability to check back with the user before making significant decisions, and that this information should be intelligible to customers [227-229].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for authentication, audit trails, and user confirmation in AI agent actions is emphasized as a security imperative for trustworthy AI [S17] and reflected in the panel’s security focus [S1].
MAJOR DISCUSSION POINT
Authentication and auditability for agent actions
AGREED WITH
Sihao Huang, Austin Marin, Owen Lauder, Michael Brown, Wifredo Fernandez
Argument 4
Historical Lessons on Standards – Automotive standards (fuel economy, crash tests) provide an analogy for how standardized metrics build consumer confidence (Michael Sellitto)
EXPLANATION
Michael uses the automobile industry’s standardized metrics—such as fuel economy ratings and crash‑test results—to illustrate how consistent, third‑party metrics give consumers confidence in products, a model applicable to AI standards.
EVIDENCE
He compared AI standards to car industry metrics, describing how standardized fuel-economy and crash-test data provide trust and allow consumers to make informed choices [212-219].
MAJOR DISCUSSION POINT
Analogy of automotive standards for AI trust
AGREED WITH
Sihao Huang, Owen Lauder
Michael Brown
5 arguments · 163 words per minute · 631 words · 232 seconds
Argument 1
Open, Interoperable AI Agent Standards – Shared “traffic‑light” style standards create consistent, secure AI worldwide (Michael Brown)
EXPLANATION
Michael likens AI standards to universal traffic‑light signals, arguing that shared conventions (red means stop, green means go) provide a common understanding that ensures consistent and secure AI behavior across nations.
EVIDENCE
He used the traffic-light analogy, noting that red universally means stop and green means go, and argued that such shared understanding enables secure, interoperable AI worldwide [86-90].
MAJOR DISCUSSION POINT
Traffic‑light analogy for global AI standards
AGREED WITH
Sihao Huang, Michael Sellitto, Owen Lauder, Wifredo Fernandez, Austin Marin
Argument 2
Specific Protocols and Their Functions – OpenAI’s commerce protocol lets agents autonomously book travel, shop, etc. (Michael Brown)
EXPLANATION
Michael describes OpenAI’s commerce protocol, which enables agents to act on behalf of users to arrange travel, secure flights, book hotels, and perform other e‑commerce activities autonomously.
EVIDENCE
He gave the example of an agent knowing a family wants to vacation in Goa and then automatically securing flights and hotels using the commerce protocol [98-100].
MAJOR DISCUSSION POINT
Autonomous commerce via OpenAI protocol
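The travel-booking example above can be pictured as a structured commerce request that a user-side agent hands to a merchant-side agent. The payload below is a toy, vendor-neutral sketch: every field name is invented for illustration and is not taken from OpenAI’s or Google’s published protocol specs.

```python
# Illustrative only: a toy agent-initiated commerce request. Field names are
# hypothetical and do not come from any published commerce-protocol spec.
def build_trip_request(destination: str, travellers: int, budget_usd: int) -> dict:
    return {
        "intent": "book_trip",
        "destination": destination,
        "travellers": travellers,
        "constraints": {"budget_usd": budget_usd},
        # Items the merchant-side agent is expected to quote and reserve.
        "line_items": ["flights", "hotel"],
        # Scoped authorization: the agent may reserve, but payment still
        # requires a separate user confirmation step.
        "authorization": {"may_reserve": True, "may_pay": False},
    }

request = build_trip_request("Goa", travellers=4, budget_usd=3000)
print(request["intent"], request["authorization"]["may_pay"])
```

The design choice worth noting is the scoped authorization field: autonomy for low-stakes steps (searching, reserving) can coexist with a hard stop before money moves.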
Argument 3
Security, Trust, and Auditability – Trust in agents handling personal/financial data is a prerequisite for widespread adoption (Michael Brown)
EXPLANATION
Michael argues that for AI agents to be widely adopted, users must trust that agents can securely manage personal and financial information, which requires robust security and transparency mechanisms.
EVIDENCE
He linked shared understanding (traffic-light analogy) to security and trust, stating that trust is needed for agents handling personal/financial data before broad adoption can occur [91-92] and later emphasized the need for authentication and auditability before agents take significant actions [226-229].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust and security for personal and financial data are identified as critical for AI adoption, mirroring the security-trust requirements for e-commerce [S16] and the broader call for AI security standards [S17].
MAJOR DISCUSSION POINT
Trust as a prerequisite for AI adoption
AGREED WITH
Sihao Huang, Austin Marin, Michael Sellitto, Owen Lauder, Wifredo Fernandez
Argument 4
Government and International Coordination – Government can convene industries to set norms, but industry leads the technical work (Michael Brown)
EXPLANATION
Michael notes that while governments can play a convening role to bring together different sectors, the technical development of standards should be driven by industry experts who understand user needs.
EVIDENCE
He stated that government can bring industries together for dialogue, but the industry must remain front-center in establishing technical norms because they are the practitioners and experts [252-254].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel stresses a government-led convening role while keeping technical standard development industry-driven, consistent with the voluntary, consensus-based approach described in <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable ">[S1] and the NIST tradition [S2].
MAJOR DISCUSSION POINT
Industry‑led technical standards with government facilitation
Argument 5
Global Impact and Democratization – Shared protocols let agents serve users worldwide, exemplified by autonomous travel booking (Michael Brown)
EXPLANATION
Michael highlights that shared AI protocols enable agents from different companies and countries to collaborate seamlessly, illustrated by agents autonomously arranging travel for users across borders.
EVIDENCE
He described an agent that knows a user wants to travel to Goa and can automatically secure flights and hotels, demonstrating cross-border, shared-protocol functionality [98-100].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cross-border agent services enabled by shared protocols are highlighted as a benefit of open standards for global interoperability <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable ">[S1] and the discussion of builders in diverse regions [S13].
MAJOR DISCUSSION POINT
Cross‑border agent services via shared protocols
Agreements
Agreement Points
Open, interoperable AI agent standards are essential to avoid vendor lock‑in and enable global interoperability.
Speakers: Sihao Huang, Michael Sellitto, Owen Lauder, Michael Brown, Wifredo Fernandez, Austin Marin
Open, Interoperable AI Agent Standards – Open standards prevent lock‑in and enable global interoperability (Sihao Huang)
Open, Interoperable AI Agent Standards – MCP provides a universal way for models to connect to data and tools (Michael Sellitto)
Open, Interoperable AI Agent Standards – Interoperability is essential for reliable agent‑to‑agent communication (Owen Lauder)
Open, Interoperable AI Agent Standards – Shared “traffic‑light” style standards create consistent, secure AI worldwide (Michael Brown)
Open, Interoperable AI Agent Standards – Open standards create a “parallel Internet” that fuels innovation and competition (Wifredo Fernandez)
Open, Interoperable AI Agent Standards – Voluntary, consensus‑based standards avoid duplicated government requests (Austin Marin)
All speakers emphasized that open, interoperable standards, whether MCP, agent-to-agent, commerce protocols, or broader voluntary frameworks, prevent lock-in, allow builders in any country to adopt or switch models, and create a shared layer akin to the Internet [186-190][191-199][28-36][39-47][64-73][86-92][121-123][138-145][146-152].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for open, interoperable AI standards echo the role of digital standards in providing guardrails for trustworthy AI development and preventing lock-in, as highlighted by international standards bodies [S45] and multistakeholder cooperation initiatives [S39].
Security, trust, and auditability are critical for AI agents, analogous to SSL/HTTPS for e‑commerce.
Speakers: Sihao Huang, Austin Marin, Michael Sellitto, Owen Lauder, Michael Brown, Wifredo Fernandez
Security, Trust, and Auditability – Historical SSL/HTTPS security standards enabled e‑commerce (Sihao Huang)
Security, Trust, and Auditability – Request for information on AI agent security; draft standards on identity and authorization (Austin Marin)
Security, Trust, and Auditability – Authentication, auditability, and user‑confirmation are required before agents take significant actions (Michael Sellitto)
Security, Trust, and Auditability – Security requirements are embedded in agent‑to‑agent metadata and are fundamental to the agentic economy (Owen Lauder)
Security, Trust, and Auditability – Trust in agents handling personal/financial data is a prerequisite for widespread adoption (Michael Brown)
Security, Trust, and Auditability – Standards must embed privacy‑preserving, auditability, authenticity, and consent mechanisms (Wifredo Fernandez)
The panel repeatedly linked the need for robust security, authentication, audit trails and privacy safeguards to the trust required for AI agents, drawing parallels to SSL/HTTPS and emphasizing upcoming RFI, draft identity standards, and protocol-level security fields [206-207][155-165][227-229][71-73][91-92][226-229][264-268].
POLICY CONTEXT (KNOWLEDGE BASE)
Security, trust and auditability are repeatedly emphasized as foundational for AI agents, mirroring requirements for transparency and traceability in UN discussions [S27], e-commerce security in ECOWAS [S28], and the need for robust security layers to build trust in agentic AI [S29][S30].
Historical standards (Internet, electrical, automotive) provide lessons for designing AI standards.
Speakers: Sihao Huang, Owen Lauder, Michael Sellitto
Historical Lessons on Standards – Internet protocols (TCP/IP, HTTPS) illustrate how open standards drive global adoption and prosperity (Sihao Huang)
Historical Lessons on Standards – Early skepticism about online credit‑card use was overcome by HTTPS, unlocking the modern digital economy (Owen Lauder)
Historical Lessons on Standards – Automotive standards (fuel economy, crash tests) provide an analogy for how standardized metrics build consumer confidence (Michael Sellitto)
All three speakers cited past standard-setting successes (Internet protocols, HTTPS for e-commerce, electrical units, and automotive safety metrics) to argue that similarly open, technically robust standards can guide AI development [198-200][235-242][212-219].
Government and international coordination are essential for effective AI standards development.
Speakers: Austin Marin, Sihao Huang, Michael Brown, Owen Lauder
Government and International Coordination – The Center serves as the industry front door; NIST provides voluntary consensus processes; participation in International Network (Austin Marin)
Government and International Coordination – U.S. government’s historic role in Internet standards guides export of an open AI stack (Sihao Huang)
Government and International Coordination – Government can convene industries to set norms, but industry leads technical work (Michael Brown)
Government and International Coordination – Collaboration with global partners (e.g., Walmart, Flipkart, Infosys) demonstrates cross‑border coordination via standards (Owen Lauder)
Speakers highlighted the Center’s role as a front-door liaison, the legacy of U.S. government support for open standards, the need for government convening while industry drives technical details, and examples of global partnerships enabled by standards [132-145][276-280][191-199][252-254][77-78].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple reports stress the necessity of coordinated government and international effort to bridge technical standards with policy, citing multistakeholder cooperation [S39], divergent views on the scope of government involvement [S40], and the need for global coordination mechanisms among standards bodies [S42][S43].
Open standards democratize AI, allowing builders worldwide to create and exchange services across borders.
Speakers: Sihao Huang, Michael Brown, Wifredo Fernandez, Austin Marin
Global Impact and Democratization – Standards allow builders in India, Kenya, and elsewhere to use and switch AI models freely (Sihao Huang)
Global Impact and Democratization – Shared protocols let agents serve users worldwide, e.g., autonomous travel booking (Michael Brown)
Global Impact and Democratization – Open standards foster competition, innovation, and a “parallel Internet” that benefits all regions (Wifredo Fernandez)
Government and International Coordination – International Network for Advanced AI Measurement, Evaluation, and Science facilitates global engagement (Austin Marin)
All agreed that open, interoperable protocols enable developers in emerging markets to build on AI services, support cross-border commerce, and create a new “parallel Internet” that benefits the global digital economy [186-190][98-100][91-92][121-123][276-280].
POLICY CONTEXT (KNOWLEDGE BASE)
Open digital standards are portrayed as enablers of democratized AI development and cross-border service exchange, providing the guardrails for responsible AI while fostering inclusive participation [S45] and supporting international multistakeholder collaboration [S39].
Similar Viewpoints
Both stress that open, voluntary, consensus‑based standards are the best way to prevent lock‑in and reduce bureaucratic duplication for industry [191-199][138-145][146-152].
Speakers: Sihao Huang, Austin Marin
Open, Interoperable AI Agent Standards – Open standards prevent lock‑in and enable global interoperability (Sihao Huang) Open, Interoperable AI Agent Standards – Voluntary, consensus‑based standards avoid duplicated government requests (Austin Marin)
Both argue that protocols must embed clear metadata (data source description, capabilities, security) to make agents interoperable across platforms [28-36][64-73].
Speakers: Michael Sellitto, Owen Lauder
Open, Interoperable AI Agent Standards – MCP provides a universal way for models to connect to data and tools (Michael Sellitto) Open, Interoperable AI Agent Standards – Interoperability is essential for reliable agent‑to‑agent communication (Owen Lauder)
Both use the HTTPS/SSL story to illustrate how security standards unlock commerce and trust in digital systems [206-207][235-238].
Speakers: Sihao Huang, Owen Lauder
Security, Trust, and Auditability – Historical SSL/HTTPS security standards enabled e‑commerce (Sihao Huang)
Historical Lessons on Standards – Early skepticism about online credit‑card use was overcome by HTTPS, unlocking the modern digital economy (Owen Lauder)
Both see a complementary role where government convenes and coordinates, while industry drives the technical development of standards [132-145][252-254].
Speakers: Austin Marin, Michael Brown
Government and International Coordination – The Center serves as the industry front door; NIST provides voluntary consensus processes (Austin Marin) Government and International Coordination – Government can convene industries to set norms, but industry leads technical work (Michael Brown)
Unexpected Consensus
Positive references to the Trump administration from both a U.S. government official and an industry representative.
Speakers: Sihao Huang, Michael Sellitto
Government and International Coordination – U.S. government’s historic role … (Sihao Huang) (includes mention of the Trump administration) Open, Interoperable AI Agent Standards – MCP provides a universal way … (Michael Sellitto) (mentions partnership with the Trump administration)
It is uncommon for a technical standards discussion to contain explicit praise of a specific administration; both Sihao and Michael Sellitto highlighted the Trump administration’s role in supporting AI standards and partnerships [190][27].
POLICY CONTEXT (KNOWLEDGE BASE)
Both a U.S. government official’s remarks at the World Economic Forum and industry commentary highlighted favorable views of the Trump administration’s leadership and policy direction [S31][S32].
Overall Assessment

The panel displayed strong convergence on four main themes: the necessity of open, interoperable AI standards; the critical role of security, trust, and auditability; the value of historical standard‑setting lessons; and the importance of coordinated government‑industry and international collaboration to democratize AI worldwide.

High consensus – the repeated alignment across all speakers suggests a shared vision that will likely translate into coordinated policy initiatives, industry road‑maps, and international cooperation, accelerating the development of a secure, open AI ecosystem.

Differences
Different Viewpoints
Governance model for AI standards – government‑led coordination vs industry‑led technical development
Speakers: Sihao Huang, Austin Marin, Michael Brown
Open standards should be driven by U.S. government initiatives to export an open AI stack globally (Sihao Huang)
The Center for AI Standards and Innovation acts as the front‑door for industry, coordinating to avoid duplicated agency requests (Austin Marin)
Government can convene but industry must remain front‑center in establishing technical norms (Michael Brown)
Sihao and Austin argue that a strong U.S. government role is essential to shape and export open AI standards, positioning the government as the primary driver and coordinator of the ecosystem [191-199][138-145]. Michael Brown acknowledges a governmental convening role but stresses that the technical work and norm-setting must be led by industry itself, suggesting a more limited governmental influence [252-254]. This creates a tension between a government-centric versus industry-centric approach to standard development.
POLICY CONTEXT (KNOWLEDGE BASE)
Debate over whether AI standards should be driven by government coordination or industry consensus is reflected in analyses of multistakeholder cooperation [S39], differing expert positions on the extent of governmental oversight [S40], and industry-preferred consensus approaches [S44].
Prioritisation of security mechanisms in AI agent standards
Speakers: Sihao Huang, Michael Sellitto, Owen Lauder, Wifredo Fernandez
Security standards analogous to SSL/HTTPS are needed to enable trustworthy AI commerce (Sihao Huang)
Authentication, auditability and user‑confirmation before significant actions are essential (Michael Sellitto)
Security requirements should be embedded as metadata in agent‑to‑agent exchanges (Owen Lauder)
Standards must embed privacy‑preserving, auditability, authenticity and consent mechanisms (Wifredo Fernandez)
All speakers agree security is critical, but they emphasise different components: Sihao draws a historical analogy to SSL/HTTPS as a foundation for e-commerce [206-207]; Michael Sellitto focuses on authentication, audit trails and user confirmation before high-impact actions [227-229]; Owen proposes that security metadata be part of the agent-to-agent protocol itself [71-73]; Wifredo stresses broader privacy, auditability and consent principles [264-268]. The divergence lies in which security layer should be prioritised and how it should be operationalised.
POLICY CONTEXT (KNOWLEDGE BASE)
Security mechanisms are identified as a foundational layer for trust in AI agents, with calls to prioritize them in standards development [S29][S30].
Scope of regulation – whether agent‑driven social media platforms should be specifically regulated
Speakers: Wifredo Fernandez, Other panelists (implicit disagreement)
Regulating social media platforms that are agent‑driven raises novel policy questions (Wifredo Fernandez) No other speaker directly addressed regulation of agent‑driven social media, focusing instead on standards and interoperability
Wifredo raises the issue of regulating agent-driven social media platforms as a new challenge [119-121]. The rest of the panel does not engage with this regulatory angle, concentrating on technical standards and industry coordination, indicating an unexpected gap in the discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions highlight the open question of regulating agent-driven social media, with U.S. AI standards reports noting the need to determine a regulatory approach [S35] and broader calls for social media platform regulation to address misinformation and safety concerns [S36][S37][S38].
International versus U.S.–centric approach to AI standards deployment
Speakers: Sihao Huang, Austin Marin
The U.S. should leverage its historic role to export an open AI stack worldwide (Sihao Huang) Engagement is pursued through a multilateral International Network for Advanced AI Measurement, Evaluation, and Science (Austin Marin)
Sihao emphasizes a U.S.-led export model based on historical Internet standards to spread open AI globally [191-199]. Austin, while acknowledging U.S. leadership, highlights participation in a ten-country international network to develop consensus-based standards, suggesting a more collaborative multilateral approach [276-280]. This reflects differing views on the balance between national leadership and international cooperation.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between international multistakeholder coordination and a U.S.-centric deployment strategy is noted in global cooperation analyses [S39] and regional AI governance initiatives that stress inclusive, cross-regional policy making [S41].
Unexpected Differences
Regulation of agent‑driven social media platforms
Speakers: Wifredo Fernandez, Other panelists
Raises novel regulatory questions about agent‑driven social media (Wifredo Fernandez) No other panelist addresses this regulatory dimension, focusing on standards and interoperability
While the panel largely concentrates on technical standards and industry coordination, Wifredo uniquely brings up the need to consider regulatory frameworks for agent‑driven social media, a topic not reflected elsewhere in the discussion, indicating an unexpected divergence in focus.
POLICY CONTEXT (KNOWLEDGE BASE)
The broader issue of regulating platforms powered by AI agents is addressed in U.S. AI standards deliberations [S35] and in discussions of platform regulation frameworks for the Global South and beyond [S46].
Overall Assessment

The panel shows strong consensus on the importance of open, interoperable AI standards to drive global innovation and avoid lock‑in. The primary disagreements revolve around who should lead the standard‑setting process (government versus industry), the prioritisation of specific security components, the balance between U.S. leadership and multilateral cooperation, and the emerging question of regulating agent‑driven social media.

Moderate – while all participants share the overarching vision of open standards, the differing views on governance structures, security priorities, and regulatory scope suggest that coordination will require careful negotiation. These tensions could affect the speed and inclusivity of standard adoption, especially across jurisdictions and sectors.

Partial Agreements
All speakers concur that open, interoperable standards are a shared goal that will foster innovation, prevent vendor lock‑in and support global AI development. However, they diverge on the mechanisms: Sihao and Austin focus on government‑facilitated consensus processes; Michael Sellitto highlights a specific technical protocol (MCP); Owen stresses metadata‑rich agent‑to‑agent standards; Michael Brown uses a metaphor for universal conventions; Wifredo frames it as a new ecosystem layer. The consensus on the goal contrasts with varied pathways to achieve it.
Speakers: Sihao Huang, Austin Marin, Michael Sellitto, Owen Lauder, Michael Brown, Wifredo Fernandez
Open, interoperable AI agent standards are essential to avoid lock‑in and enable global collaboration (Sihao Huang)
Voluntary, consensus‑based standards reduce duplicated government requests and streamline industry engagement (Austin Marin)
MCP provides a universal way for models to connect to data and tools (Michael Sellitto)
Interoperability is essential for reliable agent‑to‑agent communication (Owen Lauder)
Shared “traffic‑light” style standards create consistent, secure AI worldwide (Michael Brown)
Open standards create a “parallel Internet” that fuels innovation (Wifredo Fernandez)
Takeaways
Key takeaways
Open, interoperable AI agent standards are essential to avoid vendor lock‑in and to enable global collaboration (e.g., MCP, Agent‑to‑Agent, UCP, Skills).
Voluntary, consensus‑based processes led by NIST and the Center for AI Standards and Innovation are preferred over prescriptive regulation.
Security, authentication, auditability, privacy, and user consent must be baked into agent protocols before widespread adoption.
Historical precedents (TCP/IP, HTTPS, automotive and electrical standards) illustrate how open standards drive innovation, commerce, and trust.
International coordination (through the International Network for Advanced AI Measurement, Evaluation, and Science) is underway to align measurement, evaluation, and security practices.
Sector‑specific challenges (e.g., PII in education, healthcare, finance) need targeted guidance and benchmarks.
Industry participants (Anthropic, Google DeepMind, OpenAI, xAI) are actively developing and sharing protocols, and see them as building blocks for a “parallel Internet” of AI services.
Resolutions and action items
Submit comments to the Center for AI Standards and Innovation’s Request for Information on AI agent security (deadline March).
Review and comment on NIST ITL’s draft standards for AI agent identity and authorization.
Participate in upcoming sector‑specific listening sessions (education, healthcare, finance) planned for April.
Continue collaborative development of open protocols (MCP, Skills, Agent‑to‑Agent, UCP, commerce protocols) and share implementations across companies.
Engage with international partners via the International Network for Advanced AI Measurement, Evaluation, and Science to harmonize measurement and evaluation methods.
Unresolved issues
How to create a unified, cross‑company standard for agent‑to‑agent communication without fragmenting the ecosystem.
Specific mechanisms for handling personally identifiable information (PII) and ensuring compliance across diverse regulatory regimes.
Regulatory approach for agent‑driven social media platforms and other novel use‑cases.
Balancing openness with security: determining the right level of mandatory authentication and user‑confirmation for high‑impact actions.
Potential overlap or competition among similar protocols (e.g., OpenAI’s commerce protocol vs. Google’s UCP).
Suggested compromises
Adopt voluntary, industry‑driven standards while allowing government to act as a convening and coordination body, avoiding duplicate agency requests.
Emphasize open standards to preserve interoperability and competition, but embed security and privacy requirements to satisfy adoption concerns.
Allow multiple protocol implementations to coexist (e.g., different commerce protocols) as long as they adhere to shared security and interoperability guidelines.
Thought Provoking Comments
MCP is a universal open standard for connecting AI systems to the tools and data sources that people already use… you just need to give the model a rough description of what’s in the data source and what kind of tools it can access, and the model will intuitively know how to use those data sources.
Introduces a concrete, vendor‑agnostic protocol that solves the painful bespoke integration problem, highlighting how open standards can unlock interoperability and data portability across enterprises and governments.
Set the technical baseline for the discussion, prompting other panelists to reference their own protocols (agent‑to‑agent, commerce) and framing the rest of the conversation around how such standards can be adopted globally.
Speaker: Michael Sellitto (Anthropic)
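The quoted idea — describe a data source and its tools, and the model works out how to use them — can be made concrete with a small sketch. The name/description/inputSchema fields below follow the convention MCP uses for tool descriptions, but this fragment is an illustrative toy, not a working MCP server, and the example tool itself is invented.

```python
# Simplified sketch of an MCP-style tool description: the server advertises
# each tool with a name, a natural-language description, and a JSON-Schema
# input shape; the model reads these and decides when and how to call the tool.
# The "lookup_customer" tool is a hypothetical example.
crm_tool = {
    "name": "lookup_customer",
    "description": "Fetch a customer record from the enterprise CRM by email.",
    "inputSchema": {
        "type": "object",
        "properties": {"email": {"type": "string"}},
        "required": ["email"],
    },
}

def is_well_formed_tool(tool: dict) -> bool:
    """Check the minimal fields a client needs before exposing a tool to a model."""
    return (
        isinstance(tool.get("name"), str)
        and isinstance(tool.get("description"), str)
        and tool.get("inputSchema", {}).get("type") == "object"
    )

print(is_well_formed_tool(crm_tool))  # True
```

Because the description is plain language plus a standard schema, the same tool advertisement works for any model that speaks the protocol, which is exactly the interoperability point made above.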
Our agent‑to‑agent standard is essentially a digitized clipboard of information that an agent will share with another agent – ID, capabilities, intent, data needs, and security requirements.
Provides a vivid metaphor for how agents can communicate securely and efficiently, moving the conversation from abstract standards to a tangible mechanism for an emerging ‘agent economy.’
Shifted the dialogue toward inter‑agent communication and commerce, leading other speakers (e.g., Michael Brown) to discuss cross‑company and cross‑border agent interactions.
Speaker: Owen Lauder (Google DeepMind)
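The “digitized clipboard” can be pictured as a small structured card exchanged before two agents start talking. The field names below are loosely modeled on the description above (identity, capabilities, intent, data needs, security requirements) and are not copied from the published agent-to-agent specification.

```python
# Hypothetical "digitized clipboard": the card one agent shares with another
# before a session. Field names are illustrative, not the A2A spec's schema.
agent_card = {
    "id": "travel-agent.example.com",
    "capabilities": ["flight_search", "hotel_booking"],
    "intent": "book a 4-person trip to Goa under budget",
    "data_needs": ["traveller_names", "payment_token"],
    "security": {"auth": "oauth2", "transport": "https"},
}

def can_negotiate(card: dict, required_capability: str) -> bool:
    """A receiving agent's minimal checks before opening a session:
    does the peer offer the needed capability, over an encrypted transport?"""
    return (
        required_capability in card.get("capabilities", [])
        and card.get("security", {}).get("transport") == "https"
    )

print(can_negotiate(agent_card, "hotel_booking"))  # True
```

The value of the card is that each side can vet the other mechanically — capability match, declared data handling, security requirements — before any task data is exchanged.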
Having a shared understanding in countries, rich and poor, advanced and developing, around how things work… like traffic lights – it lets builders know that what they’re building will be secure, accessible, and useful everywhere.
Uses a simple, universal analogy to illustrate why global standards matter, linking technical interoperability to everyday safety and trust, and emphasizing the societal dimension of standards.
Reframed the technical discussion as a matter of global public good, prompting Sihao and Austin to connect AI standards to historical internet standards and to discuss policy implications.
Speaker: Michael Brown (OpenAI)
The Internet succeeded because the U.S. government supported open, decentralized protocols like TCP/IP and HTTPS. We must repeat that model for AI – open standards, not closed, nation‑centric systems.
Draws a powerful historical parallel, arguing against protectionist approaches and positioning open AI standards as a strategic economic and diplomatic tool.
Created a turning point toward policy‑focused dialogue, leading Austin to describe NIST’s voluntary consensus process and prompting the panel to consider security, sovereignty, and international cooperation.
Speaker: Sihao Huang (White House OSTP)
We’ve issued a Request for Information on AI agent security, have a draft on agent identity and authorization, and will hold sector‑specific listening sessions in education, healthcare, and finance to surface real‑world challenges.
Outlines concrete, actionable steps the government is taking, moving the conversation from abstract ideals to tangible engagement mechanisms with industry and stakeholders.
Steered the discussion toward next‑steps and collaboration pathways, encouraging participants to think about how their protocols can feed into NIST’s voluntary standards and sectoral pilots.
Speaker: Austin Marin (Center for AI Standards and Innovation, Dept. of Commerce)
When we think about agents buying music for us, we need privacy‑preserving, auditable, authentic, consent‑driven technology… agents should serve users and organizations, not just model developers.
Broadens the scope beyond technical interoperability to ethical and regulatory concerns, highlighting the need for standards that embed privacy, auditability, and consent.
Introduced a regulatory dimension that prompted others (e.g., Michael Sellitto’s security analogy) to discuss trust, authentication, and the role of standards in safeguarding user rights.
Speaker: Wifredo Fernandez (xAI)
Think of car metrics – fuel economy, crash test results – measured by independent, standardized third parties. That same confidence is needed for AI agents, especially around security and sovereignty.
Uses a relatable automobile safety analogy to explain why standardized, third‑party evaluated metrics are essential for trust in AI agents, linking security standards to market adoption and geopolitical concerns.
Deepened the security discussion, reinforcing Sihao’s point about SSL/HTTPS, and leading Owen and others to stress the importance of robust, interoperable security standards for the emerging AI economy.
Speaker: Michael Sellitto (Anthropic)
Overall Assessment

The discussion pivoted around a handful of high‑impact remarks that moved the panel from a generic overview of AI protocols to a nuanced, multi‑layered conversation about interoperability, security, global policy, and ethical governance. Michael Sellitto’s exposition of MCP and Skills established the technical foundation, while Owen Lauder’s ‘digitized clipboard’ metaphor expanded the scope to inter‑agent commerce. Michael Brown’s traffic‑light analogy reframed standards as a universal safety language, prompting Sihao Huang to invoke the historic success of open internet protocols as a blueprint for AI. Austin Marin then translated these ideas into concrete government actions, and Wifredo Fernandez reminded the group of privacy and consent imperatives. Collectively, these comments created turning points that shifted the tone from descriptive to prescriptive, aligned industry and government perspectives, and highlighted the intertwined technical, security, and societal challenges that must be addressed through open, consensus‑driven standards.

Follow-up Questions
How do you see the future of AI standards and agent development, and how can AI agent standards reflect the same principles that enabled the open internet, including interoperability and security?
Guides the overall direction of standard‑setting to ensure openness, cross‑border compatibility, and trustworthy security, which are essential for a global AI ecosystem.
Speaker: Sihao Huang
How are your organizations engaging with the rest of the world on AI standards, and what are the most exciting applications developed on top of your standards and products?
Understanding international collaboration and real‑world use cases helps assess the effectiveness of standards and showcases tangible benefits for builders worldwide.
Speaker: Sihao Huang
What are the key AI agent security challenges that need to be addressed through voluntary standards and best practices?
Security is a prerequisite for widespread adoption; identifying threats and gaps is the first step toward creating robust, trusted standards.
Speaker: Austin Marin
How should AI agent identity and authorization be standardized to ensure trustworthy interactions?
A common framework for identity and authorization will enable secure agent‑to‑agent and agent‑to‑service communications across platforms.
Speaker: Austin Marin
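The identity-and-authorization question above can be illustrated with a minimal sketch: a service issues a signed, capability-scoped token naming an agent and what it may do, then verifies that signature before honoring a request. The token format, field names, and single shared secret are simplifying assumptions for illustration, not a proposed standard.

```python
import base64
import hashlib
import hmac
import json

# Illustrative only: real deployments would use per-agent keys, key
# rotation, and expiry rather than one shared secret.
SECRET = b"shared-service-secret"

def issue_token(agent_id: str, scopes: list[str]) -> str:
    """Sign a claim stating which agent this is and what it may do."""
    claims = json.dumps({"agent": agent_id, "scopes": scopes}, sort_keys=True)
    sig = hmac.new(SECRET, claims.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(claims.encode()).decode() + "." + sig

def verify_token(token: str, required_scope: str) -> bool:
    """Check the signature (identity), then the scope (authorization)."""
    body, _, sig = token.rpartition(".")
    try:
        claims = base64.urlsafe_b64decode(body.encode()).decode()
    except Exception:
        return False
    expected = hmac.new(SECRET, claims.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # identity check failed: token was not issued by us
    return required_scope in json.loads(claims)["scopes"]

token = issue_token("example.com/booking-agent", ["book_hotels"])
```

The design separates the two questions the panel raises: the signature answers "is this agent who it claims to be?", while the scope list answers "is it allowed to take this action?" — a tampered token or an out-of-scope request both fail closed.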
What sector‑specific barriers (e.g., in education, healthcare, finance) hinder AI agent adoption, especially regarding handling of personally identifiable information (PII)?
Different industries face unique regulatory and technical constraints; pinpointing these informs targeted standards and accelerates deployment.
Speaker: Austin Marin
What standardized metrics and third‑party evaluation methods are needed to assess AI agent performance, safety, and security?
Objective, comparable metrics build confidence for buyers, facilitate model switching, and support transparent reporting of agent capabilities.
Speaker: Michael Sellitto
What lessons can be learned from standards that have failed or caused fragmentation (e.g., incompatible electrical plugs) to avoid similar pitfalls in AI standards?
Avoiding interoperability problems and market inefficiencies is crucial for seamless global AI commerce and integration.
Speaker: Owen Lauder
What should be the respective roles of government and industry in creating and governing AI standards, and how can they best collaborate?
Clarifying responsibilities ensures effective coordination, prevents duplicated regulatory burdens, and leverages expertise from both sectors.
Speaker: Michael Brown
How can AI agents incorporate privacy preservation, auditability, authenticity, and consent mechanisms into their design?
These principles protect users, satisfy regulatory expectations, and build trust in agent‑driven services.
Speaker: Wifredo Fernandez
How can international consensus on AI measurement, evaluation, and security methodologies be achieved through networks like the International Network for Advanced AI Measurement, Evaluation, and Science?
Harmonized evaluation standards enable comparable assessments across borders, fostering interoperability and shared confidence in AI systems.
Speaker: Austin Marin
Do agent commerce protocols across different companies operate competitively or cooperatively, and how can alignment be achieved?
Understanding the competitive versus cooperative dynamics informs how standards can be designed to promote interoperability without stifling innovation.
Speaker: Michael Brown

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.