U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence
20 Feb 2026 18:00h - 19:00h
Summary
The panel at the India AI Impact Summit brought together U.S. government officials and leaders from four frontier AI companies to discuss the development of open standards for AI agents, emphasizing the need for interoperable interfaces that enable global builders to create applications on top of these systems [5-11][15-22]. Sihao Huang highlighted that American firms are investing $700 billion in AI infrastructure this year and are competing fiercely to make models cheaper and more powerful for developers worldwide [13-14].
Michael Sellitto described Anthropic’s Model Context Protocol (MCP) as a universal open standard that lets AI models connect to existing enterprise and government data sources and tools through simple descriptions, thereby eliminating bespoke integrations and fostering competition [28-38]. He also introduced the Skills protocol, which encodes reusable task instructions that can be transferred across vendors, further enhancing portability and interoperability [46-48].
Owen Lauder explained Google DeepMind’s agent-to-agent standard, which provides a digitized “clipboard” of an agent’s identity, capabilities, goals, data handling, and security requirements to enable seamless communication between agents [63-71]. He added that Google’s Universal Commerce Protocol (UCP) lets agents interact with websites and payment systems, opening new business possibilities [74-76]. Michael Brown illustrated how OpenAI’s commerce protocol could allow an agent to arrange a family vacation by booking flights and hotels autonomously, showcasing the practical benefits of shared standards [99-101].
Wifredo Fernandez noted that open standards accelerate innovation, create a “parallel Internet” for agents, and raise novel regulatory questions about agent-driven platforms [115-122]. Austin Marin outlined the Center for AI Standards and Innovation’s role within the Department of Commerce and NIST, announcing the Agent Standards Initiative, a request for information on agent security, and a draft on agent identity and authorization [132-154][146-152][155-165]. He also described upcoming sector-specific listening sessions to address challenges such as PII handling in education and healthcare, aiming to produce voluntary best-practice documents that build confidence in AI deployments [165-172].
Sihao drew parallels to early Internet standards like TCP/IP and HTTPS, arguing that security-driven standards are essential for widespread adoption of AI agents, just as SSL enabled e-commerce [198-207]. Michael Sellitto used an automobile analogy to show how standardized performance metrics and open standards give users confidence and allow switching between vendors or to open-source models [211-218]. Owen reinforced these lessons by referencing historical electrical standards that enabled global plug compatibility, urging the AI community to adopt technically robust, interoperable standards while avoiding fragmented solutions [239-250][231-250]. Participants agreed that open, consensus-based standards are crucial for a global AI ecosystem and that international collaboration through networks such as the International Network for Advanced AI Measurement, Evaluation, and Science is already underway [276-280][231-250].
The discussion concluded with a shared commitment to develop voluntary, interoperable, and secure AI agent standards that will foster innovation, democratize access, and support worldwide adoption of AI technologies [185-188][209-210].
Keypoints
Major discussion points
– Rapid emergence of AI-agent protocols and their functional benefits – The panel highlighted a growing ecosystem of standards such as the Anthropic Model Context Protocol (MCP), Google DeepMind’s A2A agent-to-agent protocol, OpenAI’s commerce protocol, and XAI’s MacroHearts project, which are beginning to serve as industry de-facto standards [17-21]. MCP is described as a “universal open standard for connecting AI systems to the tools and data sources that people already use,” enabling agents to discover and use enterprise or government data without bespoke integration [28-38]. Google’s A2A protocol provides a “digitized clipboard” that shares an agent’s identity, capabilities, intent, data handling, and security requirements to facilitate direct agent-to-agent communication [64-71]. The Universal Commerce Protocol (UCP) and OpenAI’s commerce protocol aim to let agents transact with websites and payment systems, opening a new “agentic economy” [74-76][98-101].
– U.S. government’s coordinating role through the Center for AI Standards and Innovation (CASI) and NIST – CASI, housed in the Department of Commerce and partnered with NIST, positions itself as the “front door” for industry to engage with the government, avoiding duplicated requests and fostering consensus-based voluntary standards [132-140][146-152]. Recent actions include issuing a Request for Information on AI-agent security [155-161], supporting NIST’s draft on agent identity and authorization [163-165], and planning sector-specific listening sessions (education, healthcare, finance) to surface real-world challenges [165-173].
– Security, trust, and evaluation as prerequisites for widespread adoption – Panelists repeatedly linked trustworthy standards to the historic rollout of internet security (SSL/HTTPS) and automotive safety metrics, arguing that standardized security assessments will give users confidence to “trust…when to trust, and when not to trust” AI agents [206-207][211-218][219-227]. Analogies to car safety ratings and fuel-economy metrics illustrate how third-party, consensus-driven benchmarks can enable informed purchasing decisions for AI-enabled services [211-218].
– Open, interoperable standards to prevent lock-in and promote global collaboration – The discussion emphasized that open protocols allow builders in India, Kenya, or elsewhere to switch models or providers without re-engineering, mirroring how early internet protocols (TCP/IP, HTTPS) unlocked global innovation and economic growth [188-190][194-199][202-207][224-226]. This openness is presented as a strategic U.S. policy choice contrasting with “closed” national internet initiatives [191-197].
– International engagement and future work – Beyond the U.S., the panel noted active participation in the International Network for Advanced AI Measurement, Evaluation, and Science (IN-AIMES) and upcoming sector-specific listening sessions to gather global input, especially from emerging markets like India [165-170][276-280][274-275].
Overall purpose / goal of the discussion
The panel was convened to showcase the nascent ecosystem of AI-agent standards, explain how these protocols unlock interoperability, security, and commerce, and to outline the U.S. government’s role (through OSTP, CASI, and NIST) in coordinating voluntary, consensus-based standards that will enable a globally accessible, lock-in-free AI economy.
Tone of the discussion
The conversation remained collaborative and forward-looking throughout, beginning with enthusiastic introductions and a celebratory tone about industry progress. As technical details emerged, the tone shifted to a more explanatory, “building-the-foundation” mode, using historical analogies (internet, automotive standards) to underscore seriousness. Interspersed moments of light humor (e.g., Michael Brown’s accent joke) kept the atmosphere informal yet constructive. Overall, the tone stayed optimistic, emphasizing partnership between industry and government and a shared commitment to open standards.
Speakers
– Sihao Huang – Senior Policy Advisor for AI, Emerging Tech, White House [S1]
– Austin Marin – Acting Director, U.S. Center for AI Standards and Innovation, Department of Commerce [S4]
– Wifredo Fernandez – Director for Global Government Affairs, XAI [S5]
– Owen Lauder – Senior Director and Head of Frontier Policy and Public Affairs, Google DeepMind [S7]
– Michael Sellitto – Head of Global Affairs, Anthropic [S9]
– Michael Brown – Head of Growth and Operations (International), OpenAI [S11]
Additional speakers:
– Mike Salito – Head of Global Affairs, Anthropic (transcription variant of Michael Sellitto’s name in the panel introduction)
The panel at the India AI Impact Summit brought together senior U.S. officials and leaders from four frontier AI companies (Anthropic, Google DeepMind, OpenAI, and XAI) to examine how open standards can make AI agents interoperable and commercially viable. Sihao Huang opened by introducing himself as the White House senior policy advisor for AI and noting the presence of Austin Marin, acting director of the Department of Commerce’s Center for AI Standards and Innovation (CASI), together with the company representatives [3-6][7-12]. He reminded the audience that American firms are investing roughly $700 billion in AI infrastructure this year and are competing fiercely to deliver cheaper, more powerful models for developers worldwide [13-15]. He also cited the 802.11 Wi-Fi protocol as a concrete illustration of how shared standards enable global interoperability [190-193]. The session’s purpose, he explained, was to explore how standardized interfaces can enable a thriving “agentic economy” [16-22].
Panelists quickly identified a nascent ecosystem of agent-centric protocols that are already shaping the market. The most prominent is Anthropic’s Model Context Protocol (MCP), which many companies are adopting as a de-facto industry standard [20-22][28-38]. Google DeepMind presented its agent-to-agent (A2A) protocol, OpenAI described its own commerce protocol, and XAI referenced its secretive MacroHearts project [21][63-71][74-76][98-101]. Collectively, these efforts aim to replace bespoke, vendor-locked integrations with open, reusable specifications that developers in any country can leverage [23-24][46-48].
MCP is framed as a universal, open-source contract that lets an AI model discover and use existing data sources and tools simply by receiving a high-level description of the resource [28-36]. In practice, an agent can be told that “payroll data lives in the HR system” or that “revenue figures are stored in HEX”, and it will know how to retrieve the information just as a human employee would [34-36]. By eliminating the need to rewrite connectors for each new vendor, MCP creates a degree of interoperability that encourages competition, reduces lock-in, and enables data portability across vendors [37-38].
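The routing pattern described above can be sketched in a few lines of plain Python. Everything here is an illustrative stand-in, not the actual MCP wire format (which is JSON-RPC): the resource names are hypothetical, and naive keyword overlap substitutes for the language understanding a real model would use to pick a data source from its description.

```python
# Sketch: an agent is given short, human-level descriptions of data sources
# and routes a request to the right one without bespoke integration code.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Resource:
    name: str
    description: str             # the "rough description" the model receives
    fetch: Callable[[str], str]  # stand-in for the real connector

# The organization registers what it already has, once, in one place.
registry = [
    Resource("hr-system", "payroll and employee records",
             lambda q: f"hr-system: payroll rows matching {q!r}"),
    Resource("analytics", "revenue and usage figures",
             lambda q: f"analytics: revenue figures for {q!r}"),
]

def route(query: str) -> str:
    """Pick the resource whose description best overlaps the query.

    A real model does this with language understanding; keyword overlap
    stands in for that here.
    """
    words = set(query.lower().split())
    best = max(registry, key=lambda r: len(words & set(r.description.split())))
    return best.fetch(query)

print(route("get payroll for Q3"))    # routed to hr-system
print(route("revenue last quarter"))  # routed to analytics
```

The point of the sketch is that switching vendors only means swapping the model behind `route`, not rewriting the registry of connectors.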
A complementary initiative is the Skills protocol, which encodes task-specific instructions that can be taught to an agent once and then reused across different providers [46-48]. This mirrors the way a new employee is trained on organizational procedures; once the skill set is captured, any compatible agent can execute the task, and the skill set can be ported if a user switches from Anthropic to another vendor [46-48].
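A minimal sketch of that “teach once, port anywhere” idea: the skill is plain data (instructions), not vendor-specific code, so the same object can be handed to any compatible agent. The vendor names and skill contents below are hypothetical.

```python
# A skill captured as plain data, reusable across providers.
skill = {
    "name": "monthly-expense-report",
    "instructions": [
        "collect receipts from the finance inbox",
        "total the amounts by category",
        "file the summary in the HR system",
    ],
}

def execute(vendor: str, skill: dict) -> list[str]:
    # A real agent interprets each step with its own model; here we only
    # show that the identical skill object drives either provider.
    return [f"[{vendor}] {step}" for step in skill["instructions"]]

trace_a = execute("vendor_a", skill)
trace_b = execute("vendor_b", skill)  # same skill, different provider

# The work performed is identical; only the provider label differs.
assert [t.split("] ", 1)[1] for t in trace_a] == \
       [t.split("] ", 1)[1] for t in trace_b]
```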
Google’s A2A protocol tackles the long-standing problem of agents communicating directly with one another. It defines a “digitized clipboard” that carries an agent’s identifier, capabilities, intent, data-access requirements, and security constraints, thereby allowing two agents to exchange information without custom code or a shared code base [63-71]. Owen Lauder stressed that this metadata-rich exchange is fundamental to “greasing the wheels of the agentic economy” [72-73].
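The “clipboard” exchange can be sketched as a pair of metadata cards plus a handshake check. The field names and compatibility rule below are illustrative, not the actual A2A schema.

```python
# Illustrative agent "cards": the metadata two agents exchange before
# talking directly. All fields are hypothetical stand-ins.
card_planner = {
    "id": "travel-planner",
    "capabilities": {"search_flights", "book_hotels"},
    "intent": "assemble a travel itinerary",
    "security": {"requires_auth": True},
}
card_airline = {
    "id": "airline-agent",
    "capabilities": {"search_flights", "issue_tickets"},
    "intent": "sell airline seats",
    "security": {"requires_auth": True},
}

def handshake(a: dict, b: dict) -> bool:
    """Agents may talk if they share a capability and agree on security."""
    shared = a["capabilities"] & b["capabilities"]
    security_ok = a["security"]["requires_auth"] == b["security"]["requires_auth"]
    return bool(shared) and security_ok

assert handshake(card_planner, card_airline)  # both speak "search_flights"
```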
The commercial dimension is addressed by Google’s Universal Commerce Protocol (UCP) and OpenAI’s commerce protocol. Both enable agents to interact with websites, payment gateways, and other e-commerce services, opening the possibility for agents to autonomously book flights, reserve hotels, or purchase goods on behalf of users [74-76][98-101]. Michael Brown illustrated this with a scenario in which an agent arranges a family vacation, highlighting how shared standards allow agents from different companies and jurisdictions to cooperate safely [99-101].
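A toy version of that vacation-booking flow, written on the assumption (consistent with the panel’s emphasis on trust, though the real protocols specify this in far more detail) that payment is gated on explicit user confirmation. Merchant names, routes, and prices are made up.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Cart:
    items: list[tuple[str, float]] = field(default_factory=list)

    def add(self, description: str, price: float) -> None:
        self.items.append((description, price))

    @property
    def total(self) -> float:
        return sum(price for _, price in self.items)

def checkout(cart: Cart, confirm: Callable[[float], bool]) -> str:
    # The protocol-level rule sketched here: no funds move without an
    # explicit, out-of-band user confirmation of the total.
    if not confirm(cart.total):
        return "declined: user did not confirm"
    return f"paid {cart.total:.2f}"

cart = Cart()
cart.add("flight DEL to GOI", 120.0)
cart.add("hotel, 3 nights", 240.0)

print(checkout(cart, confirm=lambda total: total <= 500))  # paid 360.00
print(checkout(cart, confirm=lambda total: False))         # declined
```

Because the confirmation hook is part of the interface rather than any one vendor’s app, agents from different companies can drive the same checkout step.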
Security and trust were repeatedly identified as prerequisites for mass adoption. Sihao Huang likened the need for AI-agent security standards to the historic introduction of SSL/HTTPS, which unlocked e-commerce by assuring users that their transactions were safe [206-207]. Michael Sellitto reinforced this analogy with an automobile-industry metaphor: just as independent crash-test ratings and fuel-economy figures give consumers confidence in a vehicle, third-party benchmarks for AI-agent performance and safety would enable informed purchasing decisions and geopolitical resilience [211-218]. Owen Lauder added that security fields, such as authentication, data-handling policies, and user-confirmation requirements, should be baked into the agent-to-agent metadata itself [71-73][227-229].
Austin Marin outlined the U.S. government’s coordinating role, describing CASI as the “front door” for industry to engage with the federal apparatus, reducing duplicated agency requests and fostering consensus-based voluntary standards [132-145]. He noted that CASI already has formal research or pre-deployment evaluation agreements with the panel companies [132-138]. CASI operates within NIST, which has a century-long tradition of facilitating industry-driven standards rather than imposing regulation [146-152]. Recent actions include a Request for Information on AI-agent security (closing in March) [155-161], a draft NIST ITL document on agent identity and authorization [163-165], and planned sector-specific listening sessions (education, healthcare, finance) to surface real-world challenges such as handling personally identifiable information [165-173].
Panelists repeatedly drew parallels with historic standard-setting successes. Sihao Huang argued that the open, decentralized protocols that underpinned the early Internet, such as TCP/IP and HTTPS, were deliberately supported by the U.S. government and generated global prosperity while keeping the system open to competition [188-199]. He warned against “closed” national versions of the Internet, noting that only the open suite achieved worldwide scale [194-199]. Owen Lauder recalled early skepticism about online credit-card use, underscoring how security standards transformed the digital economy [206-207].
International collaboration was presented as essential to avoid fragmented ecosystems. CASI participates in the International Network for Advanced AI Measurement, Evaluation and Science (IN-AIMES), a ten-country forum that shares best-practice measurement methods and aligns evaluation methodologies across borders [274-278]. Austin Marin also highlighted that CASI publishes a blog post summarizing the consensus reached within IN-AIMES, ensuring that standards are globally relevant and that emerging markets such as India can both contribute to and benefit from the shared layer [279-280].
Across the discussion, there was strong consensus that open, interoperable standards are the cornerstone of a democratic AI future. All speakers agreed that such standards prevent vendor lock-in, enable builders in India, Kenya and elsewhere to switch models without re-engineering, and create a “parallel Internet” of AI services [186-190][121-123][138-145][146-152]. Points of tension emerged, however. Sihao Huang and Austin Marin advocated a prominent U.S. leadership role in exporting an open AI stack, whereas Michael Brown cautioned that industry should remain the primary driver of technical norm-setting, with government acting only as a convenor [191-199][138-145][252-254]. Security priorities also diverged: Sihao emphasized the historical SSL/HTTPS model, Owen focused on embedding security metadata in agent-to-agent exchanges, Michael Sellitto highlighted the need for third-party performance metrics, and Wifredo Fernandez called for broader privacy, auditability and consent mechanisms [206-207][71-73][219-227][264-268]. Fernandez raised the novel question of whether agent-driven social-media platforms should be regulated, linking it to his broader call for privacy, auditability and consent [119-122]. He also referenced the “Moltbook” phenomenon as an example of how AI-driven content spreads on X (formerly Twitter) [98-101].
Future of AI standards discussion (chronological order)
Sihao Huang opened the forward-looking segment by asking how the standards ecosystem should evolve to support an expanding agentic economy. Michael Sellitto responded with an automobile-industry analogy, arguing that independent safety ratings and fuel-efficiency metrics, analogous to third-party AI-agent benchmarks, will be essential for consumer confidence and geopolitical resilience [211-218]. Owen Lauder then reflected on historic lessons, noting that the introduction of SSL/HTTPS and early credit-card security standards were pivotal in unlocking digital commerce [206-207]. Michael Brown added that while government can facilitate coordination, the technical details of standards should be driven by industry expertise, with the state acting as a convenor rather than a regulator [191-199]. Finally, Wifredo Fernandez emphasized privacy-centric principles, calling for robust auditability, consent mechanisms, and explicit regulation of agent-driven social-media platforms [119-122][264-268].
In conclusion, the panel affirmed a shared vision: develop voluntary, consensus-based standards that are technically robust, security-focused and globally interoperable, thereby fostering innovation, competition and trust in AI-agent ecosystems. Immediate actions include submitting comments to the CASI RFI on agent security, reviewing NIST’s draft identity standards, and participating in the upcoming sector-specific listening sessions [155-165][165-173]. Longer-term goals involve harmonizing measurement practices through IN-AIMES, expanding open protocols such as MCP, Skills, A2A and commerce standards, and ensuring that these frameworks embed privacy, auditability and user consent to support responsible deployment worldwide [276-280][186-190][211-218].
of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable and open to the rest of the world to sort of build on that for your own businesses, for your own benefits. And so we have an amazing panel here today. We have, so first of all, I’m Sihao Huang. I’m Senior Policy Advisor for AI and Emerging Tech at the White House. We’re joined with Austin Marin, who’s the Director for the Center for AI Standards and Innovation at the Department of Commerce, which really is the center for a lot of AI activity within the U.S. government, setting standards, driving innovation, measuring AI systems, improving metrology, and a lot of the smartest people in the U.S.
government are within Austin’s organization. And then we have the four frontier AI companies from the United States. So we’re very happy to be joined by Mike Sellitto, who is the Head of Global Affairs at Anthropic. We have Owen Lauder at Google DeepMind, who’s the Senior Director and Head of Frontier Policy and Public Affairs. We have Mike Brown, who is head of growth and operations, international, for OpenAI. And, of course, we have Wifi Fernandez, who is the director for global government affairs at XAI. So really an amazing lineup of U.S. industry. I said this in a previous panel, but American companies are spending $700 billion on infrastructure this year, just this year alone. And they probably won’t like it that I say this, but they’re competing very hard against each other to make AI models cheaper and more powerful for you guys to build on and to drive those applications.
And so this is going to be a panel on how we make that happen, how we standardize interfaces with those AI systems. And so first I’m just going to ask a question to the AI companies that are sat here. Over the past few months, I think, we’ve seen the emergence of an ecosystem of standards to support the deployment of AI agents. I think one of the most notable ones is Anthropic’s Model Context Protocol, which a lot of other companies are building off of right now and is sort of becoming the industry standard. Of course, you have Google DeepMind’s A2A Agent-to-Agent Protocol, OpenAI’s Agentic Commerce Protocol, and then XAI, of course, has been working on its highly secretive and famous MacroHearts agent project.
And so all the companies here are very much involved in sort of this agent discussion. And so maybe I’ll open it up to the companies here to tell us a little bit about what these agent protocols actually do and what they have unlocked, for the builders who are sat here in the audience. What do they enable a software engineer or an AI engineer in India or other countries to create?
Okay. Well, first I want to start off by thanking Sihao and OSTP for organizing this panel and all the people who are here. Thank you. So it’s great to be here with Austin. I think Anthropic has had a really strong partnership with the Trump administration and appreciated the leadership of Secretary Lutnick in expanding and enhancing the Center for AI Standards and Innovation, which is really critical to making this technology work for everybody in a manner that’s safe, responsible, and open. MCP is a universal open standard for connecting AI systems to the tools and data sources that people already use. So imagine the knowledge bases inside of an enterprise. You can imagine government data sources.
The Indian government, of course, is a real leader in, why am I forgetting the acronym right now, DPI, sorry, and just has massive amounts of data that are already digitized. And so MCP is a way that you can connect your AI models and agents to those data sets and also tools. And it’s a really simple, intuitive way. You just need to give the model a rough description of what’s in the data source and what kind of tools or how it can access it. And then the model will intuitively know how it can use those data sources the same way that somebody in your enterprise or your organization would know: if I want to get payroll data, I need to go to this human resources system.
If I want to get data about, you know, our revenue, I need to go into HEX or whatever your particular tools are. You know, before MCP, you really had to build all these systems in a very bespoke manner, which meant that if you built them with one model or one vendor, you were kind of stuck because you’d have to rewrite everything if you wanted to switch. MCP being this open source protocol that’s supported by all of the major AI companies means that you really have this degree of interoperability, which just enables the whole system to be much more open and competitive. We also recently built Skills, which is a set of instructions that teach agents how to perform specific tasks. The way that I describe this or think about it is, you know, imagine a new person joins your team. You spend a little bit of time teaching them, you know, how to do work the way that your organization does it. And then you expect them to just be able to follow those instructions all the time. So you kind of teach once and then they’re able to do that. It’s the same thing with Skills, which also is another open protocol where you can build these skills. And then if you decide that, you know, you want to switch from Anthropic to any of the other fine companies here on the panel, you can move those skills over.
And so that interoperability and data portability is really a critical piece of making this an open and competitive environment.
Amazing. Thank you, Mike. And, yeah, thank you to Sihao. Thank you to OSTP and the U.S. government for the event and all the partnership. And a big thank you and congrats to our Indian hosts on a fantastic summit week. If you take a step back, it has been, I think, a really exciting week, a demonstration of how advanced AI is now being used around the world to do incredible things. It’s been really exciting seeing the way that people are using Gemini right across India, really exciting to see the way that everyone in India, from world-class scientists using AlphaFold to teachers and students using AI in the classroom, is putting it to work. And I think with all of the progress that we’ve seen in the last few years, it’s easy to forget sometimes that this is still relatively new technology.
We’re still in the relatively early innings of working out how to develop this technology and use it for good. And one of the things that we need to do, and I think Sihao covered this very well in his opening gambit, is build out this ecosystem of technical standards to make sure that we can continue using this technology in the right ways. There’s a couple of ways that we’re thinking about these standards. One is technical standards, interoperable standards, and then also standards for testing these systems, making sure that we can use them in a reliable and secure way. We really want to contribute right across the piece here, so we’re excited. We have various standards that we have contributed to the ecosystem.
Our agent-to-agent standard that Sihao mentioned. This is basically a standard for how agentic systems talk to each other. At the moment, it’s a little bit tricky for agents to converse with each other. You have to often write bits of bespoke code for an agent to talk to an agent, or they have to be running on the same walled-garden code base. So what we do with agent-to-agent is essentially have a sort of digitized clipboard of information that an agent will share with another agent. What’s my ID as an agent? What are my capabilities? What am I trying to do? How do I take data? What are my security requirements? This is going to be absolutely fundamental to sort of greasing the wheels of the agentic economy.
UCP, another standard that we’re working on, so we have our Universal Commerce Protocol at Google. This essentially does the same thing, but it’s for how agents talk to websites and payment systems. This is going to be transformative for business. It’s great to be able to partner with companies right around the world, whether it’s Walmart and Target in the U.S. or Flipkart and Infosys in India, that we’re working with across these agents. Excited to see what everyone is going to do with the technology that we can enable with this.
Thanks for the tip. Hi, everyone. My name is Michael Brown. My name placard says George Osborne, who’s a colleague. He got tied up in another panel, so I’m here. George and I work extremely closely together, but he has a much nicer accent because he’s from the U.K. I’m doing my best here. You’re doing very well, I might say, very well. For me, this is a fun panel because it feels like a very collaborative and cooperative opportunity to grow the pie, and the companies on either side of us are extraordinary companies with extraordinary humans, and it’s fun to just work with them in some of these areas. If I were going to kind of explain why we’re here in this particular panel to my kids, who are 9 and 11, I would sort of say, look, are there countries out there in the world where when you get to a stoplight, red means go?
I don’t think so. I think mostly red means stop and green means go. I mean, if I’m wrong, I apologize. I’m not an expert. But, you know, having sort of shared understanding in countries, rich and poor, advanced and still developing, around how things work, I think, grows the pie because it allows builders to build in a way that everyone can kind of know that what they’re building is going to be both secure and is going to be accessible and hopefully enjoyable or useful to people anywhere in the world. And I think each of the companies up here is contributing something great to that. You know, I joined OpenAI relatively recently, but MCP to me is something where I just knew, like, that’s really important.
And like, well, Anthropic introduced it. Hopefully, Anthropic would agree with this, that now it’s just like the thing, right? And I think that’s terrific that it’s the thing. You know, Owen also mentioned in commerce, I don’t know if these standards compete or if it’s cooperative, but at OpenAI, we have a commerce protocol as well for the same thing, because there’s a world where these agents are going to be out shopping for us, which is kind of fun, right? So, you know, if the agent knows that you’re planning on taking a family vacation and it knows that you want to visit Goa and the agent can go actually secure your travel flights and your hotel, these commerce protocols can do that.
So agents of different companies, potentially in different countries, can all partner and work well together because they understand how they’re supposed to be looking for shared information and how that information should be shared. There’s kind of a shared understanding there. And so I think all of us are working to build these protocols to grow the pie, to create more democratization, more commerce, more benefit for everyone by having these common protocols in place.
Thank you, Sihao. Great to be with you all here, and thank you to the government for having us. What an exciting week, frenetic and kinetic and chaotic, as I was saying earlier. So it’s just an honor to be here and to feel the energy and all the innovation and to meet a bunch of different builders across India. So, Wifredo Fernandez, folks call me Wifi for short. It’s a nickname I got in the 90s before wireless Internet was a thing, so my name became relevant later. But, yeah, this is certainly a topic that brings us all together, which is wonderful. You know, XAI is only two and a half years old. So we’re all in this together.
The foundational work done by these peer companies has enabled us to accelerate our development. We’re better because of those standards, and we’re better because we can all build on top of them. And these standards and protocols that folks have built, and that we sort of lay out and sort of agree to as an industry and as governments, really make sure that not just us four compete, right? This enables a ton of innovation. So, you know, on the X side, and, you know, XAI and X sort of operate in tandem, it’s been really neat to see the AI community sort of build and test and discuss and debate in public.
So, like, when Moltbook was taking off, I think you likely found out about it on X. And so it’s just neat to see the ecosystem sort of converge in that discussion space. And just in thinking about this panel and thinking about Moltbook in particular, it’s like, well, do we regulate social media platforms that are agent-driven? It brings up all these really novel questions about how we regulate. But I think at the end of the day, we all agree that these open standards that are creating sort of this, call it a layer, call it a new ecosystem, call it a parallel Internet, are just really crucial for our development of the Internet writ large.
And so, yeah, excited about the panel and the discussion here today.
Thank you so much. Your name is formalized in the 802.11 protocol, which is what allows my phone to connect to the Internet in D.C. and here in India. So it’s extremely relevant. I’m going to use that. That’s awesome. So I think we’ve heard a little bit from our companies, who are engaged in a lot of dynamic activity, pushing out agent protocols of all kinds. And I think there’s a lot of industry excitement over agents right now. One of the big announcements that we’re here to make, which Director Carrazio also made earlier on the main stage, is the Agent Standards Initiative, and that is something that is led out of CASI and NIST. So I’ll turn to Austin to introduce that.
Absolutely. Thanks, Sihao, and thank you to OSTP for convening this event, and to my fellow panelists. I’ll start with a brief introduction of my organization. I am the Acting Director of the U.S. Center for AI Standards and Innovation. We were founded about two years ago as the U.S. AI Safety Institute. In June of last year, Commerce Secretary Howard Lutnick refounded us as the Center for AI Standards and Innovation, which signaled a shift from safety concepts to standards and innovation. Our remit is to be industry’s front door to working with the U.S. government. There are two aspects of our organization that bear note. The first is that we’re located within the Department of Commerce.
We are commerce-focused. We are industry-focused. We work with all of the companies on this panel; with some of them we have formal research or pre-deployment evaluation agreements, so that we can work with them on their models and the research questions they’re tackling. We also take seriously our role as a front door to the U.S. government for industry. We want to make sure that when industry is trying to navigate government, they’re speaking to the right people; that the people in government they’re speaking to have advisors who understand frontier AI and agentic AI; and also that industry isn’t being overwhelmed by duplicative requests from different parts of government.
You don’t want ten different agencies asking the same company basically the same thing and creating unnecessary work, so we try to act in a coordinating role to make sure that industry is being heard as they navigate the U.S. government. The other aspect of our organization that bears note is that we’re located within NIST, the National Institute of Standards and Technology, and NIST has an over-century-long track record of not regulating, but of helping industry develop, through consensus, voluntary standards and best practices. The Acting Director of NIST, Craig Burkhart, likes to talk about the taillights, the brake lights, on the back of a car. I’m sure you all see them in India; they’re the same color red as in the U.S.
That’s because a NIST standard defined exactly what color red the taillights are going to be. But another important part of that anecdote is that it wasn’t government that said this is the color red you all must use. Industry came together, and with the help of NIST experts through a convening, they agreed on what the color should be. So when we look at what the future brings, and where NIST can bring its industry-driven, consensus-based, voluntary standards work into the new AI world, we’re looking to AI agent standards. As Sihao said, we announced this week an AI Agent Standards Initiative, which is looking at all facets of AI and AI agents.
There are a couple of aspects of it that have already been announced, and I’ll tick through those relatively quickly. The first is a request for information on AI agent security, which is out in the field now. It closes in March, and we encourage you to engage with us and provide comments. AI agents obviously bring a whole host of new security challenges, and we’d love to hear from you and your organizations about the challenges you are facing. Learning about and identifying those challenges is the first step. Once we identify them, we can take the next step of seeing where NIST’s approach of voluntary standards and best-practices documents can help address and mitigate those challenges.
Another aspect: our colleagues at NIST’s Information Technology Laboratory, or ITL, have a draft out for comment on AI agent identity and authorization. Again, we encourage you to engage and interact with them. A third initiative we recently announced is that we’re going to hold sector-specific listening sessions, hopefully in April, in education, healthcare, and finance, where we’ll convene various members of industry and ask: there’s this great technology out there called AI agents; why aren’t you adopting it? What challenges are you facing? We may not be able to solve those challenges, but maybe we can. One example I give, and I don’t know that it’s what we’ll find, is that in the education and healthcare sectors there are business concerns and existing regulatory concerns about PII, personally identifiable information.
Perhaps what we’ll learn through these listening sessions is that hospitals or schools aren’t deploying AI because they can’t reliably evaluate how AI agents are handling PII. That’s something where CAISI, my organization, could develop metrology, benchmarks, evaluations, and best-practices documents that could give those institutions confidence that the agents are performing as desired. Maybe that’s a step we could take, through voluntary, consensus-driven best practices and standards, that unlocks adoption. So we’re very focused on that, and we’re looking forward to learning what those challenges are. I don’t know if the challenge I mentioned is actually one facing industry, and that’s part of NIST’s approach: in D.C., we only see a small slice of what’s going on in industry.
We only have a tiny window into the world, and so it comes from a place of humility: we don’t know all the challenges people are facing. The companies on this panel are doing an incredible job coming up with protocols for some of the challenges they’re facing. We talked about agent-to-agent for how agents communicate, MCP for how agents navigate databases, and UCP and OpenAI’s commerce protocol for engaging in e-commerce. I’m sure through these conversations we’re going to identify other areas where open-source protocols, standards, and best practices could help unlock adoption and implementation. And we’re really excited to work with you and all your institutions and companies on stage to identify those opportunities and see how we can leverage NIST’s convening authority to help.
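To make the interface idea concrete, here is a minimal sketch of the pattern these tool protocols standardize: a tool is published as a plain-language description plus a typed input schema, and any vendor's agent runtime can discover and dispatch to it without bespoke glue code. The field names and the `lookup_invoice` tool are hypothetical illustrations, not the actual MCP schema.

```python
# Hypothetical sketch of a protocol-style tool registry (illustrative only,
# not the real MCP specification). The model sees only the description and
# input schema; the runtime validates the call and invokes the handler.

TOOLS = {
    "lookup_invoice": {
        "description": "Fetch an invoice record from the enterprise billing database.",
        "input_schema": {"invoice_id": "string"},
        "handler": lambda args: {"invoice_id": args["invoice_id"], "status": "paid"},
    }
}

def call_tool(name: str, args: dict) -> dict:
    """Dispatch a tool call the way an agent runtime might: confirm the tool
    exists, check that all schema-required arguments are present, then invoke."""
    tool = TOOLS[name]
    missing = [k for k in tool["input_schema"] if k not in args]
    if missing:
        raise ValueError(f"missing arguments: {missing}")
    return tool["handler"](args)

result = call_tool("lookup_invoice", {"invoice_id": "INV-42"})
print(result["status"])  # the runtime would feed this result back to the model
```

The point of the sketch is the indirection: because the model only ever sees descriptions and schemas, swapping in a different vendor's model leaves the tool layer untouched, which is exactly the lock-in avoidance the panel describes.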
Thank you so much for that, Austin. To reemphasize, this standards initiative is really about making sure that the products built on top of our AI systems are able to connect with each other, such that if there’s a builder in India or a builder in Kenya building on top of our AI products, American companies can use them as well, and buy from them as well. And similarly, if you want to switch to a different model, nothing is locked in. I think this really ties back to a perspective that we as the U.S. government, and the Trump administration in particular, hold about AI and AI products. We think back a lot on the history of the Internet and what that enabled for the world, but also what it enabled for America.
There was a perspective in the U.S. from a previous administration that this technology had to be strictly locked down, and we think that’s a mistake. We want to share the best AI technologies with the rest of the world, and that’s also a leading message that our delegation has here at the India AI Summit. When we think back on the success of the Internet, what enabled it? There were actually a number of companies and countries that tried to create their own closed versions of the Internet: centralized, tied to particular nations and their own telecom networks. They saw a little bit of success, and a lot of them were state-subsidized, but none of them really scaled to the global level of the World Wide Web.
And the World Wide Web became so successful precisely because of the protocols that the U.S. government had supported. The U.S. government made a very intentional effort to ensure that the Internet was a decentralized system, funding the independent development of protocols like TCP/IP and HTTP, the Internet suite, which enabled the rest of the world to build on top of them. And what you had was really a win-win situation: the entire world now benefits from access to the Internet and the ability to build applications and companies on top of it, which has driven so much prosperity for countries around the world, but also made Silicon Valley one of the wealthiest places in human history.
And it is because of this open commerce. That’s what we really want to create with AI for the world in the future as well. Just to add to what Austin said about the agent security piece: why is agent security so important to us? Precisely because of adoption; you need security to drive adoption. If you look back at the history of the Internet, the development of the Secure Sockets Layer, SSL, and eventually HTTPS, is what enabled e-commerce. So again, a lot of this is about the efforts we’ll make together with industry to ensure there is a standards ecosystem, and interoperable interfaces that everyone can build on and trust, to create the AI economy we’re all looking forward to.
So I’ll stop ranting, but I’ll turn to the companies here. And I guess I’ll ask you all, how do you see sort of the future of AI standards and agent development? And how can AI agent standards really reflect the same principles that enable the open internet, including interoperability and including security?
I feel like I need to somehow fit an automobile analogy in here, since there’s been a theme. Maybe I’ll use my favorite one: right now, if you go down to the car dealership to buy a car, those cars are going to have a bunch of independently determined metrics you can use to understand the characteristics of the vehicle. It will tell you what the fuel economy is, how far you can drive on a gallon or liter of gas, and how it performs in various types of crash tests. These are all metrics produced in a standardized way, oftentimes by third parties, so you can have trust and confidence in them, and you can know what kind of car you want to buy.
Maybe I’m a single person and I like to drive fast, so I’m mostly worried about head-on collisions, because I’m going to be driving as fast as the car can possibly go, and that’s the biggest danger for me. Or maybe I have a family and I’m worried about what happens if we get hit from the side with kids in the back seats. A piece of what this standardization can help us get to is that same kind of confidence in knowing what you’re purchasing, confidence that customers and governments and the public can have. I think another real benefit, really aligned with things that Michael Kratsios, the OSTP director, talked about today and in an op-ed he had in the Financial Times, is around exporting the American AI stack, right?
There are a lot of concerns today about sovereignty, about having control over your systems and your data, and so on. A way that you can both use the best technology in the world, which sometimes comes from American companies, and also have confidence that there’s resilience in the system, is to have things built to open standards, right? That gives you the ability to decide to make changes. If today Anthropic is producing the best technology and tomorrow it’s X or OpenAI or someone else, you can change. Or maybe an open-source model gets good enough at your use case and you want to switch over from a proprietary model to an open-source one.
So I think that’s what this can enable; that’s the opportunity we have ahead of us. And I think the vision of the AI security standards work that CAISI is going to be doing is this: if you’re going to entrust these systems with access to your personal data or your financial data, or the ability to do things in the real world on behalf of your enterprise, you need some assurance that there’s security, that there’s authentication, and that there’s an ability to come back and check with the user before taking certain significant actions. And you can test, evaluate, and report that information in a way that is intelligible to the customer, so they know what they’re buying, when to trust, and when not to trust.
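The check-with-the-user-before-significant-actions behavior described here can be sketched as a simple approval gate. This is our own illustration of the pattern, not any vendor's API; the action names and the risk list are hypothetical.

```python
# Minimal sketch of a human-in-the-loop approval gate for agent actions
# (illustrative only). Low-risk actions run automatically; actions on the
# high-risk list are executed only if the user's approval callback says yes.

HIGH_RISK = {"transfer_funds", "delete_records", "sign_contract"}

def execute_action(action: str, approve) -> str:
    """Run an agent action, pausing to ask the user before high-risk ones."""
    if action in HIGH_RISK and not approve(action):
        return "blocked: user declined"
    return f"executed: {action}"

# Usage: the callback stands in for a real confirmation prompt in a UI.
print(execute_action("summarize_report", approve=lambda a: False))  # low-risk, runs without asking
print(execute_action("transfer_funds", approve=lambda a: False))    # high-risk, requires consent
```

A standards body could specify which action classes must sit behind such a gate and how the approval is logged, which is what makes the behavior testable and reportable in the way described above.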
What’s up there?
Yeah, well said, and I endorse a lot of what Mike, Austin, and Sihao mentioned. I do think there’s a lot we can learn from the history of standards in various industries and apply to AI. Sihao mentioned some of the early Internet standards. I’m just about old enough to remember people in the early ’90s saying they would never, ever put credit card information on the Internet; that would be absolutely insane. And it sort of was, when you had information being shared in plain text in a totally unencrypted way. Then you get the secure layer that Sihao mentioned, HTTPS, and it completely unlocked the modern Internet economy as we know it.
The history of electrical standards as well: this was something that drove the adoption of electrical products in the late 19th and early 20th century. You had a scientific approach to standardizing units of measurement like ohms, volts, and amperes, which allowed power supplies to connect to the grid. It also meant you could invent things like fuses, which could be rated for a certain amperage and would shut off if the current rose above it. So I think we need to continue learning from history, and there are a few principles we should take forward as we do. Open standards, as we’ve been discussing, are the right way to go.
You need technically robust standards that are informed by a real understanding of the technology and how it works, and we should prioritize interoperability as well. Maybe a final thought for this piece is also learning from standards that are not done well. Many industries have not quite gotten this right. A lot of us have traveled here from around the world having to bring adapters with us, because our electrical products won’t plug into the wall. It’s really annoying, and it’s actually also a massive hindrance on commerce, because it means that if you’re producing a computer or another electronic appliance, you have to ship a different plug for every single country you’re developing your product for.
So those are things to avoid that we need to be mindful of as well.
automobile industry or something, two humongous but separate industries, and how they’re going to have to come together to set up norms for how agentic systems work and how data is shared, I think government can probably play an important role in bringing together industries to establish those dialogues. But the industries certainly still need to be front and center in establishing what works for them because they are the practitioners and the experts on what their customers need, what their colleagues need. And so I think we’re all going to have to kind of navigate that world together and figure out what is the role for the research labs, how does government support, and then how does industry play a leadership role in both governing and building for itself industry -specific standards for the future of AI.
Yeah, I think this conversation has been a bit of a history lesson; I appreciate that. Thank you. It made me think about how I used to get music as a kid, which some of the panelists may appreciate. There were these music catalogs that would come to your house. You’d select however many compact discs you wanted, put cash or a check in an envelope, and send it away. And some weeks later, magically, some CDs would appear on your doorstep. So when I think about instructing an agent to go acquire music on my behalf, I’d much rather have that. I don’t know how we used to put so much trust in a system without standards, a process that could not be audited.
So I think the guiding principles that developed the Internet still apply. We want privacy-preserving technology. We want technology that allows us to audit, that considers authenticity, and that considers means of consent. And to Michael’s point, ultimately agents serve the user and agents serve organizations; they don’t serve us as the model developers. If we view it through that lens, it should guide us right.
Great, thank you all so much for that. That was a bit of a nerdy discussion on standards, a bit of a history lesson; I love that. But we’re also here at the India AI Impact Summit, talking to a country of builders and to the developing world, which includes some of the most dynamic AI markets anywhere. So it would be great to hear from the panelists, including Austin, how you are engaging with the rest of the world on these standards, how your organizations are engaging with other countries on AI, and some of the most exciting applications you’ve seen developed on top of your standards and products.
I guess I’ll lead off. One of the main forums where CAISI engages internationally is the International Network for Advanced AI Measurement, Evaluation, and Science. It’s a bit of a mouthful, but it’s ten countries that have established AI security institutes or, as we have, a Center for AI Standards and Innovation, and we meet a couple of times a year. We also engage in informal technical and scientific exchanges and share best practices in measurement and evaluation science. In December, we met in San Diego on the sidelines of the NeurIPS conference, sat down to discuss open questions in measurement science and the challenges we’re facing, and published a blog post about a week ago summarizing some of the areas of consensus and the open questions.
And the work we’re doing there, I think, is very important, because when we talk about the evaluation of AI systems, of particular capabilities, particular security vulnerabilities, and so on, it’s important for us to have consensus on the methodologies.
The panel displayed strong convergence on four main themes: the necessity of open, interoperable AI standards; the critical role of security, trust, and auditability; the value of historical standard‑setting lessons; and the importance of coordinated government‑industry and international collaboration to democratize AI worldwide.
High consensus – the repeated alignment across all speakers suggests a shared vision that will likely translate into coordinated policy initiatives, industry road‑maps, and international cooperation, accelerating the development of a secure, open AI ecosystem.
The panel shows strong consensus on the importance of open, interoperable AI standards to drive global innovation and avoid lock‑in. The primary disagreements revolve around who should lead the standard‑setting process (government versus industry), the prioritisation of specific security components, the balance between U.S. leadership and multilateral cooperation, and the emerging question of regulating agent‑driven social media.
Moderate – while all participants share the overarching vision of open standards, the differing views on governance structures, security priorities, and regulatory scope suggest that coordination will require careful negotiation. These tensions could affect the speed and inclusivity of standard adoption, especially across jurisdictions and sectors.
The discussion pivoted around a handful of high‑impact remarks that moved the panel from a generic overview of AI protocols to a nuanced, multi‑layered conversation about interoperability, security, global policy, and ethical governance. Michael Sellitto’s exposition of MCP and SKILLZ established the technical foundation, while Owen Lauder’s ‘digitized clipboard’ metaphor expanded the scope to inter‑agent commerce. Michael Brown’s traffic‑light analogy reframed standards as a universal safety language, prompting Sihao Huang to invoke the historic success of open internet protocols as a blueprint for AI. Austin Marin then translated these ideas into concrete government actions, and Wifredo Fernandez reminded the group of privacy and consent imperatives. Collectively, these comments created turning points that shifted the tone from descriptive to prescriptive, aligned industry and government perspectives, and highlighted the intertwined technical, security, and societal challenges that must be addressed through open, consensus‑driven standards.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.