Agentic AI in Focus Opportunities Risks and Governance

20 Feb 2026 11:00h - 12:00h

Agentic AI in Focus Opportunities Risks and Governance

Session at a glance

Summary

This discussion at the AI Impact Summit focused on the business applications and policy implications of agentic AI, featuring two panels that examined both enterprise use cases and regulatory considerations. Austin Mayron from the U.S. Center for AI Standards and Innovation opened by announcing a new AI agent standards initiative, emphasizing the government’s role in facilitating industry-driven adoption through voluntary standards rather than top-down regulation.


The business panel highlighted diverse applications of agentic AI across industries. Prith Banerjee from Synopsys described “agentic engineers” that complement human designers in creating complex chips and systems, particularly for autonomous vehicles and aircraft where safety is paramount. Caroline Louveaux from MasterCard explained how agentic AI enables real-time fraud detection and secure payment processing, moving from assistive to operational AI that can act autonomously within defined parameters. Syam Nair from NetApp discussed agents that prepare and secure data at the storage level, helping identify cybersecurity threats before they spread.


All panelists emphasized the critical importance of enterprise guardrails and human oversight. They stressed that while agents can operate autonomously, humans must remain accountable and maintain ultimate control. The discussion revealed that agentic AI exists on a continuum of autonomy rather than as a binary concept, requiring different levels of human involvement depending on the application’s risk profile.


The policy panel recommended that governments focus on voluntary standards development through organizations like NIST, sector-specific regulations led by agencies with domain expertise, and international coordination through platforms like the OECD. Panelists emphasized the need for practical guidance rather than abstract principles, calling for harmonized global standards that enable innovation while ensuring safety and security across borders.


Keypoints

Major Discussion Points:

Agentic AI Use Cases Across Industries: The discussion explored practical applications of agentic AI in various sectors, including chip design and systems engineering (Synopsys), real-time fraud detection in payments (MasterCard), and data quality management in cloud infrastructure (NetApp). These examples demonstrated the shift from “assistive AI” to “operational AI” where agents can take autonomous actions rather than just provide recommendations.


Enterprise Guardrails and Risk Management: Panelists emphasized the critical importance of implementing robust safety measures for agentic AI systems, particularly in high-stakes environments. Key themes included maintaining human oversight (human-in-the-loop vs. human-on-the-loop), ensuring clear consumer intent, implementing security by design, and establishing accountability frameworks. The discussion highlighted that unlike humans, AI agents cannot take responsibility for their actions.


Government Standards and Industry Collaboration: Austin Mayron from the U.S. Center for AI Standards and Innovation outlined the government’s approach to developing voluntary, industry-driven standards rather than top-down regulation. The emphasis was on bottom-up collaboration with industry to identify barriers to adoption and develop practical solutions, particularly in areas like AI agent security and interoperability.


Global Policy Coordination and Harmonization: The panel addressed the challenge of creating unified international standards for agentic AI to avoid fragmented regulatory approaches across different countries. Discussions focused on the need for compatible frameworks between regions like the U.S., Singapore, and India to enable global technology deployment.


Multilateral Platforms for AI Governance: Panelists identified several key international venues for coordinating agentic AI policy, including the OECD (most frequently mentioned), the International Consortium of Safety Institutes, Singapore International Cyber Week, and the ITU’s AI for Good initiative. The goal is to create inclusive, global standards that prevent regulatory fragmentation.


Overall Purpose:

The discussion aimed to bridge the gap between business applications and policy frameworks for agentic AI. The first panel focused on demonstrating real-world use cases and enterprise risk management strategies, while the second panel provided policy recommendations for governments on how to encourage innovation while ensuring safety and security. The overarching goal was to inform policymakers about industry needs and best practices while advocating for collaborative, standards-based approaches over prescriptive regulation.


Overall Tone:

The discussion maintained a professional, collaborative tone throughout, with industry representatives positioning themselves as partners rather than adversaries to government regulation. The tone was generally optimistic about agentic AI’s potential while being appropriately cautious about risks. There were moments of deliberate concern-raising (particularly around physical AI applications in autonomous vehicles and aircraft) to emphasize the stakes involved. The atmosphere was constructive, with panelists building on each other’s points and showing alignment on key principles like voluntary standards, international cooperation, and human oversight. The tone remained consistent from business-focused to policy-focused discussions, reflecting a shared understanding between industry and government representatives about the need for responsible development and deployment of agentic AI systems.


Speakers

Speakers from the provided list:


Jason Oxman – Moderator/Host, appears to be with ITI (Information Technology Industry Council)


Austin Mayron – Acting Director of the U.S. Center for AI Standards and Innovation (CAISI), Senior Legal Advisor to the Undersecretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office


Prith Banerjee – CTO and SVP of Synopsys (design software automation semiconductor company)


Caroline Louveaux – Chief Privacy AI and Data Responsibility Officer at MasterCard


Syam Nair – Chief Product Officer at NetApp (global multi-cloud service provider)


Jennifer Mulvaney – Public policy role at Adobe


Ellie Sakhaee – Public policy team member at Google, Ph.D. in computer science/machine learning


Carly Ramsey – Leads public policy for Asia Pacific at Cloudflare, based in Singapore


Sam Kaplan – Assistant General Counsel for Global Policy at Palo Alto Networks (cybersecurity company)


Danielle Gilliam-Moore – Director of Global Public Policy at Salesforce, leads AI policy work


Combiz Abdolrahimi – Role/company not clearly specified, appears to work in industry with former government experience


Additional speakers:


No additional speakers were identified beyond those in the provided speakers names list.


Full session report

This comprehensive discussion at the AI Impact Summit examined both the business applications and policy implications of agentic AI, featuring two interconnected panels that explored enterprise use cases and regulatory frameworks. Jason Oxman opened by framing the session’s goal: to understand how agentic AI is being deployed in practice and what policy frameworks are needed to support responsible development.


Government Standards Initiative and Collaborative Approach

Austin Mayron from the U.S. Center for AI Standards and Innovation (CAISI) opened the discussion by outlining the organization’s collaborative approach to AI governance. CAISI, positioned within both the Department of Commerce and NIST, aims to serve as “the front door for industry to the United States government” in developing AI standards. Mayron emphasized their bottom-up, grassroots approach, stating that rather than prescriptively defining problems from Washington, CAISI actively seeks industry input to understand real-world challenges and barriers to adoption.


This week marked the launch of CAISI’s AI agent standards initiative, including a request for information on AI agent security and planned sector-specific listening sessions for healthcare, education, and finance. Mayron highlighted particular challenges in regulated fields where handling personally identifiable information (PII) creates complex compliance requirements that current AI systems struggle to navigate effectively.


Diverse Industry Applications and Use Cases

The business panel revealed the breadth of current agentic AI implementations across different sectors. Prith Banerjee from Synopsys described their vision of “agentic engineers” – a conceptual framework where human designers would be “complemented with another 200,000 agentic engineers from Synopsys” to handle the complexity of designing billion and trillion-transistor chips for systems ranging from autonomous vehicles to spacecraft. Banerjee emphasized that “we are dealing with physical AI interacting with the real world,” distinguishing these applications from purely digital AI systems.


Banerjee illustrated the accelerating pace driving this need: where car manufacturers previously updated models every five to seven years, companies like Tesla now aim for annual updates. Similarly, semiconductor design cycles have compressed from three years to one year, creating complexity that exceeds traditional design approaches. Synopsys’s recent $35 billion acquisition of Ansys reflects their “chips to systems” vision, integrating silicon design, software validation, and physics simulation.


Caroline Louveaux from MasterCard provided insight into agentic AI in financial services, particularly in fraud detection and payment processing. She articulated a crucial distinction between “assistive AI” that provides recommendations and “operational AI” that takes autonomous actions. In MasterCard’s payment network, which processes millions of transactions per second globally, AI agents must detect suspicious transactions and initiate secure payment flows in milliseconds.


Syam Nair from NetApp focused on data infrastructure challenges, describing agents that operate at the storage controller level to prepare data for AI consumption without requiring movement through complex pipelines. With cyber threats now averaging 59-second breakout times, agents positioned at the data source can identify and respond to risks before they propagate through systems.


Enterprise Guardrails and Risk Management

The discussion of enterprise guardrails revealed sophisticated approaches to risk management. Banerjee provided sobering perspective on risk, distinguishing between digital AI applications and “physical AI” systems. His examples included software-defined cars with over 100 million lines of code controlling critical functions like electric brakes and steering, where failures could have catastrophic consequences.


Caroline Louveaux outlined MasterCard’s four-pillar approach to agentic commerce guardrails. First, “know your agent” requires verification and authentication of AI agents before they can act. Second, security by design utilizes advanced customer authentication and tokenization. Third, clear consumer intent must be verified and maintained – she cited a specific incident where an employee’s casual question about purchasing sushi resulted in an actual order being placed using stored payment details. Fourth, complete traceability and auditability enable dispute resolution and regulatory compliance.


Louveaux emphasized that “autonomy can only scale if there’s trust,” highlighting the fundamental relationship between reliability and adoption. Syam Nair highlighted the accountability gap: unlike humans, AI agents cannot take responsibility for their actions, meaning business owners must maintain ultimate accountability even as systems become more autonomous.


Human Oversight and Graduated Autonomy

Ellie Sakhaee from Google introduced a framework for graduated autonomy, drawing parallels with Federal Aviation Administration regulations for drone operations. Just as drone pilots have transitioned from maintaining visual line-of-sight to operating “beyond visual line-of-sight” with appropriate safety systems, AI agent oversight can evolve from “human-in-the-loop” to “human-on-the-loop” or “human-in-command” as systems demonstrate reliability.


This graduated approach recognizes that agentic AI exists on a continuum rather than as a binary concept. The appropriate level of human oversight should correspond to the agent’s capabilities, the context of use, and the potential consequences of errors.


Policy Recommendations and Regulatory Approaches

The policy panel provided concrete recommendations for government approaches to agentic AI governance. Jennifer Mulvaney from Adobe emphasized the principle of “humans before models,” arguing that policy should focus on preventing harm to humans rather than regulating technology for its own sake.


Ellie Sakhaee advocated for regulating applications and harms rather than underlying technologies, noting that technology-specific regulations often become obsolete before implementation. Danielle Gilliam-Moore from Salesforce distinguished between governance and regulation, emphasizing that governance encompasses standards, global norms, and risk management practices beyond formal regulation. She advocated for sector-specific approaches where agencies with domain expertise take the lead.


Sam Kaplan from Palo Alto Networks emphasized security as foundational, noting that agentic AI transforms digital threats into systems with “arms and legs” capable of physical world consequences.


International Coordination Challenges

The discussion revealed strong consensus on the need for international coordination to prevent regulatory fragmentation. Carly Ramsey from Cloudflare highlighted compatibility challenges between different national frameworks, citing Singapore’s recent agentic AI governance framework as an example.


The OECD emerged as the most frequently mentioned coordination platform. Gilliam-Moore noted that OECD AI principles from 2019 have influenced legislation globally, from the EU AI Act to U.S. state-level proposals. Other venues mentioned included the International Consortium of Safety Institutes, Singapore International Cyber Week, and UN AI for Good initiatives.


Technical Challenges and Future Considerations

Several unresolved challenges emerged from the discussion. Sakhaee noted that while single-agent systems are becoming better understood, multi-agent systems present entirely new risk profiles that are not yet fully comprehended. The pace mismatch between technological development and regulatory processes remains significant, with companies moving to annual cycles while standards development takes years.


Data governance emerged as critical, since AI agents make decisions based on data without human empathy or situational awareness. Nair emphasized that if data can be manipulated or its provenance is unclear, agents may produce harmful outcomes despite operating within designed parameters.


Conclusion

This discussion represents a significant evolution in AI governance thinking, moving from abstract principles to practical implementation frameworks. The collaborative tone between industry and government representatives suggests shared recognition that effective governance requires partnership rather than adversarial relationships. The emphasis on graduated autonomy, sector-specific approaches, and international coordination provides a roadmap for managing the transition to more autonomous AI systems while maintaining appropriate oversight and accountability.


Session transcript

Jason Oxman

Our second discussion will be this panel, which will discuss the business case use of agentic AI. And then we’ll follow that with a second panel, which will discuss the public policy implications of agentic AI. That is to say, what government should be doing to encourage and to safeguard the use of agentic AI. We all know that agentic AI is quite literally the AI of agents. And there’s been a lot of discussion here at the AI Impact Summit about how agentic AI is creating new opportunities for jobs, for societal benefits, for use cases across different industries. And one of the most important questions is, of course, what public policy solutions are going to be necessary to encourage the use of agentic AI.

So I’m very pleased to welcome as our opening speaker, Austin Mayron, who is the Acting Director of the Center for AI Standards and Innovation, and a senior, you have the longest title in the world, Austin. Thank you. Senior Legal Advisor to the Undersecretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office. office. Austin, we are thrilled to have you here. You have some very interesting updates on how the U.S. administration is approaching agentic AI, including what the office is doing, which I think is enormously important as well. So you’re going to join us for a few minutes of table -setting remarks, if you will, and we’re thrilled to have you here.

Austin, I’ll turn it over to you.

Austin Mayron

Absolutely. Thank you, Jason. Thank you to ITI, and thank you all for coming today. As Jason said, my name is Austin Mayron, and I’m the Acting Director of the U.S. Center for AI Standards and Innovation, also called CAISI. CAISI was originally founded as the U.S. AI Safety Institute, but last year in June of 2025, Secretary of Commerce Howard Lutnick refounded us as the Center for AI Standards and Innovation. That signaled a shift away from safety principles, more towards standards and innovation. I think there’s two organizational aspects of CAISI that are worth note. The first is that we’re located within the Department of Commerce. We are very focused on helping industry. The Secretary has tasked us to be the front door for industry to the United States government, and we really see ourselves as serving in that role.

We collaborate with various aspects of the AI ecosystem, including the Frontier Labs, for instance, on pre -deployment evaluations. And we like to partner with industry to help understand government. As one example, sometimes there’s a lack of AI expertise within the U.S. government. And CAISI, because we have talent from Frontier AI Labs, we’re able to help explain novel concepts to other aspects of the administration. The other aspect of our organization that bears no is that we’re located with the NIST, the National Institute for Standards and Technology. And the thing that’s worth noting there is that NIST, throughout its history, it hasn’t been a regulatory organization. It’s been an organization that’s promoted economic growth and technological development by developing standards and facilitating the development of standards and best practices.

And so CAISI, we see our role as partnering with industry to develop the standards and best practices they need to flourish. And here, we’re here today to talk about AI agents, which is an incredibly timely topic. And so I thank ITI for organizing this. Just this week, CAISI, my organization, we kicked off an AI agent standards initiative. Our goal is to hear from industry how traditional standards work, best practices, guidelines can help unlock and facilitate adoption. So one area where we’ve already started that work is on AI agent security. We put out a request for information or RFI about what challenges industry is facing with AI agent security. Our colleagues at NIST at the Information Technology Laboratory also have a publication out for comment on AI identity and verification, which we encourage you, if you’re interested, please look at the documents, review them, send in your comments.

We also announced this week that we’re going to be holding sector specific listening sessions on barriers to adoption, the sectors of health care, education and finance. And our goal here is we want to learn actually what are the challenges that industry is facing. These AI agents, they have tremendous potential, but we want to understand. How CAISI and NIST and the U.S. government can help unlock adoption through standards and best practices. So I’m delighted to be here and take part in this conversation and learn more from my fellow panelists.

Jason Oxman

Thanks, Austin, so much. Really appreciate your being here and helping set the stage for us for our discussion of agentic AI. As I mentioned, we have three great experts here to start us off on the business side discussion before we move to the policy side discussion, because I really think it’s important for us to understand exactly what use cases of agentic AI are happening across different segments of the AI stack. So we’re very fortunate to have three experts here to help us with this discussion. Prith Banerjee is the CTO and SVP of Synopsys, the design software automation semiconductor company. Great to have you here, Prith. Caroline Louveaux is Chief Privacy AI and Data Responsibility Officer at MasterCard.

Caroline, thanks for being here. And also delighted to have Syam Nair, who is Chief Product Officer at NetApp, the global multi -cloud service provider. And so the three of them are each going to share a couple. A couple minutes. of opening remarks on agentic AI use cases. What we’ve asked them each to do is share with all of you kind of the top favorite agentic AI use case that’s happening so that we can use that as a way to frame the discussion around business and policy to solutions. So if we could, Prith, I’ll start with you for your favorite agentic AI use case that’s happening at Synopsys.

Prith Banerjee

Sure. So I’m Prith Banerjee, and my role is to look at sort of future directions of where Synopsys is headed. And agentic AI is actually the core of this. But before I do that, I want to share with you what Synopsys does. Synopsys is the leading provider of electronic design automation tools and IP to design chips. So the chips from, say, NVIDIA or AMD or Broadcom, Qualcomm are designed with these billion transistor chips, trillion transistor chips designed with Synopsys tools. But the opportunity that Synopsys has, seen is these chips are going into systems, systems that are like cars or… aircraft or spacecraft or system data centers, healthcare, et cetera, right? So we have this vision of chips to systems that, and because of that, Synopsys recently acquired Ansys for $35 billion, right, to be a chips to systems company.

I came into Synopsys as CTO at Ansys. So now the challenge that I want to share with all of you is as you are designing a car, right, it’s a software -defined car, right, a Tesla car has more than 100 million lines of C code in that car. That code runs on an ECU, an ECU designed by NXP or STMicro or Qualcomm. And that chip is still not yet designed, right? It is being designed with, say, Synopsys tools, but you’re writing software on the tool or on that chip, and so you have to do what is called software -defined verification validation, right, before the software is, before the chip is designed. Right. And that. that control will control the electric brakes, the electric steering, the autonomous driving of the car.

And the car is, it’s a physical product, it is being driven on the road, right? And so you use ANSYS physics simulation like Fluent for aerodynamics or LS Dyna for crash or HFS for electromagnetic. So essentially what we are doing is bringing the physics of the world around us powered by AI along with the chip design in this what we call intelligent product design which is silicon design. So the chip inside any complex design, software enabled, so you can do software updates over there, updates and AI driven. So that’s all the context and if we are a $10 billion company with a market cap of 100 billion. So the agentic AI part is the following, that the pace of innovation in the world is changing.

You used to design a new car every 7 years or maybe 5 years. That pace of innovation is changing. like Tesla, Elon Musk said we have to do it every year. Every year they want to bring a new car to market. Or NVIDIA Jensen, right? The chip design used to be every three years. NVIDIA Jensen says you have to do it every year. So the pace of innovation is becoming faster and the complexity. You used to have a chip with maybe a million transistors. Now it’s a billion transistors. It’s a trillion transistors. It’s incredibly complex. And then you have the chip with all the complicated system. The complexity is so hard that you used to have human designers at the Qualcomm, NVIDIA, etc.

who could use those things using the Synopsys tools. You cannot do that anymore. It is very, very hard. That’s where agentic AI is coming in. So at Synopsys what we have created is agentic engineers. These are like human engineers that are not trying to take the jobs of human engineers away. They are going to complement the job of a human engineer so you at Broadcom, Qualcomm, we have a hundred thousand engineers. but you will be complemented with another 200 ,000 agentic engineers from Synopsys who will do the lower level reasoning job like a human, right? But the human will still be in the loop to make sure that you are not doing drastic sort of bad things, right?

This is the incredible opportunity. But as the world talks about agentic AI in the world of large language models and data and words as tokens, our world is what we call physical AI, which is physics, and it’s the physical AI part where we are applying our agentic engineering technology to. Very, very exciting area.

Jason Oxman

That’s great. And I love how you described the human engineers being complemented by, not replaced by, the agentic AI that’s helping them be more efficient and do their jobs better. Caroline, I think of payments networks as having used AI for decades, literally. The fact that you can take a plastic card and tie it back to a, a human being, no matter where they are in the world, is actually truly remarkable. When you think about how payments networks work, it is truly remarkable, the technology. especially since you’re processing literally millions of transactions a second around the world. So with that, you look over global AI for MasterCard, and I’m curious how agentic AI is influencing the work that you and your colleagues do to make these payments rails run around the world.

Caroline Louveaux

Absolutely, and hi, everyone. It’s great to be here with you. As you said, for MasterCard, AI is nothing new. We have been leveraging AI for decades to make our payment network safer and more secure for everyone. Now with agentic, we are moving from AI systems that recommend to AI systems that act, right? And in cybersecurity and payments, the shift is already real today. AI agentic systems are being deployed, for example, to detect suspicious transactions, to triage fraud signals, to initiate secure payment flows. If you think about it, if we want to be able to detect and to block fraud in real time, decisions have to be made in milliseconds. at scale. And of course, while speed and scale matter a lot, accountability is a must.

What’s important is that these agents don’t make decisions with open -ended autonomy. They must act within clear values, principles, within clear permissions. What is the agent allowed to do? What is not allowed to do? And when does a human need to step in? And of course, humans have to have full oversight end -to -end. So, I mean, there are many other use cases. I’m happy to talk more about that, but I think that’s really our main use case. But of course, the technology is moving really, really fast. We are now talking about this multi -agent ecosystem that raises a whole new range of opportunities as well as novel challenges. And so that’s where these kind of summits where we all come together are really, really important to really get it right.

Jason Oxman

I love how you characterize it as moving from what we call assistive AI to operational AI. In other words, instead of just helping with a task, the AI, as an agent, can actually take a task on. Still oversight in the system, and that, I should have previewed this. We’re going to come back around and talk to the panelists about guidelines and protections, and as Austin importantly noted at the outset, the security of the system, how that’s built in as well. And, Siam, I want to come to you next. The multi -cloud that NetApp operates obviously is moving data around the world on behalf of customers, storing data around the world and allowing your customers to access data in a multi -cloud environment.

How is agentic AI helping NetApp with that level of customer service?

Syam Nair

Thank you. So NetApp actually, as you said, multi -cloud, we both power public cloud as well as private cloud. Many of the largest infrastructure is actually the data infrastructure. It’s built on NetApp. I’m a file storage standpoint. One of the key challenges in AI itself is having quality of data. Data quality is super important, and the previous session actually talked about it. And data quality, especially from unstructured, truly unstructured, how do you really get the structured value out of it? And that’s where agents can actually help and agents help, which is we are developing agents which are sitting closer to the storage controller. If you know the storage architecture says that without moving data and going through cluttered pipelines and, you know, positioning the data ready for AI, you can actually have the data at the source itself, which will be ready for AI.

And how this helps is, you know, many of the areas, cybersecurity, as it continues to grow as a threat, you know, 59 seconds is the average breakout of a threat these days, risk and threat will become super important to manage. And you need to do that at the layer where the data sits. So agentic has a really good use case with respect to that. We are still in our journey, early journey in terms of building these capabilities. One would say, look, if you have five levels of AI where, you know, agentic AI where level one is mostly assisted, co -pilot to autonomous agents, running a network of agents at level five, we’re still in that journey somewhere in the three range.

And that’s what we see from customers in terms of how they want to leverage data. So that’s one of my favorite use cases in preparing the data, making sure that the right data is available both for the agents and the agents can make it available for the use cases.

Jason Oxman

Yeah, interesting. So the agents are actually helping you expose any risks that may need to be addressed as part of that provisioning of data. And, Austin, I’m going to ask you to set up our second round question with me, not for me. And that is, you know, the industry has a responsibility to inform governments about risks and how they’re being addressed. So as we move into the next question for the panel around enterprise guardrails that companies are seeing. So, Austin, I’m going to ask you to set up your question. And then I’m going to ask you to set up your question. And then I’m going to ask you to set up your question. So, Austin, I’m going to ask you to set up your question.

And then I’m going to ask you to set up your question. anything in particular you would flag that you’re looking to hear from industry in the U.S. administration about those guardrails. You are overseeing an operation that asks for industry input, which I think is rare and particularly great. So thank you for doing that. Perhaps some practice tips that you can provide to everyone in the room about what it is helpful to provide government, the U.S. administration or other government colleagues that you’ve heard from on these issues and how it’s helpful to provide that information.

Austin Mayron

Yeah, absolutely. So at CAISI , our focus right now is truly on unlocking innovation and adoption. And we work in the standards space, and so we look to how NIST -fostered standards and best practices and guidelines documents can help with that innovation and that adoption. And so the NIST process, the way it normally works is we like to gather and collaborate. It can be an industry to… understand the challenges they’re facing. It’s more of a bottom -up, grassroots approach than a top -down. We’re not sitting there in Washington and saying, you know, this is the problem and we’re going to fix it. We take a little bit of humility and say, we don’t actually know what the problem is until we talk to the people who are closest to the issue, because we only have a narrow slice of the world from our vantage point, and the people who are actually in the field working on innovation, working on adoption, they have a better sense of what the barriers are.

And so we encourage everyone in industry and across the ecosystem to really engage with us, to tell us the problems that you’re encountering, and we have structured formal ways for you to do that. For instance, the request for information on AI agent security, I think it’s open for about another month, and some have already submitted comments, but we look forward to comments. As I said, we’re also convening listening sessions, I think in April, on barriers to adoption, particularly on agent issues for education, healthcare, and finance. We’re starting with those three sectors, but we really welcome that type of engagement, because we want to facilitate adoption. And one example that I sort of like to use…

I don’t know if it’s actually a barrier to adoption, but let’s say in a regulated field like healthcare or education, there’s PII, and there’s a reluctance to adopt because it’s unclear how the AI agents and systems are treating PII and whether it will satisfy regulatory burdens. CAISI could play a role in helping settle concerns about that because we could develop benchmarks, methodologies, and evaluation methods to give industry the confidence they need that, for instance, the model that they’re looking to procure and adopt and implement handles PII the way they need to to satisfy their regulatory obligations. So that’s a way where Casey, through measurement science, best practices, and standards, can help facilitate adoption. We’re also looking at interoperability, and we’ll have more about that in the coming months.

Jason Oxman

That’s great. Really appreciate that, Austin. And I love the focus on voluntary, industry -driven, consensus -based standards because that’s how the tech industry prefers to operate. It’s better than government regulation, particularly because those standards are global in nature, and NIST is a great example, as you noted, of support of those voluntary. consensus -based industry standards, which we would all prefer to operate. And, Prith, I’ll come back to you on this question of, I guess I’d call them guardrails, kind of the enterprise guardrails around risk management that you’re putting in place. Governments are paying attention. We want to handle these issues in the private sector. What are you seeing that’s important as far as those enterprise guardrails for risk management?

Prith Banerjee

So that’s a great question. Actually, at the AI Summit yesterday, there were a lot of speakers, from starting with Prime Minister Modi to President Macron, everybody kind of talked about responsible, safe AI and AI for everyone. But I want everybody in the audience to understand what is going on in this world, right? So there is a problem, right? You have a video that you can watch on, say, YouTube or Facebook, and you want to prevent a young child from watching that, right? And that is responsible AI, and you want to make sure that a 12 -year -old doesn’t watch it. But if he or she watches it, it’s not the end of the world. I mean, yes, you have seen this, but the world that we live in is this intelligent product design, right?

You are designing a car, and we have, as Syam was mentioning, level 1, which is assistive, all the way to level 5, which is fully autonomous. Now, imagine a world – I’m now doing the scary part so you understand how scary it can be, right? An autonomous car that is driving on the streets of Mumbai, right? And it’s supposed to be autonomous, making sure the pedestrians and the cows are being avoided. But suppose there is a cyber attack, right? And somebody goes in, and you want to use that car as a weapon, right? As you know, there are terrorists that go in, and they bang into these things, right? So we have to make sure that these software -defined systems – just imagine an airplane, right?

You know what has happened in the past. In 9 -11, an airplane hit a thing. So you could imagine a software -defined airplane being used as a – as a missile, right? So this is how important it is because unlike the world of Facebook and Google, and I’m not undermining Facebook, Google, I’m just saying you are dealing with people watching stuff and saying like, unlike, right? We are dealing with physical AI interacting with the real world. If real world some things happen, some really dangerous things can happen, right? And so we have to be extra careful. So that’s the challenge. What we are trying to do is to make sure that as part of this agentic engineering workflow, we are doing it in a responsible manner, in a safe manner, right?

And the work that we are doing in terms of verification, validation. So the software flow that we do before we actually do a hardware prototyping, we do full like 100 % coverage at the digital level. So we are designing the airplane on the computer, designing the car on the computer with as close to 100 % guarantee. Nothing is 100%. but I want you to understand how much more complicated this is right because in the hands we can design software defined sort of data centers or software defined nuclear arsenals right in the hands of the wrong person some bad things can happen so we have to be extra careful about the responsible safe AI that we do for our intelligent product design.

It is happening software defined is happening but we have to be super careful.

Jason Oxman

Thank you, sometimes the best way to get people to pay attention to what you’re saying is to scare them and so you’ve certainly done that and Caroline there’s a lot of bad stuff happening on the payment systems as well and the consequences of fraud and security breaches are or actual shutdown of the network is almost impossible to contemplate global commerce grinding to a halt I don’t know if you want to scare people like that as well when you talk about.

Caroline Louveaux

Let me go there.

Jason Oxman

Go ahead.

Caroline Louveaux

With enterprise guidelines coming to New Delhi I watched the companion it’s a movie around romance robot I’m not going to spoil the end, but that’s actually a scary story for sure. Now, back to the MasterCard vault. The principle is very simple. Autonomy can only scale if there’s trust. And so at MasterCard, we think we have a role to play when it comes to agentic commerce, meaning you use an agent to make payments on your behalf. And so we want these agentic payments to be safe and secure and trusted. And therefore, we came up with a playbook with four key guardrails. The first one is know your agent. Before an agent acts and before it makes a payment, we want to make sure that it’s verified and trusted.

So everyone needs to know that it’s a legitimate agent and not a rogue robot or a fraudster. Important, right? The second one, of course, is security by design. It has to remain the foundation. And so we are leveraging advanced technologies around customer authentication, tokenization. to make sure that the sensitive credentials, for example, your card number, is not visible and not exposed to third parties, to the merchants, to the agents, or anything like that. Third, and that’s a bit new, we want to make sure that we have clear consumer intent. The consumer has to be always in control of what he or she authorizes the agent to purchase on his or her behalf. We learned this the practical way just a couple of months ago.

An employee at Massaca decided to ask an agent, hey, are you able to buy sushis? The idea was just to test the agent’s capability to do so, but the agent took the question literally and placed an order using the employee card details on file. So, lesson learned, clarity matters, clarity of the intent that can be verified, otherwise you end up with these platters of sushis. And then last but not least, everything has to be, traceable and auditable. and that’s needed if you want to be able to give consumers the redress if things go wrong, dispute resolution and of course to make the regulators happy and comfortable and so these guardrails are not there to slow adoption, you know, if done well they’re going to be key to scale adoption in a way that is trusted by design.

Jason Oxman

Great, sushi is not scary but the use case you described is, so appreciate that

Caroline Louveaux

It’s only sushi, we’re good.

Jason Oxman

It’s only sushi, that’s right Syam, you get to wrap us up because we’re closing the panel out You don’t have to scare people if you don’t want to but I’d love to hear how NetApp is thinking about enterprise guardrails for risk management around agentic AI

Syam Nair

Yeah, no scary stories I think one of the ways I would say this is, you know, as humans we used to make mistakes but it was much more contained. As sometimes in enterprises you had insider threat but it was much more contained. But now you’re talking about a network of agents where the blast radius in terms of an error or a mistake or a threat is much more profound. So guardrails become important. They need to be at multiple levels. Number one is public -private partnership in identifying the guardrails in terms of how agents need to operate, being very specific to the enterprise, being very specific to the business is important, and working together with the customers, in some cases consumers, others in business -to -business, understanding the use case and for which how we need to build guardrails within the system.

And more importantly, I think, and I’ll go back to what one needs to figure out is the governance of the data because data is the one that is actually going to power how agents make these decisions, right? Unlike human, there is no empathy built into the agent, at least not at this point, and it is not making decisions based on situational awareness. It’s making decisions based on the data. And if the data can be manipulated, if the lineage of data is not properly understood, if it is not really governed, if there are no guardrails for that, then you could actually get outcomes from agents that are going to be scary. The last piece of this is, look, unlike agents, which can do everything, agents cannot take accountability.

They’re not responsible. They can’t take accountability. It’s the humans. It’s the business owner who takes it. So having those guardrails work in tandem with the customer, consumer, with the public -private sector partnership is super important in terms of defending.

Jason Oxman

Thank you. Thank you. policymakers looking at. And what should policymakers look at? Our goal in the tech industry, obviously, is to ensure that public policy is inspirational to innovators, that it doesn’t interfere with the ability of innovators to get the products and services out to market that we all want to see and benefit from. But of course, policymakers have other things in mind. They want to make sure that consumers are protected. They want to make sure that safety and security is part of the design of products that are deployed into the market. So we have a great industry panel of experts who are going to share their views on what policymakers should be thinking about and what they should be doing to inspire the use of agentic AI while also addressing important public policy concerns.

So I’ll ask each of our panelists to address that and to introduce themselves. Jennifer, I already said who you are. You can just introduce yourself and your company, and let’s take that as the prompt. And you get to pick one thing that you think policymakers should be most focused on. focusing on.

Jennifer Mulvaney

Great. Thank you, Jason. Jennifer Mulvaney with Adobe. And, you know, I learned a great Hindi term yesterday watching the prime minister speak, and that is mahaf, human. I mean, you really think about policy. Policy, you know, has been around since the dawn of time, and it really is about helping to prevent harms against humans. And so that is what policy still is meant to do today. I think when policymakers look at anything, whether it’s tech or welfare or tax policy, it’s what does this policy mean for humans and how to prevent harm and what does that mean? And we as lobbyists in Washington, D.C., or my former role there, you humans go in and talk about what it means for whatever that stakeholder group you’re talking about is.

So we’re now in a world of policy actually governing systems, not just people. But I think that the prime minister’s focus on human is something that Adobe talks a lot about as well, that should be humans before models. Our CEO of Adobe often says it’s not what we can do with technology, it’s what we should do. And I really love that statement because that really does think about what is this going to mean for humans? How can we advance that agenda?

Jason Oxman

Love that. Thank you, Jennifer. Yep. Ellie Sakhaee.

Ellie Sakhaee

Hi, everyone. I’m Ellie Sakhaee. I am part of public policy team within Google. Several of our colleagues in the previous panel mentioned that agentic AI is not a point in development, right? So it’s, as we think about agentic AI, we should be thinking about the continuum, depending on agent’s autonomy, depending on their access to memory, depending on the context of use, and depending on their ability to do long -term planning and basically act on the real world. So that is why I think it’s important when we think about policy to think about this continuum of agents rather than something is agentic and something is not agentic. That being said, I think that one of the main safeguards that we talk about is human in the loop for agentic AI.

And that also varies significantly with the ability or the reliability of an agent. So as we move from agents that need confirmation for every single step that they want to take, they need human approval. As we move from them to agents that are more autonomous, we should be thinking about moving from human in the loop to human on the loop or human in command. A similar analogy to this is how Federal Aviation Administration in the U.S. thinks about moving from pilot being always in sight of drones to pilots being in command of drones. So as the safety of these drones improve and safety of AI systems to keep track of these drones through detection and avoid system improves, we can move from pilot.

always keeping an eye line with the drone to pilot being on the loop or pilot being in command. So I think these analogies within different industries allow us to think about agents. And another thing that I think policymakers, as they think about agents, should consider is that agents may be a new technology, but they, at the end of the day, they may cause harm. So we should be thinking about regulating the use or application or the harm that they actualize compared to regulating the underlying technology. Otherwise, we end up regulating, let’s say, the AI models that by the time that the regulation goes into effect, the AI model has evolved into something that is now agentic.

Jason Oxman

Makes sense, and appreciate your perspective. And I should have noted that you’re not only doing public policy work for Google, but you’re actually a real agent. You’re a real computer scientist, Ph .D., machine learning. She knows how the machines think. which is important as well. And sometimes they talk to us, right? Sometimes. Let’s go to Carly Cloudflare next.

Carly Ramsey

Great. Thank you. Hi, everyone. My name is Carly Ramsey. I lead public policy for Asia Pacific for Cloudflare. I’m based in Singapore. And Cloudflare, just for those of you who don’t know us, Cloudflare runs a global network, and we kind of sit in between our customers and their users, and we protect the traffic that goes back and forth and take a large majority of all the AI model providers are our customers as well, so we’re protecting that traffic as it goes back and forth. So we have a unique viewpoint. We also offer developer tools as well, and people are building AI agents off of Cloudflare, so there’s that angle that Cloudflare sees as well. So, like you said, choose one thing that we recommend to policymakers.

That’s a hard one, but I was thinking in keeping with the theme of this summit, which is very much about inclusive AI, I think that something that policymakers should consider is whether or not we’re making agentic AI specifically available. for everyone, right? So that becomes, is it accessible? Are the standards perhaps open? I think open models, open standards are really interesting and are allowing people to access tools that they might not normally be able to access. And so as policymakers think about diffusing this technology more widely, maybe just even outside of the enterprises, one thing that as someone who sits in Asia Pacific, and this is really concerning to me, is like how do we ensure that the different governments when they’re making these tools accessible are talking to each other?

And I think ITI has a really neat role to play in that actually because we all know that NIST is the gold standard and these are voluntary standards. They’re often referenced a lot in Asia actually. Singapore just came out with their own framework on agentic AI governance, right? And the question is, is that going to be compatible with whatever NIST is going to put out? Big question. Singapore is a leader in cybersecurity standards in this region. And I’ve had some interesting conversations here in these past couple of days about India. India, obviously, with the bastion of tech talent that we see in India, they want to be involved in standard development and for the global south.

You know what I mean? So great. And how do we get them involved? And how do we make sure that as global companies that they’re not – all of these standards aren’t contradicting each other as well, right? So that harmonization piece is very important.

Jason Oxman

So important. Technology doesn’t want to stop at borders. It wants to serve the world, and such an important issue. Sam, Palo Alto? Palo Alto? Perfect. Palo Alto.

Sam Kaplan

You conveniently sat the two cyber companies, cybersecurity companies, next to each other. So my name is Sam Kaplan. I’m the Assistant General Counsel for Global Policy at Palo Alto Networks. And for those of you that don’t know us, we’re the world’s largest pure play cybersecurity company. Can you hear me? Yeah. Okay. There it’s better. Sorry. I need to project better. Anyways, I think, Jason, to pivot off of your question, I think, you know, at a high level, one of the – The one last question. and I think if we could impart to policymakers is, you know, start with the standards organizations, to tell you the truth. The standards organizations, both in the United States but also abroad, Carly referred to the Singapore agency, but they are in the midst of developing these voluntary frameworks that are really serving as the foundation, not only to understanding the technology but to better understand sort of the risk picture that we are facing when it comes to these types of technologies, where we started with traditional model security frameworks when it comes to LLMs that are all based on sort of prompt and responses.

These standards -setting organizations are now very, very deep into sort of developing these same standards on agentic, and as they are painting a better picture and working with industry to understand how that risk picture is changing and how what was once sort of… almost a two -dimensional… understanding of the risk when it comes to AI models is now very much a three -dimensional picture when you’re looking at agents, because these are the parts of the models that all of a sudden have arms and legs. So when you’re looking at this from a security perspective, you’re taking what could be sort of a digital threat that can sort of metastasize on networks. These are threats that all of a sudden can have kinetic consequences in real life as these agents are executing decisions across the financial system from your previous panel, but across autonomous systems.

So understanding that risk picture is going to be critically important. And last, I think that really pivots into one of the themes from the summit itself, as policymakers, in particular policymakers, are looking at sort of responsible and safe deployment. They need to understand and appreciate that security, security of those models, security of those agents, is a foundational layer to increasing trust, to facilitating response. deployment of AI because it’s the best way to secure and, as much as we can, understand the behavior of these models and agents as they’re interacting with the ecosystem and now the real physical world that we’re seeing.

Jason Oxman

Yeah, and policymakers are keeping an eye on all the products and services to see if that is done well or not, in which case they may step in. All right, to follow your thematic, we’re moving from cybersecurity to enterprise software. You’re going to take my joke, aren’t you? You sat me next to condos. I know, I know. It’s not my joke, it’s Sam’s joke. But, yes, I’m going to take it. I’m going to take it. So, Danielle, please commence the enterprise software portion of our program. I can speak for you if you want me to. I’m joking.

Danielle Gilliam-Moore:

Danielle with Salesforce. I’m our director of global public policy, and I lead our AI policy work. The panelists have said a lot of great things, and they’ve also stolen a lot of what I’m going to say, so I’ll try to make this short. But when we think about AI, I think there’s – A governance response. Okay. needs to happen and when we talk about governance I think a lot of people conflate governance with regulation and governance is more than regulation. Governance can be regulation but it’s also standards, it’s also global norms, it’s also you know risk and quality assurance procedures in companies and so along with the standards piece I think a critical thing to remember is that you know ISO controls takes about three years to that process so it’s quite a long process.

So when you look at the ISO 42001 standard it’s a great standard but it’ll take time to further build on that which I think then makes in organizations likeness the different safety Institute’s incredibly important in filling in the gaps while work is being done to bring about new controls around agentic. The other thing I’ll say is on regulation there’s this emerging framework that it was first kind of started in the UK but I’m seeing governments like Indonesia on the other hand, there’s a lot of government that’s how we can make sure that we’re not just looking at the data and the data is is being used to make sure that we’re not just looking at the data and the data take this on of instead of having this large overarching AI regulation they’re looking at they’re allowing the different ministries that have core competencies on things like financial services or health care to take the lead so you have a more diffuse model that’s happening and I encourage I would encourage lawmakers to look at that you know some of these agencies have years and years and years and years of relationships and expertise and so wouldn’t they be best placed to think about not necessarily regulations but frameworks rules that best suit you know a small startup that isn’t that is operating you know a financial services agent or something like that some edge use case I think that is a more agile way to look at agentic which you know agility does I think bring about adoption and is very key to adoption.

Thanks. Jason Oxman

Perfect. Combiz is anything left for you to say?

Combiz Abdolrahimi

I was just going to say ditto to everything that Danielle said because that’s basically what I was going to say, and she said it way better than I could ever do. Yeah, calm these up. They’re a human service now. I guess I would add just having worked in government now within industry, there’s kind of – I like to think like I could sort of have like the vantage point of like a former regulator, policymaker as well as now in industry. And I think what we are looking for and what we’ve heard earlier today is like we want clarity. We want clarity. We want standards. We want to – like we want to see what good governance looks like.

If I could give a message to governments and regulators: don’t give us theoretical, abstract principles; give us practical standards, what good governance looks like, operational clarity, playbooks, model frameworks. Jason, I remember years ago, when I was at Treasury and you were at ETA, there was this line: these technologies are rapidly evolving, and as they evolve, policies and regulations need to evolve with them. Otherwise, it’s going to stifle these innovations, and it’s going to actually create more harm than good.

Jason Oxman

Well put. Well put. All right, so now that we’ve provided a wish list for regulators, the next question. And Danielle, I’m going to give you the chance to go first, because of your observation that sometimes panels go down the line and it’s not fair to the people at the end of the panel. I think that’s absolutely true. I would have let Combiz go first, but you’re speaking for the enterprise software industry generally. So the question is this: one of the big themes here at the AI Impact Summit is unification of the policy agenda across countries, across governments, across regions. Is there a particular platform or organization you’ve seen?

Is there a particular place where conversations like the ones we’ve been having here should be taking place? The U.S., India, like-minded governments around the world want to be all on the same page. But there is a tendency toward India-specific standards, toward U.S.-specific standards. There’s a tendency for that in the physical world and in the digital world, and that’s very difficult for us to operate in. So in the agentic AI arena, I’m curious, from all of you, whether there is a particular multilateral venue, a particular platform, or a particular thing you’ve seen work well that you would recommend governments here look to. And, Danielle, have I bought you enough time to come up with your answer so that I can call on you first?

Danielle Gilliam-Moore:

I woke up this morning knowing the answer to this question. Oh, excellent. Okay. I live for this question. It’s all yours. Which is the OECD. There was this really interesting moment where the OECD put out its principles in, was it 2019, I believe? And then it set the floor for everyone else. I mean, the EU AI Act’s definitions are based off of those principles. We’ve seen draft legislation at the state level that’s based off of the OECD AI principles. Globally, when I was doing rounds of meetings in APAC, they were looking at the OECD principles.

So I feel like the world is echoing the OECD in a lot of the regulatory work that they’re doing, even if they don’t always say they’re looking there. But the OECD has been doing such interesting work. They now have the reporting framework. They’re doing work with GPAI. Their Hiroshima AI Process framework, that was them taking the work of the G7 and bringing it into what they’re doing. So the OECD is doing so much work to reach out, and I would encourage governments to look at what the OECD is doing and build on it.

Jason Oxman

That’s great. Sam? You can pick the same one if you want to or …

Sam Kaplan

Well, I’m actually going to layer it, because I think Danielle is exactly right. When you’re looking from a policy and higher-level governance perspective, the OECD has been the leader in this. There are structures in place through the OECD to develop these. If you look at legislation and regulatory proposals that have come out, even across the various U.S. states, they’ve based definitions off of what the OECD has done, so that has been a foundational piece. From a broader perspective, I think that’s a good layer. The one that has potential, and that I would like to see move to something more tactical rather than remaining a bit esoteric and study-focused, is the International Consortium of Safety Institutes. The structures are there, and you have the right players coming to the table. Organizations like CAISI are advancing more tactical standards right now: creating a taxonomy for agentic AI security, and measuring how the attack surface has changed when it comes to agents.

To understand the scope and scale of this problem, I think there’s a great deal of potential, but you need these two levels to talk policy and standards.

Jason Oxman

Fantastic. Carly?

Carly Ramsey

Just to add something different to the discussion: based in Singapore, what I’ve seen in the years that I’ve been there is that Singapore International Cyber Week has, every year, gotten more attendance from governments from all around the world. So that is a potential venue. It’s an annual event, and the positioning is on policy, bringing governments together to discuss cyber policy. So potentially that is an area that could be considered, to make sure that the varying countries from around the world (India, for example, is well represented at Singapore International Cyber Week) all have a voice in the future of agentic AI.

Jason Oxman

That’s great. Love it. Ellie, do you have a preferred platform? Multilateral?

Ellie Sakhaee

Yes, I’m going to add to what my colleague said here, and that is technical benchmarks. We talk about the standards, and we may understand what individual agents do, but we don’t fully understand what multi-agent systems may do. They may have emerging risks. They may have completely different behaviors that we don’t really know about, because we don’t really have real versions of multi-agent systems yet. There are some emerging, but the risk surface will change as these agents interact with each other. So I think the academic community, industry, all of us have a role to play to develop and expand the benchmarks for multi-agent systems to make sure, before we put them into the world, they are tested.

Jason Oxman

Great. Jennifer, and then Combiz, you’re going to get the last word.

Jennifer Mulvaney

Thank you for sharing. What I would say is that the OECD definitely comes to mind as the largest, most credible group, and I think that makes sense. But we do have to think about having space for some of the smaller, more regional groups as well. I’m speaking in Tokyo in a couple of weeks at the Friends of the Hiroshima AI Process, which carries forward the principles from when Japan hosted the G7. So I think it’s really important to have those types of smaller regional groups, perhaps even focused on specific policy areas, that then feed into the bigger consortium in a way that people can understand.

Jason Oxman

That’s great. That’s great. Combiz, close us out.

Combiz Abdolrahimi

Yeah, hopefully. So actually, I was surprised that nobody mentioned the one where I was thinking, please don’t mention it, please don’t mention it, let me do it. We’re talking about standards. We’re talking about technical benchmarks. We’re talking about principles. We’re talking about coordination at a global scale: private sector, governments, academia, institutions. The ITU, the UN, AI for Good, they do all of that. And I think we want to engage with more countries, with more stakeholders in this conversation and make sure that we are being inclusive. That’s one of the multilateral forums that I would look to.

Jason Oxman

That’s a terrific one. Thanks for adding one to the list at the end of the round. This has been a fantastic discussion. I love the way we paired the business discussion of agentic AI with the policy recommendations, and hopefully policymakers will pay attention to what we’re doing. ITI is proud to represent all of the companies on the panel here today as part of the global tech industry, and particularly proud to be partnered with the Government of India on the AI Impact Summit. Our congratulations to the Prime Minister and to the entire Government of India for this incredible gathering. Thank you to all of you for being here to be part of this important discussion, and please join me in thanking our terrific panelists.

Thank you.

A

Austin Mayron

Speech speed

194 words per minute

Speech length

951 words

Speech time

293 seconds

Shift from safety‑focused institute to standards‑focused agency

Explanation

CAISI moved from being the U.S. AI Safety Institute to a standards‑focused organization, signalling a strategic pivot toward innovation and standardisation rather than pure safety research.


Evidence

“That signaled a shift away from safety principles, more towards standards and innovation.” [1]. “CAISI was originally founded as the U.S. AI Safety Institute, but last year in June of 2025, Secretary of Commerce Howard Lutnick refounded us as the Center for AI Standards and Innovation.” [3].


Major discussion point

Government role & standards for agentic AI


Topics

Artificial intelligence | The enabling environment for digital development


CAISI as front‑door to U.S. government, partnering with NIST

Explanation

CAISI positions itself as the industry gateway to the U.S. government and works closely with NIST to develop standards and best practices for AI agents.


Evidence

“The Secretary has tasked us to be the front door for industry to the United States government, and we really see ourselves as serving in that role.” [20]. “The other aspect of our organization that bears noting is that we’re located within NIST, the National Institute of Standards and Technology.” [11].


Major discussion point

Government role & standards for agentic AI


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Use of RFIs, listening sessions, and benchmarks to shape AI‑agent security standards

Explanation

CAISI is gathering industry input through RFIs and listening sessions and plans to develop benchmarks and evaluation methods to give confidence in AI‑agent security.


Evidence

“We put out a request for information or RFI about what challenges industry is facing with AI agent security.” [29]. “As I said, we’re also convening listening sessions, I think in April, on barriers to adoption, particularly on agent issues for education, healthcare, and finance.” [36]. “CAISI could play a role in helping settle concerns about that because we could develop benchmarks, methodologies, and evaluation methods to give industry the confidence they need…” [21].


Major discussion point

Government role & standards for agentic AI


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Bottom‑up industry input essential for effective standards and guardrails

Explanation

A grassroots, industry‑driven approach is needed to create practical standards and guardrails rather than top‑down mandates.


Evidence

“It’s more of a bottom-up, grassroots approach than a top-down.” [85]. “We are very focused on helping industry.” [6]. “Our goal is to hear from industry how traditional standards work, best practices, guidelines can help unlock and facilitate adoption.” [13].


Major discussion point

Enterprise guardrails & risk management


Topics

Artificial intelligence | The enabling environment for digital development


P

Prith Banerjee

Speech speed

171 words per minute

Speech length

1262 words

Speech time

442 seconds

Agentic engineers augment chip and system design

Explanation

Synopsys has created “agentic engineers” that work alongside human engineers to accelerate chip and system design cycles.


Evidence

“So at Synopsys what we have created is agentic engineers.” [41]. “That’s where agentic AI is coming in.” [45]. “These are like human engineers that are not trying to take the jobs of human engineers away.” [53].


Major discussion point

Business use cases of agentic AI


Topics

Artificial intelligence | The digital economy


Verification‑validation of software‑defined physical systems

Explanation

Before hardware is fabricated, software‑defined verification and validation are performed to ensure safety of physical AI‑enabled systems.


Evidence

“software-defined verification validation, right, before the software is, before the chip is designed.” [86]. “We have to make sure that these software-defined systems – just imagine an airplane, right?” [87]. “We are dealing with physical AI interacting with the real world.” [90].


Major discussion point

Enterprise guardrails & risk management


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Accelerating pace of innovation makes agentic AI critical

Explanation

The speed and complexity of innovation are increasing, underscoring the need for agentic AI to keep up with market demands.


Evidence

“That pace of innovation is changing.” [8]. “So the pace of innovation is becoming faster and the complexity.” [50].


Major discussion point

Business use cases of agentic AI


Topics

Artificial intelligence | The digital economy


C

Caroline Louveaux

Speech speed

163 words per minute

Speech length

678 words

Speech time

249 seconds

Real‑time fraud detection, triage and autonomous payment execution

Explanation

Agentic AI is being used to detect suspicious transactions, triage fraud signals, and execute secure payments with millisecond‑level decision making.


Evidence

“AI agentic systems are being deployed, for example, to detect suspicious transactions, to triage fraud signals, to initiate secure payment flows.” [39]. “If you think about it, if we want to be able to detect and to block fraud in real time, decisions have to be made in milliseconds.” [55]. “Before an agent acts and before it makes a payment, we want to make sure that it’s verified and trusted.” [56].


Major discussion point

Business use cases of agentic AI


Topics

Artificial intelligence | The digital economy


Four guardrails: know your agent, security‑by‑design, clear consumer intent, auditability

Explanation

A playbook defines four essential guardrails for enterprise deployment of agentic AI: knowing the agent, building security‑by‑design, ensuring clear consumer intent, and maintaining auditability.


Evidence

“The first one is know your agent.” [95]. “The second one, of course, is security by design.” [98]. “Third, … we want to make sure that we have clear consumer intent.” [97]. “They must act within clear values, principles, within clear permissions.” [101].


Major discussion point

Enterprise guardrails & risk management


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Human oversight and end‑to‑end control are required

Explanation

Even with autonomous agents, humans must retain full oversight and control to ensure safety and trust.


Evidence

“And of course, humans have to have full oversight end-to-end.” [64]. “And we want these agentic payments to be safe and secure and trusted.” [58].


Major discussion point

Enterprise guardrails & risk management


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


S

Syam Nair

Speech speed

183 words per minute

Speech length

645 words

Speech time

210 seconds

Storage‑controller‑proximate agents improve data quality and risk exposure

Explanation

Agents placed near storage controllers can enhance data quality, expose risks early, and streamline AI pipelines by keeping data close to the point of use.


Evidence

“And that’s where agents can actually help, which is we are developing agents which are sitting closer to the storage controller.” [47]. “If you know the storage architecture says that without moving data … you can actually have the data at the source itself, which will be ready for AI.” [70]. “One of the key challenges in AI itself is having quality of data.” [71].


Major discussion point

Business use cases of agentic AI


Topics

Artificial intelligence | Data governance


Data lineage, governance and public‑private partnership as guardrails

Explanation

Effective guardrails require clear data lineage, robust governance, and collaboration between public and private sectors to prevent unsafe agent outcomes.


Evidence

“If the data can be manipulated, if the lineage of data is not properly understood, if it is not really governed, if there are no guardrails for that, then you could actually get outcomes from agents that are going to be scary.” [102]. “So guardrails become important.” [76]. “Number one is public -private partnership in identifying the guardrails …” [78].


Major discussion point

Enterprise guardrails & risk management


Topics

Data governance | Building confidence and security in the use of ICTs


J

Jason Oxman

Speech speed

153 words per minute

Speech length

2123 words

Speech time

831 seconds

Prefer voluntary, consensus‑based industry standards over regulation

Explanation

Industry favours globally‑recognised, voluntary standards rather than prescriptive government regulation.


Evidence

“consensus-based industry standards, which we would all prefer to operate.” [14]. “It’s better than government regulation, particularly because those standards are global in nature, and NIST is a great example, as you noted, of support of those voluntary.” [26].


Major discussion point

Enterprise guardrails & risk management


Topics

Artificial intelligence | The enabling environment for digital development


Call for practical standards, playbooks and operational clarity

Explanation

Regulators should deliver concrete, actionable standards and playbooks instead of abstract principles.


Evidence

“Don’t give us – if I could give a message to governments and regulators, don’t give us sort of theoretical abstract principles, but give us actually what practical standards, what does good governance look like, operational clarity, playbooks, model frameworks.” [133].


Major discussion point

Policy focus recommendations for regulators


Topics

Artificial intelligence | The enabling environment for digital development


J

Jennifer Mulvaney

Speech speed

223 words per minute

Speech length

333 words

Speech time

89 seconds

Policy should protect humans and prevent harm

Explanation

Regulatory focus must be on safeguarding people and averting harms that AI systems could cause.


Evidence

“Policy, you know, has been around since the dawn of time, and it really is about helping to prevent harms against humans.” [112]. “I think when policymakers look at anything, whether it’s tech or welfare or tax policy, it’s what does this policy mean for humans and how to prevent harm…” [113].


Major discussion point

Policy focus recommendations for regulators


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence


Combine global bodies (OECD, G7, ITU, UN, AI for Good) for inclusive coordination

Explanation

A coordinated multilateral approach involving existing international bodies ensures inclusive, harmonised AI governance.


Evidence

“We’re talking about coordination at a global scale, private sector, governments, academia, institutions, the ITU, the UN, AI for Good.” [147]. “I think that definitely OECD comes to mind as the largest.” [143].


Major discussion point

Preferred multilateral platforms for global coordination


Topics

Artificial intelligence | The enabling environment for digital development


E

Ellie Sakhaee

Speech speed

146 words per minute

Speech length

505 words

Speech time

206 seconds

Regulate use/outcome of agents rather than underlying model

Explanation

Regulation should target the applications and harms of agents, not the base AI models themselves.


Evidence

“So we should be thinking about regulating the use or application or the harm that they actualize compared to regulating the underlying technology.” [91].


Major discussion point

Policy focus recommendations for regulators


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Develop academic‑industry benchmarks for multi‑agent systems

Explanation

Benchmarks created jointly by academia and industry are needed to test multi‑agent behaviours before deployment.


Evidence

“So I think the academic community, industry, all of us have a role to play to develop and expand the benchmarks for multi-agent systems to make sure before we put them into the world, they are tested.” [35].


Major discussion point

Preferred multilateral platforms for global coordination


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


C

Carly Ramsey

Speech speed

188 words per minute

Speech length

547 words

Speech time

173 seconds

Open standards and models improve accessibility and cross‑regional harmonisation

Explanation

Open standards lower barriers to entry and help align AI governance across regions.


Evidence

“Are the standards perhaps open?” [15]. “I think open models, open standards are really interesting and are allowing people to access tools that they might not normally be able to access.” [110].


Major discussion point

Policy focus recommendations for regulators


Topics

Artificial intelligence | Closing all digital divides


Singapore International Cyber Week as venue for global AI‑cyber policy dialogue

Explanation

The annual Singapore International Cyber Week brings together governments worldwide, making it a suitable platform for AI and cyber policy coordination.


Evidence

“Singapore International Cyber Week has been, every year has gotten more attendance from governments from all around the world.” [152]. “Singapore is a leader in cybersecurity standards in this region.” [154].


Major discussion point

Preferred multilateral platforms for global coordination


Topics

Internet governance | Building confidence and security in the use of ICTs


S

Sam Kaplan

Speech speed

173 words per minute

Speech length

675 words

Speech time

233 seconds

Security of agents is foundational for trust and deployment

Explanation

Robust security for AI agents underpins user trust and successful deployment across sectors.


Evidence

“These standards -setting organizations are now very, very deep into sort of developing these same standards on agentic…” [12]. “They need to understand and appreciate that security, security of those models, security of those agents, is a foundational layer to increasing trust, to facilitating response.” [128].


Major discussion point

Policy focus recommendations for regulators


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


International Consortium of Safety Institutes for tactical standards and taxonomy

Explanation

The International Consortium of Safety Institutes can create practical, tactical standards and taxonomies for agentic AI security.


Evidence

“I think the structures are there you have the right players that are coming to the table I think if those organizations like what CAISI’s doing right now are advancing you know more tactical standards to create a taxonomy when it comes to agentic AI security…” [34].


Major discussion point

Preferred multilateral platforms for global coordination


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


D

Danielle Gilliam-Moore

Speech speed

189 words per minute

Speech length

635 words

Speech time

201 seconds

Prefer practical, operational standards and playbooks over abstract principles

Explanation

Regulators should supply concrete, operational guidance rather than high‑level theoretical frameworks.


Evidence

“Don’t give us – if I could give a message to governments and regulators, don’t give us sort of theoretical abstract principles, but give us actually what practical standards, what does good governance look like, operational clarity, playbooks, model frameworks.” [133].


Major discussion point

Policy focus recommendations for regulators


Topics

Artificial intelligence | The enabling environment for digital development


OECD as primary reference for AI principles and reporting frameworks

Explanation

The OECD provides a widely‑adopted set of AI principles and reporting mechanisms that can guide national policies.


Evidence

“Globally, when I was doing rounds of meetings in APAC, they were looking at the OECD principles.” [139]. “I think that definitely OECD comes to mind as the largest.” [143].


Major discussion point

Preferred multilateral platforms for global coordination


Topics

Artificial intelligence | The enabling environment for digital development


C

Combiz Abdolrahimi

Speech speed

165 words per minute

Speech length

334 words

Speech time

120 seconds

Emphasis on standards, technical benchmarks and practical governance

Explanation

Calls for concrete standards and technical benchmarks to give industry clear guidance on safe agentic AI deployment.


Evidence

“We want standards.” [2]. “So we’re talking about standards.” [4]. “We’re talking about technical benchmarks.” [31]. “Don’t give us – … theoretical abstract principles, but give us actually what practical standards…” [133].


Major discussion point

Policy focus recommendations for regulators


Topics

Artificial intelligence | The enabling environment for digital development


Inclusive multilateral forums (ITU, UN, AI for Good) for global coordination

Explanation

Advocates engaging a broad set of stakeholders through existing international bodies to ensure inclusive AI governance.


Evidence

“We’re talking about coordination at a global scale, private sector, governments, academia, institutions, the ITU, the UN, AI for Good.” [147]. “And I think that we want to engage with more countries, with more stakeholders in this conversation and make sure that we are being inclusive.” [161].


Major discussion point

Preferred multilateral platforms for global coordination


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Agreements

Agreement points

Human oversight and control must be maintained in agentic AI systems

Speakers

– Prith Banerjee
– Caroline Louveaux
– Ellie Sakhaee

Arguments

Agentic engineers complement human engineers in chip design and complex systems development


Clear consumer intent verification and traceable transactions are essential for agentic commerce


Moving from human-in-the-loop to human-on-the-loop as agent reliability improves, similar to drone pilot regulations


Summary

All speakers agreed that while AI agents can operate autonomously, humans must remain in control with appropriate oversight mechanisms that can evolve as systems become more reliable


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Security and trust are foundational requirements for agentic AI deployment

Speakers

– Caroline Louveaux
– Syam Nair
– Sam Kaplan

Arguments

Autonomy can only scale if there’s trust, requiring verification of legitimate agents


Data governance and lineage control are critical since agents make decisions based on data without human empathy


Security of AI models and agents is foundational to increasing trust and responsible deployment


Summary

Speakers consistently emphasized that security measures and trust-building mechanisms are essential prerequisites for successful agentic AI adoption and scaling


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Industry-driven, voluntary standards are preferable to top-down regulation

Speakers

– Austin Mayron
– Jason Oxman

Arguments

CAISI takes a bottom-up approach, gathering industry input to understand challenges before developing solutions


Voluntary, industry-driven, consensus-based standards are preferable to government regulation


Summary

Both speakers advocated for collaborative, industry-informed approaches to standards development rather than prescriptive government regulation


Topics

The enabling environment for digital development | Artificial intelligence


OECD serves as the primary multilateral platform for AI policy coordination

Speakers

– Danielle Gilliam-Moore
– Sam Kaplan

Arguments

OECD has been foundational in setting AI principles that influence global legislation and regulatory frameworks


International Consortium of Safety Institutes should advance tactical standards for agentic AI security


Summary

Both speakers identified OECD as the key multilateral organization for high-level AI policy coordination, with Sam adding the need for more tactical standards development through other bodies


Topics

The enabling environment for digital development | Artificial intelligence | Internet governance


Similar viewpoints

All three speakers emphasized that agentic AI systems operating in critical infrastructure require enhanced safety and security measures due to their potential for real-world impact

Speakers

– Prith Banerjee
– Caroline Louveaux
– Syam Nair

Arguments

Physical AI systems require extra safety measures due to real-world consequences in cars, aircraft, and critical infrastructure


AI agents enable real-time fraud detection and secure payment flows in milliseconds


Agents help prepare data quality and manage cybersecurity threats at the storage layer


Topics

Building confidence and security in the use of ICTs | Artificial intelligence | The digital economy


These speakers shared the view that global coordination and practical implementation of standards are essential for effective AI governance

Speakers

– Carly Ramsey
– Danielle Gilliam-Moore
– Combiz Abdolrahimi

Arguments

Standards harmonization across countries is crucial to avoid contradictory requirements for global companies


Governance responses should include standards, global norms, and risk procedures, not just regulation


Practical standards and operational clarity are needed rather than theoretical abstract principles


Topics

The enabling environment for digital development | Artificial intelligence | Internet governance


Both speakers emphasized the need for human-centered, practical approaches to AI policy that focus on real-world implementation rather than abstract principles

Speakers

– Jennifer Mulvaney
– Combiz Abdolrahimi

Arguments

Policy should focus on preventing harm to humans, emphasizing ‘humans before models’


Practical standards and operational clarity are needed rather than theoretical abstract principles


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence | The enabling environment for digital development


Unexpected consensus

Complementary rather than replacement role of AI agents

Speakers

– Prith Banerjee
– Caroline Louveaux

Arguments

Agentic engineers complement human engineers in chip design and complex systems development


AI agents enable real-time fraud detection and secure payment flows in milliseconds


Explanation

Despite coming from very different industries (semiconductor design vs. financial services), both speakers emphasized that AI agents should complement rather than replace human workers, which represents unexpected alignment across diverse sectors


Topics

Artificial intelligence | The digital economy | Social and economic development


Need for multiple multilateral venues rather than single platform

Speakers

– Danielle Gilliam-Moore
– Jennifer Mulvaney
– Combiz Abdolrahimi

Arguments

OECD has been foundational in setting AI principles that influence global legislation and regulatory frameworks


Policy should focus on preventing harm to humans, emphasizing ‘humans before models’


ITU and UN AI for Good offer inclusive multilateral forums for global stakeholder coordination


Explanation

Unexpectedly, speakers converged on the idea that multiple complementary multilateral venues are needed rather than relying on a single platform, suggesting a more distributed approach to global AI governance


Topics

The enabling environment for digital development | Artificial intelligence | Internet governance


Overall assessment

Summary

Strong consensus emerged around human-centered AI development, the need for security and trust as foundational elements, preference for industry-driven standards over regulation, and the importance of global coordination through multiple multilateral venues


Consensus level

High level of consensus with speakers from diverse industries and backgrounds agreeing on fundamental principles for agentic AI governance, suggesting these principles may represent broadly acceptable approaches for policymakers and industry stakeholders


Differences

Different viewpoints

Level of human oversight required for AI agents

Speakers

– Ellie Sakhaee
– Caroline Louveaux

Arguments

Moving from human-in-the-loop to human-on-the-loop as agent reliability improves, similar to drone pilot regulations


Clear consumer intent verification and traceable transactions are essential for agentic commerce


Summary

Sakhaee advocates for reducing human oversight as agents become more reliable (moving from human-in-the-loop to human-on-the-loop), while Louveaux emphasizes maintaining strict human control and clear consumer intent verification for all agent actions


Topics

Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society


Regulatory approach – technology vs. harm-based regulation

Speakers

– Ellie Sakhaee
– Danielle Gilliam-Moore

Arguments

Focus should be on regulating harm and applications rather than the underlying technology


Governance responses should include standards, global norms, and risk procedures, not just regulation


Summary

Sakhaee specifically argues against regulating the underlying AI technology and favors harm-based regulation, while Gilliam-Moore advocates for a broader governance approach that includes multiple regulatory mechanisms beyond just harm-based approaches


Topics

Artificial intelligence | The enabling environment for digital development | Human rights and the ethical dimensions of the information society


Primary multilateral coordination venue

Speakers

– Danielle Gilliam-Moore
– Combiz Abdolrahimi

Arguments

OECD has been foundational in setting AI principles that influence global legislation and regulatory frameworks


ITU and UN AI for Good offer inclusive multilateral forums for global stakeholder coordination


Summary

Gilliam-Moore strongly advocates for OECD as the primary venue due to its proven track record in setting foundational AI principles, while Abdolrahimi emphasizes ITU and UN AI for Good for their inclusivity and broader stakeholder participation


Topics

The enabling environment for digital development | Artificial intelligence | Internet governance


Unexpected differences

Risk assessment complexity between different AI applications

Speakers

– Prith Banerjee
– Caroline Louveaux

Arguments

Physical AI systems require extra safety measures due to real-world consequences in cars, aircraft, and critical infrastructure


Autonomy can only scale if there’s trust, requiring verification of legitimate agents


Explanation

While both work with high-stakes AI applications, Banerjee emphasizes the unique dangers of physical AI systems that can cause kinetic harm, while Louveaux focuses on trust and verification in digital commerce. This represents different risk assessment frameworks for different AI domains


Topics

Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society


Overall assessment

Summary

The discussion revealed relatively low levels of fundamental disagreement, with most differences centered on implementation approaches rather than core principles. Key areas of disagreement included the appropriate level of human oversight for AI agents, whether to regulate technology or harms, and which multilateral venues should lead coordination efforts


Disagreement level

Low to moderate disagreement level. The speakers generally agreed on the need for responsible AI development, industry-government collaboration, and international coordination, but differed on specific mechanisms and approaches. This suggests a maturing field where stakeholders share common goals but are still working out optimal implementation strategies


Partial agreements

Partial agreements

Both agree on the importance of multilateral coordination for AI governance, but Kaplan emphasizes the need for more tactical, technical standards through safety institutes while Gilliam-Moore focuses on high-level policy principles through OECD

Speakers

– Sam Kaplan
– Danielle Gilliam-Moore

Arguments

International Consortium of Safety Institutes should advance tactical standards for agentic AI security


OECD has been foundational in setting AI principles that influence global legislation and regulatory frameworks


Topics

The enabling environment for digital development | Artificial intelligence | Building confidence and security in the use of ICTs


Both agree on the need for practical, industry-informed approaches to AI governance, but Mayron focuses on government-led standards development through industry consultation while Abdolrahimi emphasizes industry’s need for concrete guidance from government

Speakers

– Austin Mayron
– Combiz Abdolrahimi

Arguments

CAISI takes a bottom-up approach, gathering industry input to understand challenges before developing solutions


Practical standards and operational clarity are needed rather than theoretical abstract principles


Topics

The enabling environment for digital development | Artificial intelligence | Capacity development


Similar viewpoints

All three speakers emphasized that agentic AI systems operating in critical infrastructure require enhanced safety and security measures due to their potential for real-world impact

Speakers

– Prith Banerjee
– Caroline Louveaux
– Syam Nair

Arguments

Physical AI systems require extra safety measures due to real-world consequences in cars, aircraft, and critical infrastructure


AI agents enable real-time fraud detection and secure payment flows in milliseconds


Agents help prepare data quality and manage cybersecurity threats at the storage layer


Topics

Building confidence and security in the use of ICTs | Artificial intelligence | The digital economy


These speakers shared the view that global coordination and practical implementation of standards are essential for effective AI governance

Speakers

– Carly Ramsey
– Danielle Gilliam-Moore
– Combiz Abdolrahimi

Arguments

Standards harmonization across countries is crucial to avoid contradictory requirements for global companies


Governance responses should include standards, global norms, and risk procedures, not just regulation


Practical standards and operational clarity are needed rather than theoretical abstract principles


Topics

The enabling environment for digital development | Artificial intelligence | Internet governance


Both speakers emphasized the need for human-centered, practical approaches to AI policy that focus on real-world implementation rather than abstract principles

Speakers

– Jennifer Mulvaney
– Combiz Abdolrahimi

Arguments

Policy should focus on preventing harm to humans, emphasizing ‘humans before models’


Practical standards and operational clarity are needed rather than theoretical abstract principles


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence | The enabling environment for digital development


Takeaways

Key takeaways

Agentic AI represents a shift from assistive AI to operational AI that can take autonomous actions, requiring careful balance between autonomy and human oversight


Enterprise guardrails must include clear verification of agents, security by design, explicit consumer intent, and full traceability/auditability


Physical AI systems (cars, aircraft, critical infrastructure) require heightened safety measures due to potential real-world consequences compared to digital-only applications


Data governance and quality control are foundational since agents make decisions based on data without human empathy or situational awareness


Voluntary, industry-driven, consensus-based standards are preferred over government regulation for fostering innovation while ensuring safety


Human-centered design principles should prioritize ‘humans before models’ with clear accountability remaining with human operators


Global coordination through established multilateral organizations (OECD, ITU, International Consortium of Safety Institutes) is essential for harmonized standards


Security of AI agents is foundational to building trust and enabling responsible deployment at scale


Resolutions and action items

CAISI to continue gathering industry input through RFIs on AI agent security and sector-specific listening sessions in healthcare, education, and finance


Industry participants encouraged to engage with CAISI’s formal comment processes and upcoming listening sessions


Development of technical benchmarks for multi-agent systems to understand emerging risks before deployment


Continued collaboration between standards organizations (NIST, OECD) and industry to develop practical operational guidelines


Focus on harmonizing international standards to avoid contradictory requirements across jurisdictions


Unresolved issues

How to effectively regulate multi-agent systems whose behaviors and risks are not yet fully understood


Balancing the pace of innovation (annual product cycles) with the time required for comprehensive safety validation


Ensuring global accessibility and inclusivity of agentic AI while maintaining security and safety standards


Determining appropriate levels of human oversight as agents become more autonomous and reliable


Managing the expanded ‘blast radius’ of potential errors or threats in networked agent systems


Coordinating between different national and regional standards bodies to prevent fragmentation


Addressing the three-year timeline for ISO standards development versus the rapid pace of AI advancement


Suggested compromises

Moving from ‘human-in-the-loop’ to ‘human-on-the-loop’ or ‘human-in-command’ as agent reliability improves, similar to drone pilot regulations


Using sector-specific regulatory approaches through existing agencies with domain expertise rather than overarching AI regulation


Implementing a bottom-up, grassroots approach to standards development that starts with industry-identified challenges


Combining high-level policy coordination (OECD) with tactical technical standards development (Safety Institutes)


Allowing smaller regional groups to feed into larger multilateral consortiums for more inclusive participation


Focusing regulation on applications and harms rather than the underlying technology to avoid stifling innovation


Thought provoking comments

CAISI was originally founded as the U.S. AI Safety Institute, but last year in June of 2025, Secretary of Commerce Howard Lutnick refounded us as the Center for AI Standards and Innovation. That signaled a shift away from safety principles, more towards standards and innovation.

Speaker

Austin Mayron


Reason

This reveals a significant policy shift in the U.S. government’s approach to AI governance – moving from a safety-first regulatory mindset to an innovation-enabling standards approach. This is particularly insightful as it demonstrates how government agencies are adapting their strategies based on industry feedback and practical implementation challenges.


Impact

This comment established the foundational tone for the entire discussion, signaling that the conversation would focus on collaborative, industry-driven solutions rather than top-down regulation. It gave industry participants confidence to speak openly about practical challenges rather than defensively about compliance.


Unlike the world of Facebook and Google… We are dealing with physical AI interacting with the real world. If something happens in the real world, some really dangerous things can happen… You could imagine a software-defined airplane being used as a missile, right?

Speaker

Prith Banerjee


Reason

This comment fundamentally reframes the AI safety discussion by distinguishing between digital AI (social media, search) and physical AI (autonomous vehicles, aircraft, industrial systems). It introduces the concept that the stakes are exponentially higher when AI agents can affect physical systems, making the discussion more urgent and concrete.


Impact

This shifted the conversation from abstract policy discussions to concrete, high-stakes scenarios. It elevated the urgency of the guardrails discussion and influenced subsequent speakers to address real-world consequences. Caroline’s payment fraud examples and Syam’s data security concerns became more pointed after this framing.


We are moving from AI systems that recommend to AI systems that act… decisions have to be made in milliseconds at scale. And of course, while speed and scale matter a lot, accountability is a must.

Speaker

Caroline Louveaux


Reason

This succinctly captures the fundamental shift that agentic AI represents – from assistive to operational AI. The insight about millisecond decision-making at scale while maintaining accountability highlights the core tension between AI efficiency and human oversight.


Impact

This comment provided a clear conceptual framework that other panelists built upon. It influenced the subsequent discussion about human-in-the-loop vs. human-on-the-loop concepts and helped structure the conversation around the spectrum of AI autonomy.


As we move from agents that need confirmation for every single step… we should be thinking about moving from human in the loop to human on the loop or human in command. A similar analogy to this is how Federal Aviation Administration in the U.S. thinks about moving from pilot being always in sight of drones to pilots being in command of drones.

Speaker

Ellie Sakhaee


Reason

This provides a sophisticated framework for thinking about AI governance that scales with capability and risk. The drone analogy is particularly powerful because it shows how existing regulatory frameworks have successfully managed the transition from direct human control to supervised autonomy.


Impact

This comment introduced a nuanced, graduated approach to AI oversight that moved the discussion beyond binary thinking (human control vs. AI autonomy). It influenced subsequent speakers to think about adaptive governance frameworks and helped bridge the gap between technical capabilities and policy approaches.


Unlike humans, agents, which can do everything, cannot take accountability. They’re not responsible. They can’t take accountability. It’s the humans. It’s the business owner who takes it.

Speaker

Syam Nair


Reason

This cuts to the heart of a fundamental philosophical and legal challenge with agentic AI – the accountability gap. Despite AI agents becoming more autonomous and capable, the responsibility ultimately remains with humans, creating complex questions about liability, oversight, and control.


Impact

This comment crystallized a key theme that had been implicit throughout the discussion and influenced the policy panel to focus on governance frameworks that maintain clear lines of human accountability while enabling AI autonomy.


Policy has been around since the dawn of time, and it really is about helping to prevent harms against humans… We’re now in a world of policy actually governing systems, not just people.

Speaker

Jennifer Mulvaney


Reason

This observation highlights a fundamental shift in governance – from regulating human behavior to regulating autonomous systems. It’s insightful because it recognizes that traditional policy frameworks may be inadequate for governing entities that can act independently.


Impact

This comment reframed the entire policy discussion by highlighting that we’re entering uncharted territory in governance. It influenced subsequent speakers to think about how existing regulatory frameworks need to evolve to address autonomous systems.


We should be thinking about regulating the use or application or the harm that they actualize compared to regulating the underlying technology. Otherwise, we end up regulating… the AI model that by the time that the regulation goes into effect, the AI model has evolved into something that is now agentic.

Speaker

Ellie Sakhaee


Reason

This addresses a critical challenge in technology regulation – the pace mismatch between regulatory processes and technological development. It suggests focusing on outcomes and applications rather than specific technologies, which is a more sustainable approach to governance.


Impact

This comment influenced the discussion toward more flexible, outcome-based regulatory approaches and away from technology-specific rules. It supported the theme of adaptive governance that several other panelists built upon.


Overall assessment

These key comments fundamentally shaped the discussion by establishing several important frameworks: the shift from safety-focused to innovation-enabling governance, the critical distinction between digital and physical AI risks, the spectrum of human oversight models, and the need for outcome-based rather than technology-specific regulation. The conversation evolved from basic use cases to sophisticated discussions about graduated autonomy, accountability gaps, and adaptive governance frameworks. The interplay between technical experts describing real-world implementation challenges and policy experts proposing governance solutions created a rich dialogue that moved beyond typical AI hype to address practical deployment issues. The discussion successfully bridged the gap between business applications and policy implications, with each panel building on insights from the other to create a comprehensive view of agentic AI’s opportunities and challenges.


Follow-up questions

How can traditional standards work, best practices, and guidelines help unlock and facilitate AI agent adoption across different industries?

Speaker

Austin Mayron


Explanation

CAISI kicked off an AI agent standards initiative to hear from industry on this topic, indicating a need for systematic research into how existing frameworks can be adapted for agentic AI


What are the specific challenges industry is facing with AI agent security?

Speaker

Austin Mayron


Explanation

CAISI put out a request for information (RFI) on this topic, suggesting gaps in understanding of security vulnerabilities unique to AI agents


What are the barriers to AI agent adoption in healthcare, education, and finance sectors?

Speaker

Austin Mayron


Explanation

CAISI announced sector-specific listening sessions to understand adoption challenges, indicating these regulated industries face unique obstacles


How do AI agents and systems handle PII (Personally Identifiable Information) in regulated fields like healthcare and education?

Speaker

Austin Mayron


Explanation

There’s reluctance to adopt AI agents due to unclear regulatory compliance, requiring development of benchmarks and evaluation methods


How can interoperability be achieved for AI agent systems?

Speaker

Austin Mayron


Explanation

CAISI mentioned they’re looking at interoperability issues with more information coming in the future, suggesting this is an active area of research


What are the emerging risks and behaviors of multi-agent systems when agents interact with each other?

Speaker

Ellie Sakhaee


Explanation

The risk surface changes as agents interact, and current understanding is limited because real multi-agent systems are still emerging


How can technical benchmarks be developed and expanded for multi-agent systems?

Speaker

Ellie Sakhaee


Explanation

There’s a need for testing frameworks before multi-agent systems are deployed in the real world, requiring collaboration between academia and industry


How can global standards be harmonized to avoid contradictory requirements across different countries?

Speaker

Carly Ramsey


Explanation

There’s concern about compatibility between different national frameworks (e.g., Singapore’s agentic AI governance framework vs. NIST standards), which could create compliance challenges for global companies


How can data governance and lineage be properly managed for agentic AI systems?

Speaker

Syam Nair


Explanation

Since agents make decisions based on data without human empathy or situational awareness, proper data governance is critical to prevent manipulated or poorly understood data from leading to harmful outcomes


How can accountability frameworks be established when AI agents cannot take responsibility for their actions?

Speaker

Syam Nair


Explanation

Unlike humans, agents cannot be held accountable, so clear frameworks are needed to determine human responsibility and business owner liability


How can consumer intent be clearly verified and maintained in agentic commerce systems?

Speaker

Caroline Louveaux


Explanation

The sushi ordering incident demonstrates the need for better systems to ensure agents act only on clear, verified consumer authorization


How can verification and validation achieve close to 100% coverage for software-defined systems in critical applications like autonomous vehicles and aircraft?

Speaker

Prith Banerjee


Explanation

Physical AI systems interacting with the real world require extremely high reliability standards, and current verification methods need improvement to handle the complexity and safety requirements


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.