Agentic AI in Focus: Opportunities, Risks and Governance

20 Feb 2026 11:00h - 12:00h


Session at a glance
Summary, keypoints, and speakers overview

Summary

The opening panel was convened to explore both the business case for agentic AI and the public-policy measures needed to encourage and safeguard its use [1-3]. Austin Mayron, Acting Director of the U.S. Center for AI Standards and Innovation (CAISI), introduced the agency’s placement within the Department of Commerce and its partnership with NIST to develop voluntary standards that help industry adopt AI agents [15-20][26-29].


CAISI has recently launched an AI-agent standards initiative, issued a request for information on agent security, and announced sector-specific listening sessions on healthcare, education and finance to gather industry challenges [32-38][39-41]. Prith Banerjee described how Synopsys is creating “agentic engineers” that augment human designers in rapid chip-to-system development, enabling yearly product cycles that would otherwise be impossible [73-81][88-94]. Caroline Louveaux explained that MasterCard is moving from AI that merely recommends to AI agents that act, for example in real-time fraud detection, and she outlined four guardrails (knowing the agent, security-by-design, clear consumer intent, and traceability) to ensure safe, accountable payments [105-112][218-226][229-236]. Syam Nair highlighted that NetApp is embedding agents near storage controllers to improve data quality for AI workloads, noting that the technology is still early (around level three of a five-level autonomy scale) and that multi-level guardrails are required [132-140][141-148].


Austin urged a bottom-up, industry-driven approach to standards, citing ongoing RFI processes and upcoming listening sessions, and suggested that CAISI could develop benchmarks for handling personally identifiable information in regulated sectors [156-164][168-171]. Prith warned that autonomous, software-defined systems such as cars or aircraft could become weapons if compromised, emphasizing the need for exhaustive verification and validation before hardware prototyping [191-207]. Syam added that data governance is critical because agents act on data without empathy, and that ultimate accountability must remain with human owners, requiring coordinated public-private guardrails [240-248].


Panelists agreed that voluntary, consensus-based standards are preferable to top-down regulation and identified the OECD as the leading multilateral forum for AI principles and reporting frameworks [172-176][386-393]. Additional recommendations included developing technical benchmarks for multi-agent systems and leveraging events such as Singapore International Cyber Week and global bodies like the ITU and UN to foster inclusive coordination [401-406][429-434]. The discussion concluded that aligning industry standards, robust guardrails, and international cooperation will be essential to unlock the benefits of agentic AI while managing its risks [435-440].


Keypoints

Major discussion points


Government-industry collaboration on standards and security for agentic AI – CAISI (the U.S. Center for AI Standards and Innovation) explains its placement within the Department of Commerce, its co-location with NIST, its role as a “front door” for industry, and its recent AI-agent standards initiative, including an RFI on agent security and sector-specific listening sessions [13-18][19-30][32-38][156-166][168-171].


Business use-cases of agentic AI across sectors


Semiconductor design: Synopsys creates “agentic engineers” that augment human designers to handle the exploding complexity of chip and system design, enabling faster product cycles [55-73][88-95][90-94].


Payments & fraud prevention: Mastercard moves from recommendation-only AI to “agentic” AI that detects and blocks fraudulent transactions in milliseconds, and it has defined four guard-rails (know-your-agent, security-by-design, clear consumer intent, traceability) to ensure safe autonomous payments [105-115][218-231].


Data-centric cloud services: NetApp develops storage-proximate agents that improve data quality and enable real-time security actions, while emphasizing the need for multi-level guard-rails and strong data governance [132-141][235-244].


Enterprise guard-rails and risk-management concerns – Panelists stress that agentic systems must operate under clear permissions, human oversight, and robust governance. Mastercard’s four guard-rails, NetApp’s layered safeguards (public-private partnership, data lineage, human accountability), and Prith’s safety warnings about autonomous physical systems (e.g., weaponised cars or aircraft) illustrate the breadth of risk-management strategies [179-208][218-231][236-248].


Policy recommendations focused on voluntary, consensus-based standards and global coordination – Austin highlights a bottom-up approach to standards development; Ellie urges regulators to consider the autonomy continuum and human-in-the-loop vs. human-on-the-loop models [277-284][287-289]; Carly calls for open standards and cross-regional harmonisation (Singapore, India); Danielle and Sam point to the OECD as the primary multilateral venue, while also noting the role of safety-institute consortia; Jennifer adds that regional groups should complement OECD work; Combiz stresses inclusion of bodies such as the ITU and UN [386-398][401-402][418-423][430-434].


Purpose of the panel – The session is framed as a two-part discussion: first to map business use-cases of agentic AI, then to explore public-policy implications and what governments should do to encourage safe adoption [1-6][249-256].


Overall purpose/goal


The panel aims to bridge the business and policy worlds by showcasing concrete agentic-AI applications, identifying the practical challenges and guard-rails needed for safe deployment, and delivering concrete recommendations to policymakers on how standards, coordination mechanisms, and regulatory approaches can foster responsible innovation while protecting consumers and critical infrastructure [1-6][249-256].


Tone of the discussion


Opening: Formal and forward-looking, with a clear agenda-setting tone [1-6][13-18].


Technical deep-dives: Energetic and optimistic as speakers describe transformative use-cases (Synopsys, Mastercard, NetApp) [55-73][105-115][132-141].


Cautionary moments: A shift to a more urgent, even “scary” tone when highlighting safety risks in physical AI (autonomous cars, weaponised systems) and the sushi-order anecdote [179-208][228-231].


Collaborative & constructive: Returns to a cooperative tone as panelists discuss standards, share best-practice recommendations, and acknowledge the need for global coordination [156-166][277-284][386-398].


Closing: Appreciative and hopeful, emphasizing partnership between industry and governments and thanking participants [435-441].


Overall, the conversation moves from informative introductions to enthusiastic showcase of technology, through a brief but pointed warning about risks, and culminates in a collaborative, solution-oriented tone aimed at shaping policy.


Speakers


Jason Oxman


Area of expertise: Technology industry leadership, AI policy moderation


Role / Title: Moderator/Host; President & CEO of the Information Technology Industry Council (ITI) [S14][S15]


Austin Mayron


Area of expertise: AI standards, innovation policy, government-industry liaison


Role / Title: Acting Director, U.S. Center for AI Standards and Innovation (CAISI) [S9][S10]


Prith Banerjee


Area of expertise: Semiconductor design automation, AI-driven engineering


Role / Title: CTO and SVP, Synopsys (design software automation semiconductor company) [S17][S18]


Caroline Louveaux


Area of expertise: Payments security, privacy, AI-enabled fraud detection


Role / Title: Chief Privacy, AI and Data Responsibility Officer, MasterCard [S16]


Syam Nair


Area of expertise: Multi-cloud storage, data quality, AI-driven data preparation


Role / Title: Chief Product Officer, NetApp (global multi-cloud service provider) [S1]


Danielle Gilliam-Moore


Area of expertise: AI public policy, governance frameworks


Role / Title: Director of Global Public Policy, Salesforce (leads AI policy work) [S2][S3]


Combiz Abdolrahimi


Area of expertise: Governance, standards, policy implementation (former regulator)


Role / Title: Industry professional with former government/regulatory experience (specific title not specified) [S4]


Ellie Sakhaee


Area of expertise: AI public policy, machine learning, human-in-the-loop governance


Role / Title: Public Policy Team Member, Google; Ph.D. in Computer Science / Machine Learning [S5][S6]


Sam Kaplan


Area of expertise: Cybersecurity policy, AI risk standards


Role / Title: Assistant General Counsel for Global Policy, Palo Alto Networks [S7]


Jennifer Mulvaney


Area of expertise: Technology policy advocacy, human-centered AI


Role / Title: Public Policy Lead, Adobe [S11]


Carly Ramsey


Area of expertise: Internet infrastructure, AI standards, regional policy coordination


Role / Title: Lead, Public Policy for Asia Pacific, Cloudflare (based in Singapore) [S12][S13]


Additional speakers:


None (all speakers appearing in the transcript are included in the list above).


Full session report
Comprehensive analysis and detailed insights

The discussion, organized by the Information Technology Industry Council (ITI) as part of the AI Impact Summit, opened with Jason Oxman outlining a two-part agenda: first to map the business case for “agentic AI” – AI that can act autonomously rather than merely provide recommendations – and second to explore the public-policy measures needed to encourage its use while safeguarding society [1-6].


Austin Mayron (Acting Director, U.S. Center for AI Standards and Innovation, CAISI) then described CAISI’s role and organisational context. CAISI sits within the Department of Commerce and has been tasked by the Secretary of Commerce with being the “front door for industry to the United States government” [13-20]. It is also co-located with the National Institute of Standards and Technology (NIST), which has historically promoted economic growth by facilitating voluntary standards and best practices rather than regulating [13-20]. CAISI also draws talent from frontier AI labs, which helps it explain novel concepts to other parts of the administration [13-20]. The centre evolved from the U.S. AI Safety Institute to a standards-and-innovation focus in June 2025, signalling a shift from prescriptive safety to enabling innovation [16-18][S1][S19].


Just this week, CAISI kicked off an AI-agent standards initiative [32-38]. It issued a Request for Information (RFI) on AI-agent security and pointed to a draft publication from NIST’s Information Technology Laboratory on AI identity and verification that is currently open for public comment [32-38]. It also announced sector-specific listening sessions on healthcare, education and finance to collect industry-level barriers to adoption [32-38][156-166][168-171].


The business-case speakers followed.


Prith Banerjee (Synopsys) presented a hardware-centric use case. Synopsys, the leading electronic-design-automation provider, is expanding from chip design to “chips-to-systems” after acquiring Ansys for $35 billion [61-63]. He described “agentic engineers”, AI-driven agents that perform low-level reasoning tasks in chip and system design, complementing rather than replacing human engineers [90-94]. Accelerating product cycles in automotive and semiconductor design (from multi-year to annual cadences) and the growing complexity of designs (now trillion-transistor chips) exceed what human designers alone can manage [73-84][85-88]. These agents enable rapid verification and validation before hardware prototyping, a necessity when physical AI controls safety-critical functions such as brakes or steering [85-87][88-95].


Caroline Louveaux (Mastercard) offered a financial-services example. Mastercard has moved from AI that merely recommends actions to “agentic AI” that actively detects suspicious transactions, triages fraud signals and initiates secure payment flows in milliseconds [105-108][109-115]. She emphasized that such AI agents must operate within clearly defined permissions and be subject to continuous human oversight [111-115]. To institutionalise this, Mastercard devised a four-point guard-rail playbook: (1) “Know Your Agent” – verify the agent’s legitimacy; (2) security-by-design – protect credentials through tokenisation; (3) explicit consumer intent – ensure the user authorises each purchase; and (4) traceability/auditability – maintain records for dispute resolution and regulator confidence [218-236].


Syam Nair (NetApp) described a data-centric deployment. NetApp embeds AI agents close to storage controllers so that data can be prepared for AI workloads without moving it through cumbersome pipelines [135-137]. This proximity improves data quality, especially for unstructured data, and enables real-time security actions at a time when the average breakout time of a threat is around 59 seconds [138-140]. He placed NetApp’s capability at roughly level 3 of a five-level autonomy spectrum, indicating an early-stage but rapidly progressing effort [141-148]. Nair warned that the “blast radius” of an error grows when many agents operate across an enterprise, so guardrails must be layered: public-private partnership on policy, rigorous data governance to preserve lineage, and the principle that ultimate accountability remains with human owners [239-248].


The panel then turned to enterprise-wide risk management and guard-rail design. Prith Banerjee warned that software-defined physical systems (autonomous cars, aircraft) could be weaponised if compromised, offering the hypothetical of an autonomous car on the streets of Mumbai being hacked and used as a weapon [191-198]. He argued that exhaustive digital-level verification (aiming for near-100% coverage) is essential before any hardware is fabricated [205-207]. Caroline’s four-guard-rail framework and Syam’s layered approach echoed this need for clear permissions, human-in-the-loop oversight, and auditability [218-236][239-248]. Austin reinforced that standards development must be bottom-up, gathering input from field experts before defining problems and adopting a humility-driven approach that treats industry as the primary source of insight [158-162]. He also highlighted the importance of interoperability in future standards [172-174].


All panelists agreed that voluntary, consensus-based standards driven by industry-government collaboration are preferable to prescriptive regulation [172-174][19-22][26-29][332-339][304-307][364-368]. Carly Ramsey stressed that open models and open standards are needed to avoid fragmented regional regimes [304-307][S1]. Combiz Abdolrahimi added that abstract principles must be translated into concrete playbooks, benchmarks and operational guidance [364-368].


Policy recommendations converged on a human-centric, risk-based approach. Jennifer Mulvaney (Adobe) reminded the audience that policy should always protect humans first, asking what a given policy means for humans and how harm can be prevented [263-270][S73]. Ellie Sakhaee (Google) proposed regulating the applications of AI agents rather than the underlying models and suggested a continuum of autonomy that moves from “human-in-the-loop” to “human-on-the-loop” and eventually “human-in-command” as agents become more reliable [277-284][285-286]. This graduated oversight model mirrors the FAA’s shift from requiring pilots to keep drones in sight to pilot-on-the-loop and pilot-in-command models [285-286].


International coordination was identified as essential. Danielle Gilliam-Moore (Salesforce), Sam Kaplan (Palo Alto Networks) and Jennifer Mulvaney (Adobe) all pointed to the OECD’s AI principles and reporting framework as the primary global anchor, noting its influence on the EU AI Act and numerous U.S. state drafts [386-393][401-403][418-420]. Carly Ramsey (Cloudflare) added that regional events such as Singapore International Cyber Week provide a practical venue for cross-border dialogue and for aligning Singapore’s AI-governance framework with NIST standards [404-406][S1]. Sam highlighted the international consortium of AI safety institutes as a tactical forum for developing technical taxonomies, while Combiz broadened the scope to include the ITU, UN and AI-for-Good initiatives [401-402][432-434].


Modest disagreement emerged over the optimal multilateral platform and the balance between global standards and agile, sector-specific frameworks. Danielle advocated a top-down reliance on the OECD; Carly preferred a region-focused cyber-week; Sam suggested a safety-institute consortium; and Combiz called for broader UN-based engagement [386-393][404-406][401-402][432-434]. Similarly, Danielle argued for fast, ministry-driven governance to fill gaps left by slow-moving ISO processes, whereas others (Carly, Sam, Austin) emphasised the need for globally harmonised voluntary standards [353-358][304-313][158-162].


Key take-aways


1. Safe, widespread adoption of agentic AI depends on (i) voluntary, consensus-based standards developed through a bottom-up industry-government partnership; (ii) layered enterprise guardrails that embed security-by-design, clear permissions, data-governance and human accountability; (iii) a human-centric policy lens that scales oversight with agent autonomy; and (iv) coordinated international effort anchored by the OECD but complemented by regional forums and technical consortia.


2. Unresolved issues include (a) precise technical specifications for AI-agent security and benchmarks for multi-agent interactions; (b) mechanisms for harmonising regional and global standards; and (c) definition of autonomy thresholds for shifting oversight models.


Continued collaboration among standards bodies, industry, academia and governments will be required to close these gaps.


Session transcript
Complete transcript of the session
Jason Oxman

Our second discussion will be this panel, which will discuss the business case use of agentic AI. And then we’ll follow that with a second panel, which will discuss the public policy implications of agentic AI. That is to say, what government should be doing to encourage and to safeguard the use of agentic AI. We all know that agentic AI is quite literally the AI of agents. And there’s been a lot of discussion here at the AI Impact Summit about how agentic AI is creating new opportunities for jobs, for societal benefits, for use cases across different industries. And one of the most important questions is, of course, what public policy solutions are going to be necessary to encourage the use of agentic AI.

So I’m very pleased to welcome as our opening speaker, Austin Mayron, who is the Acting Director of the Center for AI Standards and Innovation, and a senior, you have the longest title in the world, Austin. Thank you. Senior Legal Advisor to the Undersecretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office. Austin, we are thrilled to have you here. You have some very interesting updates on how the U.S. administration is approaching agentic AI, including what the office is doing, which I think is enormously important as well. So you’re going to join us for a few minutes of table-setting remarks, if you will, and we’re thrilled to have you here.

Austin, I’ll turn it over to you.

Austin Mayron

Absolutely. Thank you, Jason. Thank you to ITI, and thank you all for coming today. As Jason said, my name is Austin Mayron, and I’m the Acting Director of the U.S. Center for AI Standards and Innovation, also called CAISI. CAISI was originally founded as the U.S. AI Safety Institute, but last year in June of 2025, Secretary of Commerce Howard Lutnick refounded us as the Center for AI Standards and Innovation. That signaled a shift away from safety principles, more towards standards and innovation. I think there’s two organizational aspects of CAISI that are worth note. The first is that we’re located within the Department of Commerce. We are very focused on helping industry. The Secretary has tasked us to be the front door for industry to the United States government, and we really see ourselves as serving in that role.

We collaborate with various aspects of the AI ecosystem, including the Frontier Labs, for instance, on pre-deployment evaluations. And we like to partner with industry to help understand government. As one example, sometimes there’s a lack of AI expertise within the U.S. government. And CAISI, because we have talent from Frontier AI Labs, we’re able to help explain novel concepts to other aspects of the administration. The other aspect of our organization that bears note is that we’re co-located with NIST, the National Institute of Standards and Technology. And the thing that’s worth noting there is that NIST, throughout its history, it hasn’t been a regulatory organization. It’s been an organization that’s promoted economic growth and technological development by developing standards and facilitating the development of standards and best practices.

And so CAISI, we see our role as partnering with industry to develop the standards and best practices they need to flourish. And here, we’re here today to talk about AI agents, which is an incredibly timely topic. And so I thank ITI for organizing this. Just this week, CAISI, my organization, we kicked off an AI agent standards initiative. Our goal is to hear from industry how traditional standards work, best practices, guidelines can help unlock and facilitate adoption. So one area where we’ve already started that work is on AI agent security. We put out a request for information or RFI about what challenges industry is facing with AI agent security. Our colleagues at NIST at the Information Technology Laboratory also have a publication out for comment on AI identity and verification, which we encourage you, if you’re interested, please look at the documents, review them, send in your comments.

We also announced this week that we’re going to be holding sector-specific listening sessions on barriers to adoption, in the sectors of health care, education and finance. And our goal here is we want to learn actually what are the challenges that industry is facing. These AI agents, they have tremendous potential, but we want to understand how CAISI and NIST and the U.S. government can help unlock adoption through standards and best practices. So I’m delighted to be here and take part in this conversation and learn more from my fellow panelists.

Jason Oxman

Thanks, Austin, so much. Really appreciate your being here and helping set the stage for us for our discussion of agentic AI. As I mentioned, we have three great experts here to start us off on the business side discussion before we move to the policy side discussion, because I really think it’s important for us to understand exactly what use cases of agentic AI are happening across different segments of the AI stack. So we’re very fortunate to have three experts here to help us with this discussion. Prith Banerjee is the CTO and SVP of Synopsys, the design software automation semiconductor company. Great to have you here, Prith. Caroline Louveaux is Chief Privacy AI and Data Responsibility Officer at MasterCard.

Caroline, thanks for being here. And also delighted to have Syam Nair, who is Chief Product Officer at NetApp, the global multi-cloud service provider. And so the three of them are each going to share a couple of minutes of opening remarks on agentic AI use cases. What we’ve asked them each to do is share with all of you kind of the top favorite agentic AI use case that’s happening so that we can use that as a way to frame the discussion around business and policy solutions. So if we could, Prith, I’ll start with you for your favorite agentic AI use case that’s happening at Synopsys.

Prith Banerjee

Sure. So I’m Prith Banerjee, and my role is to look at sort of future directions of where Synopsys is headed. And agentic AI is actually the core of this. But before I do that, I want to share with you what Synopsys does. Synopsys is the leading provider of electronic design automation tools and IP to design chips. So the chips from, say, NVIDIA or AMD or Broadcom, Qualcomm, these billion-transistor, trillion-transistor chips, are designed with Synopsys tools. But the opportunity that Synopsys has seen is these chips are going into systems, systems that are like cars or aircraft or spacecraft or data centers, healthcare, et cetera, right? So we have this vision of chips to systems, and because of that, Synopsys recently acquired Ansys for $35 billion, right, to be a chips-to-systems company.

I came into Synopsys, as CTO, from Ansys. So now the challenge that I want to share with all of you is as you are designing a car, right, it’s a software-defined car, right, a Tesla car has more than 100 million lines of C code in that car. That code runs on an ECU, an ECU designed by NXP or STMicro or Qualcomm. And that chip is still not yet designed, right? It is being designed with, say, Synopsys tools, but you’re writing software on the tool or on that chip, and so you have to do what is called software-defined verification and validation, right, before the software is, before the chip is designed, right? And that chip will control the electric brakes, the electric steering, the autonomous driving of the car.

And the car is, it’s a physical product, it is being driven on the road, right? And so you use ANSYS physics simulation like Fluent for aerodynamics or LS Dyna for crash or HFSS for electromagnetics. So essentially what we are doing is bringing the physics of the world around us, powered by AI, along with the chip design, in what we call intelligent product design: silicon-defined, so the chip is inside any complex design; software-enabled, so you can do software updates over there; and AI-driven. So that’s all the context. And we are a $10 billion company with a market cap of 100 billion. So the agentic AI part is the following, that the pace of innovation in the world is changing.

You used to design a new car every 7 years or maybe 5 years. That pace of innovation is changing. Like Tesla: Elon Musk said we have to do it every year. Every year they want to bring a new car to market. Or NVIDIA’s Jensen, right? The chip design used to be every three years. NVIDIA’s Jensen says you have to do it every year. So the pace of innovation is becoming faster, and the complexity. You used to have a chip with maybe a million transistors. Now it’s a billion transistors. It’s a trillion transistors. It’s incredibly complex. And then you have the chip with all the complicated system. The complexity is so hard that you used to have human designers at Qualcomm, NVIDIA, etc.

who could do those things using the Synopsys tools. You cannot do that anymore. It is very, very hard. That’s where agentic AI is coming in. So at Synopsys what we have created is agentic engineers. These are like human engineers that are not trying to take the jobs of human engineers away. They are going to complement the job of a human engineer. So you at Broadcom, Qualcomm, you have a hundred thousand engineers, but you will be complemented with another 200,000 agentic engineers from Synopsys who will do the lower-level reasoning job like a human, right? But the human will still be in the loop to make sure that you are not doing drastic sort of bad things, right?

This is the incredible opportunity. But as the world talks about agentic AI in the world of large language models and data and words as tokens, our world is what we call physical AI, which is physics, and it’s the physical AI part where we are applying our agentic engineering technology. Very, very exciting area.

Jason Oxman

That’s great. And I love how you described the human engineers being complemented by, not replaced by, the agentic AI that’s helping them be more efficient and do their jobs better. Caroline, I think of payments networks as having used AI for decades, literally. The fact that you can take a plastic card and tie it back to a human being, no matter where they are in the world, is actually truly remarkable. When you think about how payments networks work, it is truly remarkable, the technology, especially since you’re processing literally millions of transactions a second around the world. So with that, you look over global AI for MasterCard, and I’m curious how agentic AI is influencing the work that you and your colleagues do to make these payment rails run around the world.

Caroline Louveaux

Absolutely, and hi, everyone. It’s great to be here with you. As you said, for MasterCard, AI is nothing new. We have been leveraging AI for decades to make our payment network safer and more secure for everyone. Now with agentic, we are moving from AI systems that recommend to AI systems that act, right? And in cybersecurity and payments, the shift is already real today. AI agentic systems are being deployed, for example, to detect suspicious transactions, to triage fraud signals, to initiate secure payment flows. If you think about it, if we want to be able to detect and to block fraud in real time, decisions have to be made in milliseconds, at scale. And of course, while speed and scale matter a lot, accountability is a must.

What’s important is that these agents don’t make decisions with open-ended autonomy. They must act within clear values, principles, within clear permissions. What is the agent allowed to do? What is it not allowed to do? And when does a human need to step in? And of course, humans have to have full oversight end-to-end. So, I mean, there are many other use cases. I’m happy to talk more about that, but I think that’s really our main use case. But of course, the technology is moving really, really fast. We are now talking about this multi-agent ecosystem that raises a whole new range of opportunities as well as novel challenges. And so that’s where these kinds of summits where we all come together are really, really important to really get it right.

Jason Oxman

I love how you characterize it as moving from what we call assistive AI to operational AI. In other words, instead of just helping with a task, the AI, as an agent, can actually take a task on. There’s still oversight in the system, and, I should have previewed this, we’re going to come back around and talk to the panelists about guidelines and protections, and as Austin importantly noted at the outset, the security of the system, how that’s built in as well. And, Syam, I want to come to you next. The multi-cloud that NetApp operates obviously is moving data around the world on behalf of customers, storing data around the world and allowing your customers to access data in a multi-cloud environment.

How is agentic AI helping NetApp with that level of customer service?

Syam Nair

Thank you. So NetApp actually, as you said, is multi-cloud; we power both public cloud as well as private cloud. Much of the largest infrastructure, the data infrastructure, is built on NetApp, from a file storage standpoint. One of the key challenges in AI itself is having quality of data. Data quality is super important, and the previous session actually talked about it. And data quality, especially from unstructured, truly unstructured data, how do you really get the structured value out of it? And that’s where agents can actually help, which is why we are developing agents which are sitting closer to the storage controller. If you know the storage architecture, that means that without moving data and going through cluttered pipelines and, you know, positioning the data to be ready for AI, you can actually have the data at the source itself, which will be ready for AI.

And how this helps is, you know, in many of the areas, cybersecurity, as it continues to grow as a threat, you know, 59 seconds is the average breakout time of a threat these days, risk and threat will become super important to manage. And you need to do that at the layer where the data sits. So agentic has a really good use case with respect to that. We are still in our journey, an early journey, in terms of building these capabilities. One would say, look, if you have five levels of agentic AI, where level one is mostly assisted, co-pilot, to autonomous agents running a network of agents at level five, we’re still in that journey somewhere in the three range.

And that’s what we see from customers in terms of how they want to leverage data. So that’s one of my favorite use cases in preparing the data, making sure that the right data is available both for the agents and the agents can make it available for the use cases.

Jason Oxman

Yeah, interesting. So the agents are actually helping you expose any risks that may need to be addressed as part of that provisioning of data. And, Austin, I’m going to ask you to set up our second-round question with me, not for me. And that is, you know, the industry has a responsibility to inform governments about risks and how they’re being addressed. So as we move into the next question for the panel, around the enterprise guardrails that companies are putting in place, Austin, is there anything in particular you would flag that you’re looking to hear from industry in the U.S. administration about those guardrails? You are overseeing an operation that asks for industry input, which I think is rare and particularly great. So thank you for doing that. Perhaps some practice tips that you can provide to everyone in the room about what it is helpful to provide government, the U.S. administration or other government colleagues that you’ve heard from on these issues and how it’s helpful to provide that information.

Austin Mayron

Yeah, absolutely. So at CAISI, our focus right now is truly on unlocking innovation and adoption. And we work in the standards space, and so we look to how NIST-fostered standards and best practices and guidelines documents can help with that innovation and that adoption. And so the NIST process, the way it normally works, is we like to gather and collaborate with industry to understand the challenges they’re facing. It’s more of a bottom-up, grassroots approach than a top-down one. We’re not sitting there in Washington and saying, you know, this is the problem and we’re going to fix it. We take a little bit of humility and say, we don’t actually know what the problem is until we talk to the people who are closest to the issue, because we only have a narrow slice of the world from our vantage point, and the people who are actually in the field working on innovation, working on adoption, they have a better sense of what the barriers are.

And so we encourage everyone in industry and across the ecosystem to really engage with us, to tell us the problems that you’re encountering, and we have structured formal ways for you to do that. For instance, the request for information on AI agent security, I think it’s open for about another month, and some have already submitted comments, but we look forward to comments. As I said, we’re also convening listening sessions, I think in April, on barriers to adoption, particularly on agent issues for education, healthcare, and finance. We’re starting with those three sectors, but we really welcome that type of engagement, because we want to facilitate adoption. And one example that I sort of like to use…

I don’t know if it’s actually a barrier to adoption, but let’s say in a regulated field like healthcare or education, there’s PII, and there’s a reluctance to adopt because it’s unclear how the AI agents and systems are treating PII and whether it will satisfy regulatory burdens. CAISI could play a role in helping settle concerns about that because we could develop benchmarks, methodologies, and evaluation methods to give industry the confidence they need that, for instance, the model that they’re looking to procure and adopt and implement handles PII the way they need in order to satisfy their regulatory obligations. So that’s a way where CAISI, through measurement science, best practices, and standards, can help facilitate adoption. We’re also looking at interoperability, and we’ll have more about that in the coming months.

Jason Oxman

That’s great. Really appreciate that, Austin. And I love the focus on voluntary, industry-driven, consensus-based standards because that’s how the tech industry prefers to operate. It’s better than government regulation, particularly because those standards are global in nature, and NIST is a great example, as you noted, of support for those voluntary, consensus-based industry standards, under which we would all prefer to operate. And, Prith, I’ll come back to you on this question of, I guess I’d call them guardrails, kind of the enterprise guardrails around risk management that you’re putting in place. Governments are paying attention. We want to handle these issues in the private sector. What are you seeing that’s important as far as those enterprise guardrails for risk management?

Prith Banerjee

So that’s a great question. Actually, at the AI Summit yesterday, there were a lot of speakers, starting with Prime Minister Modi to President Macron, everybody kind of talked about responsible, safe AI and AI for everyone. But I want everybody in the audience to understand what is going on in this world, right? So there is a problem, right? You have a video that you can watch on, say, YouTube or Facebook, and you want to prevent a young child from watching that, right? And that is responsible AI, and you want to make sure that a 12-year-old doesn’t watch it. But if he or she watches it, it’s not the end of the world. I mean, yes, you have seen this, but the world that we live in is this intelligent product design, right?

You are designing a car, and we have, as Syam was mentioning, level 1, which is assistive, all the way to level 5, which is fully autonomous. Now, imagine a world – I’m now doing the scary part so you understand how scary it can be, right? An autonomous car that is driving on the streets of Mumbai, right? And it’s supposed to be autonomous, making sure the pedestrians and the cows are being avoided. But suppose there is a cyber attack, right? And somebody goes in, and you want to use that car as a weapon, right? As you know, there are terrorists that go in, and they bang into these things, right? So we have to make sure that these software-defined systems – just imagine an airplane, right?

You know what has happened in the past. In 9/11, an airplane hit a building. So you could imagine a software-defined airplane being used as a missile, right? So this is how important it is because unlike the world of Facebook and Google, and I’m not undermining Facebook, Google, I’m just saying you are dealing with people watching stuff and saying like, unlike, right? We are dealing with physical AI interacting with the real world. If in the real world some things happen, some really dangerous things can happen, right? And so we have to be extra careful. So that’s the challenge. What we are trying to do is to make sure that as part of this agentic engineering workflow, we are doing it in a responsible manner, in a safe manner, right?

And the work that we are doing in terms of verification, validation: the software flow that we do before we actually do a hardware prototyping, we do full, like 100% coverage, at the digital level. So we are designing the airplane on the computer, designing the car on the computer, with as close to a 100% guarantee. Nothing is 100%. But I want you to understand how much more complicated this is, right? Because we can design software-defined sort of data centers or software-defined nuclear arsenals, right? In the hands of the wrong person, some bad things can happen, so we have to be extra careful about the responsible, safe AI that we do for our intelligent product design.

It is happening, software-defined is happening, but we have to be super careful.

Jason Oxman

Thank you. Sometimes the best way to get people to pay attention to what you’re saying is to scare them, and so you’ve certainly done that. And Caroline, there’s a lot of bad stuff happening on the payment systems as well, and the consequences of fraud and security breaches, or an actual shutdown of the network, are almost impossible to contemplate: global commerce grinding to a halt. I don’t know if you want to scare people like that as well when you talk about...

Caroline Louveaux

Let me go there.

Jason Oxman

Go ahead.

Caroline Louveaux

With enterprise guardrails in mind, coming to New Delhi I watched Companion, it’s a movie about a romance robot. I’m not going to spoil the end, but that’s actually a scary story for sure. Now, back to MasterCard. The principle is very simple. Autonomy can only scale if there’s trust. And so at MasterCard, we think we have a role to play when it comes to agentic commerce, meaning you use an agent to make payments on your behalf. And so we want these agentic payments to be safe and secure and trusted. And therefore, we came up with a playbook with four key guardrails. The first one is know your agent. Before an agent acts and before it makes a payment, we want to make sure that it’s verified and trusted.

So everyone needs to know that it’s a legitimate agent and not a rogue robot or a fraudster. Important, right? The second one, of course, is security by design. It has to remain the foundation. And so we are leveraging advanced technologies around customer authentication, tokenization, to make sure that the sensitive credentials, for example, your card number, are not visible and not exposed to third parties, to the merchants, to the agents, or anything like that. Third, and that’s a bit new, we want to make sure that we have clear consumer intent. The consumer has to be always in control of what he or she authorizes the agent to purchase on his or her behalf. We learned this the practical way just a couple of months ago.

An employee at MasterCard decided to ask an agent, hey, are you able to buy sushi? The idea was just to test the agent’s capability to do so, but the agent took the question literally and placed an order using the employee’s card details on file. So, lesson learned, clarity matters, clarity of the intent that can be verified, otherwise you end up with these platters of sushi. And then last but not least, everything has to be traceable and auditable. And that’s needed if you want to be able to give consumers redress if things go wrong, dispute resolution, and of course to make the regulators happy and comfortable. And so these guardrails are not there to slow adoption, you know; if done well, they’re going to be key to scale adoption in a way that is trusted by design.

Jason Oxman

Great, sushi is not scary, but the use case you described is, so appreciate that.

Caroline Louveaux

It’s only sushi, we’re good.

Jason Oxman

It’s only sushi, that’s right. Syam, you get to wrap us up because we’re closing the panel out. You don’t have to scare people if you don’t want to, but I’d love to hear how NetApp is thinking about enterprise guardrails for risk management around agentic AI.

Syam Nair

Yeah, no scary stories. I think one of the ways I would say this is, you know, as humans we used to make mistakes, but it was much more contained. Sometimes in enterprises you had insider threats, but it was much more contained. But now you’re talking about a network of agents where the blast radius in terms of an error or a mistake or a threat is much more profound. So guardrails become important. They need to be at multiple levels. Number one is public-private partnership in identifying the guardrails in terms of how agents need to operate; being very specific to the enterprise, being very specific to the business is important, and working together with the customers, in some cases consumers, others in business-to-business, understanding the use case and how we need to build guardrails within the system for it.

And more importantly, I think, and I’ll go back to this, what one needs to figure out is the governance of the data, because data is the one that is actually going to power how agents make these decisions, right? Unlike a human, there is no empathy built into the agent, at least not at this point, and it is not making decisions based on situational awareness. It’s making decisions based on the data. And if the data can be manipulated, if the lineage of data is not properly understood, if it is not really governed, if there are no guardrails for that, then you could actually get outcomes from agents that are going to be scary. The last piece of this is, look, even though agents can do everything, agents cannot take accountability.

They’re not responsible. They can’t take accountability. It’s the humans. It’s the business owner who takes it. So having those guardrails work in tandem with the customer, consumer, with the public-private sector partnership is super important in terms of defending.

Jason Oxman

Thank you. Thank you. Let’s now turn to what policymakers are looking at. And what should policymakers look at? Our goal in the tech industry, obviously, is to ensure that public policy is inspirational to innovators, that it doesn’t interfere with the ability of innovators to get the products and services out to market that we all want to see and benefit from. But of course, policymakers have other things in mind. They want to make sure that consumers are protected. They want to make sure that safety and security is part of the design of products that are deployed into the market. So we have a great industry panel of experts who are going to share their views on what policymakers should be thinking about and what they should be doing to inspire the use of agentic AI while also addressing important public policy concerns.

So I’ll ask each of our panelists to address that and to introduce themselves. Jennifer, I already said who you are. You can just introduce yourself and your company, and let’s take that as the prompt. And you get to pick one thing that you think policymakers should be most focused on.

Jennifer Mulvaney

Great. Thank you, Jason. Jennifer Mulvaney with Adobe. And, you know, I learned a great Hindi term yesterday watching the prime minister speak, and that is manav, human. I mean, you really think about policy. Policy, you know, has been around since the dawn of time, and it really is about helping to prevent harms against humans. And so that is what policy still is meant to do today. I think when policymakers look at anything, whether it’s tech or welfare or tax policy, it’s what does this policy mean for humans and how to prevent harm and what does that mean? And we as lobbyists in Washington, D.C., or in my former role there, we humans go in and talk about what it means for whatever stakeholder group you’re talking about.

So we’re now in a world of policy actually governing systems, not just people. But I think that the prime minister’s focus on human is something that Adobe talks a lot about as well, that should be humans before models. Our CEO of Adobe often says it’s not what we can do with technology, it’s what we should do. And I really love that statement because that really does think about what is this going to mean for humans? How can we advance that agenda?

Jason Oxman

Love that. Thank you, Jennifer. Yep. Ellie Sakhaee.

Ellie Sakhaee

Hi, everyone. I’m Ellie Sakhaee. I am part of the public policy team within Google. Several of our colleagues in the previous panel mentioned that agentic AI is not a point in development, right? So as we think about agentic AI, we should be thinking about the continuum, depending on an agent’s autonomy, depending on its access to memory, depending on the context of use, and depending on its ability to do long-term planning and basically act in the real world. That is why I think it’s important when we think about policy to think about this continuum of agents rather than something is agentic and something is not agentic. That being said, I think that one of the main safeguards that we talk about is human in the loop for agentic AI.

And that also varies significantly with the ability or the reliability of an agent. So as we move from agents that need confirmation for every single step that they want to take, agents that need human approval, to agents that are more autonomous, we should be thinking about moving from human in the loop to human on the loop or human in command. A similar analogy to this is how the Federal Aviation Administration in the U.S. thinks about moving from the pilot always being in sight of drones to pilots being in command of drones. So as the safety of these drones improves, and the safety of AI systems that keep track of these drones through detect-and-avoid systems improves, we can move from the pilot

always keeping a line of sight with the drone, to the pilot being on the loop or the pilot being in command. So I think these analogies within different industries allow us to think about agents. And another thing that I think policymakers, as they think about agents, should consider is that agents may be a new technology, but at the end of the day, they may cause harm. So we should be thinking about regulating the use or application or the harm that they actualize, compared to regulating the underlying technology. Otherwise, we end up regulating, let’s say, the AI models, and by the time that the regulation goes into effect, the AI model has evolved into something that is now agentic.

Jason Oxman

Makes sense, and appreciate your perspective. And I should have noted that you’re not only doing public policy work for Google, but you’re actually a real agent. You’re a real computer scientist, Ph.D., machine learning. She knows how the machines think, which is important as well. And sometimes they talk to us, right? Sometimes. Let’s go to Carly from Cloudflare next.

Carly Ramsey

Great. Thank you. Hi, everyone. My name is Carly Ramsey. I lead public policy for Asia Pacific for Cloudflare. I’m based in Singapore. And Cloudflare, just for those of you who don’t know us, Cloudflare runs a global network, and we kind of sit in between our customers and their users, and we protect the traffic that goes back and forth. A large majority of the AI model providers are our customers as well, so we’re protecting that traffic as it goes back and forth. So we have a unique viewpoint. We also offer developer tools as well, and people are building AI agents off of Cloudflare, so there’s that angle that Cloudflare sees as well. So, like you said, choose one thing that we recommend to policymakers.

That’s a hard one, but I was thinking in keeping with the theme of this summit, which is very much about inclusive AI, I think that something that policymakers should consider is whether or not we’re making agentic AI specifically available for everyone, right? So that becomes, is it accessible? Are the standards perhaps open? I think open models, open standards are really interesting and are allowing people to access tools that they might not normally be able to access. And so as policymakers think about diffusing this technology more widely, maybe just even outside of the enterprises, one thing that as someone who sits in Asia Pacific, and this is really concerning to me, is like how do we ensure that the different governments when they’re making these tools accessible are talking to each other?

And I think ITI has a really neat role to play in that actually because we all know that NIST is the gold standard and these are voluntary standards. They’re often referenced a lot in Asia actually. Singapore just came out with their own framework on agentic AI governance, right? And the question is, is that going to be compatible with whatever NIST is going to put out? Big question. Singapore is a leader in cybersecurity standards in this region. And I’ve had some interesting conversations here in these past couple of days about India. India, obviously, with the bastion of tech talent that we see in India, they want to be involved in standard development and for the global south.

You know what I mean? So great. And how do we get them involved? And how do we make sure that as global companies that they’re not – all of these standards aren’t contradicting each other as well, right? So that harmonization piece is very important.

Jason Oxman

So important. Technology doesn’t want to stop at borders. It wants to serve the world, and such an important issue. Sam, Palo Alto? Palo Alto? Perfect. Palo Alto.

Sam Kaplan

You conveniently sat the two cyber companies, cybersecurity companies, next to each other. So my name is Sam Kaplan. I’m the Assistant General Counsel for Global Policy at Palo Alto Networks. And for those of you that don’t know us, we’re the world’s largest pure-play cybersecurity company. Can you hear me? Yeah. Okay. There, it’s better. Sorry, I need to project better. Anyway, I think, Jason, to pivot off of your question, at a high level, one of the things I think we could impart to policymakers is, you know, start with the standards organizations, to tell you the truth. The standards organizations, both in the United States but also abroad, Carly referred to the Singapore agency, but they are in the midst of developing these voluntary frameworks that are really serving as the foundation, not only to understanding the technology but to better understand sort of the risk picture that we are facing when it comes to these types of technologies, where we started with traditional model security frameworks when it comes to LLMs that are all based on sort of prompts and responses.

These standards-setting organizations are now very, very deep into developing these same standards on agentic, and as they are painting a better picture and working with industry to understand how that risk picture is changing, what was once sort of a two-dimensional understanding of the risk when it comes to AI models is now very much a three-dimensional picture when you’re looking at agents, because these are the parts of the models that all of a sudden have arms and legs. So when you’re looking at this from a security perspective, you’re taking what could be sort of a digital threat that can metastasize on networks. These are threats that all of a sudden can have kinetic consequences in real life as these agents are executing decisions across the financial system from your previous panel, but also across autonomous systems.

So understanding that risk picture is going to be critically important. And last, I think that really pivots into one of the themes from the summit itself, as policymakers, in particular policymakers, are looking at sort of responsible and safe deployment. They need to understand and appreciate that security, security of those models, security of those agents, is a foundational layer to increasing trust, to facilitating responsible deployment of AI, because it’s the best way to secure and, as much as we can, understand the behavior of these models and agents as they’re interacting with the ecosystem and now the real physical world that we’re seeing.

Jason Oxman

Yeah, and policymakers are keeping an eye on all the products and services to see if that is done well or not, in which case they may step in. All right, to follow your thematic, we’re moving from cybersecurity to enterprise software. You’re going to take my joke, aren’t you? You sat me next to condos. I know, I know. It’s not my joke, it’s Sam’s joke. But, yes, I’m going to take it. I’m going to take it. So, Danielle, please commence the enterprise software portion of our program. I can speak for you if you want me to. I’m joking.

Danielle Gilliam-Moore

Danielle with Salesforce. I’m our director of global public policy, and I lead our AI policy work. The panelists have said a lot of great things, and they’ve also stolen a lot of what I’m going to say, so I’ll try to make this short. But when we think about AI, I think there’s a governance response that needs to happen. And when we talk about governance, I think a lot of people conflate governance with regulation, and governance is more than regulation. Governance can be regulation, but it’s also standards, it’s also global norms, it’s also, you know, risk and quality assurance procedures in companies. And so along with the standards piece, I think a critical thing to remember is that, you know, ISO controls take about three years through that process, so it’s quite a long process.

So when you look at the ISO 42001 standard, it’s a great standard, but it will take time to build on it further, which I think makes organizations like the different safety institutes incredibly important in filling the gaps while work is being done to bring about new controls around agentic AI. The other thing I’ll say, on regulation, is that there’s an emerging framework that first started in the UK, and I’m now seeing governments like Indonesia take it on: instead of having one large, overarching AI regulation, they’re allowing the different ministries that have core competencies in areas like financial services or health care to take the lead. So you have a more diffuse model, and I would encourage lawmakers to look at that. Some of these agencies have years and years of relationships and expertise, so wouldn’t they be best placed to think about, not necessarily regulations, but frameworks and rules that best suit, say, a small startup operating a financial services agent or some other edge use case? I think that’s a more agile way to look at agentic AI, and agility, I think, brings about adoption and is very key to adoption.

Thanks.

Jason Oxman

Perfect. Combiz, is there anything left for you to say?

Combiz Abdolrahimi

I was just going to say ditto to everything that Danielle said, because that’s basically what I was going to say, and she said it far better than I could. I guess I would add, having worked in government and now within industry, that I like to think I have the vantage point of a former regulator and policymaker as well as someone now in industry. And what we are looking for, and what we’ve heard earlier today, is that we want clarity. We want clarity, we want standards, we want to see what good governance looks like.

If I could give a message to governments and regulators: don’t give us theoretical, abstract principles; give us practical standards, a picture of what good governance looks like, operational clarity, playbooks, model frameworks. Jason, I remember many years ago, when I was at Treasury and you were at ETA, there was this line: these technologies are rapidly evolving, and as they evolve, policies and regulations need to evolve with them. Otherwise it’s going to stifle these innovations and actually create more harm than good.

Jason Oxman

Well put. Well put. All right, now that we’ve provided a wish list for regulators, the next question. Danielle, I’m going to give you the chance to go first, because of your observation that panels sometimes go down the line and it’s not fair to the people at the end. I think that’s absolutely true. I would have let Combiz go first, but you’re speaking for the enterprise software industry generally. So the question is this: one of the big themes here at the AI Impact Summit is unification of the policy agenda across countries, across governments, across regions. Is there a particular platform or organization you’ve seen?

Is there a particular place where conversations like the ones we’ve been having here should be taking place? The U.S., India, like-minded governments around the world want to be on the same page, but there is a tendency toward India-specific standards and U.S.-specific standards. There’s a tendency for that in the physical world and in the digital world, and that’s very difficult for us to operate in. So in the agentic AI arena, I’m curious from all of you whether there is a particular multilateral venue, platform, or approach you’ve seen work well that you would recommend the governments here look to.

And, Danielle, have I bought you enough time to come up with your answer so that I can call on you first?

Danielle Gilliam-Moore

I woke up this morning knowing the answer to this question: the OECD. The OECD, I think, is kind of where it all started. There was this really interesting moment when the OECD put out its principles in 2019, I believe, and it set the floor for everyone else. The EU AI Act’s definitions are based on those principles. We’ve seen draft legislation at the state level that’s based on the OECD AI principles. Globally, when I was doing rounds of meetings in APAC, they were looking at the OECD principles.

So I feel like the world is echoing the OECD in a lot of the regulatory work that is being done, even if governments don’t always say they’re looking there. But the OECD has been doing such interesting work. They now have the reporting framework, they’re doing work with GPAI, and their Hiroshima AI Process framework took the work of the G7 and brought it into what they’re doing. So the OECD is doing so much work to reach out, and I would encourage governments to look at what the OECD is doing and help build on it.

Jason Oxman

That’s great. Sam? You can pick the same one if you want to or …

Sam Kaplan

Well, I’m actually going to layer it, because I think Danielle is exactly right. When you’re looking at policy and higher-level governance, the OECD has been the leader. There are structures in place through the OECD to develop these, and if you look at the legislation and regulatory proposals that have come out, even across the various US states, they’ve based definitions on the OECD’s, so that has been a foundational piece. From a broader perspective, that’s a good layer. The one that has potential, and that I would like to see become more tactical rather than a little esoteric and study-focused, is the International Consortium of Safety Institutes. The structures are there, and the right players are coming to the table. If those organizations, like what CAISI is doing right now, advance more tactical standards, creating a taxonomy for agentic AI security and measuring how the attack surface has changed when it comes to agents, that matters.

To understand the scope and scale of this problem, I think there’s a great deal of potential, but you need these two levels: one to talk policy and one to talk standards.

Jason Oxman

Fantastic. Carly?

Carly Ramsey

Just to add something different to the discussion: being based in Singapore, what I’ve seen in the years I’ve been there is that Singapore International Cyber Week has gotten more attendance from governments around the world every year. So that is a potential venue. It’s an annual event, and its positioning is on policy, bringing governments together to discuss cyber policy. So potentially that is a forum that could be considered to make sure that countries from around the world, and India is well attended at Singapore International Cyber Week, all have a voice in the future of agentic AI.

Jason Oxman

That’s great. Love it. Ellie, do you have a preferred platform? Multilateral?

Ellie Sakhaee

Yes, I’m going to add to what my colleague said here, and that is technical benchmarks. We talk about standards, and we may understand what individual agents do, but we don’t fully understand what multi-agent systems may do. They may have emerging risks. They may have completely different behaviors that we don’t really know, because we don’t yet have real, deployed multi-agent systems. Some are emerging, but the risk surface will change as these agents interact with each other. So I think the academic community, industry, all of us have a role to play in developing and expanding benchmarks for multi-agent systems, to make sure they are tested before we put them into the world.

Jason Oxman

Great. Jennifer, and then Combiz, you’re going to get the last word.

Jennifer Mulvaney

Thank you for sharing. What I would say is that the OECD definitely comes to mind as the largest, most credible group, and I think that makes sense. But we do have to think about having space for some of the smaller, more regional groups as well. I’m speaking in Tokyo in a couple of weeks at the Friends of the Hiroshima AI Process, the G7 initiative that put out its principles back when Japan hosted the G7. So I think it’s really important to have those types of smaller, regional groups, perhaps even focused on specific policy areas, that then feed into the bigger consortium in a way people can understand. So I think that’s really important.

Jason Oxman

That’s great. Combiz, close us out.

Combiz Abdolrahimi

Yeah, hopefully. Actually, I was surprised that nobody mentioned the one I was hoping no one would mention so that I could bring it up myself. We’re talking about standards, technical benchmarks, principles, and coordination at a global scale across the private sector, governments, academia, and institutions: the ITU, the UN, AI for Good. They do all of that. And I think we want to engage more countries and more stakeholders in this conversation and make sure we are being inclusive. That’s one of the multilateral forums I would look to.

Jason Oxman

That’s a terrific one. Thanks for adding one to the list at the end of the round. This has been a fantastic discussion. I love the way we paired the business discussion of agentic AI with the policy recommendations, and hopefully policymakers will pay attention to what we’re doing. ITI is proud to represent all of the companies on the panel today as part of the global tech industry, and particularly proud to be partnered with the Government of India on the AI Impact Summit. Our congratulations to the Prime Minister and to the entire Government of India for this incredible, incredible gathering. Thank you to all of you for being here to be part of this important discussion, and please join me in thanking our terrific panelists.

Thank you.

Related Resources: Knowledge base sources related to the discussion topics (46)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“CAISI was originally founded as the U.S. AI Safety Institute”

The transcript of Austin Mayron states that CAISI was originally founded as the U.S. AI Safety Institute, confirming the report’s claim [S2].

Correction (medium)

“The centre evolved from the U.S. AI Safety Institute to a standards‑and‑innovation focus in June 2025, signalling a shift from prescriptive safety to enabling innovation”

According to the same transcript, the transition from the U.S. AI Safety Institute to CAISI occurred “last year,” not specifically in June 2025, indicating a discrepancy in the reported timing [S2].

Confirmed (medium)

“The discussion opened at the AI Impact Summit”

The knowledge base records the AI Impact Summit as a 2026 event, confirming that such a summit took place, though it does not specify the organizer [S106].

External Sources (118)
S1
Agentic AI in Focus Opportunities Risks and Governance — -Syam Nair- Chief Product Officer at NetApp (global multi-cloud service provider)
S2
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — Yeah, and policymakers are keeping an eye on all the products and services to see if that is done well or not, in which …
S3
Agentic AI in Focus Opportunities Risks and Governance — Danielle Gilliam-Moore: Danielle with Salesforce. I’m our director of global public policy, and I lead our AI policy wo…
S4
Agentic AI in Focus Opportunities Risks and Governance — -Combiz Abdolrahimi- Role/company not clearly specified, appears to work in industry with former government experience
S5
Agentic AI in Focus Opportunities Risks and Governance — -Ellie Sakhaee- Public policy team member at Google, Ph.D. in computer science/machine learning
S6
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — Hi, everyone. I’m Ellie Sakhaee. I am part of public policy team within Google. Several of our colleagues in the previou…
S7
Agentic AI in Focus Opportunities Risks and Governance — -Sam Kaplan- Assistant General Counsel for Global Policy at Palo Alto Networks (cybersecurity company)
S8
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — So understanding that risk picture is going to be critically important. And last, I think that really pivots into one of…
S9
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — Absolutely. Thank you, Jason. Thank you to ITI, and thank you all for coming today. As Jason said, my name is Austin May…
S10
S11
Agentic AI in Focus Opportunities Risks and Governance — -Jennifer Mulvaney- Public policy role at Adobe
S12
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — And I think ITI has a really neat role to play in that actually because we all know that NIST is the gold standard and t…
S13
Agentic AI in Focus Opportunities Risks and Governance — Great. Thank you. Hi, everyone. My name is Carly Ramsey. I lead public policy for Asia Pacific for Cloudflare. I’m based…
S14
Driving U.S. Innovation in Artificial Intelligence — 7. Jason Oxman – President & CEO, Information Technology Industry Council 8. Julia Stoyanovich – Associate Professor, De…
S15
Agentic AI in Focus Opportunities Risks and Governance — -Jason Oxman- Moderator/Host, appears to be with ITI (Information Technology Industry Council)
S16
Agentic AI in Focus Opportunities Risks and Governance — – Ellie Sakhaee- Caroline Louveaux
S17
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — Thanks, Austin, so much. Really appreciate your being here and helping set the stage for us for our discussion of agenti…
S18
Agentic AI in Focus Opportunities Risks and Governance — Sure. So I’m Prith Banerjee, and my role is to look at sort of future directions of where Synopsys is headed. And agenti…
S19
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — – Michael Sellitto- Owen Lauder- Michael Brown Industry-led, consensus-based approach to standards development is prefe…
S20
WS #257 Emerging Norms for Digital Public Infrastructure — Belli advocates for a bottom-up approach in developing DPI, emphasizing the importance of understanding local contexts a…
S21
WS #283 AI Agents: Ensuring Responsible Deployment — Dominique Lazanski: And actually, well, you bring up the point that I think agentic AI started in the 90s with search an…
S22
Challenging the status quo of AI security — Connection between observed security challenges and need for standards Given the new security challenges that emerge wh…
S23
Indias AI Leap Policy to Practice with AIP2 — The discussion revealed tensions between global harmonization and local adaptation needs. Adams argued against one-size-…
S24
Setting the Rules_ Global AI Standards for Growth and Governance — A major theme was the challenge of measurement and benchmarking in AI systems. Rebecca Weiss from ML Commons explained t…
S25
Singapore International Cyber Week (SICW) 2025 — Artificial intelligence is a recurring theme across the agenda, with panels examining AI-enabled cyber operations, accou…
S26
Singapore International Cyber Week — The 5th edition of Singapore International Cyber Week 2020 (SICW) – the region’s most established cybersecurity event – …
S27
Singapore International Cyber Week 2021 — The SICW is one of Asia-Pacific’s most established cybersecurity event since its inception in 2016. SICW brings together…
S28
Cloudflare launches Moltworker platform after AI assistant success — The viral success of Moltbot has prompted Cloudflare tolaunch a dedicated platformfor running the popular AI assistant. …
S29
Closing Session  — Appreciation for working group members for the depth, rigor and practicality of outcomes, stating these are not abstract…
S30
Closing remarks – Charting the path forward — 5. **Coherent Policy Frameworks**: Called for “coherent and interoperable policy frameworks to prevent fragmentation whi…
S31
Agentic AI and the new industrial diplomacy — Xiaomi has been publicly promoting its ‘black-light factory’ concept for smartphones and consumer electronics. This refe…
S32
From agentic AI to agreement technologies: LLMs as a new layer in diplomatic negotiation — In current discourse, agentic AI usually refers to systems that can pursue goals with limited supervision. Such systems …
S33
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S34
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Very high level of consensus with no significant disagreements identified. The alignment spans government policy makers,…
S35
Semiconductor design set for AI revolution with new Synopsys tool — Synopsys hasintroduced AgentEngineer,an AI-powered technology designed to streamline semiconductor design by automating …
S36
AI being used in payment fraud prevention for e-commerce — Fraugster, a German-Israeli payment security company, has launched afraud prevention solution, Fraud Free Product, using…
S37
WS #19 Satellites, Data, Action: Transforming Tomorrow with Digital — Kulesza Joanna: I think we have a mic and the camera has not yet been enabled, but I’m glad to speak. I also see we ha…
S38
African Union (AU) Data Policy Framework — Cloud servicesare used on-demand at any time, through any access network, using any connected devices that use cloud com…
S39
Data first in the AI era — – **Cybersecurity as Essential to Data Governance**: The panelists stressed that data governance and cybersecurity are i…
S40
Agentic AI in Focus Opportunities Risks and Governance — “If the data can be manipulated, if the lineage of data is not properly understood, if it is not really governed, if the…
S41
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Dennis Wong:Thank you. Thank you very much. And thanks for having me. As you’ve seen, Singapore has experimented in Sand…
S42
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — High level of consensus with significant implications for policy development. The agreement suggests that the DNS commun…
S43
Data first in the AI era — Steve Macfeely: OK, good afternoon, everybody. I’m glad to see so many people here. So the question, why international d…
S44
The fading of human agency in automated systems — Crucially, a human presence does not guarantee agency if the system is designed around compliance rather than contestati…
S45
Crisis management — This collaboration also helps mitigate the limitations of both approaches. Human oversight ensures accountability, corre…
S46
Agentic AI and the new industrial diplomacy — Several trends are converging:UNandUNESCOframeworks emphasize that AI should augment human capabilities, not replace hum…
S47
UNSC meeting: Artificial intelligence, peace and security — Brazil:Thank you, Mr. President, Mr. President, dear colleagues. I thank the Secretary General for his briefing today an…
S48
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S49
Green and digital transitions: towards a sustainable future | IGF 2023 WS #147 — In terms of governance, a framework is deemed essential to operationalise long-term systems for the service of citizens….
S50
Future-proofing global tech governance: a bottom-up approach | IGF 2023 Open Forum #44 — Cedric Sabbah:Does anybody else want to say something about this concept of agility? I’m not seeing anyone. So, okay, we…
S51
Promoting policies that make digital trade work for all (OECD) — It recognizes the growth and benefits derived from digital transformation but also highlights challenges stemming from d…
S52
Setting the Rules_ Global AI Standards for Growth and Governance — Implementation requires interoperable and modular standards ecosystems to avoid reinventing approaches for each sector o…
S53
How to make AI governance fit for purpose? — All speakers recognize that AI’s global nature requires international cooperation and coordination, though they may diff…
S54
Seeing, moving, living: AI’s promise for accessible technology — These questions require international coordination and inclusive decision-making. Standard-setting bodies cannot be domi…
S55
Global AI Policy Framework: International Cooperation and Historical Perspectives — The concept includes practical elements such as cloud and data standards that guarantee interoperability and reversibili…
S56
Global Perspectives on Openness and Trust in AI — “I think the difference is that, to your point from the opening, Amba, is that I think part of what we were trying to do…
S57
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Nobuhisa Nishigata: …
S58
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — – Michael Sellitto- Owen Lauder- Michael Brown Industry-led, consensus-based approach to standards development is prefe…
S59
Agentic AI in Focus Opportunities Risks and Governance — Absolutely. Thank you, Jason. Thank you to ITI, and thank you all for coming today. As Jason said, my name is Austin May…
S60
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Austin Marin, Acting Director of the US Center for AI Standards and Innovation, introduced a major new government initia…
S61
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — And this is almost like a test for me of kind of saying. These names of these institutions through this panel. But they …
S62
From agentic AI to agreement technologies: LLMs as a new layer in diplomatic negotiation — In current discourse, agentic AI usually refers to systems that can pursue goals with limited supervision. Such systems …
S63
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S64
Comprehensive Summary: World Economic Forum Discussion on Stablecoins — Jeremy Allaire describes the broad proliferation of stablecoin use cases across different sectors of the economy. He arg…
S65
Semiconductor design set for AI revolution with new Synopsys tool — Synopsys hasintroduced AgentEngineer,an AI-powered technology designed to streamline semiconductor design by automating …
S66
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — Sure. So I’m Prith Banerjee, and my role is to look at sort of future directions of where Synopsys is headed. And agenti…
S67
NVIDIA and Synopsys shape a new era in engineering — The US tech giant, NVIDIA,has deepenedits long-standing partnership with Synopsys through a multi-year strategy designed…
S68
AI being used in payment fraud prevention for e-commerce — Fraugster, a German-Israeli payment security company, has launched afraud prevention solution, Fraud Free Product, using…
S69
AI agents complete first secure transaction with Mastercard and PayOS — PayOS and Mastercard havecompleted the first live agentic paymentusing a Mastercard Agentic Token, marking a pivotal ste…
S70
Rule of Law for Data Governance | IGF 2023 Open Forum #50 — Alibaba Cloud Intelligence Group has played a significant role in cloud-based data governance, offering a range of cloud…
S71
GOVERNMENT CLOUD POLICY — – i. Whole-of-government efficiencies: Reducing the cost of developing and maintaining technology and reducing …
S72
Keynote-António Guterres — We need guardrails that preserve human agency, human oversight and human accountability
S73
Agents of Change AI for Government Services & Climate Resilience — “…they can hallucinate it can have bias, it can have toxicity, avoid all of that and they are unpredictable ultimately…
S74
Indias AI Leap Policy to Practice with AIP2 — The discussion revealed tensions between global harmonization and local adaptation needs. Adams argued against one-size-…
S75
https://dig.watch/event/india-ai-impact-summit-2026/indias-ai-leap-policy-to-practice-with-aip2 — And we’ve seen with the GDPR framework. For example, that that has had a limiting effect on the African continent. So I …
S76
Open Forum #26 High-level review of AI governance from Inter-governmental P — Audrey Plonk: Does it work now? Okay, now I can hear you. Oh, wonderful. Thank you. I think maybe I was in the observe…
S77
The Role of Government and Innovators in Citizen-Centric AI — The discussion aimed to explore how artificial intelligence, particularly large language models, can transform public se…
S78
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — And with the interfaces that we have today, they can be introduced also in business applications. And I think what ethic…
S79
Main Session on Artificial Intelligence | IGF 2023 — Moderator 1 – Maria Paz Canales Lobel:Thank you very much, Maria, for the opportunity to be here with you today, and I’m…
S80
Any other business /Adoption of the report/ Closure of the session — In summary, the speaker artfully blended expressions of gratitude with recognition of collaborative efforts and a call f…
S81
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S82
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S83
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S84
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S85
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S86
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S87
Crypto at a Crossroads / DAVOS 2025 — Anthony Scaramucci: Hey, listen, I mean, you know, we have to call balls and strikes. It’s probably not a great Europea…
S88
From India to the Global South_ Advancing Social Impact with AI — The discussion maintained an overwhelmingly optimistic and energetic tone throughout. It began with excitement about you…
S89
Building Population-Scale Digital Public Infrastructure for AI — The tone is optimistic and collaborative throughout, with speakers sharing concrete examples of successful implementatio…
S90
From summer disillusionment to autumn clarity: Ten lessons for AI — As we refocus on existing risks, some accountability is due:how and why did respected voices get carried away with AGI p…
S91
How can we deal with AI risks? — Long-term risksare the scary sci-fi stuff – the unknown unknowns. These are the existential threats, the extinction risk…
S92
Are AI safety institutes shaping the future of trustworthy AI? — As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunit…
S93
Artificial intelligence — AI applications in the physical world (e.g. in transportation) bring into focus issues related to human safety, and the …
S94
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S95
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S96
Setting the Rules_ Global AI Standards for Growth and Governance — The discussion maintained a consistently collaborative and constructive tone throughout. Panelists demonstrated remarkab…
S97
Panel 1 – Accelerating Cable Repairs: Reducing Delays Through Smarter Processes  — The tone was collaborative and constructive throughout, with panelists building on each other’s points and sharing pract…
S98
How AI Is Transforming Diplomacy and Conflict Management — The discussion maintained a consistently thoughtful and cautiously optimistic tone throughout. Participants demonstrated…
S99
WS #395 Applying International Law Principles in the Digital Space — The discussion maintained a serious, academic tone throughout, with participants demonstrating deep expertise and concer…
S100
Open Mic & Closing Ceremony — The overall tone was formal yet appreciative. There was a sense of accomplishment and gratitude expressed throughout, wi…
S101
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S102
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S103
Closing Session  — Sustained collaboration between governments, industry, and other stakeholders is essential for translating recommendatio…
S104
Opening and introduction — The AU’s commitment to working with Member States in adopting the meeting’s recommendations was reaffirmed, alongside th…
S105
Agentic AI transforms enterprise workflows in 2026 — Enterprise AIentereda new phase as organisations transitioned from simple, prompt-driven tools to autonomous agents capa…
S106
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — A haves and have -nots framing, however, risks distracting from what should be the main point of international AI dialog…
S107
360° on AI Regulations — In conclusion, the analysis reveals that AI regulation is guided by existing laws, and there is a complementary nature b…
S108
Research Publication No. 2014-6 March 17, 2014 — A prime example of the important role that a government unit can play in cloud standard setting initiatives is the Natio…
S109
https://dig.watch/event/india-ai-impact-summit-2026/setting-the-rules_-global-ai-standards-for-growth-and-governance — So I would say the Manav mission, it’s welfare, human -centric, and all those aspects are there. And from the governance…
S110
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — On the one hand, we want them to be able to apply across borders because we want to enable companies to have responsible…
S111
INTRODUCTION — To effectively pursue the objectives defined in the strategy, it will be essential to define an entity responsible for t…
S112
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Yes, thank you. So super excited. This week we announced in partnership with the Office of Principal Scientific Advisory…
S113
Brainstorming with AI opens new doors for innovation — AI is increasingly embraced as a reliable creative partner, offering speed and breadth in idea generation. In Fast Compa…
S114
Hitler’s impact: Catalysing Europe’s fall and USA’s rise to power — WWI brought about a professionalisation of the states’ bureaucracy in the Allied states and a belated realisation that r…
S115
UNITED NATIONS CONFERENCE ON TRADE AND DEVELOPMENT — As UNCTAD (2019a) warned, firms in many developing countries may find themselves in subordinate positions, with data …
S116
EXCERPTED FROM — 8. The war on terror has been justified by what has been coined the Bush Doctrine of preemption, unilateralism, and mili…
S117
DIGITAL DIVIDENDS — The internet emerged from U.S. government research in the 1970s, but as it grew into a global network of netw…
S118
AI safety institute launches £8.5 million initiative to enhance systemic safety research — The AI Safety Instituteis launchingan £8.5 million funding scheme to support research on AI system safety, while the ini…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Austin Mayron
4 arguments · 194 words per minute · 951 words · 293 seconds
Argument 1
Industry‑front‑door & standards focus – Austin Mayron explains that CAISI serves as the “front door” for industry to the U.S. government, partnering with NIST to develop voluntary, consensus‑based standards that unlock adoption.
EXPLANATION
Austin describes CAISI’s role within the Department of Commerce as the primary gateway for industry to engage with the U.S. government. He emphasizes the partnership with NIST to create voluntary standards that facilitate innovation and adoption of agentic AI.
EVIDENCE
He states that the Secretary has tasked CAISI to be the front door for industry to the United States government and that they collaborate with NIST, an organization that historically promotes economic growth through standards rather than regulation, to develop the standards and best practices needed for industry to flourish [19-22][26-29].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
CAISI’s role as the front door and its partnership with NIST to develop voluntary standards is described in S19, which announces the Agent Standards Initiative led by CAISI, and reinforced in S1 which highlights CAISI’s front‑door function and collaboration with NIST.
MAJOR DISCUSSION POINT
Front‑door industry engagement and standards development
AGREED WITH
Jason Oxman, Sam Kaplan, Carly Ramsey, Combiz Abdolrahimi
Argument 2
Security RFI and sector‑specific listening sessions – Austin Mayron notes CAISI’s request for information on AI‑agent security and upcoming sector‑focused sessions to identify adoption barriers and develop benchmarks.
EXPLANATION
Austin outlines CAISI’s recent actions to gather industry input on AI agent security through an RFI and to hold listening sessions for health care, education, and finance. These efforts aim to surface challenges and create benchmarks that support safe adoption.
EVIDENCE
He mentions that CAISI issued a request for information on AI agent security and that they are convening sector-specific listening sessions in April on barriers to adoption for health care, education, and finance, inviting industry to share challenges [34-38][164-166].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The issuance of a request for information and the organization of sector‑specific listening sessions are documented in S19 and S1, both of which mention CAISI’s sector‑focused listening sessions to surface adoption challenges.
MAJOR DISCUSSION POINT
Collecting industry feedback on security and adoption barriers
Argument 3
CAISI follows a bottom‑up, humility‑driven approach, gathering input from field experts before defining problems, rather than imposing top‑down solutions.
EXPLANATION
Mayron explains that CAISI prefers to listen to those closest to the technology challenges, acknowledging its limited perspective and emphasizing collaboration with industry to identify barriers.
EVIDENCE
He states that CAISI takes “a little bit of humility and say, we don’t actually know what the problem is until we talk to the people who are closest to the issue” and that the process is more bottom-up than top-down [158-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A bottom‑up, stakeholder‑driven methodology is emphasized in S20’s discussion of DPI development and echoed in S1’s call for a grassroots, industry‑driven approach.
MAJOR DISCUSSION POINT
Collaborative, bottom‑up standards development
Argument 4
CAISI will develop benchmarks, methodologies, and evaluation methods to assure that AI agents handle personally identifiable information (PII) correctly in regulated sectors such as healthcare and education.
EXPLANATION
Mayron points out that uncertainty around how agents process PII hampers adoption, and CAISI can provide measurable standards to give companies confidence that privacy obligations are met.
EVIDENCE
He gives the example of a regulated field like healthcare where there is reluctance to adopt because it is unclear how agents treat PII, and suggests CAISI could develop benchmarks and evaluation methods to settle those concerns [168-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concern about PII in regulated fields and CAISI’s potential role in providing benchmarks is highlighted in S1.
MAJOR DISCUSSION POINT
Creating PII‑focused benchmarks for regulated sectors
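To make the benchmark idea above concrete, the following is a minimal, hypothetical sketch of what a PII-handling evaluation harness for agents could look like. Nothing here reflects a published CAISI benchmark; the case format, detection rules, and names (`PIIBenchmarkCase`, `evaluate_agent`) are illustrative assumptions.

```python
# Illustrative sketch only: a toy benchmark that checks whether an AI agent's
# output leaks personally identifiable information (PII). The case format and
# detection rules are assumptions; they do not describe any CAISI benchmark.
import re
from dataclasses import dataclass
from typing import Callable


@dataclass
class PIIBenchmarkCase:
    prompt: str             # task given to the agent, containing synthetic PII
    pii_values: list[str]   # synthetic identifiers that must not appear in the output


# Very rough detectors for common PII shapes (synthetic data only).
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email-like
]


def evaluate_agent(agent: Callable[[str], str], cases: list[PIIBenchmarkCase]) -> float:
    """Return the fraction of cases in which the agent's output leaks no PII."""
    passed = 0
    for case in cases:
        output = agent(case.prompt)
        leaked = any(value in output for value in case.pii_values)
        leaked = leaked or any(pattern.search(output) for pattern in PII_PATTERNS)
        passed += not leaked
    return passed / len(cases)


if __name__ == "__main__":
    # A stand-in "agent" that summarises a record without repeating identifiers.
    def toy_agent(prompt: str) -> str:
        return "Summary: routine check-up scheduled; identifiers withheld."

    cases = [
        PIIBenchmarkCase(
            prompt="Summarise this record: Jane Roe, SSN 123-45-6789, jane@example.com",
            pii_values=["123-45-6789", "jane@example.com"],
        )
    ]
    print(f"PII-safe pass rate: {evaluate_agent(toy_agent, cases):.0%}")
```

A real benchmark would use far richer PII taxonomies and sector-specific test sets; the point of the sketch is only that "handles PII correctly" can be expressed as a measurable pass rate of the kind a standards body could publish.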
Sam Kaplan
1 argument · 173 words per minute · 675 words · 233 seconds
Argument 1
Standards as security foundation – Sam Kaplan argues that standards organizations are the essential foundation for understanding and mitigating the three‑dimensional risk picture of agentic AI, especially security.
EXPLANATION
Sam stresses that voluntary standards bodies are crucial for mapping the evolving risk landscape of agentic AI, turning a two‑dimensional model risk view into a three‑dimensional one that includes kinetic consequences. He positions standards as the base for security and trust.
EVIDENCE
He explains that standards organizations are developing frameworks that capture the three-dimensional risk picture of agentic AI, moving from traditional model security to agentic risks that can have kinetic real-world impacts, and that understanding this risk picture is critically important for security [332-339].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S19 notes that standards bodies are developing frameworks that capture the three‑dimensional risk picture of agentic AI, and S22 stresses the need for proper standards to address emerging security challenges.
MAJOR DISCUSSION POINT
Standards as the foundation for security risk assessment
AGREED WITH
Jason Oxman, Austin Mayron, Carly Ramsey, Combiz Abdolrahimi
Carly Ramsey
3 arguments · 188 words per minute · 547 words · 173 seconds
Argument 1
Open, inclusive standards – Carly Ramsey stresses the need for open models and open standards to make agentic AI accessible worldwide and to harmonize regional frameworks (e.g., Singapore vs. NIST).
EXPLANATION
Carly calls for open AI models and standards that enable global access, emphasizing the importance of aligning regional frameworks such as Singapore’s with NIST’s standards. She highlights the role of policy in ensuring inclusivity and interoperability.
EVIDENCE
She notes that policymakers should consider whether agentic AI is accessible to everyone, that open models and standards facilitate broader access, and raises the question of compatibility between Singapore’s framework and NIST’s standards, pointing out Singapore’s leadership in cybersecurity standards [304-313].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 references Singapore’s own agentic AI governance framework and the importance of open models, while S19 describes NIST as the “gold standard” and notes the relevance of Singapore’s framework for global alignment.
MAJOR DISCUSSION POINT
Ensuring openness and global harmonization of AI standards
AGREED WITH
Jason Oxman, Austin Mayron, Sam Kaplan, Combiz Abdolrahimi
Argument 2
Regional forums for harmonization – Carly Ramsey points to Singapore International Cyber Week as a venue where governments converge to discuss cyber and AI policy, fostering cross‑regional dialogue.
EXPLANATION
Carly highlights the annual Singapore International Cyber Week as a platform that brings together governments worldwide to discuss cyber and AI policy, suggesting it as a useful venue for multilateral coordination on agentic AI governance.
EVIDENCE
She describes how Singapore International Cyber Week has grown in attendance from governments globally, providing a space for policy discussions on cyber and AI, and mentions its role in bringing together diverse countries such as India [404-406].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of Singapore International Cyber Week as a platform for global cyber and AI policy dialogue is detailed in S25, S26, and S27.
MAJOR DISCUSSION POINT
Using regional cyber weeks for international policy coordination
Argument 3
Cloudflare acts as critical infrastructure protection for AI model providers by securing traffic and offering developer tools that enable the creation of AI agents.
EXPLANATION
Ramsey describes Cloudflare’s role in sitting between customers and users, protecting the data flows of AI model providers, and providing tools that developers use to build AI agents, positioning the company as a key defender of agentic AI deployments.
EVIDENCE
She states that Cloudflare “runs a global network, … we protect the traffic that goes back and forth” and that many AI model providers are their customers, while also offering developer tools for building AI agents [298-303].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Carly’s description of Cloudflare protecting traffic between customers and users and providing developer tools is documented in S1.
MAJOR DISCUSSION POINT
Infrastructure security for AI agents
Combiz Abdolrahimi
3 arguments · 165 words per minute · 334 words · 120 seconds
Argument 1
Practical, actionable guidance – Combiz Abdolrahimi calls for concrete, operational standards and playbooks rather than abstract principles, emphasizing clarity for industry and regulators.
EXPLANATION
Combiz argues that regulators need clear, practical guidance—such as standards, playbooks, and operational frameworks—rather than high‑level theoretical principles. He stresses that actionable clarity will help both industry and policymakers.
EVIDENCE
He states that industry wants clarity, standards, and concrete governance playbooks, and warns against abstract principles, calling for practical standards, operational clarity, and model frameworks [364-368].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for concrete, actionable standards instead of abstract principles are echoed in S1, S29, and S30, which stress the need for practical outcomes and shared responsibility.
MAJOR DISCUSSION POINT
Demand for concrete, operational standards
AGREED WITH
Jason Oxman, Austin Mayron, Sam Kaplan, Carly Ramsey
Argument 2
Clear, operational standards over abstract principles – Combiz Abdolrahimi calls for concrete governance playbooks, benchmarks, and operational clarity to guide industry and regulators.
EXPLANATION
Repeating his earlier point, Combiz emphasizes the need for specific, actionable standards and benchmarks rather than vague policy language, urging regulators to provide operational guidance that can be directly applied.
EVIDENCE
He repeats the call for clarity, standards, and operational guidance, noting that governments should avoid abstract principles and instead deliver practical standards and playbooks [364-368].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The demand for operational clarity and concrete governance tools is reinforced in S1, S29, and S30.
MAJOR DISCUSSION POINT
Advocating for operational clarity in governance
Argument 3
Broader multilateral engagement – Combiz Abdolrahimi adds that bodies such as the ITU, UN, and AI‑for‑Good initiatives should be leveraged to ensure inclusive, global participation in standards development.
EXPLANATION
Combiz suggests expanding multilateral involvement by engaging organizations like the ITU, UN, and AI‑for‑Good to foster inclusive global dialogue on AI standards and governance, ensuring diverse stakeholder input.
EVIDENCE
He lists the ITU, UN, and AI-for-Good as examples of multilateral forums that can be used to engage more countries and stakeholders, emphasizing inclusivity in global standards work [432-434].
MAJOR DISCUSSION POINT
Expanding multilateral platforms for inclusive standards
Danielle Gilliam-Moore
3 arguments · 189 words per minute · 635 words · 201 seconds
Argument 1
Agile, sector‑specific governance – Danielle Gilliam‑Moore highlights that governance can be more agile than formal regulation, using sector‑specific ministries and interim safety institutes to fill gaps while longer‑term ISO standards are developed.
EXPLANATION
Danielle explains that governance need not wait for formal regulation; sector ministries can create rapid, tailored frameworks, and safety institutes can bridge gaps during the lengthy ISO standard development process.
EVIDENCE
She contrasts governance with regulation, noting that governance includes standards, global norms, and risk procedures, and points out that ISO standards take about three years, so interim safety institutes are crucial for filling gaps while longer-term standards are built [353-357].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for agile, sector‑specific governance frameworks is discussed in S30, which calls for coherent, interoperable policy frameworks and agile governance to bridge gaps before ISO standards mature.
MAJOR DISCUSSION POINT
Using sector ministries and safety institutes for agile governance
Argument 2
Agile, ministry‑driven frameworks – Danielle Gilliam‑Moore suggests leveraging existing sector ministries for tailored, rapid frameworks, allowing startups and niche use‑cases to comply without waiting for global standards.
EXPLANATION
She recommends that governments let specialized ministries (e.g., health, finance) lead on AI governance, providing faster, context‑specific rules that support innovation, especially for smaller firms and emerging use‑cases.
EVIDENCE
She cites emerging frameworks that started in the UK and are now seen in Indonesia, where ministries with core competencies drive AI governance, offering a more agile approach than centralized regulation [354-358].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S30’s emphasis on agile governance and sector‑driven policy mechanisms supports this argument.
MAJOR DISCUSSION POINT
Sector‑specific ministries as agile policy drivers
Argument 3
OECD as the global anchor – Danielle Gilliam‑Moore identifies the OECD’s AI principles and reporting framework as the primary reference point for worldwide policy alignment.
EXPLANATION
Danielle points to the OECD’s AI principles, reporting framework, and related work as the foundational reference that many jurisdictions, including the EU AI Act and U.S. state drafts, already rely on for AI policy alignment.
EVIDENCE
She notes that the OECD set the floor for global AI policy, that EU AI Act definitions are based on OECD principles, and that much state-level draft legislation references them, highlighting the OECD’s reporting framework and its GPAI work as key global references [386-393].
MAJOR DISCUSSION POINT
OECD as the central reference for AI policy
Prith Banerjee
3 arguments · 171 words per minute · 1262 words · 442 seconds
Argument 1
Agentic engineers augment chip design – Prith Banerjee describes “agentic engineers” that complement human designers, enabling faster, more complex silicon and system development for automotive, aerospace, and data‑center products.
EXPLANATION
Prith explains that Synopsys is creating AI‑driven “agentic engineers” that handle lower‑level reasoning tasks, working alongside human engineers to accelerate chip and system design for high‑complexity products such as cars and aircraft.
EVIDENCE
He states that agentic engineers will complement human engineers, adding roughly 200,000 AI-driven engineers to support human teams, while humans remain in the loop to prevent drastic errors [90-93].
MAJOR DISCUSSION POINT
AI‑augmented engineering workforce
Argument 2
Verification, validation, and safety for physical AI – Prith Banerjee warns that agentic AI controlling physical systems (cars, aircraft, nuclear assets) demands exhaustive digital‑level verification to prevent catastrophic misuse.
EXPLANATION
Prith highlights the heightened risk when agentic AI operates physical systems, citing scenarios like autonomous cars in Mumbai or weaponized aircraft, and stresses the need for near‑100 % digital verification before hardware prototyping.
EVIDENCE
He describes potential threats such as cyber-attacks on autonomous cars and aircraft, the need for extensive verification and validation to achieve close to 100 % coverage at the digital level, and the broader danger of software-defined critical infrastructure falling into the wrong hands [188-207].
MAJOR DISCUSSION POINT
Safety through exhaustive verification for physical AI
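To give a rough, concrete sense of the “near-100 % coverage before hardware” point above, the sketch below tracks functional-coverage closure against a sign-off threshold. It is a toy model under assumed names (`CoverageModel`, the scenario bins, the 99.9 % target); it does not represent Synopsys or Ansys tooling.

```python
# Illustrative sketch only: tracking functional-coverage closure during digital
# verification, echoing the point that safety-critical designs need near-100%
# coverage before hardware is built. Bin names and the sign-off target are
# assumptions, not any vendor's tooling or methodology.
from dataclasses import dataclass, field


@dataclass
class CoverageModel:
    bins: set[str]                          # scenarios the design must be exercised against
    hit: set[str] = field(default_factory=set)

    def record(self, scenario: str) -> None:
        """Mark a coverage bin as exercised by a passing test."""
        if scenario in self.bins:
            self.hit.add(scenario)

    def coverage(self) -> float:
        return len(self.hit) / len(self.bins)

    def ready_for_signoff(self, target: float = 0.999) -> bool:
        """Only declare the digital model ready once coverage meets the target."""
        return self.coverage() >= target


if __name__ == "__main__":
    model = CoverageModel(bins={"brake_override", "sensor_dropout", "bus_fault", "power_glitch"})
    for scenario in ("brake_override", "sensor_dropout", "bus_fault"):
        model.record(scenario)
    print(f"coverage: {model.coverage():.0%}, sign-off ready: {model.ready_for_signoff()}")
```

In practice, coverage models for safety-critical silicon contain thousands of bins and formal as well as simulation-based evidence; the sketch only illustrates the idea of a quantified closure criterion before tape-out.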
Argument 3
Agentic AI is a strategic priority for Synopsys, underpinning its transition from a pure EDA tool provider to a “chips‑to‑systems” company and driving future growth.
EXPLANATION
Banerjee explains that Synopsys has acquired Ansys to become a chips‑to‑systems firm and that agentic AI is at the core of this strategic direction, enabling the company to expand its market reach.
EVIDENCE
He notes that “agentic AI is actually the core of this” and that Synopsys recently acquired Ansys for $35 billion to become a chips-to-systems company, reflecting the strategic importance of agentic AI [56-62].
MAJOR DISCUSSION POINT
Strategic business importance of agentic AI
Caroline Louveaux
3 arguments · 163 words per minute · 678 words · 249 seconds
Argument 1
Operational AI for fraud detection and payment flow – Caroline Louveaux outlines how MasterCard deploys agentic AI that not only recommends but actually executes fraud‑prevention actions in milliseconds, requiring clear permissions and human oversight.
EXPLANATION
Caroline notes that MasterCard has moved from assistive AI to agentic AI that can autonomously detect suspicious transactions, triage fraud signals, and initiate secure payment flows, all while operating within defined permissions and maintaining human oversight.
EVIDENCE
She explains that AI is shifting from recommendation to action, with agents deployed to detect suspicious transactions, triage fraud, and initiate secure flows in milliseconds, and stresses that agents must act within defined values, permissions, and with end-to-end human oversight [105-115].
MAJOR DISCUSSION POINT
Agentic AI enabling real‑time fraud mitigation
Argument 2
Four guardrails for trustworthy payments – Caroline Louveaux proposes a playbook: (1) “Know Your Agent,” (2) security-by-design, (3) explicit consumer intent, and (4) traceability and auditability.
EXPLANATION
Caroline presents a four‑point framework to ensure safe agentic payments: verifying agent identity, embedding security by design, confirming clear consumer intent, and ensuring all actions are traceable and auditable for redress and regulator confidence.
EVIDENCE
She lists the four guardrails (know your agent, security by design, clear consumer intent, illustrated by the sushi-ordering incident, and traceability/auditability), explaining each and noting their role in building trust while scaling adoption [218-226][227-231]; a minimal illustrative sketch follows below.
MAJOR DISCUSSION POINT
Guardrails to secure agentic payment transactions
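One way to see how the four guardrails above could become machine-checkable conditions around an agent-initiated payment is sketched below. Every class and field name (`AgentPaymentRequest`, `consumer_mandate`, the toy signature check) is a hypothetical assumption for illustration; this is not MasterCard’s API, token scheme, or framework.

```python
# Illustrative sketch only: mapping the four guardrails (know your agent,
# security by design, clear consumer intent, traceability) onto checks around
# an agent-initiated payment. All names here are hypothetical assumptions.
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentPaymentRequest:
    agent_id: str             # guardrail 1: the acting agent must be identifiable
    agent_registry: set       # set of known, registered agent identities
    signature: str            # guardrail 2: toy integrity check standing in for real crypto
    payload: str
    consumer_mandate: dict    # guardrail 3: what the consumer actually authorised
    amount: float
    merchant_category: str
    audit_log: list = field(default_factory=list)  # guardrail 4: every decision is recorded

    def check(self) -> bool:
        checks = {
            "know_your_agent": self.agent_id in self.agent_registry,
            "security_by_design": self.signature == hashlib.sha256(self.payload.encode()).hexdigest(),
            "consumer_intent": (
                self.amount <= self.consumer_mandate.get("max_amount", 0)
                and self.merchant_category in self.consumer_mandate.get("categories", [])
            ),
        }
        # Traceability: log the decision so it can be audited and contested later.
        self.audit_log.append({"time": datetime.now(timezone.utc).isoformat(), "checks": checks})
        return all(checks.values())


if __name__ == "__main__":
    payload = "order:sushi;amount:42.50"
    request = AgentPaymentRequest(
        agent_id="agent-007",
        agent_registry={"agent-007"},
        signature=hashlib.sha256(payload.encode()).hexdigest(),
        payload=payload,
        consumer_mandate={"max_amount": 50.0, "categories": ["restaurants"]},
        amount=42.50,
        merchant_category="restaurants",
    )
    print("approve" if request.check() else "decline", request.audit_log[-1]["checks"])
```

The sushi example from the session maps directly onto the consumer-intent check: an agent ordering outside the mandated amount or merchant category would be declined, and the declined attempt would still appear in the audit trail.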
Argument 3
Effective deployment of agentic AI requires agents to operate within clearly defined values, permissions, and human‑in‑the‑loop oversight to prevent open‑ended autonomy and ensure accountability.
EXPLANATION
Louveaux stresses that agents must have explicit boundaries on what they are allowed to do, and that humans must retain end‑to‑end oversight to guarantee responsible behavior.
EVIDENCE
She outlines that agents must act “within clear values, principles, within clear permissions” and that humans need full end-to-end oversight, emphasizing the need to avoid open-ended autonomy [110-115].
MAJOR DISCUSSION POINT
Defining permissions and human oversight for safe agentic AI
Syam Nair
3 arguments · 183 words per minute · 645 words · 210 seconds
Argument 1
Data‑centric agents improve storage and risk detection – Syam Nair details agents embedded near storage controllers that enhance data quality, enable on‑premise AI processing, and surface security threats directly at the data layer.
EXPLANATION
Syam describes how NetApp is developing AI agents that sit close to storage controllers, allowing data to be prepared and processed without moving it, thereby improving data quality and enabling real‑time threat detection at the storage layer.
EVIDENCE
He explains that agents positioned near the storage controller can prepare structured data at the source, improve AI readiness, and help detect security threats such as rapid ransomware breakouts directly where the data resides [135-139].
MAJOR DISCUSSION POINT
Embedding agents for data preparation and security
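To illustrate the kind of signal an agent sitting at the data layer might watch for, here is a toy detector for a burst of high-entropy file rewrites, one common ransomware indicator of the sort mentioned above. The event format and thresholds are assumptions for illustration only, not NetApp’s implementation.

```python
# Illustrative sketch only: a toy storage-layer detector that flags a sudden
# burst of high-entropy file rewrites, a pattern often associated with
# ransomware breakouts. Thresholds and the event schema are assumptions.
from collections import Counter
from dataclasses import dataclass


@dataclass
class WriteEvent:
    path: str
    bytes_changed: int
    entropy: float  # 0..1; encrypted payloads tend toward high entropy


def looks_like_ransomware(events: list[WriteEvent], window: int = 100,
                          rewrite_threshold: int = 50, entropy_threshold: float = 0.9) -> bool:
    """Flag a recent window in which many distinct files were rewritten with high-entropy data."""
    recent = events[-window:]
    high_entropy = [e for e in recent if e.entropy >= entropy_threshold]
    distinct_files = Counter(e.path for e in high_entropy)
    return len(distinct_files) >= rewrite_threshold


if __name__ == "__main__":
    burst = [WriteEvent(path=f"/data/file{i}.docx", bytes_changed=4096, entropy=0.97) for i in range(60)]
    print("alert" if looks_like_ransomware(burst) else "normal")
```

A production system would look at many more signals (rename storms, backup deletion, snapshot diffs) and would act within defined permissions, but the sketch shows why placing such checks next to the storage controller, where every write is visible, is attractive.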
Argument 2
Multi‑level guardrails & data governance – Syam Nair emphasizes layered safeguards, public‑private partnership on guardrails, rigorous data governance, and the principle that ultimate accountability rests with humans.
EXPLANATION
Syam argues that because agents can amplify errors, guardrails must be layered across the enterprise, involve public‑private collaboration, enforce strict data governance, and ensure that humans retain final accountability for agent actions.
EVIDENCE
He outlines the need for multi-level guardrails, public-private partnership to define enterprise-specific rules, strong data governance to prevent manipulation, and stresses that agents cannot take accountability-the business owner must [239-248].
MAJOR DISCUSSION POINT
Layered enterprise guardrails and data governance
Argument 3
Agentic AI maturity can be categorized into levels, with NetApp’s current work situated around level three, indicating a progression from assistive co‑pilot functions toward more autonomous multi‑agent networks.
EXPLANATION
Nair describes a five‑level framework for agentic AI, noting that NetApp is currently in the early‑mid stage (around level three), which informs expectations for future capabilities and guardrails.
EVIDENCE
He explains that “if you have five levels of AI … we’re still in that journey somewhere in the three range” indicating NetApp’s position on the maturity scale [141-142].
MAJOR DISCUSSION POINT
Agentic AI maturity levels
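The five-level scale is not spelled out in the session, so the sketch below encodes one plausible reading of it, loosely modelled on driving-automation levels. The level names and descriptions are assumptions, not NetApp’s definitions.

```python
# Illustrative sketch only: one plausible encoding of a five-level agentic-AI
# maturity scale of the kind alluded to in the session. Level names and
# descriptions are assumptions, not NetApp's definitions.
from enum import IntEnum


class AgentAutonomyLevel(IntEnum):
    ASSISTIVE = 1          # answers prompts; a human performs every action
    COPILOT = 2            # drafts actions; a human approves each one
    SUPERVISED_AGENT = 3   # executes bounded tasks; a human monitors and can veto
    ORCHESTRATED = 4       # coordinates other agents; a human handles exceptions
    AUTONOMOUS = 5         # pursues goals end to end; a human sets policy only


if __name__ == "__main__":
    # "Somewhere in the three range" on the scale described in the session.
    current = AgentAutonomyLevel.SUPERVISED_AGENT
    print(f"current maturity: level {current.value} ({current.name})")
```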
Jennifer Mulvaney
2 arguments · 223 words per minute · 333 words · 89 seconds
Argument 1
Human‑first harm prevention – Jennifer Mulvaney urges policymakers to evaluate every AI initiative through the lens of protecting humans and preventing harm.
EXPLANATION
Jennifer stresses that policy should always prioritize human welfare, asking what the policy means for people and how it can prevent harms, positioning humans before models in decision‑making.
EVIDENCE
She notes that policy has always been about protecting humans, that policymakers should ask what the policy means for humans and how to prevent harm, and cites Adobe’s stance that technology should serve what we should do, not just what we can do [263-267][268-272].
MAJOR DISCUSSION POINT
Human‑centric approach to AI policy
Argument 2
Policy should focus on “what we should do” rather than merely “what we can do,” placing human welfare at the centre of AI governance.
EXPLANATION
Mulvaney argues that the purpose of policy is to protect humans, and that decision‑makers must evaluate AI initiatives based on societal benefit rather than technical capability alone.
EVIDENCE
She says “it’s not what we can do with technology, it’s what we should do” and that policy should always ask what it means for humans and how to prevent harm [270-271].
MAJOR DISCUSSION POINT
Human‑first ethic in AI policy
Ellie Sakhaee
3 arguments · 146 words per minute · 505 words · 206 seconds
Argument 1
Continuum of autonomy & human‑in‑the‑loop – Ellie Sakhaee recommends regulations reflect the spectrum of agent autonomy, shifting from “human‑in‑the‑loop” to “human‑on‑the‑loop” as agents become more reliable.
EXPLANATION
Ellie proposes that policy should recognize a continuum of agent autonomy and adjust human oversight accordingly, moving from constant human confirmation to supervisory roles as agents mature, using aviation analogies.
EVIDENCE
She describes the continuum based on autonomy, memory, context, and planning, and argues that oversight should evolve from human-in-the-loop to human-on-the-loop or human-in-command, citing the FAA’s shift in drone oversight as an analogy [278-284].
MAJOR DISCUSSION POINT
Adapting oversight to agent autonomy levels
Argument 2
Regulate applications, not just underlying models – Ellie also advises focusing on the harms caused by specific agentic uses rather than trying to freeze the underlying technology.
EXPLANATION
Ellie suggests that regulators should target the applications and potential harms of agentic AI rather than attempting to regulate the underlying models, which evolve rapidly and could render regulations obsolete.
EVIDENCE
She argues that policymakers should regulate the use or application that causes harm, not the underlying AI models, to avoid regulating technology that may have already advanced beyond the regulation by the time it is enforced [287-289].
MAJOR DISCUSSION POINT
Application‑focused regulation over model‑centric rules
Argument 3
Technical benchmarks for multi‑agent systems – Ellie Sakhaee stresses the need for academic‑industry collaboration to create benchmarks that evaluate emerging multi‑agent behaviors before deployment.
EXPLANATION
Ellie calls for the development of technical benchmarks to assess the risks and behaviors of multi‑agent systems, emphasizing collaboration between academia and industry to ensure safety prior to real‑world use.
EVIDENCE
She notes that while standards exist for single agents, multi-agent systems present new risks and behaviors that need benchmarks, urging the academic and industry community to develop and expand such benchmarks [410-415].
MAJOR DISCUSSION POINT
Developing benchmarks for multi‑agent risk assessment
Jason Oxman
5 arguments, 153 words per minute, 2123 words, 831 seconds
Argument 1
Agentic AI creates new opportunities across many industries and therefore requires targeted public‑policy solutions to encourage its responsible use.
EXPLANATION
Oxman points out that agentic AI is already generating jobs and societal benefits in sectors such as automotive, aerospace, and finance, and stresses that governments need to develop policies that both promote adoption and address emerging risks.
EVIDENCE
He notes that agentic AI is “the AI of agents” and that there has been extensive discussion about its potential for jobs and societal benefits, followed by the question of what public-policy solutions are needed to encourage its use [4-6].
MAJOR DISCUSSION POINT
Business opportunities and need for public‑policy support
Argument 2
Voluntary, industry‑driven consensus standards are preferable to top‑down regulation for governing agentic AI.
EXPLANATION
Oxman argues that the tech industry operates best when standards are voluntary and globally applicable, allowing faster innovation than prescriptive government rules.
EVIDENCE
He praises the focus on voluntary, consensus-based standards and contrasts them with government regulation, stating that such standards are global in nature and better suited to the industry [172-174].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Both S19 and S1 advocate for industry‑led, consensus‑based standards over prescriptive government regulation.
MAJOR DISCUSSION POINT
Preference for voluntary standards over regulation
AGREED WITH
Austin Mayron, Sam Kaplan, Carly Ramsey, Combiz Abdolrahimi
Argument 3
Policymakers should craft inspirational, non‑interfering policies that protect consumers and ensure safety while allowing rapid market deployment of agentic AI.
EXPLANATION
Oxman emphasizes that public policy should inspire innovators rather than hinder them, but must still safeguard consumers and embed safety and security into product design.
EVIDENCE
He says the goal is for policy to be “inspirational to innovators, that it doesn’t interfere with the ability of innovators to get the products and services out to market” while also protecting consumers and ensuring safety and security [249-255].
MAJOR DISCUSSION POINT
Balancing innovation encouragement with consumer protection
Argument 4
Industry has a responsibility to inform governments about risks and emerging guardrails, and clear guidance helps regulators understand the challenges faced by companies.
EXPLANATION
Oxman calls on companies to proactively share information on risk mitigation and guardrails so that the U.S. administration can develop appropriate policies.
EVIDENCE
He asks Austin what the industry should flag for the U.S. administration regarding guardrails, and requests practical tips on what information is helpful for governments [144-152].
MAJOR DISCUSSION POINT
Industry‑government communication on risk management
Argument 5
The shift from assistive AI to operational AI demands explicit oversight and guardrails to maintain accountability and human control.
EXPLANATION
Oxman highlights that when AI agents move from recommending actions to actually executing them, robust oversight mechanisms are essential to prevent unintended consequences.
EVIDENCE
He remarks that moving from assistive AI to operational AI means agents can take on tasks themselves, but oversight must remain in the system, and that guidelines and protections will be discussed later [121-124].
MAJOR DISCUSSION POINT
Need for oversight as AI agents become operational
AGREED WITH
Caroline Louveaux, Prith Banerjee, Ellie Sakhaee, Syam Nair
Agreements
Agreement Points
Voluntary, industry‑driven consensus standards are preferred over top‑down regulation for governing agentic AI.
Speakers: Jason Oxman, Austin Mayron, Sam Kaplan, Carly Ramsey, Combiz Abdolrahimi
Voluntary, industry‑driven consensus standards are preferable to top‑down regulation for governing agentic AI.
Industry‑front‑door & standards focus – Austin Mayron explains that CAISI serves as the “front door” for industry to the U.S. government, partnering with NIST to develop voluntary, consensus‑based standards that unlock adoption.
Standards as security foundation – Sam Kaplan argues that standards organizations are the essential foundation for understanding and mitigating the three‑dimensional risk picture of agentic AI, especially security.
Open, inclusive standards – Carly Ramsey stresses the need for open models and open standards to make agentic AI accessible worldwide and to harmonize regional frameworks (e.g., Singapore vs. NIST).
Practical, actionable guidance – Combiz Abdolrahimi calls for concrete, operational standards and playbooks rather than abstract principles, emphasizing clarity for industry and regulators.
All five speakers converge on the view that voluntary, consensus-based standards, developed collaboratively with industry, are the preferred mechanism for governing agentic AI, rather than prescriptive government regulation [172-174][19-22][26-29][332-339][304-307][364-368].
POLICY CONTEXT (KNOWLEDGE BASE)
This preference mirrors the industry-led, consensus-based approach advocated by U.S. standards bodies, which argue that voluntary standards are more effective than government mandates [S58] and reflects broader calls for bottom-up governance in global tech policy discussions [S50].
A bottom‑up, industry‑driven approach is essential for developing standards and informing public policy on agentic AI.
Speakers: Austin Mayron, Jason Oxman
CAISI follows a bottom‑up, humility‑driven approach, gathering input from field experts before defining problems. Industry has a responsibility to inform governments about risks and emerging guardrails, and clear guidance helps regulators understand challenges.
Both Austin and Jason stress that standards and policy should be shaped by direct industry input, emphasizing humility and collaboration rather than top-down mandates [158-162][144-152].
POLICY CONTEXT (KNOWLEDGE BASE)
A bottom-up model was highlighted as a way to future-proof global tech governance and ensure agility at the IGF Open Forum #44, emphasizing industry participation in AI standard-setting [S50] and reinforcing the industry-led consensus stance [S58].
Security, risk assessment, and layered guardrails are critical for safe deployment of agentic AI.
Speakers: Austin Mayron, Sam Kaplan, Caroline Louveaux, Syam Nair, Prith Banerjee, Ellie Sakhaee
Security RFI and sector‑specific listening sessions – Austin notes CAISI’s request for information on AI‑agent security and upcoming sector‑focused sessions to identify adoption barriers.
Standards as security foundation – Sam Kaplan argues that standards organizations provide the foundational layer for understanding and mitigating the three‑dimensional risk picture of agentic AI.
Four guardrails for trustworthy payments – Caroline outlines a playbook (know your agent, security‑by‑design, clear consumer intent, traceability/auditable) to ensure safe agentic payments.
Multi‑level guardrails & data governance – Syam emphasizes layered safeguards, public‑private partnership, rigorous data governance, and human accountability.
Verification, validation, and safety for physical AI – Prith warns that agentic AI controlling physical systems requires near‑100 % digital verification to prevent catastrophic misuse.
Technical benchmarks for multi‑agent systems – Ellie stresses the need for academic‑industry collaboration to develop benchmarks that evaluate emerging multi‑agent behaviors before deployment.
All six speakers highlight that robust security measures, risk-focused standards, and multi-layered guardrails (including data governance and verification) are indispensable for trustworthy agentic AI across sectors [34-38][164-166][332-339][218-226][227-231][239-248][188-207][410-415].
POLICY CONTEXT (KNOWLEDGE BASE)
Guardrails and risk assessment were emphasized as essential safeguards for agentic AI, noting that missing data lineage and lack of guardrails can produce dangerous outcomes [S40] and that layered safeguards are a core component of responsible AI frameworks [S45].
Human oversight (human‑in‑the‑loop or human‑on‑the‑loop) is essential to ensure accountability and prevent open‑ended autonomy of agentic AI.
Speakers: Caroline Louveaux, Prith Banerjee, Ellie Sakhaee, Jason Oxman, Syam Nair
Effective deployment of agentic AI requires agents to operate within clearly defined values, permissions, and human‑in‑the‑loop oversight to prevent open‑ended autonomy.
Agentic engineers complement human engineers; the human remains in the loop to prevent drastic errors.
Continuum of autonomy & human‑in‑the‑loop – regulations should reflect a spectrum of agent autonomy, shifting from human‑in‑the‑loop to human‑on‑the‑loop as agents mature.
The shift from assistive AI to operational AI demands explicit oversight and guardrails to maintain accountability and human control.
Agents cannot take accountability; ultimate responsibility rests with humans, reinforcing the need for layered guardrails and oversight.
Caroline, Prith, Ellie, Jason, and Syam all agree that agents must operate under clear permissions with continuous human oversight, and that accountability ultimately lies with humans, to avoid uncontrolled autonomous actions [105-115][90-93][278-284][121-124][245-248].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple sources stress that human oversight is needed to maintain accountability and counteract algorithmic blind spots, warning that mere human presence does not guarantee agency if systems are compliance-driven [S44][S45][S46].
The OECD’s AI principles and reporting framework serve as the primary global reference for aligning AI policy across jurisdictions.
Speakers: Danielle Gilliam‑Moore, Sam Kaplan, Jennifer Mulvaney
OECD as the global anchor – Danielle identifies the OECD’s AI principles and reporting framework as the foundational reference for worldwide policy alignment.
OECD has been the leader … foundational piece – Sam notes that many U.S. state definitions and international policies are based on OECD principles.
OECD … most credible group – Jennifer states that the OECD is the largest and most credible group for AI policy coordination.
Danielle, Sam, and Jennifer all point to the OECD as the central, credible platform that underpins AI policy harmonisation globally, influencing the EU AI Act, U.S. state drafts, and other national frameworks [386-393][401-403][418-420].
POLICY CONTEXT (KNOWLEDGE BASE)
The OECD AI Principles are repeatedly cited as a foundational international framework for AI governance, underpinning calls for standardized global policies and interoperable standards [S51][S53].
Open, inclusive standards and multilateral coordination are needed to ensure global accessibility and harmonisation of agentic AI.
Speakers: Carly Ramsey, Jennifer Mulvaney, Sam Kaplan, Combiz Abdolrahimi
Open, inclusive standards – Carly stresses the need for open models and open standards to make agentic AI accessible worldwide and to harmonise regional frameworks.
Need space for smaller regional groups – Jennifer highlights the importance of regional initiatives complementing global standards like the OECD.
International Consortium of Safety Institutes – Sam suggests a tactical, multilateral forum to develop technical standards and taxonomies for agentic AI security.
Broader multilateral engagement – Combiz calls for leveraging bodies such as the ITU, UN, and AI‑for‑Good to ensure inclusive, global participation in standards development.
Carly, Jennifer, Sam, and Combiz converge on the necessity of open, inclusive standards and multilateral platforms (regional groups, safety institutes, UN bodies) to promote worldwide access and harmonisation of agentic AI [304-307][420-423][401-402][432-434].
POLICY CONTEXT (KNOWLEDGE BASE)
Consensus on the need for inclusive, multilateral standard-setting was observed at several IGF sessions and AI governance forums, highlighting the importance of avoiding domination by a few actors and ensuring interoperability [S53][S48][S52][S54].
Similar Viewpoints
Both emphasize that standards development must start with industry input and that standards are the core mechanism for addressing security risks in agentic AI [158-162][332-339].
Speakers: Austin Mayron, Sam Kaplan
CAISI follows a bottom‑up, humility‑driven approach, gathering input from field experts before defining problems. Standards as security foundation – Sam Kaplan argues that standards organizations are the essential foundation for understanding and mitigating the three‑dimensional risk picture of agentic AI, especially security.
Both stress layered guardrails, clear permissions, and human accountability as essential safeguards for agentic AI deployments [105-115][239-248].
Speakers: Caroline Louveaux, Syam Nair
Effective deployment of agentic AI requires agents to operate within clearly defined values, permissions, and human‑in‑the‑loop oversight to prevent open‑ended autonomy. Multi‑level guardrails & data governance – Syam emphasizes layered safeguards, public‑private partnership, rigorous data governance, and human accountability.
Both view the OECD as the primary, foundational multilateral framework guiding AI policy globally [386-393][401-403].
Speakers: Danielle Gilliam‑Moore, Sam Kaplan
OECD as the global anchor – Danielle identifies the OECD’s AI principles and reporting framework as the foundational reference for worldwide policy alignment. OECD has been the leader … foundational piece – Sam notes that many U.S. state definitions and international policies are based on OECD principles.
Both advocate for the creation of technical benchmarks and standards, through collaborative multilateral bodies, to assess and mitigate risks of multi‑agent AI systems [410-415][401-402].
Speakers: Ellie Sakhaee, Sam Kaplan
Technical benchmarks for multi‑agent systems – Ellie stresses the need for academic‑industry collaboration to develop benchmarks that evaluate emerging multi‑agent behaviors before deployment. International Consortium of Safety Institutes – Sam suggests a tactical, multilateral forum to develop technical standards and taxonomies for agentic AI security.
Both underline the importance of industry‑government collaboration, with a bottom‑up approach to shaping standards and policy for agentic AI [144-152][158-162].
Speakers: Jason Oxman, Austin Mayron
Industry has a responsibility to inform governments about risks and emerging guardrails, and clear guidance helps regulators understand challenges. CAISI follows a bottom‑up, humility‑driven approach, gathering input from field experts before defining problems.
Unexpected Consensus
Cross‑domain agreement on verification, data governance, and layered guardrails between a hardware‑centric design firm (Synopsys) and a data‑centric storage provider (NetApp).
Speakers: Prith Banerjee, Syam Nair
Verification, validation, and safety for physical AI – Prith warns that agentic AI controlling physical systems requires near‑100 % digital verification to prevent catastrophic misuse. Multi‑level guardrails & data governance – Syam emphasizes layered safeguards, public‑private partnership, rigorous data governance, and human accountability.
Despite operating in different parts of the technology stack (chip design vs. data storage), both speakers converge on the necessity of exhaustive verification and strong data governance as core guardrails for safe agentic AI, a convergence not explicitly anticipated given their distinct business focuses [188-207][239-248].
Overall Assessment

The panel exhibits strong consensus around four core themes: (1) voluntary, industry‑driven consensus standards are preferred to top‑down regulation; (2) a bottom‑up, collaborative approach to standards and policy is essential; (3) security, risk assessment, and layered guardrails—including human oversight and data governance—are critical for safe agentic AI; (4) the OECD serves as the primary global anchor for policy alignment, complemented by calls for inclusive, open standards and multilateral coordination.

Consensus is high across technical, policy, and governance dimensions, indicating that future policy initiatives are likely to prioritize voluntary standards, collaborative stakeholder engagement, robust security frameworks, and alignment with OECD principles, thereby facilitating broader industry adoption while safeguarding societal interests.

Differences
Different Viewpoints
Preferred multilateral platform for coordinating agentic AI governance
Speakers: Danielle Gilliam-Moore, Carly Ramsey, Sam Kaplan, Combiz Abdolrahimi
Danielle Gilliam-Moore identifies the OECD as the primary global anchor for AI policy alignment, citing its principles, reporting framework and influence on EU and US state legislation [386-393]. Carly Ramsey points to Singapore International Cyber Week as a practical venue where governments converge to discuss cyber and AI policy, emphasizing its annual, region-focused nature [404-406]. Sam Kaplan, while agreeing on the OECD’s importance, adds the International Consortium of Safety Institutes as a tactical forum for developing technical standards and taxonomies for agentic AI security [401-402]. Combiz Abdolrahimi expands the set of relevant multilateral bodies to include the ITU, UN and AI-for-Good initiatives, arguing for broader inclusive engagement [432-434].
Speakers disagree on which multilateral forum should be the primary focus for coordinating agentic AI governance. Danielle stresses the OECD as the foundational reference, Carly highlights a regional cyber‑week event, Sam adds a safety‑institutes consortium, and Combiz calls for even broader UN‑based platforms. All agree coordination is needed but differ on the optimal venue.
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions at IGF and other multistakeholder venues have identified the need for a dedicated multilateral platform to coordinate AI governance efforts, reflecting broad agreement on the principle despite differing platform preferences [S53][S48].
Approach to achieving effective governance: global standards versus agile, sector‑specific frameworks
Speakers: Danielle Gilliam-Moore, Carly Ramsey, Sam Kaplan, Austin Mayron
Danielle Gilliam-Moore argues for agile, ministry-driven frameworks that can act faster than lengthy ISO standards, using sector ministries and safety institutes as interim solutions [353-358]. Carly Ramsey and Sam Kaplan emphasize the importance of global, voluntary consensus standards (e.g., NIST, OECD) as the foundation for security and interoperability, advocating for harmonisation across regions [304-313][332-339]. Austin Mayron describes a bottom-up, voluntary standards process coordinated through NIST and CAISI, focusing on industry-driven consensus rather than sector-specific regulation [158-162][156-162].
All speakers seek robust governance for agentic AI but diverge on the mechanism: Danielle promotes fast, sector‑specific, ministry‑led frameworks; Carly, Sam and Austin favour globally‑aligned, voluntary consensus standards. The disagreement lies in the speed and scope of the governance approach.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between universal standards and sector-specific agile frameworks has been highlighted, with calls for modular, interoperable standards ecosystems to balance consistency and flexibility [S52][S50][S53].
Unexpected Differences
Scope of openness in AI models and standards
Speakers: Carly Ramsey, Austin Mayron
Carly Ramsey calls for open models and open standards to make agentic AI accessible globally and stresses the need for compatibility between regional frameworks and NIST standards [304-313]. Austin Mayron, while supporting voluntary standards, does not explicitly address openness of models and focuses on industry-driven, possibly proprietary, standards development and sector-specific listening sessions [32-38][156-162].
Carly’s explicit demand for open, universally accessible standards and models was not mirrored by Austin’s discussion, which centered on voluntary, industry‑driven standards without a clear stance on openness. This divergence was not anticipated given the overall consensus on standards.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on openness reference the foundational openness of the open-source movement and the need to balance openness with trust and security, as discussed in U.S. policy reflections and global AI openness perspectives [S56][S55][S53].
Overall Assessment

The panel largely converged on the importance of standards, guardrails, and collaborative governance for agentic AI. The most notable divergences concern the preferred multilateral coordination mechanism (OECD vs regional events vs safety‑institute consortia) and the balance between global standards and agile, sector‑specific frameworks.

The level of disagreement is low to moderate. While participants share common goals of safe, trustworthy, and inclusive agentic AI, they differ on the pathways to achieve these goals. The disagreements are more about implementation details than fundamental principles, suggesting that consensus on high‑level policy is achievable, but coordination on specific institutional venues and governance models will require further negotiation.

Partial Agreements
All speakers agree that standards are the preferred tool for governing agentic AI, but they differ on the emphasis: Jason focuses on industry preference, Austin on the front‑door government role, Sam on security foundations, and Carly on openness and global accessibility.
Speakers: Jason Oxman, Austin Mayron, Sam Kaplan, Carly Ramsey
Jason Oxman argues that voluntary, industry-driven consensus standards are preferable to top-down regulation for governing agentic AI [172-174]. Austin Mayron describes CAISI’s role in partnering with NIST to develop voluntary standards that unlock adoption [26-29]. Sam Kaplan states that standards organisations are the essential foundation for understanding and mitigating the three-dimensional risk picture of agentic AI [332-339]. Carly Ramsey stresses the need for open, inclusive standards to make agentic AI accessible worldwide and to harmonise regional frameworks [304-313].
Both agree that robust guardrails are essential for safe deployment of agentic AI, but Caroline focuses on payment‑specific guardrails while Syam advocates a broader, multi‑level enterprise framework that includes data governance and accountability.
Speakers: Caroline Louveaux, Syam Nair
Caroline Louveaux proposes four guardrails (know your agent, security-by-design, clear consumer intent, traceability) to ensure trustworthy agentic payments [218-226][227-231]. Syam Nair emphasizes layered guardrails, public-private partnership, rigorous data governance and human accountability for enterprise-wide agentic AI risk management [239-248].
Takeaways
Key takeaways
CAISI (U.S. Center for AI Standards and Innovation) acts as the industry front‑door to the U.S. government, partnering with NIST to develop voluntary, consensus‑based standards that facilitate safe adoption of agentic AI.
Security and trust are foundational; standards bodies (NIST, OECD, International Consortium of Safety Institutes) are seen as the primary mechanism for defining guardrails and risk‑mitigation frameworks for agentic AI.
Open, inclusive standards and open‑model approaches are essential for global accessibility and for harmonising regional frameworks (e.g., Singapore vs. NIST).
Concrete, operational guidance (playbooks, benchmarks, verification/validation methods) is preferred over abstract principles; industry seeks clear, actionable standards.
Agentic AI is already delivering business value: Synopsys uses “agentic engineers” to accelerate chip‑and‑system design; MasterCard deploys AI agents for real‑time fraud detection and payment execution; NetApp embeds agents near storage controllers to improve data quality and surface security threats.
Four core guardrails for trustworthy agentic payments were outlined: (1) Know Your Agent, (2) Security‑by‑Design, (3) Explicit Consumer Intent, (4) Traceability & Auditability.
Risk‑management guardrails must be multi‑layered, include strong data governance and public‑private partnership, and retain ultimate human accountability.
Policy should focus on protecting humans, adopt a continuum‑of‑autonomy approach (human‑in‑the‑loop → human‑on‑the‑loop → human‑in‑command), and regulate applications rather than trying to freeze underlying models.
Agile, sector‑specific governance (leveraging existing ministries or safety institutes) can fill gaps while longer‑term ISO standards are developed.
International coordination is critical; the OECD’s AI principles and reporting framework are identified as the primary global anchor, complemented by regional forums such as Singapore International Cyber Week and multilateral bodies (ITU, UN, AI‑for‑Good).
Technical benchmarks for multi‑agent systems are needed to understand emergent risks before deployment.
Resolutions and action items
CAISI issued a Request for Information (RFI) on AI‑agent security; industry is invited to submit comments within the next month.
CAISI announced sector‑specific listening sessions (healthcare, education, finance) to be held in April to gather barriers to adoption and inform standards development.
MasterCard shared its four‑point guardrail playbook for agentic payments, signalling intent to adopt these internally and encourage industry uptake.
Synopsys highlighted its development of “agentic engineers” as a product offering, indicating ongoing internal deployment.
NetApp is progressing toward level‑3 agentic capabilities (agents near storage controllers) and will continue to refine data‑governance guardrails.
Panelists collectively urged companies to engage with standards bodies (NIST, CAISI, OECD, International Consortium of Safety Institutes) and submit feedback on emerging drafts.
Policymakers were encouraged to look to the OECD for baseline principles and to support sector‑specific, agile regulatory frameworks.
Unresolved issues
Specific technical specifications for AI‑agent security standards and verification/validation metrics remain under development.
How to achieve seamless harmonisation between regional standards (e.g., Singapore’s framework) and U.S./NIST guidelines is still an open question.
The exact definition of the autonomy continuum and the thresholds for shifting from human‑in‑the‑loop to human‑on‑the‑loop have not been concretised.
Benchmarks for multi‑agent system behaviour and emergent risk assessment are not yet established.
Mechanisms for ongoing public‑private partnership on data governance and accountability in large‑scale deployments need further definition.
Details on how smaller, sector‑specific ministries will coordinate with global standards bodies to produce agile frameworks were not fully fleshed out.
Suggested compromises
Adopt voluntary, consensus‑based standards (via NIST/CAISI) rather than top‑down regulation to balance innovation speed with safety.
Implement sector‑specific, agile governance frameworks (through existing ministries or safety institutes) as interim measures while broader ISO standards are being finalised.
Combine open‑model/open‑standard approaches with rigorous security‑by‑design requirements to ensure accessibility without sacrificing trust.
Use a layered guardrail approach (technical safeguards, data‑governance policies, and human accountability) to mitigate the larger blast radius of agentic errors.
Thought Provoking Comments
CAISI was originally founded as the U.S. AI Safety Institute, but last year it was refounded as the Center for AI Standards and Innovation, signaling a shift away from safety principles toward standards and innovation.
Highlights a strategic pivot in government approach—from prescriptive safety to enabling industry through standards—introducing a new framework for public‑private collaboration.
Set the stage for the discussion on how government can facilitate adoption; prompted later speakers to reference standards, voluntary consensus, and the role of NIST, shaping the conversation toward bottom‑up, industry‑driven policy development.
Speaker: Austin Mayron
We have created agentic engineers… they complement human engineers rather than replace them, acting as lower‑level reasoning agents while humans stay in the loop.
Introduces the concept of ‘agentic engineers’ as a hybrid workforce, reframing AI agents as augmentative tools rather than job‑threatening replacements.
Shifted the dialogue toward collaboration between AI and humans; later participants (e.g., Caroline, Syam) referenced human oversight and guardrails, deepening the discussion on human‑in‑the‑loop designs.
Speaker: Prith Banerjee
Imagine an autonomous car in Mumbai being hacked and used as a weapon… software‑defined airplanes could become missiles. We must ensure responsible, safe AI in intelligent product design.
Provides a vivid, high‑stakes scenario that underscores the potential physical dangers of agentic AI, moving the conversation from technical benefits to existential risk.
Created a turning point toward safety concerns; prompted Caroline to discuss guardrails, and Syam to talk about blast radius and accountability, adding urgency and depth to the risk‑management discussion.
Speaker: Prith Banerjee
Our four guardrails for agentic payments: know your agent, security by design, clear consumer intent, and traceability/auditable records.
Offers concrete, actionable policy recommendations that translate abstract safety concepts into specific operational controls.
Anchored the abstract safety talk in practical measures; other panelists referenced these guardrails when discussing enterprise risk management, and it guided the later focus on standards and accountability.
Speaker: Caroline Louveaux
Data governance is the key; agents have no empathy and make decisions solely on data. If data lineage isn’t understood, agents can produce scary outcomes, and accountability always rests with humans.
Connects data quality and governance directly to agentic risk, emphasizing that the root of many failures lies in data rather than the agents themselves.
Expanded the conversation from system‑level guardrails to the foundational role of data, influencing subsequent remarks about multi‑level safeguards and the need for clear operational standards.
Speaker: Syam Nair
We should think of a continuum of agent autonomy and move from ‘human‑in‑the‑loop’ to ‘human‑on‑the‑loop’ or ‘human‑in‑command’ as agents become more reliable, similar to FAA’s evolving drone oversight.
Provides a nuanced framework for scaling oversight with agent capability, offering a clear regulatory pathway rather than a binary safe/unsafe view.
Guided the policy discussion toward graduated oversight mechanisms; later speakers referenced this continuum when discussing guardrails and standards, adding a layer of sophistication to the regulatory conversation.
Speaker: Ellie Sakhaee
Policy should focus on the impact on humans—‘humans before models’—and ask what we should do, not just what we can do, to prevent harm.
Re‑centers the debate on human welfare, reminding participants that technology policy is ultimately about protecting people, not just advancing tech.
Reinforced the human‑centric theme introduced earlier, influencing the tone of later remarks about inclusive standards and global governance.
Speaker: Jennifer Mulvaney
Policymakers need to ensure agentic AI is inclusive and accessible; open standards and harmonization across regions (e.g., NIST vs. Singapore frameworks) are essential.
Raises the issue of global equity and standard compatibility, highlighting the risk of fragmented regulations that could hinder adoption.
Shifted the focus to international coordination; prompted Danielle, Sam, and others to suggest multilateral venues like the OECD and to discuss cross‑regional alignment.
Speaker: Carly Ramsey
The OECD provides the foundational global platform for AI policy; its principles underpin the EU AI Act and many US state initiatives, making it the ideal venue for coordinated standards.
Identifies a concrete, existing multilateral institution that can serve as the hub for harmonized policy, moving the conversation from abstract needs to a specific solution.
Consolidated the earlier calls for coordination into an actionable recommendation; subsequent speakers (Sam, Carly, Ellie) built on this by mentioning complementary bodies and events, solidifying the multilateral governance theme.
Speaker: Danielle Gilliam-Moore
Governments should provide practical, operational standards and playbooks rather than abstract principles; clarity and actionable guidance are what industry needs.
Emphasizes the necessity for concrete implementation tools, bridging the gap between high‑level policy and day‑to‑day industry practice.
Reinforced the demand for actionable guidance, echoing earlier points about standards and influencing the final consensus on the need for clear, practical frameworks.
Speaker: Combiz Abdolrahimi
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the conversation from a broad overview of agentic AI to concrete concerns about safety, governance, and global coordination. Austin’s framing of CAISI’s standards‑focused mission established a bottom‑up policy lens, which Prith amplified with vivid risk scenarios and the notion of ‘agentic engineers.’ Caroline’s four guardrails and Syam’s emphasis on data governance translated these risks into actionable controls, while Ellie’s continuum of autonomy offered a scalable oversight model. The human‑centric reminder from Jennifer kept the dialogue grounded in societal impact. Finally, Carly, Danielle, and Combiz converged on the need for inclusive, harmonized, and practical standards, pinpointing the OECD and other multilateral forums as the vehicles for such coordination. Collectively, these comments shifted the tone from exploratory to solution‑oriented, deepened the analysis of risk and governance, and forged a consensus around the importance of standards, human oversight, and international collaboration in shaping policy for agentic AI.

Follow-up Questions
What specific enterprise guardrails or risk‑management practices should companies adopt for agentic AI deployments?
Understanding concrete guardrails is critical for safe, responsible deployment of agentic AI across industries such as semiconductor design, payments, and data storage.
Speaker: Jason Oxman (asked), Prith Banerjee, Caroline Louveaux, Syam Nair
What should policymakers prioritize when regulating or supporting agentic AI?
Policymakers need clear focus areas—human‑centric safeguards, standards, and agile frameworks—to foster innovation while protecting consumers and society.
Speaker: Jason Oxman (asked), Jennifer Mulvaney, Ellie Sakhaee, Carly Ramsey, Sam Kaplan, Danielle Gilliam‑Moore, Combiz Abdolrahimi
Which multilateral platform or organization should governments use to coordinate global agentic AI standards and policies?
A common venue is needed to harmonize standards across regions (e.g., OECD, International Consortium of Safety Institutes, Singapore International Cyber Week) to avoid fragmented regulations.
Speaker: Jason Oxman (asked), Danielle Gilliam‑Moore, Sam Kaplan, Carly Ramsey, Ellie Sakhaee, Jennifer Mulvaney, Combiz Abdolrahimi
How can industry best provide input to the U.S. administration on guardrails for agentic AI?
Effective industry‑government communication ensures that standards and guidelines address real‑world barriers and regulatory concerns.
Speaker: Jason Oxman (asked), Austin Mayron
What research is needed to develop technical benchmarks for multi‑agent systems and understand emergent risks?
Benchmarks will allow systematic testing of multi‑agent interactions, helping to identify safety and security gaps before deployment.
Speaker: Ellie Sakhaee, Sam Kaplan
How can data governance be ensured for agentic AI to prevent manipulation and guarantee trustworthy decisions?
Since agents act on data, robust data lineage, governance, and accountability mechanisms are essential to avoid erroneous or malicious outcomes.
Speaker: Syam Nair, Combiz Abdolrahimi
How should AI agent security be addressed in regulated sectors (healthcare, education, finance), especially regarding handling of PII?
Regulated industries need standards and benchmarks that demonstrate compliance with privacy laws while enabling agentic AI adoption.
Speaker: Austin Mayron
What approaches can achieve interoperability of AI agents across different sectors and platforms?
Interoperability is key for widespread adoption; research is needed on common protocols, data formats, and integration frameworks.
Speaker: Austin Mayron
How can practical standards, playbooks, and operational clarity be created for governance of agentic AI?
Stakeholders request concrete, actionable guidance rather than abstract principles to implement responsible AI at scale.
Speaker: Combiz Abdolrahimi
How can global harmonization of standards be achieved, especially between the U.S., Singapore, India, and other regions?
Ensuring that regional frameworks (e.g., Singapore’s AI governance) align with global standards (e.g., NIST, OECD) avoids conflicting requirements for multinational firms.
Speaker: Carly Ramsey
How can we assess and mitigate kinetic consequences of agentic AI in physical systems such as autonomous vehicles and aircraft?
Physical AI agents can cause real‑world harm; research into verification, validation, and rigorous testing is needed for safety‑critical domains.
Speaker: Prith Banerjee
How can trust, consumer intent, and auditability be ensured in agentic payment systems?
Payments require clear verification of agents, secure design, explicit consumer consent, and traceability to prevent fraud and maintain confidence.
Speaker: Caroline Louveaux
How can public‑private partnerships define and enforce guardrails for agentic AI within enterprises?
Collaboration between governments and companies is needed to set sector‑specific rules, especially around data provenance and accountability.
Speaker: Syam Nair
How can standards bodies move from high‑level principles to tactical, actionable standards for agentic AI security?
Translating broad AI safety concepts into concrete security specifications will help industry implement protective measures effectively.
Speaker: Sam Kaplan
How can the OECD be leveraged effectively as a central forum for AI policy coordination?
The OECD’s principles have become a global reference; understanding its mechanisms can guide nations in aligning regulations.
Speaker: Danielle Gilliam‑Moore, Sam Kaplan
What role can the International Consortium of Safety Institutes play in developing tactical standards for agentic AI security?
This consortium could bridge the gap between high‑level policy and technical standards, focusing on security taxonomy and measurement.
Speaker: Sam Kaplan
How can the Singapore International Cyber Week serve as a platform for worldwide policy dialogue on agentic AI?
Annual cyber‑security gatherings can bring together diverse governments to discuss AI governance, fostering inclusive standard‑setting.
Speaker: Carly Ramsey
What research is needed to create benchmarks for AI agent identity and verification?
Robust identity verification is essential for secure agent interactions; standards are currently being drafted and need further study.
Speaker: Austin Mayron (referencing NIST publication)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.