Agentic AI in Focus: Opportunities, Risks and Governance
20 Feb 2026 11:00h - 12:00h
Summary
The opening panel was convened to explore both the business case for agentic AI and the public-policy measures needed to encourage and safeguard its use [1-3]. Austin Mayron, Acting Director of the U.S. Center for AI Standards and Innovation (CAISI), introduced the agency’s placement within the Department of Commerce and its partnership with NIST to develop voluntary standards that help industry adopt AI agents [15-20][26-29].
CAISI has recently launched an AI-agent standards initiative, issued a request for information on agent security, and announced sector-specific listening sessions on healthcare, education and finance to gather industry challenges [32-38][39-41]. Prith Banerjee described how Synopsys is creating “agentic engineers” that augment human designers in rapid chip-to-system development, enabling yearly product cycles that would otherwise be impossible [73-81][88-94]. Caroline Louveaux explained that Mastercard is moving from AI that merely recommends to AI agents that act, notably in real-time fraud detection, and she outlined four guardrails (knowing the agent, security by design, clear consumer intent, and traceability) to ensure safe, accountable payments [105-112][218-226][229-236]. Syam Nair highlighted that NetApp is embedding agents near storage controllers to improve data quality for AI workloads, noting that the technology is still early (around level three of a five-level autonomy scale) and that multi-level guardrails are required [132-140][141-148].
Austin urged a bottom-up, industry-driven approach to standards, citing ongoing RFI processes and upcoming listening sessions, and suggested that CAISI could develop benchmarks for handling personally identifiable information in regulated sectors [156-164][168-171]. Prith warned that autonomous, software-defined systems such as cars or aircraft could become weapons if compromised, emphasizing the need for exhaustive verification and validation before hardware prototyping [191-207]. Syam added that data governance is critical because agents act on data without empathy, and that ultimate accountability must remain with human owners, requiring coordinated public-private guardrails [240-248].
Panelists agreed that voluntary, consensus-based standards are preferable to top-down regulation and identified the OECD as the leading multilateral forum for AI principles and reporting frameworks [172-176][386-393]. Additional recommendations included developing technical benchmarks for multi-agent systems and leveraging events such as Singapore International Cyber Week and global bodies like the ITU and UN to foster inclusive coordination [401-406][429-434]. The discussion concluded that aligning industry standards, robust guardrails, and international cooperation will be essential to unlock the benefits of agentic AI while managing its risks [435-440].
Keypoints
Major discussion points
– Government-industry collaboration on standards and security for agentic AI – CAISI (the U.S. Center for AI Standards and Innovation) explains its placement within the Department of Commerce and NIST, its role as a “front door” for industry, and its recent AI-agent standards initiative, including RFIs on security and sector-specific listening sessions [13-18][19-30][32-38][156-166][168-171].
– Business use-cases of agentic AI across sectors
– Semiconductor design: Synopsys creates “agentic engineers” that augment human designers to handle the exploding complexity of chip and system design, enabling faster product cycles [55-73][88-95].
– Payments & fraud prevention: Mastercard moves from recommendation-only AI to “agentic” AI that detects and blocks fraudulent transactions in milliseconds, and it has defined four guard-rails (know-your-agent, security-by-design, clear consumer intent, traceability) to ensure safe autonomous payments [105-115][218-231].
– Data-centric cloud services: NetApp develops storage-proximate agents that improve data quality and enable real-time security actions, while emphasizing the need for multi-level guard-rails and strong data governance [132-141][235-244].
– Enterprise guard-rails and risk-management concerns – Panelists stress that agentic systems must operate under clear permissions, human oversight, and robust governance. Mastercard’s four guard-rails, NetApp’s layered safeguards (public-private partnership, data lineage, human accountability), and Prith’s safety warnings about autonomous physical systems (e.g., weaponised cars or aircraft) illustrate the breadth of risk-management strategies [179-208][218-231][236-248].
– Policy recommendations focused on voluntary, consensus-based standards and global coordination – Austin highlights a bottom-up approach to standards development; Ellie urges regulators to consider the autonomy continuum and human-in-the-loop vs. human-on-the-loop models [277-284][287-289]; Carly calls for open standards and cross-regional harmonisation (Singapore, India); Danielle and Sam point to the OECD as the primary multilateral venue, while also noting the role of safety-institute consortia; Jennifer adds that regional groups should complement OECD work; Combiz stresses inclusion of bodies such as the ITU and UN [386-398][401-402][418-423][430-434].
– Purpose of the panel – The session is framed as a two-part discussion: first to map business use-cases of agentic AI, then to explore public-policy implications and what governments should do to encourage safe adoption [1-6][249-256].
Overall purpose/goal
The panel aims to bridge the business and policy worlds by showcasing concrete agentic-AI applications, identifying the practical challenges and guard-rails needed for safe deployment, and delivering actionable recommendations to policymakers on how standards, coordination mechanisms, and regulatory approaches can foster responsible innovation while protecting consumers and critical infrastructure [1-6][249-256].
Tone of the discussion
– Opening: Formal and forward-looking, with a clear agenda-setting tone [1-6][13-18].
– Technical deep-dives: Energetic and optimistic as speakers describe transformative use-cases (Synopsys, Mastercard, NetApp) [55-73][105-115][132-141].
– Cautionary moments: A shift to a more urgent, even “scary” tone when highlighting safety risks in physical AI (autonomous cars, weaponised systems) and the sushi-order anecdote [179-208][228-231].
– Collaborative & constructive: Returns to a cooperative tone as panelists discuss standards, share best-practice recommendations, and acknowledge the need for global coordination [156-166][277-284][386-398].
– Closing: Appreciative and hopeful, emphasizing partnership between industry and governments and thanking participants [435-441].
Overall, the conversation moves from informative introductions to enthusiastic showcase of technology, through a brief but pointed warning about risks, and culminates in a collaborative, solution-oriented tone aimed at shaping policy.
Speakers
– Jason Oxman
– Area of expertise: Technology industry leadership, AI policy moderation
– Role / Title: Moderator/Host; President & CEO of the Information Technology Industry Council (ITI) [S14][S15]
– Austin Mayron
– Area of expertise: AI standards, innovation policy, government-industry liaison
– Role / Title: Acting Director, U.S. Center for AI Standards and Innovation (CAISI) [S9][S10]
– Prith Banerjee
– Area of expertise: Semiconductor design automation, AI-driven engineering
– Role / Title: CTO and SVP, Synopsys (design software automation semiconductor company) [S17][S18]
– Caroline Louveaux
– Area of expertise: Payments security, privacy, AI-enabled fraud detection
– Role / Title: Chief Privacy, AI and Data Responsibility Officer, Mastercard [S16]
– Syam Nair
– Area of expertise: Multi-cloud storage, data quality, AI-driven data preparation
– Role / Title: Chief Product Officer, NetApp (global multi-cloud service provider) [S1]
– Danielle Gilliam-Moore
– Area of expertise: AI public policy, governance frameworks
– Role / Title: Director of Global Public Policy, Salesforce (leads AI policy work) [S2][S3]
– Combiz Abdolrahimi
– Area of expertise: Governance, standards, policy implementation (former regulator)
– Role / Title: Industry professional with former government/regulatory experience (specific title not specified) [S4]
– Ellie Sakhaee
– Area of expertise: AI public policy, machine learning, human-in-the-loop governance
– Role / Title: Public Policy Team Member, Google; Ph.D. in Computer Science / Machine Learning [S5][S6]
– Sam Kaplan
– Area of expertise: Cybersecurity policy, AI risk standards
– Role / Title: Assistant General Counsel for Global Policy, Palo Alto Networks [S7]
– Jennifer Mulvaney
– Area of expertise: Technology policy advocacy, human-centered AI
– Role / Title: Public Policy Lead, Adobe [S11]
– Carly Ramsey
– Area of expertise: Internet infrastructure, AI standards, regional policy coordination
– Role / Title: Lead, Public Policy for Asia Pacific, Cloudflare (based in Singapore) [S12][S13]
Additional speakers:
– None (all speakers appearing in the transcript are included in the list above).
The discussion opened at the AI Impact Summit, organized by the Information Technology Industry Council (ITI), with Jason Oxman outlining a two-part agenda: first to map the business case for “agentic AI” – AI that can act autonomously rather than merely provide recommendations – and second to explore the public-policy measures needed to encourage its use while safeguarding society [1-6].
Austin Mayron (Acting Director, U.S. Center for AI Standards and Innovation – CAISI) then described CAISI’s role and organisational context. CAISI sits within the Department of Commerce and, as the “front door for industry to the United States government,” serves as the primary entry point for industry engagement with federal AI policy [13-20]. He clarified that “the other aspect of our organization that bears note is that we are co-located with the National Institute of Standards and Technology (NIST)” [13-20]. CAISI also draws talent from Frontier AI Labs, which helps explain novel concepts to other parts of the administration [13-20]. The centre evolved from the U.S. AI Safety Institute to a standards-and-innovation focus in June 2025, signalling a shift from prescriptive safety to enabling innovation [16-18][S1][S19].
Just this week, CAISI kicked off an AI-agent standards initiative [32-38]. It issued a Request for Information (RFI) on AI-agent security [32-38] and, concurrently, “CAISI also points to a draft NIST-ITL publication on AI identity and verification that is currently open for public comment” [32-38][S-X]. Within days it announced sector-specific listening sessions on healthcare, education and finance to collect industry-level barriers [32-38][156-166][168-171].
The business-case speakers followed.
Prith Banerjee (Synopsys) presented a hardware-centric use case. Synopsys, the leading electronic-design-automation provider, is expanding from chip design to “chips-to-systems” after acquiring Ansys for $35 billion [61-63]. He described “agentic engineers” – AI-driven agents that perform low-level reasoning tasks in chip and system design, complementing rather than replacing human engineers [90-94]. Accelerating product cycles in automotive and aerospace (from multi-year to annual cadences) and the growing complexity of designs (now trillions of transistors) exceed what human designers alone can manage [73-84][85-88]. These agents enable rapid verification and validation before hardware prototyping, a necessity when physical AI controls safety-critical functions such as brakes or steering [85-87][88-95].
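The verification-before-fabrication gate Banerjee describes can be pictured as a simple sign-off check that blocks hardware prototyping until every design block approaches exhaustive digital coverage. The sketch below is an illustrative toy, not Synopsys tooling; the threshold, block names and function are all hypothetical.

```python
SIGNOFF_THRESHOLD = 0.999  # "as close to 100% as possible" - nothing is 100%

def ready_for_prototype(coverage_by_block: dict[str, float]) -> bool:
    """Return True only if every design block meets the coverage bar."""
    failing = {b: c for b, c in coverage_by_block.items() if c < SIGNOFF_THRESHOLD}
    for block, cov in failing.items():
        print(f"HOLD: {block} at {cov:.2%} coverage, below sign-off bar")
    return not failing

# Safety-critical blocks (brakes, steering) must clear the same bar as the rest.
print(ready_for_prototype({"brake_ecu": 0.9995, "steering_ecu": 0.97}))  # False
```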
Caroline Louveaux (Mastercard) offered a financial-services example. Mastercard has moved from AI that merely recommends actions to “agentic AI” that actively detects suspicious transactions, triages fraud signals and initiates secure payment flows in milliseconds [105-108][109-115]. She emphasized that such AI agents must operate within clearly defined permissions and be subject to continuous human oversight [111-115]. To institutionalise this, Mastercard devised a four-point guardrail playbook: (1) “Know Your Agent” – verify the agent’s legitimacy; (2) security-by-design – protect credentials through tokenisation; (3) explicit consumer intent – ensure the user authorises each purchase; and (4) traceability/auditability – maintain records for dispute resolution and regulator confidence [218-236].
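These four guardrails compose naturally into a gating pipeline that either clears or blocks an agent-initiated payment. The sketch below is a minimal illustration of that idea, not Mastercard’s implementation; the agent registry, token prefix and mandate comparison are invented stand-ins.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

TRUSTED_AGENTS = {"agent-7f3a"}  # hypothetical registry of verified agent IDs

@dataclass
class PaymentRequest:
    agent_id: str
    token: str              # tokenized credential, never the raw card number
    amount: float
    merchant: str
    consumer_mandate: str   # the instruction the consumer actually gave

AUDIT_LOG: list[dict] = []  # guardrail 4: every decision is recorded

def authorize(req: PaymentRequest, mandate_on_file: str) -> bool:
    """Apply the first three guardrails in order; log the outcome (guardrail 4)."""
    checks = {
        "know_your_agent": req.agent_id in TRUSTED_AGENTS,
        "security_by_design": req.token.startswith("tok_"),  # token, not a PAN
        "consumer_intent": req.consumer_mandate == mandate_on_file,
    }
    approved = all(checks.values())
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "agent": req.agent_id,
        "merchant": req.merchant,
        "amount": req.amount,
        "checks": checks,
        "approved": approved,
    })
    return approved

# The sushi lesson: a question is not a mandate, so intent verification fails.
req = PaymentRequest("agent-7f3a", "tok_9c2e", 84.50, "sushi-bar",
                     consumer_mandate="are you able to buy sushi?")
print(authorize(req, mandate_on_file="order my usual lunch, max $20"))  # False
```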
Syam Nair (NetApp) described a data-centric deployment. NetApp embeds AI agents close to storage controllers so that data can be prepared for AI workloads without moving it through cumbersome pipelines [135-137]. This proximity improves data quality, especially for unstructured data, and enables real-time security actions such as responding within the 59-second average breakout time of modern threats [138-140]. He placed NetApp’s capability at roughly level 3 of a five-level autonomy spectrum, indicating an early-stage but rapidly progressing effort [141-148]. Nair warned that the “blast radius” of an error grows when many agents operate across an enterprise, so guardrails must be layered: public-private partnership on policy, rigorous data-governance to preserve lineage, and the principle that ultimate accountability remains with human owners [239-248].
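Nair’s layered-guardrail argument can be sketched as a set of independent checks that every agent action must clear, with the blast-radius point reduced to simple arithmetic. This is a hypothetical illustration of the reasoning, not NetApp’s product logic; the level names, thresholds and checks are assumptions.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Rough five-level scale Nair references: assisted copilots up to
    networks of autonomous agents. The names here are invented labels."""
    ASSISTED = 1
    COPILOT = 2
    SUPERVISED_AGENT = 3   # roughly where Nair places NetApp today
    AUTONOMOUS_AGENT = 4
    AGENT_NETWORK = 5

def blast_radius(agents: int, actions_per_agent: int) -> int:
    """Toy arithmetic behind the 'blast radius' point: the surface an error
    can touch grows multiplicatively with the size of the agent network."""
    return agents * actions_per_agent

def may_act(level: AutonomyLevel, lineage_verified: bool,
            human_owner: str | None) -> bool:
    """Layered guardrails: each check can independently veto an action."""
    if not lineage_verified:   # data governance: no action on unknown lineage
        return False
    if human_owner is None:    # accountability stays with a human owner
        return False
    return level <= AutonomyLevel.SUPERVISED_AGENT  # cap autonomy by policy

print(blast_radius(agents=1, actions_per_agent=100))     # one actor: 100
print(blast_radius(agents=500, actions_per_agent=100))   # a fleet: 50000
print(may_act(AutonomyLevel.SUPERVISED_AGENT, True, "data-platform-owner"))  # True
```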
The panel then turned to enterprise-wide risk management and guardrail design. Prith Banerjee warned that software-defined physical systems (autonomous cars, aircraft) could be weaponised if compromised, describing a hypothetical cyberattack that turns an autonomous car on the streets of Mumbai into a weapon [191-198]. He argued that exhaustive digital-level verification (aiming for near-100% coverage) is essential before any hardware is fabricated [205-207]. Caroline’s four-guardrail framework and Syam’s layered approach echoed this need for clear permissions, human-in-the-loop oversight, and auditability [218-236][239-248]. Austin reinforced that standards development must be bottom-up, gathering input from field experts before defining problems and adopting a “humility-driven” approach that treats industry as the primary source of insight [158-162]. He also highlighted the importance of interoperability in future standards [172-174].
All panelists agreed that voluntary, consensus-based standards driven by industry-government collaboration are preferable to prescriptive regulation [172-174][19-22][26-29][332-339][304-307][364-368]. Carly Ramsey stressed that open models and open standards are needed to avoid fragmented regional regimes [304-307][S1]. Combiz Abdolrahimi added that abstract principles must be translated into concrete playbooks, benchmarks and operational guidance [364-368].
Policy recommendations converged on a human-centric, risk-based approach. Jennifer Mulvaney (Adobe) reminded the audience that policy should always protect humans first, asking “what does this mean for humans and how can we prevent harm?” [263-270][S73]. Ellie Sakhaee (Google) proposed regulating the applications of AI agents rather than the underlying models and suggested a continuum of autonomy that moves from “human-in-the-loop” to “human-on-the-loop” and eventually “human-in-command” as agents become more reliable [277-284][285-286]. This graduated oversight model mirrors the FAA’s approach to drones, which relaxes from requiring the pilot to keep the drone in sight toward pilot-on-the-loop and pilot-in-command models as system safety improves [285-286].
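Sakhaee’s continuum maps cleanly onto a policy table that selects an oversight mode from an agent’s demonstrated reliability and the stakes of its actions. The sketch below is a hedged illustration of that graduated model; the reliability thresholds and stakes flag are invented, not drawn from any regulator’s rules.

```python
from enum import Enum

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "approve every step"
    HUMAN_ON_THE_LOOP = "monitor continuously, intervene on alerts"
    HUMAN_IN_COMMAND = "set objectives, audit outcomes"

def oversight_for(reliability: float, high_stakes: bool) -> Oversight:
    """Pick an oversight mode from demonstrated reliability and stakes,
    echoing the FAA drone analogy: supervision relaxes only as the
    system earns autonomy. Both thresholds are hypothetical."""
    if high_stakes or reliability < 0.95:
        return Oversight.HUMAN_IN_THE_LOOP
    if reliability < 0.999:
        return Oversight.HUMAN_ON_THE_LOOP
    return Oversight.HUMAN_IN_COMMAND

print(oversight_for(0.90, high_stakes=False).name)    # HUMAN_IN_THE_LOOP
print(oversight_for(0.99, high_stakes=False).name)    # HUMAN_ON_THE_LOOP
print(oversight_for(0.9995, high_stakes=False).name)  # HUMAN_IN_COMMAND
```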
International coordination was identified as essential. Danielle Gilliam-Moore (Salesforce), Sam Kaplan (Palo Alto Networks) and Jennifer Mulvaney all pointed to the OECD’s AI principles and reporting framework as the primary global anchor, noting its influence on the EU AI Act and numerous U.S. state drafts [386-393][401-403][418-420]. Carly Ramsey added that regional events such as Singapore International Cyber Week provide a practical venue for cross-border dialogue and for aligning Singapore’s AI-governance framework with NIST standards [404-406][S1]. Sam highlighted the International Consortium of Safety Institutes as a tactical forum for developing technical taxonomies, while Combiz broadened the scope to include the ITU, UN and AI-for-Good initiatives [401-402][432-434].
Modest disagreement emerged over the optimal multilateral platform and the balance between global standards and agile, sector-specific frameworks. Danielle advocated a top-down reliance on the OECD; Carly preferred a region-focused cyber-week; Sam suggested a safety-institute consortium; and Combiz called for broader UN-based engagement [386-393][404-406][401-402][432-434]. Separately, Danielle argued for fast, ministry-driven governance to fill gaps left by slow-moving ISO processes, whereas others (Carly, Sam, Austin) emphasised the need for globally harmonised voluntary standards [353-358][304-313][158-162].
Key takeaways
1. Safe, widespread adoption of agentic AI depends on (i) voluntary, consensus-based standards developed through a bottom-up industry-government partnership; (ii) layered enterprise guardrails that embed security-by-design, clear permissions, data-governance and human accountability; (iii) a human-centric policy lens that scales oversight with agent autonomy; and (iv) coordinated international effort anchored by the OECD but complemented by regional forums and technical consortia.
2. Unresolved issues include (a) precise technical specifications for AI-agent security and benchmarks for multi-agent interactions; (b) mechanisms for harmonising regional and global standards; and (c) definition of autonomy thresholds for shifting oversight models.
Continued collaboration among standards bodies, industry, academia and governments will be required to close these gaps.
Transcript
Our second discussion will be this panel, which will discuss the business-case use of agentic AI. And then we’ll follow that with a second panel, which will discuss the public policy implications of agentic AI. That is to say, what government should be doing to encourage and to safeguard the use of agentic AI. We all know that agentic AI is quite literally the AI of agents. And there’s been a lot of discussion here at the AI Impact Summit about how agentic AI is creating new opportunities for jobs, for societal benefits, for use cases across different industries. And one of the most important questions is, of course, what public policy solutions are going to be necessary to encourage the use of agentic AI.
So I’m very pleased to welcome as our opening speaker, Austin Mayron, who is the Acting Director of the Center for AI Standards and Innovation, and a senior (you have the longest title in the world, Austin, thank you) Legal Advisor to the Undersecretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office. Austin, we are thrilled to have you here. You have some very interesting updates on how the U.S. administration is approaching agentic AI, including what the office is doing, which I think is enormously important as well. So you’re going to join us for a few minutes of table-setting remarks, if you will, and we’re thrilled to have you here.
Austin, I’ll turn it over to you.
Absolutely. Thank you, Jason. Thank you to ITI, and thank you all for coming today. As Jason said, my name is Austin Mayron, and I’m the Acting Director of the U.S. Center for AI Standards and Innovation, also called CAISI. CAISI was originally founded as the U.S. AI Safety Institute, but last year in June of 2025, Secretary of Commerce Howard Lutnick refounded us as the Center for AI Standards and Innovation. That signaled a shift away from safety principles, more towards standards and innovation. I think there’s two organizational aspects of CAISI that are worth note. The first is that we’re located within the Department of Commerce. We are very focused on helping industry. The Secretary has tasked us to be the front door for industry to the United States government, and we really see ourselves as serving in that role.
We collaborate with various aspects of the AI ecosystem, including the Frontier Labs, for instance, on pre-deployment evaluations. And we like to partner with industry to help understand government. As one example, sometimes there’s a lack of AI expertise within the U.S. government. And CAISI, because we have talent from Frontier AI Labs, we’re able to help explain novel concepts to other aspects of the administration. The other aspect of our organization that bears note is that we’re co-located with NIST, the National Institute of Standards and Technology. And the thing that’s worth noting there is that NIST, throughout its history, it hasn’t been a regulatory organization. It’s been an organization that’s promoted economic growth and technological development by developing standards and facilitating the development of standards and best practices.
And so CAISI, we see our role as partnering with industry to develop the standards and best practices they need to flourish. And we’re here today to talk about AI agents, which is an incredibly timely topic. And so I thank ITI for organizing this. Just this week, CAISI, my organization, we kicked off an AI agent standards initiative. Our goal is to hear from industry how traditional standards work, best practices, guidelines can help unlock and facilitate adoption. So one area where we’ve already started that work is on AI agent security. We put out a request for information, or RFI, about what challenges industry is facing with AI agent security. Our colleagues at NIST at the Information Technology Laboratory also have a publication out for comment on AI identity and verification, which we encourage you, if you’re interested, please look at the documents, review them, send in your comments.
We also announced this week that we’re going to be holding sector-specific listening sessions on barriers to adoption, in the sectors of health care, education and finance. And our goal here is we want to learn actually what are the challenges that industry is facing. These AI agents, they have tremendous potential, but we want to understand how CAISI and NIST and the U.S. government can help unlock adoption through standards and best practices. So I’m delighted to be here and take part in this conversation and learn more from my fellow panelists.
Thanks, Austin, so much. Really appreciate your being here and helping set the stage for us for our discussion of agentic AI. As I mentioned, we have three great experts here to start us off on the business side discussion before we move to the policy side discussion, because I really think it’s important for us to understand exactly what use cases of agentic AI are happening across different segments of the AI stack. So we’re very fortunate to have three experts here to help us with this discussion. Prith Banerjee is the CTO and SVP of Synopsys, the design software automation semiconductor company. Great to have you here, Prith. Caroline Louveaux is Chief Privacy, AI and Data Responsibility Officer at Mastercard.
Caroline, thanks for being here. And also delighted to have Syam Nair, who is Chief Product Officer at NetApp, the global multi-cloud service provider. And so the three of them are each going to share a couple minutes of opening remarks on agentic AI use cases. What we’ve asked them each to do is share with all of you kind of the top favorite agentic AI use case that’s happening so that we can use that as a way to frame the discussion around business and policy solutions. So if we could, Prith, I’ll start with you for your favorite agentic AI use case that’s happening at Synopsys.
Sure. So I’m Prith Banerjee, and my role is to look at sort of future directions of where Synopsys is headed. And agentic AI is actually the core of this. But before I do that, I want to share with you what Synopsys does. Synopsys is the leading provider of electronic design automation tools and IP to design chips. So the chips from, say, NVIDIA or AMD or Broadcom, Qualcomm, these billion-transistor chips, trillion-transistor chips, are designed with Synopsys tools. But the opportunity that Synopsys has seen is these chips are going into systems, systems that are like cars or aircraft or spacecraft or data centers, healthcare, et cetera, right? So we have this vision of chips to systems, and because of that, Synopsys recently acquired Ansys for $35 billion, right, to be a chips-to-systems company.
I came into Synopsys as CTO at Ansys. So now the challenge that I want to share with all of you is as you are designing a car, right, it’s a software-defined car, right, a Tesla car has more than 100 million lines of C code in that car. That code runs on an ECU, an ECU designed by NXP or STMicro or Qualcomm. And that chip is still not yet designed, right? It is being designed with, say, Synopsys tools, but you’re writing software on the tool or on that chip, and so you have to do what is called software-defined verification and validation, right, before the chip is designed. And that code will control the electric brakes, the electric steering, the autonomous driving of the car.
And the car is a physical product, it is being driven on the road, right? And so you use Ansys physics simulation like Fluent for aerodynamics or LS-DYNA for crash or HFSS for electromagnetics. So essentially what we are doing is bringing the physics of the world around us, powered by AI, along with the chip design, in what we call intelligent product design: silicon design, the chip inside any complex design; software-enabled, so you can do software updates; and AI-driven. So that’s all the context. And we are a $10 billion company with a market cap of $100 billion. So the agentic AI part is the following, that the pace of innovation in the world is changing.
You used to design a new car every 7 years or maybe 5 years. That pace of innovation is changing. Like Tesla: Elon Musk said we have to do it every year. Every year they want to bring a new car to market. Or NVIDIA, Jensen, right? The chip design used to be every three years. NVIDIA’s Jensen says you have to do it every year. So the pace of innovation is becoming faster, and the complexity. You used to have a chip with maybe a million transistors. Now it’s a billion transistors. It’s a trillion transistors. It’s incredibly complex. And then you have the chip with all the complicated system. The complexity is so hard that you used to have human designers at Qualcomm, NVIDIA, etc.
who could design those things using the Synopsys tools. You cannot do that anymore. It is very, very hard. That’s where agentic AI is coming in. So at Synopsys what we have created is agentic engineers. These are like human engineers that are not trying to take the jobs of human engineers away. They are going to complement the job of a human engineer. So at Broadcom, Qualcomm, you have a hundred thousand engineers, but you will be complemented with another 200,000 agentic engineers from Synopsys who will do the lower-level reasoning job like a human, right? But the human will still be in the loop to make sure that you are not doing drastic sort of bad things, right?
This is the incredible opportunity. But as the world talks about agentic AI in the world of large language models and data and words as tokens, our world is what we call physical AI, which is physics, and it’s the physical AI part where we are applying our agentic engineering technology. A very, very exciting area.
That’s great. And I love how you described the human engineers being complemented by, not replaced by, the agentic AI that’s helping them be more efficient and do their jobs better. Caroline, I think of payments networks as having used AI for decades, literally. The fact that you can take a plastic card and tie it back to a human being, no matter where they are in the world, is actually truly remarkable. When you think about how payments networks work, it is truly remarkable, the technology, especially since you’re processing literally millions of transactions a second around the world. So with that, you look over global AI for Mastercard, and I’m curious how agentic AI is influencing the work that you and your colleagues do to make these payments rails run around the world.
Absolutely, and hi, everyone. It’s great to be here with you. As you said, for Mastercard, AI is nothing new. We have been leveraging AI for decades to make our payment network safer and more secure for everyone. Now with agentic, we are moving from AI systems that recommend to AI systems that act, right? And in cybersecurity and payments, the shift is already real today. Agentic AI systems are being deployed, for example, to detect suspicious transactions, to triage fraud signals, to initiate secure payment flows. If you think about it, if we want to be able to detect and to block fraud in real time, decisions have to be made in milliseconds, at scale. And of course, while speed and scale matter a lot, accountability is a must.
What’s important is that these agents don’t make decisions with open-ended autonomy. They must act within clear values, principles, within clear permissions. What is the agent allowed to do? What is it not allowed to do? And when does a human need to step in? And of course, humans have to have full oversight end-to-end. So, I mean, there are many other use cases. I’m happy to talk more about that, but I think that’s really our main use case. But of course, the technology is moving really, really fast. We are now talking about this multi-agent ecosystem that raises a whole new range of opportunities as well as novel challenges. And so that’s where these kinds of summits where we all come together are really, really important to really get it right.
I love how you characterize it as moving from what we call assistive AI to operational AI. In other words, instead of just helping with a task, the AI, as an agent, can actually take a task on. There’s still oversight in the system, and, I should have previewed this, we’re going to come back around and talk to the panelists about guidelines and protections and, as Austin importantly noted at the outset, the security of the system, how that’s built in as well. And, Syam, I want to come to you next. The multi-cloud that NetApp operates obviously is moving data around the world on behalf of customers, storing data around the world and allowing your customers to access data in a multi-cloud environment.
How is agentic AI helping NetApp with that level of customer service?
Thank you. So NetApp actually, as you said, multi-cloud, we power both public cloud as well as private cloud. Much of the largest data infrastructure is actually built on NetApp, from a file storage standpoint. One of the key challenges in AI itself is having quality of data. Data quality is super important, and the previous session actually talked about it. And data quality, especially from unstructured, truly unstructured data, how do you really get the structured value out of it? That’s where agents can actually help, which is we are developing agents which are sitting closer to the storage controller. If you know the storage architecture, that means that without moving data and going through cluttered pipelines to position the data ready for AI, you can actually have the data at the source itself, ready for AI.
And how this helps is, you know, in many of the areas, cybersecurity, as it continues to grow as a threat, you know, 59 seconds is the average breakout time of a threat these days, risk and threat will become super important to manage. And you need to do that at the layer where the data sits. So agentic has a really good use case with respect to that. We are still in our early journey in terms of building these capabilities. One would say, look, if you have five levels of agentic AI, where level one is mostly assisted, co-pilot, to autonomous agents, running a network of agents at level five, we’re still in that journey, somewhere in the three range.
And that’s what we see from customers in terms of how they want to leverage data. So that’s one of my favorite use cases in preparing the data, making sure that the right data is available both for the agents and the agents can make it available for the use cases.
Yeah, interesting. So the agents are actually helping you expose any risks that may need to be addressed as part of that provisioning of data. And, Austin, I’m going to ask you to set up our second round question with me. The industry has a responsibility to inform governments about risks and how they’re being addressed. So as we move into the next question for the panel around enterprise guardrails that companies are seeing, is there anything in particular you would flag that you’re looking to hear from industry about those guardrails? You are overseeing an operation that asks for industry input, which I think is rare and particularly great. So thank you for doing that. Perhaps some practice tips that you can provide to everyone in the room about what it is helpful to provide government, the U.S. administration or other government colleagues that you’ve heard from on these issues, and how it’s helpful to provide that information.
Yeah, absolutely. So at CAISI, our focus right now is truly on unlocking innovation and adoption. And we work in the standards space, and so we look to how NIST-fostered standards and best practices and guidelines documents can help with that innovation and that adoption. And so the NIST process, the way it normally works is we like to gather and collaborate with industry to understand the challenges they’re facing. It’s more of a bottom-up, grassroots approach than a top-down one. We’re not sitting there in Washington and saying, you know, this is the problem and we’re going to fix it. We take a little bit of humility and say, we don’t actually know what the problem is until we talk to the people who are closest to the issue, because we only have a narrow slice of the world from our vantage point, and the people who are actually in the field working on innovation, working on adoption, they have a better sense of what the barriers are.
And so we encourage everyone in industry and across the ecosystem to really engage with us, to tell us the problems that you’re encountering, and we have structured formal ways for you to do that. For instance, the request for information on AI agent security, I think it’s open for about another month, and some have already submitted comments, but we look forward to comments. As I said, we’re also convening listening sessions, I think in April, on barriers to adoption, particularly on agent issues for education, healthcare, and finance. We’re starting with those three sectors, but we really welcome that type of engagement, because we want to facilitate adoption. And one example that I sort of like to use…
I don’t know if it’s actually a barrier to adoption, but let’s say in a regulated field like healthcare or education, there’s PII, and there’s a reluctance to adopt because it’s unclear how the AI agents and systems are treating PII and whether it will satisfy regulatory burdens. CAISI could play a role in helping settle concerns about that because we could develop benchmarks, methodologies, and evaluation methods to give industry the confidence they need that, for instance, the model that they’re looking to procure and adopt and implement handles PII the way they need to to satisfy their regulatory obligations. So that’s a way where CAISI, through measurement science, best practices, and standards, can help facilitate adoption. We’re also looking at interoperability, and we’ll have more about that in the coming months.
That’s great. Really appreciate that, Austin. And I love the focus on voluntary, industry-driven, consensus-based standards because that’s how the tech industry prefers to operate. It’s better than government regulation, particularly because those standards are global in nature, and NIST is a great example, as you noted, of support for those voluntary, consensus-based industry standards under which we would all prefer to operate. And, Prith, I’ll come back to you on this question of, I guess I’d call them guardrails, kind of the enterprise guardrails around risk management that you’re putting in place. Governments are paying attention. We want to handle these issues in the private sector. What are you seeing that’s important as far as those enterprise guardrails for risk management?
So that’s a great question. Actually, at the AI Summit yesterday, there were a lot of speakers, from Prime Minister Modi to President Macron, and everybody kind of talked about responsible, safe AI and AI for everyone. But I want everybody in the audience to understand what is going on in this world, right? So there is a problem, right? You have a video that you can watch on, say, YouTube or Facebook, and you want to prevent a young child from watching that, right? And that is responsible AI, and you want to make sure that a 12-year-old doesn’t watch it. But if he or she watches it, it’s not the end of the world. I mean, yes, you have seen this, but the world that we live in is this intelligent product design, right?
You are designing a car, and we have, as Syam was mentioning, level 1, which is assistive, all the way to level 5, which is fully autonomous. Now, imagine a world, I’m now doing the scary part so you understand how scary it can be, right? An autonomous car that is driving on the streets of Mumbai, right? And it’s supposed to be autonomous, making sure the pedestrians and the cows are being avoided. But suppose there is a cyber attack, right? And somebody goes in, and you want to use that car as a weapon, right? As you know, there are terrorists that go in, and they bang into these things, right? So we have to make sure that these software-defined systems, just imagine an airplane, right?
You know what has happened in the past. On 9/11, an airplane hit a thing. So you could imagine a software-defined airplane being used as a missile, right? So this is how important it is because unlike the world of Facebook and Google, and I’m not undermining Facebook, Google, I’m just saying you are dealing with people watching stuff and saying like, unlike, right? We are dealing with physical AI interacting with the real world. If in the real world some things happen, some really dangerous things can happen, right? And so we have to be extra careful. So that’s the challenge. What we are trying to do is to make sure that as part of this agentic engineering workflow, we are doing it in a responsible manner, in a safe manner, right?
And the work that we are doing in terms of verification, validation. So in the software flow that we do before we actually do a hardware prototyping, we do full, like 100% coverage at the digital level. So we are designing the airplane on the computer, designing the car on the computer, with as close to 100% guarantee. Nothing is 100%, but I want you to understand how much more complicated this is, right? Because we can design software-defined sort of data centers or software-defined nuclear arsenals, and in the hands of the wrong person some bad things can happen. So we have to be extra careful about the responsible, safe AI that we do for our intelligent product design.
It is happening, software-defined is happening, but we have to be super careful.
Thank you. Sometimes the best way to get people to pay attention to what you’re saying is to scare them, and so you’ve certainly done that. And Caroline, there’s a lot of bad stuff happening on the payment systems as well, and the consequences of fraud and security breaches, or an actual shutdown of the network, are almost impossible to contemplate: global commerce grinding to a halt. I don’t know if you want to scare people like that as well when you talk about it.
Let me go there.
Go ahead.
On enterprise guidelines: coming to New Delhi, I watched Companion, it’s a movie around a romance robot. I’m not going to spoil the end, but that’s actually a scary story for sure. Now, back to the Mastercard world. The principle is very simple. Autonomy can only scale if there’s trust. And so at Mastercard, we think we have a role to play when it comes to agentic commerce, meaning you use an agent to make payments on your behalf. And so we want these agentic payments to be safe and secure and trusted. And therefore, we came up with a playbook with four key guardrails. The first one is know your agent. Before an agent acts and before it makes a payment, we want to make sure that it’s verified and trusted.
So everyone needs to know that it’s a legitimate agent and not a rogue robot or a fraudster. Important, right? The second one, of course, is security by design. It has to remain the foundation. And so we are leveraging advanced technologies around customer authentication and tokenization to make sure that the sensitive credentials, for example, your card number, are not visible and not exposed to third parties, to the merchants, to the agents, or anything like that. Third, and that’s a bit new, we want to make sure that we have clear consumer intent. The consumer has to be always in control of what he or she authorizes the agent to purchase on his or her behalf. We learned this the practical way just a couple of months ago.
An employee at Mastercard decided to ask an agent, hey, are you able to buy sushi? The idea was just to test the agent’s capability to do so, but the agent took the question literally and placed an order using the employee’s card details on file. So, lesson learned, clarity matters, clarity of the intent that can be verified; otherwise you end up with these platters of sushi. And then last but not least, everything has to be traceable and auditable. And that’s needed if you want to be able to give consumers redress if things go wrong, dispute resolution, and of course to make the regulators happy and comfortable. And so these guardrails are not there to slow adoption; you know, if done well, they’re going to be key to scaling adoption in a way that is trusted by design.
Great. Sushi is not scary, but the use case you described is, so appreciate that.
It’s only sushi, we’re good.
It’s only sushi, that’s right. Syam, you get to wrap us up because we’re closing the panel out. You don’t have to scare people if you don’t want to, but I’d love to hear how NetApp is thinking about enterprise guardrails for risk management around agentic AI.
Yeah, no scary stories. I think one of the ways I would say this is, you know, as humans we used to make mistakes, but it was much more contained. Sometimes in enterprises you had insider threat, but it was much more contained. But now you’re talking about a network of agents where the blast radius in terms of an error or a mistake or a threat is much more profound. So guardrails become important. They need to be at multiple levels. Number one is public-private partnership in identifying the guardrails in terms of how agents need to operate; being very specific to the enterprise, being very specific to the business is important, and working together with the customers, in some cases consumers, others in business-to-business, understanding the use case and, for that, how we need to build guardrails within the system.
And more importantly, I think, and I’ll go back to this, what one needs to figure out is the governance of the data, because data is the one that is actually going to power how agents make these decisions, right? Unlike a human, there is no empathy built into the agent, at least not at this point, and it is not making decisions based on situational awareness. It’s making decisions based on the data. And if the data can be manipulated, if the lineage of data is not properly understood, if it is not really governed, if there are no guardrails for that, then you could actually get outcomes from agents that are going to be scary. The last piece of this is, look, agents, which can do everything, cannot take accountability.
They’re not responsible. They can’t take accountability. It’s the humans. It’s the business owner who takes it. So having those guardrails work in tandem with the customer, the consumer, with the public-private sector partnership is super important in terms of defending.
Thank you. Thank you. We now turn to what policymakers are looking at. And what should policymakers look at? Our goal in the tech industry, obviously, is to ensure that public policy is inspirational to innovators, that it doesn’t interfere with the ability of innovators to get the products and services out to market that we all want to see and benefit from. But of course, policymakers have other things in mind. They want to make sure that consumers are protected. They want to make sure that safety and security is part of the design of products that are deployed into the market. So we have a great industry panel of experts who are going to share their views on what policymakers should be thinking about and what they should be doing to inspire the use of agentic AI while also addressing important public policy concerns.
So I’ll ask each of our panelists to address that and to introduce themselves. Jennifer, I already said who you are. You can just introduce yourself and your company, and let’s take that as the prompt. And you get to pick one thing that you think policymakers should be most focused on.
Great. Thank you, Jason. Jennifer Mulvaney with Adobe. And, you know, I learned a great Hindi term yesterday watching the prime minister speak, and that is manav, human. I mean, you really think about policy. Policy, you know, has been around since the dawn of time, and it really is about helping to prevent harms against humans. And so that is what policy still is meant to do today. I think when policymakers look at anything, whether it’s tech or welfare or tax policy, it’s what does this policy mean for humans and how to prevent harm and what does that mean? And we as lobbyists in Washington, D.C., in my former role there, go in as humans and talk about what it means for whatever stakeholder group you’re talking about.
So we’re now in a world of policy actually governing systems, not just people. But I think that the prime minister’s focus on human is something that Adobe talks a lot about as well, that should be humans before models. Our CEO of Adobe often says it’s not what we can do with technology, it’s what we should do. And I really love that statement because that really does think about what is this going to mean for humans? How can we advance that agenda?
Love that. Thank you, Jennifer. Yep. Ellie Sakhaee.
Hi, everyone. I’m Ellie Sakhaee. I am part of the public policy team within Google. Several of our colleagues in the previous panel mentioned that agentic AI is not a point in development, right? So, as we think about agentic AI, we should be thinking about the continuum, depending on agents’ autonomy, depending on their access to memory, depending on the context of use, and depending on their ability to do long-term planning and basically act in the real world. So that is why I think it’s important when we think about policy to think about this continuum of agents rather than something is agentic and something is not agentic. That being said, I think that one of the main safeguards that we talk about is human in the loop for agentic AI.
And that also varies significantly with the ability or the reliability of an agent. As we move from agents that need confirmation, human approval, for every single step that they want to take, to agents that are more autonomous, we should be thinking about moving from human in the loop to human on the loop or human in command. A similar analogy to this is how the Federal Aviation Administration in the U.S. thinks about moving from the pilot being always in sight of drones to pilots being in command of drones. So as the safety of these drones improves, and the safety of AI systems to keep track of these drones through detect-and-avoid systems improves, we can move from the pilot
always keeping an eye line with the drone to pilot being on the loop or pilot being in command. So I think these analogies within different industries allow us to think about agents. And another thing that I think policymakers, as they think about agents, should consider is that agents may be a new technology, but they, at the end of the day, they may cause harm. So we should be thinking about regulating the use or application or the harm that they actualize compared to regulating the underlying technology. Otherwise, we end up regulating, let’s say, the AI models that by the time that the regulation goes into effect, the AI model has evolved into something that is now agentic.
Makes sense, and appreciate your perspective. And I should have noted that you’re not only doing public policy work for Google, but you’re actually a real agent. You’re a real computer scientist, Ph.D., machine learning. She knows how the machines think, which is important as well. And sometimes they talk to us, right? Sometimes. Let’s go to Carly at Cloudflare next.
Great. Thank you. Hi, everyone. My name is Carly Ramsey. I lead public policy for Asia Pacific for Cloudflare. I’m based in Singapore. And Cloudflare, just for those of you who don’t know us, Cloudflare runs a global network, and we kind of sit in between our customers and their users, and we protect the traffic that goes back and forth; a large majority of all the AI model providers are our customers as well, so we’re protecting that traffic as it goes back and forth. So we have a unique viewpoint. We also offer developer tools as well, and people are building AI agents off of Cloudflare, so there’s that angle that Cloudflare sees as well. So, like you said, choose one thing that we recommend to policymakers.
That’s a hard one, but I was thinking, in keeping with the theme of this summit, which is very much about inclusive AI, I think that something that policymakers should consider is whether or not we’re making agentic AI specifically available for everyone, right? So that becomes, is it accessible? Are the standards perhaps open? I think open models, open standards are really interesting and are allowing people to access tools that they might not normally be able to access. And so as policymakers think about diffusing this technology more widely, maybe just even outside of the enterprises, one thing that, as someone who sits in Asia Pacific, is really concerning to me is, like, how do we ensure that the different governments, when they’re making these tools accessible, are talking to each other?
And I think ITI has a really neat role to play in that actually because we all know that NIST is the gold standard and these are voluntary standards. They’re often referenced a lot in Asia actually. Singapore just came out with their own framework on agentic AI governance, right? And the question is, is that going to be compatible with whatever NIST is going to put out? Big question. Singapore is a leader in cybersecurity standards in this region. And I’ve had some interesting conversations here in these past couple of days about India. India, obviously, with the bastion of tech talent that we see in India, they want to be involved in standard development and for the global south.
You know what I mean? So great. And how do we get them involved? And how do we make sure that as global companies that they’re not – all of these standards aren’t contradicting each other as well, right? So that harmonization piece is very important.
So important. Technology doesn’t want to stop at borders. It wants to serve the world, and such an important issue. Sam, Palo Alto? Perfect. Palo Alto.
You conveniently sat the two cyber companies, cybersecurity companies, next to each other. So my name is Sam Kaplan. I’m the Assistant General Counsel for Global Policy at Palo Alto Networks. And for those of you that don’t know us, we’re the world’s largest pure-play cybersecurity company. Can you hear me? Yeah. Okay. There, it’s better. Sorry. I need to project better. Anyways, I think, Jason, to pivot off of your question, at a high level, one of the things that I think we could impart to policymakers is, you know, start with the standards organizations, to tell you the truth. The standards organizations, both in the United States but also abroad, Carly referred to the Singapore agency, but they are in the midst of developing these voluntary frameworks that are really serving as the foundation, not only to understanding the technology but to better understand sort of the risk picture that we are facing when it comes to these types of technologies, where we started with traditional model security frameworks when it comes to LLMs that are all based on sort of prompts and responses.
These standards-setting organizations are now very, very deep into sort of developing these same standards on agentic, and as they are painting a better picture and working with industry to understand how that risk picture is changing, what was once sort of almost a two-dimensional understanding of the risk when it comes to AI models is now very much a three-dimensional picture when you’re looking at agents, because these are the parts of the models that all of a sudden have arms and legs. So when you’re looking at this from a security perspective, you’re taking what could be sort of a digital threat that can sort of metastasize on networks. These are threats that all of a sudden can have kinetic consequences in real life as these agents are executing decisions across the financial system from your previous panel, but across autonomous systems.
So understanding that risk picture is going to be critically important. And last, I think that really pivots into one of the themes from the summit itself: as policymakers, in particular policymakers, are looking at sort of responsible and safe deployment, they need to understand and appreciate that security, security of those models, security of those agents, is a foundational layer to increasing trust, to facilitating responsible deployment of AI, because it’s the best way to secure and, as much as we can, understand the behavior of these models and agents as they’re interacting with the ecosystem and now the real physical world that we’re seeing.
Yeah, and policymakers are keeping an eye on all the products and services to see if that is done well or not, in which case they may step in. All right, to follow your thematic, we’re moving from cybersecurity to enterprise software. You’re going to take my joke, aren’t you? You sat me next to Combiz. I know, I know. It’s not my joke, it’s Sam’s joke. But, yes, I’m going to take it. I’m going to take it. So, Danielle, please commence the enterprise software portion of our program. I can speak for you if you want me to. I’m joking.
Danielle with Salesforce. I’m our director of global public policy, and I lead our AI policy work. The panelists have said a lot of great things, and they’ve also stolen a lot of what I’m going to say, so I’ll try to make this short. But when we think about AI, I think there’s a governance response that needs to happen. And when we talk about governance, I think a lot of people conflate governance with regulation, and governance is more than regulation. Governance can be regulation, but it’s also standards, it’s also global norms, it’s also, you know, risk and quality assurance procedures in companies. And so along with the standards piece, I think a critical thing to remember is that, you know, ISO controls take about three years to go through that process, so it’s quite a long process.
So when you look at the ISO 42001 standard, it’s a great standard, but it’ll take time to further build on it, which I think makes organizations like NIST and the different safety institutes incredibly important in filling in the gaps while work is being done to bring about new controls around agentic. The other thing I’ll say is on regulation: there’s this emerging framework that first kind of started in the UK, but I’m seeing governments like Indonesia take this on, where instead of having one large overarching AI regulation, they’re allowing the different ministries that have core competencies on things like financial services or healthcare to take the lead. So you have a more diffuse model that’s happening, and I would encourage lawmakers to look at that. You know, some of these agencies have years and years and years of relationships and expertise, and so wouldn’t they be best placed to think about, not necessarily regulations, but frameworks, rules that best suit, you know, a small startup that is operating, you know, a financial services agent or something like that, some edge use case? I think that is a more agile way to look at agentic, and agility does, I think, bring about adoption and is very key to adoption.
Thanks.
Perfect. Kambiz, is anything left for you to say?
I was just going to say ditto to everything that Danielle said, because that's basically what I was going to say, and she said it way better than I could ever do. I guess I would add, having worked in government and now in industry, I like to think I have the vantage point of a former regulator and policymaker as well as of someone now in industry. And I think what we are looking for, and what we've heard earlier today, is that we want clarity. We want clarity. We want standards. We want to see what good governance looks like.
If I could give a message to governments and regulators: don't give us theoretical, abstract principles, give us practical standards, what good governance looks like, operational clarity, playbooks, model frameworks. Jason, I remember, many years ago, when I was at Treasury and you were at ETA, there was this line: these technologies are rapidly evolving, and as they evolve, policies and regulations need to evolve with them. Otherwise, it's going to stifle these innovations, and it's going to actually create more harm than good.
Well put. Well put. All right, so now that we've provided a wish list for regulators, the next question. And Danielle, I'm going to give you the chance to go first, because of your observation that sometimes panels go down the line and it's not fair to the people at the end of the panel. I think that's absolutely true. I would have let Kambiz go first, but you're speaking for the enterprise software industry generally. So the question is, you know, one of the big themes here at the AI Impact Summit is unification of the policy agenda across countries, across governments, across regions. So is there a particular platform you've seen, or organization you've seen?
Is there a particular place where conversations like the ones we've been having here should be taking place? You know, the U.S., India, like-minded governments around the world, they want to be all on the same page. But there is a tendency toward India-specific standards, toward U.S.-specific standards. There's a tendency for that in the physical world and in the digital world, and that's very difficult for us to operate in. So in the agentic AI arena, I'm curious from all of you if there is a particular multilateral venue or a particular platform or a particular thing you've seen work well that you would recommend governments here look to for this.
And, Danielle, have I bought you enough time to come up with your answer so that I can call on you first?
I woke up this morning knowing the answer to this question. Oh, excellent. Okay. I live for this question. It's all yours. Which is the OECD. All right. The OECD, I think, is kind of where it all started. There was this really interesting moment where the OECD put out its principles in, was it 2019, I believe? And then it set the floor for everyone else. I mean, the EU AI Act's definitions are based off of those principles. We've seen draft legislation at the state level that's based off of the OECD AI principles. Globally, when I was doing rounds of meetings in APAC, they were looking at the OECD principles.
So I feel like the world is echoing the OECD in a lot of the regulatory work that's being done, even if they don't always say they're looking there. But the OECD has been doing such interesting work. They now have the reporting framework. They're doing work with GPAI. And that Hiroshima AI Process framework, that was them taking the work of the G7 and bringing it into what they're doing. So the OECD is doing so much to reach out, and I would encourage governments to look at what the OECD is doing and help them build on it.
That’s great. Sam? You can pick the same one if you want to or …
Well, I'm actually going to layer it, because I think Danielle is exactly right. When you're looking from a policy and higher-level governance perspective, the OECD has been the leader in this; there are structures in place through the OECD to develop these. If you look at legislation and regulatory proposals that have come out, even across the various US states, they've based definitions off of what the OECD has done, so that has been a foundational piece. So from a broader perspective, I think that's a good layer. The one that has potential, which I would like to see move more tactical rather than staying a little bit esoteric and study-focused, is the International Consortium of Safety Institutes. I think the structures are there, and you have the right players coming to the table. If those organizations, like what CAISI is doing right now, are advancing more tactical standards, creating a taxonomy for agentic AI security, measuring how the attack surface has changed when it comes to agents…
To understand the scope and scale of this problem, I think there's a great deal of potential, but I think you need these two levels, to talk policy and to talk standards.
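As a rough illustration of what such a taxonomy might look like in practice, here is a minimal sketch that enumerates agent-specific attack surfaces and scores exposure; the categories and the scoring rule are invented for this example and are not drawn from CAISI or any published standard.

```python
from enum import Enum

# Illustrative only: a toy taxonomy of attack surfaces that are new or
# expanded when a model becomes an agent. Categories and the weighting
# are hypothetical, not taken from any published standard.

class AgentSurface(Enum):
    PROMPT_INJECTION_VIA_TOOL_OUTPUT = "untrusted data returned by tools"
    TOOL_PRIVILEGE = "scope of credentials and APIs the agent can call"
    PERSISTENT_MEMORY = "poisoning of long-lived agent memory"
    INTER_AGENT_MESSAGING = "spoofed or manipulated agent-to-agent traffic"

def surface_score(exposed: set[AgentSurface], privileged_tools: int) -> int:
    """Naive severity score: exposed surfaces weighted by tool privilege."""
    return len(exposed) * max(1, privileged_tools)

exposed = {AgentSurface.TOOL_PRIVILEGE, AgentSurface.PERSISTENT_MEMORY}
print(surface_score(exposed, privileged_tools=3))  # -> 6
```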
Fantastic. Carly?
Just to add something different to the discussion: being based in Singapore, what I've seen in the years I've been there is that Singapore International Cyber Week has, every year, drawn more attendance from governments all around the world. So that is a potential venue. It's an annual event, and its positioning is on policy, bringing governments together to discuss cyber policy. So potentially that is an area that could be considered, to make sure that the varying countries from around the world, and India is well represented at Singapore International Cyber Week, all have a voice in the future of agentic AI.
That’s great. Love it. Ellie, do you have a preferred platform? Multilateral?
Yes, I'm going to add to what my colleagues said here, and that is technical benchmarks. We talk about the standards, and we may understand what agents do, but we don't fully understand what multi-agent systems may do. They may have emerging risks. They may have completely different behaviors that we don't really know, because we don't really have real versions of multi-agent systems yet. There are some emerging, but the risk surface will change as these agents interact with each other. So I think the academic community, industry, all of us have a role to play in developing and expanding the benchmarks for multi-agent systems, to make sure that before we put them into the world, they are tested.
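As one illustration of what a multi-agent benchmark could measure, here is a minimal sketch of a harness that runs pairs of simple agents against each other and flags escalation dynamics that neither agent exhibits alone; the agent policies and the escalation metric are invented for this example.

```python
import itertools

# Illustrative only: a toy multi-agent benchmark harness. Each "agent" is a
# policy mapping the opponent's last bid to a new bid; the escalation metric
# stands in for the emergent behaviors a real benchmark would measure.

def cautious_agent(opponent_last_bid: int) -> int:
    return min(opponent_last_bid, 5)      # never bids above 5 in isolation

def matching_agent(opponent_last_bid: int) -> int:
    return opponent_last_bid + 1          # always outbids by one

def run_episode(agent_a, agent_b, rounds: int = 10) -> list[int]:
    """Let two agents interact and record the sequence of bids."""
    bids, last = [], 1
    for _ in range(rounds):
        a = agent_a(last)
        b = agent_b(a)
        bids.extend([a, b])
        last = b
    return bids

def escalated(bids: list[int], ceiling: int = 20) -> bool:
    """Flag runaway dynamics that no single agent shows alone."""
    return max(bids) > ceiling

agents = {"cautious": cautious_agent, "matching": matching_agent}
for (na, fa), (nb, fb) in itertools.product(agents.items(), repeat=2):
    bids = run_episode(fa, fb)
    print(f"{na} vs {nb}: escalated={escalated(bids)}")
```

Run as-is, only the matching-vs-matching pairing escalates past the ceiling, which is the point: the risky behavior emerges from the interaction, not from either agent in isolation.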
Great. Jennifer, and then Kambiz, you're going to get the last word.
Thank you. So what I would just say is, I think the OECD definitely comes to mind as the largest, most credible group, and I think that makes sense. But we do have to think about having space for some of the smaller, more regional groups as well. I was speaking in Tokyo a couple of weeks ago at the Friends of the Hiroshima AI Process, the G7 effort whose principles date back to when Japan hosted the G7. So I think it's really important to have those types of smaller, regional, perhaps even policy-area-specific groups, to then feed into the bigger consortium in a way that people can understand. So I think that's really important.
That’s great. That’s great. Kambiz, close us out.
Yeah, hopefully. So actually, I was surprised that nobody mentioned the one where I was going, please don't mention it, please don't mention it, let me do it. We're talking about standards. We're talking about technical benchmarks. We're talking about principles. We're talking about coordination at a global scale, across the private sector, governments, academia, and institutions. The ITU, the UN, AI for Good, I mean, they do all of that. And I think we want to engage with more countries, with more stakeholders in this conversation, and make sure that we are being inclusive. That's one of the multilateral forums that I would look to.
That's a terrific one. Thanks for adding one to the list at the end of the round. This has been a fantastic discussion. I love the way we paired the business discussion of agentic AI with the policy recommendations, and hopefully policymakers will pay attention to what we're doing. ITI is proud to represent all of the companies on the panel here today as part of the global tech industry, and particularly proud to be partnered with the Government of India on the AI Impact Summit. Our congratulations to the Prime Minister and to the entire Government of India for this incredible, incredible gathering. Thank you to all of you for being here to be a part of this important discussion, and please join me in thanking our terrific panelists.
Thank you.
“CAISI was originally founded as the U.S. AI Safety Institute”
The transcript of Austin Mayron states that CAISI was originally founded as the U.S. AI Safety Institute, confirming the report’s claim [S2].
“The centre evolved from the U.S. AI Safety Institute to a standards‑and‑innovation focus in June 2025, signalling a shift from prescriptive safety to enabling innovation”
According to the same transcript, the transition from the U.S. AI Safety Institute to CAISI occurred “last year,” not specifically in June 2025, indicating a discrepancy in the reported timing [S2].
“The discussion opened at the AI Impact Summit”
The knowledge base records the AI Impact Summit as a 2026 event, confirming that such a summit took place, though it does not specify the organizer [S106].
The panel exhibits strong consensus around four core themes: (1) voluntary, industry‑driven consensus standards are preferred to top‑down regulation; (2) a bottom‑up, collaborative approach to standards and policy is essential; (3) security, risk assessment, and layered guardrails—including human oversight and data governance—are critical for safe agentic AI; (4) the OECD serves as the primary global anchor for policy alignment, complemented by calls for inclusive, open standards and multilateral coordination.
High consensus across technical, policy, and governance dimensions, indicating that future policy initiatives are likely to prioritize voluntary standards, collaborative stakeholder engagement, robust security frameworks, and alignment with OECD principles, thereby facilitating broader industry adoption while safeguarding societal interests.
The panel largely converged on the importance of standards, guardrails, and collaborative governance for agentic AI. The most notable divergences concern the preferred multilateral coordination mechanism (OECD vs regional events vs safety‑institute consortia) and the balance between global standards and agile, sector‑specific frameworks.
Low to moderate. While participants share common goals of safe, trustworthy, and inclusive agentic AI, they differ on the pathways to achieve these goals. The disagreements are more about implementation details than fundamental principles, suggesting that consensus on high‑level policy is achievable, but coordination on specific institutional venues and governance models will require further negotiation.
The discussion was driven forward by a series of pivotal remarks that moved the conversation from a broad overview of agentic AI to concrete concerns about safety, governance, and global coordination. Austin's framing of CAISI's standards-focused mission established a bottom-up policy lens, which Prith amplified with vivid risk scenarios and the notion of 'agentic engineers.' Caroline's four guardrails and Syam's emphasis on data governance translated these risks into actionable controls, while Ellie's continuum of autonomy offered a scalable oversight model. The human-centric reminder from Jennifer kept the dialogue grounded in societal impact. Finally, Carly, Danielle, and Kambiz converged on the need for inclusive, harmonized, and practical standards, pinpointing the OECD and other multilateral forums as the vehicles for such coordination. Collectively, these comments shifted the tone from exploratory to solution-oriented, deepened the analysis of risk and governance, and forged a consensus around the importance of standards, human oversight, and international collaboration in shaping policy for agentic AI.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.