Aligning AI Governance Across the Tech Stack ITI C-Suite Panel
20 Feb 2026 11:00h - 12:00h
Summary
The panel opened by noting the challenge of managing AI risk while supporting global innovation and the need for governments to align their approaches [1-2]. Panelists agreed that fragmented national policies risk stifling cross-border AI services, so alignment is essential [12][15].
Jay Chaudhry warned that if every country imposed its own AI rules, multinational firms would face operational friction, yet excessive alignment could also hinder innovation [22-24]. He argued that too much compliance kills innovation and that a balanced, flexible approach is preferable [27-28].
Aparna Bawa emphasized that cross-border data flows underpin services like Zoom and AI, and restricting them would impede citizens’ progress [47-50]. She described the trade-off between protecting privacy/security and maintaining free data movement, calling for a basic, commonly understood framework [52-56]. Bawa also highlighted a partnership model where enterprises provide safeguards while users adopt responsible AI practices [106-108][121-128].
David Zapolsky echoed the importance of unrestricted flow of goods, information, and services for Amazon’s global operations, from e-commerce to satellite internet [58-61]. He cautioned that premature, blanket regulation creates uncertainty and costs, citing Colorado’s early AI law as an example of unclear implementation [64-68]. Zapolsky suggested focusing on high-risk uses, such as decisions affecting life, health, or civil rights, and building common principles rather than a universal theory of AI regulation [66-68].
Jarek Kutylowski argued that DeepL’s global mission requires a transparent, harmonized governance layer that respects sovereignty yet enables consistent AI services worldwide [75-82]. He noted that as DeepL moves into agentic AI, the stakes rise and the company must embed trust and flexible controls to meet varied regulatory expectations [167-174][176-182].
All panelists concluded that developing international standards and inclusive, up-skilling initiatives will be key to unlocking AI’s benefits without over-regulation [364-368][390-393].
Keypoints
Major discussion points
– Global alignment of AI governance is essential to avoid fragmentation and sustain innovation.
The moderator frames the need for coordinated policy across borders [1-2] and notes that AI “doesn’t stop at borders” [15-20]. Panelists echo this: Jay points out the chaos of 50-country rule-sets [22-28]; Aparna stresses that cross-border data flows are the lifeblood of services like Zoom and warns that heavy restrictions “impede their own citizens’ progress” [39-52]; David argues for “common principles” and a “high-risk” baseline rather than a “unified field theory” of regulation [58-68]; Jarek adds that a “common layer…with a right balance of protecting sovereignty” would benefit global users [75-82].
– Finding the right balance between risk-based regulation and innovation is a recurring tension.
Jason warns that “acting too much…can stifle innovation” [30-34], while Jay observes that “when we start doing too much governance…we start killing innovations” [27-28]. David describes the danger of premature, blanket rules (e.g., Colorado’s early AI law) that create “costs…uncertainty and you inhibit innovation” [64-68]. Later, Jay stresses the need for “flexible policy that evolves” and cautions that “compliance doesn’t mean security” and that over-regulation can render controls obsolete [180-202]. David reinforces the point by differentiating risk profiles (shopping assistant vs. medical documentation) and urging regulators to “not…inhibit adoption of really useful ways” [281-286][288-295].
– Security and trust are non-negotiable foundations, especially as AI agents become more powerful.
The moderator explicitly asks about the “trust and security conversation” [84-86]. Jay explains that AI can be “abused” through data-poisoning and other attacks, and argues for a security overlay across all five AI layers [87-95]. In the forward-looking segment he warns that “AI agents will be the weakest link” and could be hijacked, underscoring the need for identity, authorization, and robust zero-trust controls [340-358].
– Enterprises and end-users share responsibility; product design must embed choice, education, and safeguards.
Aparna describes the partnership model: enterprises must provide “sufficient controls for the individual user” while users need basic AI hygiene (e.g., not feeding personal data into prompts) [102-130]. She later expands on how Zoom offers tiered controls, from enterprise admin toggles to consumer-level safety features, so that “every risk-based decision is you are a user” across diverse contexts [226-272].
– Upstream governance decisions (e.g., Amazon’s cloud services) shape downstream customer capabilities and must be built with security, data-ownership, and flexibility in mind.
David outlines Amazon’s “upstream” approach: a Bedrock platform that supplies over 100 models, keeps customer data private, embeds guardrails, and provides disclosures so enterprises can control outputs [137-160]. He also notes that any government-imposed barrier creates “friction” for Amazon’s globally interoperable services [58-63].
Overall purpose / goal of the discussion
The panel was convened to explore how governments worldwide can cooperate with industry to create a coherent, risk-aware AI governance framework that protects citizens, preserves security, and yet does not choke the rapid innovation needed for AI-driven global interoperability.
Tone of the conversation
The tone begins formally and forward-looking, emphasizing the strategic importance of alignment. As individual speakers share concrete experiences, it becomes more pragmatic and collaborative, highlighting real-world trade-offs and shared responsibilities. Toward the end, the mood turns hopeful and aspirational, focusing on inclusive growth, emerging international standards, and a vision of cross-border cooperation for the next summit. Throughout, the discussion remains constructive, balancing caution about over-regulation with optimism about coordinated standards.
Speakers
– Jason Oxman – Moderator/Host; President & CEO, Information Technology Industry Council (ITI) [S7][S8]
– Jay Chaudhry – CEO, Chairman, and Founder of Zscaler; security expert [S9][S11]
– Aparna Bawa – Chief Operating Officer (COO) of Zoom [S4]
– David Zapolsky – Chief Global Affairs and Legal Officer at Amazon [S2]
– Jarek Kutylowski – CEO of DeepL (also referred to as Dr. Jarek Kutylowski) [S6][S5]
Additional speakers:
– None
The panel opened with Jason Oxman framing the dual challenge for the AI industry: managing risk while fostering global innovation and interoperability, and urging governments to move beyond fragmented, nation-centric rules toward coordinated AI governance that can support systems at scale [1-2].
After a brief roll-call of the participants – Jay Chaudhry (CEO, Zscaler), Aparna Bawa (COO, Zoom), David Zapolsky (Chief Global Affairs & Legal Officer, Amazon) and Dr Jarek Kutylowski (CEO, DeepL) – the moderator asked each panelist why cross-jurisdictional alignment matters.
Jay Chaudhry warned that a multinational corporation operating in dozens of countries would be crippled if each market imposed its own AI regime, creating “a lot of issues” and “killing innovations” when governance becomes excessive [6-9]. He also referenced India’s focus on a sovereign, five-layer AI stack, noting that without a security overlay across those layers the stack could be abused, underscoring the need to embed sovereignty-respecting security controls from the outset [10-12].
Aparna Bawa emphasized the importance of cross-border data flows for Zoom’s global connectivity and argued that any restriction “impedes their own citizens’ progress” by throttling the infrastructure that underpins AI services [13-16]. She added that the COVID-19 pandemic forced Zoom to shift from an enterprise-only platform to a consumer-facing service, prompting the rapid deployment of default security controls such as waiting rooms and passcodes to balance swift innovation with user safety [17-20].
David Zapolsky described how Amazon’s e-commerce, cloud, and satellite-Internet businesses rely on the free flow of goods, information, and open skies, and warned that government-imposed barriers generate friction and uncertainty. He illustrated this with Colorado’s early AI law, showing how premature, blanket regulation creates costs and stalls adoption because “no one really knows how to apply it” [21-24][28-31]. He also noted Amazon’s internal “launch-everywhere” mantra – the desire to roll out new AI features globally at once – which sometimes must be delayed due to regulatory uncertainty [25-27].
Jarek Kutylowski explained DeepL’s mission to enable multilingual communication through a “transparent, common layer” of governance that balances national sovereignty with shared norms, allowing the company to serve a truly global market while maintaining trust in AI outputs [32-35]. He added that growing up under Europe’s early AI regulation gave DeepL an “edge” in learning to work with regulatory requirements, informing its strategy for expansion into other markets [36-38].
The discussion then turned to security and trust as non-negotiable foundations. Jay Chaudhry argued that AI is “powerful but dangerous” and advocated a security overlay across all five layers of the AI stack to guard against data poisoning, rogue agents, and AI-enabled threats such as ransomware and nation-state misuse [39-45][46-48][49-52]. Aparna Bawa highlighted Zoom’s partnership model, noting that privacy, security, and user choice are embedded in the product, from enterprise-level toggles to consumer safeguards, and that users must also practice “basic AI hygiene,” for example by avoiding the inclusion of personal data in prompts [53-57].
David Zapolsky then described Amazon’s Bedrock platform, which offers over one hundred models while guaranteeing that “the data they use…stays their data.” The service includes built-in guardrails, content filtering, and transparent disclosures, shifting much of the downstream governance burden to customers in a secure, scalable environment [58-62].
Jarek Kutylowski discussed DeepL’s move into agentic AI, noting that the stakes have risen from simple email translation to high-impact tasks such as translating R&D documentation for drug approvals. He argued that trust in AI outcomes must be reinforced by transparent, adaptable governance and that providing customers with tools to manage risk themselves is a hallmark of a mature AI provider [63-68].
When the moderator asked how a flexible, risk-based approach might preserve both safety and progress, the panel converged on several points. The panelists agreed that over-regulation “kills innovation” and that governance must be evidence-based and use-case specific. David Zapolsky defined “high-risk” uses explicitly as “decisions that affect life, health or civil rights” and advocated a principle-based approach that first identifies such uses before tailoring safeguards [69-71][72-74]. Aparna Bawa echoed the need for a “basic level framework” that respects national sovereignty while providing clear, evidence-driven guidelines for developers [75-77].
Looking ahead one year, the panelists shared a common vision of inclusive, standards-driven AI. Jay Chaudhry called for up-skilling programs and configurable security controls that enable enterprises of all sizes to adopt AI safely [78-80]. Aparna Bawa stressed the importance of low-bandwidth access so that even a farmer in a Karnataka village can benefit from AI, linking inclusivity to market creation [81-84]. David Zapolsky highlighted the emerging international consensus around standards such as ISO 42001, which would provide “a common set of principles and a common set of technical standards” for global AI governance [85-88]. Jarek Kutylowski concluded that a global framework would facilitate seamless multilingual collaboration, the core of DeepL’s mission [89-91].
Collectively, the panel calls for a globally-aligned, risk-based AI governance framework that protects security, respects sovereignty, and enables inclusive innovation. [92-93]
The challenge of not just managing risk as an industry, but also doing so in a way that supports global innovation and interoperability. So today’s discussion, we’re very fortunate to have leaders from across the AI stack, if you will, who are here with us to discuss how governments can work in partnership with industry, if you will, to align responsibilities, to reduce fragmentation, and to build trust in AI systems that are built for scale. We are very pleased to have with us some luminaries from across the tech ecosystem. Jay Chaudhry is the CEO of Zscaler. Aparna will be joining us in just a moment. David Zapolsky. I almost missed that. David Zapolsky, who made it, is the Chief Global Affairs and Legal Officer at Amazon.
And Dr. Jarek Kutylowski. How did I do there? Thank you, is the CEO of DeepL. So to set up the conversation, I wanted to ask each of our panelists to help us think through the AI governance conversation that’s taking place globally. So as we’ve seen here at the AI Impact Summit, there are efforts among global governments to align their approach, even though they may take different directions. Hi, Aparna. And as Aparna is now joining us, I will introduce Aparna Bawa, who is the chief operating officer of Zoom, which is not only a technology company, it is also a verb. And so thank you, Aparna, for being here with us today. So as we were getting ready to talk about AI governance conversations, it is absolutely the case that there is a need for governments around the world to align their approaches to AI governance, because, of course, technology doesn’t, by its very nature, want to stop at borders.
It wants to cross borders and unite people around the world. So I wanted to ask each of our esteemed panelists, and, Jay, I’ll start with you. for perhaps your philosophical perspective on how AI alignment can take place across governments. Why is it that that alignment matters? And perhaps even share your perspective on what happens if that AI alignment breaks down and governments are going off in different directions and taking different approaches. Where do you see the biggest challenges around this idea of alignment of AI governance around the world? Jay, thank you.
Thank you. So we are a highly connected world. Imagine any large corporation that’s doing business in 50 countries. If each country has its own governance rules for using AI, and you’re using some systems locally, some systems globally, it’ll create a lot of issues. Some level of alignment is good, but over-alignment doesn’t help either. In fact, I have similar thoughts on governance too. Some level of governance is needed. When we start doing too much governance, too much compliance, we start killing innovations. So that’s personally my view.
No, it’s an important viewpoint because there is this idea that governments need to act. They need to protect citizens. They need to ensure security. But acting too much, perhaps in advance, can stifle innovation. So, Aparna, I want to go to you with the same question. As we’re having this global AI governance conversation here at the AI Impact Summit, governments are going in different directions in many cases. This is the first time the conversation has taken place in the global south, so I think that’s a good thing for aligning governance approaches. So from where you sit, why is alignment across the AI governance ecosystem internationally so important, and what can happen when it doesn’t happen and goes wrong?
I will say, just to start, as an Indian American and someone who has lived in India, and we talked about this this morning at a breakfast we were at, it is quite striking to me some of the haves and have-nots. Like even we were talking about this morning, for example, during COVID, how some countries were fighting for PPE and fighting for oxygen tanks. And, you know, we in California were stockpiling toilet paper. I mean, the contrast is so stark. And I remember during COVID thinking to myself, that doesn’t seem right. And so I do feel like countries should protect the rights of their citizens and should want to advance their economies. But it is a tradeoff.
And I think it’s very well put to say it’s a tradeoff. So, for example, Zoom: imagine you would not be able to connect with people globally if we did not have cross-border data flow. So when we’re talking about AI, you can talk about AI, but it’s no different at the data layer. But we would not exist if we didn’t have cross-border data flows and free, unencumbered data flow. And when governments start putting more and more restrictions on them within their own countries, it impedes their own citizens’ progress. And so at some point, it becomes a tradeoff. Now, obviously, the requirements around privacy and security are table stakes. If you get on a Zoom meeting with someone, you want to know that the person on the other side is that person.
That is sort of table stakes. But I’m with Jay on this one. I think there’s a basic level framework that is necessary. To be honest, we live today with multiple, in the United States we live with multiple states’ privacy frameworks, and is it great? No. Is it inefficient? Yes. There’s something in between, where you have a framework that is commonly understood, with a common set of norms and values. I also respect a right of sovereignty for a nation. So there has to be a balance that makes sense.
David, Amazon operates pretty much in every country on the planet, although I’m sure you can name a few that you’re not in yet. There’s a few. Yeah, there’s a few, a small number. Can you share your view on how this AI governance conversation needs to have some, perhaps some, unity to it?
Sure. And first of all, I’m going to try not to repeat Aparna’s view, because I basically agree with everything you just said. If you think about every one of Amazon’s business models, our stores, the way we’re able to export 20 billion from Indian small to medium-sized businesses to overseas markets, we’re looking to take that to 80. If you look at the cloud, if you look at our entertainment business, if you look at the satellites that we’re launching to launch a global Internet service, every one of them depends on free flow of goods, free flow of information, open skies.
That’s just kind of the way we’ve designed the company, to be global and to have interoperable services. And so every time a government erects barriers to that, it creates friction. It creates potential problems. And I think the global trend towards more of that is concerning. With AI particularly, I think the danger of some of the regulation that we’ve seen around the world is that we all still don’t really know how it’s going to be used, where it’s going to be most effective, where it’s going to be dangerous. There’s a lot of theories about it. There’s a lot of fear, uncertainty, and doubt about that, a lot of science fiction. And I think the danger in regulation, before you really understand the technology or how it’s going to play out, is that you create costs.
You create uncertainty, and you inhibit innovation, you inhibit adoption. And that’s kind of what we’re seeing a couple years into this large language model journey. There are parts of the world that were quick to regulate, and civil society was all over that: we’re going to regulate all these things, we’re going to come up with these theoretical constructs of high risk, low risk. And we don’t really know what that means in practice yet. And so what’s happening? Well, look at Colorado. Colorado was one of the first states out of the box with comprehensive AI regulation, which, by the way, isn’t bad in principle, but they don’t know how to apply it. No one really knows how to apply it. And I think you’re seeing some buyer’s regret. They put the implementation on hold. They want to figure out standards. I won’t even talk about the EU, but they’re pretty much in the same boat. They’re all looking for ways to not have to put the thing into practice, because they don’t really know how it’s going to play out. So I think what we need to do is step back and look for some common principles. What is a high-risk use?
What can we all agree are high risk? Well, if you’re using a technology to make decisions that are going to affect the life, health, or civil rights of an individual, let’s talk about that. Are there laws that protect that already? Do we need to supplement them? Let’s work backwards from the harms we can see today and regulate there, versus trying to come up with the unified field theory of AI regulation, because that’s only going to slow us down.
Great. Jarek, we’ve been talking about unifying global governance approaches, making sure one might say that they all speak a common language. That’s what DeepL does. See what I did there? Your language AI platform is all about making sure everyone can communicate with each other regardless of the language they speak. From your perspective, you’re our European headquartered representative here, but you do business around the world. What can you share with us about how AI governance conversations being unified across governments is important to DeepL?
I truly believe that any successful technology needs to be inherently global. That holds both for the commercial models of the companies that we’re representing, but it also holds for the AI, just the access and the ability of reach towards the whole globe with what we are building. I think this creates the economies of scale on everything that we’re building. And when you are in AI, like obviously you’re running very, very high R&D costs and you have to be able to offset that with a huge customer base. So having a global market and being able to deploy to the whole world and therefore also to fulfill the mission of our companies, whether it’s just enabling communication, maybe in the case of Zoom, or making sure that this communication can happen multilingually, as in the case of DeepL, that really depends on a framework that is transparent and on a framework that is maybe not too different in all of the parts of this world.
And therefore, having some common layer, having this right balance of protecting the sovereignty, and protecting maybe a slightly different approach and slightly different mindset to certain topics like privacy, where we do have differences across the world, but doing that in a way that has a common understanding, that would be incredibly valuable. I think not only for the companies that we represent, but also really for our users and for our customers, who depend on the best possible solutions.
Jay, I want to come back to you because you are our resident security expert and sometimes doomsayer about what happens if we don’t include trust and security as part of the conversation. I’ve heard you remind members of the government of India, indeed, that although the five pillars are enormously valuable, if you don’t have security overlaying them, we’re all in trouble. Talk to us about how the trust and security conversation is still a vital component around all the excitement.
Yeah, I have said that AI is powerful, but AI is dangerous, because this technology can be abused. In India there’s a great focus on five layers, and the focus is about being sovereign, having everything that you can control. It starts with the application, then models underneath, and so on and so forth. While it’s good to have that sovereign stuff, imagine a bad guy can control all of that sovereign stuff sitting somewhere out there. Data poisoning can be done. All kinds of stuff can be done. So having a layer of security across all five layers becomes very important. So we should think about sovereignty not just in terms of this thing is sitting in my country, but also in terms of who can access, who can do some of these things with it, which is often overlooked.
And also the adoption of AI is happening very fast. And it’s wonderful. And I’m not saying we should slow it down. I think we should embrace fast, but we should also start thinking about embracing cyber to make sure things are used securely at the same pace.
And in order to make sure that security is part of the AI ecosystem, Aparna, I want to ask you about what we all have responsibility to be thinking about as users, what enterprises have a responsibility to be thinking about. You know, we’ve talked about governance from the policy perspective, but, of course, users and enterprises also have a responsibility around AI. And as the COO of Zoom, you look over both the public policy and business aspects of what you’re deploying. How does the conversation about what we all should be thinking about factor into product development and deployment conversations?
It is a true partnership. And you know what? When Jay was talking, it resonated with me. When you work for a technology company, you’re not just working for a company; you want to develop technology and you want people to adopt it as fast as possible. You want them to be early adopters. It’s so exciting. In fact, in our company, you know, companies have lots of different functions. Obviously our engineers, our developers, our product people are super early adopters; they’re the first to take any sort of app that’s come out, Cursor, etc., and use it in their day-to-day. And then there are other people who have other day jobs. I mean, there’s finance people and the people people, the HR people. They have day jobs, and they’re learning AI at night, because they’re realizing that if I’m not on the AI bandwagon, I’m going to get left behind. And by the way, if you’re looking to develop apps, yes, you can focus on the tech applications, but the real secret that’s not getting a ton of attention, maybe a little bit of attention, is these non-technical roles that could be augmented with AI. So in that frame of mind, I think it’s a really important thing to do, and it’s a really important framework. When you work for that kind of technology company, it can be difficult to then start saying, but wait a minute, you need to slow down, because you need to make sure that your CI/CD work is still going, and it’s amplified because of the risks of AI, your security certifications, your red teaming, your privacy standards, all of that stuff is maintained.
I will tell you, the user plus the enterprise that is pushing out this technology, it’s a partnership. It is so important. The one thing that we learned during the pandemic: if you think about Zoom before the pandemic, it was an enterprise-focused company, a work-focused company. And basically, when the pandemic hit, we said, okay, all you consumers, we will just hand you a platform that we usually give to IT administrators. And what do IT administrators at our customers do? They decide whether to turn up the security and privacy controls and turn down usability, because it’s a tradeoff. It’s a definite tradeoff. They decide. We, in turn, just handed it to consumers, and you can’t do that.
Who decides? And we realized, okay, public schools, they don’t have IT administrators. They don’t know how to turn on waiting rooms. They don’t know how to, you know, hide the meeting invite. They don’t know how to do these kinds of things. You have an obligation as an enterprise to make sure that there are sufficient controls for the individual user, and it scales all the way up to the enterprise, and maintain that level of flexibility. You have that obligation. But on the same side, I would say the user, to be smart, has to understand some basic levels. I’ll tell you an example. My kids use all the AI engines, ChatGPT, Claude. They use it all. And it is a conversation we have, to say you don’t put all your information into your prompt, because if you put all your information in your prompt, it is going into that engine and it will train that engine.
On the flip side, we as an enterprise provider, we have made the statement and we have made the policy decision that we will not use our customer content to train AI. When I’m training my kids, I have to tell them, you can’t put your address into ChatGPT. You have to make sure that you’re safe in some way. So those are the kinds of things that you have to keep in mind. It’s a partnership between the user and the enterprise. And I think the enterprise obligation scales as you get down into the consumer use.
And I want to stay on this theme of training the user, if you will, whether they’re your children or a customer, because it is important for the tech industry to be mindful of the downstream. And, David, I want to come to you with this question. Amazon is, in a lot of ways, an upstream operator. You enable business and consumer customers on everything you do, from content to e-commerce to broadband in the future to your cloud customers. So how do you think about the upstream governance decisions that you’re making at Amazon and how they impact the downstream? How do you think about the downstream decisions or ways of operating that your customers are going to have to make as a result of those decisions you make at the Amazon level?
Well, we’re fortunate to have the scale to be able to serve enterprises in the cloud at the service layer. And so we have, you know, even before the AI, the current AI craze, we have a couple of decades of experience in thinking through what does governance and security look like for our enterprise customers. And as we’ve moved into this, you know, newer age where there’s AI services available, you know, one of the best solutions that we could come up with is creating an environment within the cloud services that so many hundreds of thousands of enterprises already use to give them access to models, not just our own, and we do our own models, and there’s upstream governance on those, you know, testing, making sure there’s, you know, we correct for bias, the things that a responsible model builder will do.
But at this enterprise level and the services, this is called Bedrock. You know, we try to think through what are customers going to need. So we build in security. We build in the type of infrastructure that allows customers to scale up or down. We build in choice. Enterprises can choose from over 100 different models, open source and closed source. Not just ours, but, you know, all of the leading models from all around the world. And so we try to create an environment, a platform, where enterprise customers can come to use this new technology. First of all, get access to it without having to build their own servers and train their own models. And secondly, to do it in a way where they can rely on the security of the infrastructure.
The other thing that we will provide customers is that the data they use to employ those models, you know, stays their data. It doesn’t go to the model builders and it doesn’t go to us. So, you know, you can build that into the system. And then on top of that, given the way… that enterprises are using this technology, we try to build as many tools as possible to put the control of how this technology is deployed into the hands of enterprises and users. And so, for instance, on the Bedrock platform, we provide guardrails that allow you, as an enterprise, to basically control what types of outputs the models are going to give you. Now, are they more toxic?
Are they less biased? Can you filter for certain types of content? We build those controls right into the interface so enterprises can have that control. We build disclosures into the types of services that we offer so that we provide some visibility and transparency into here’s how this thing is built, here’s what you should use it for, here’s what you probably shouldn’t use it for, and we provide those kinds of choices to consumers. And so you have to think through the overall security in the system, in the environment, and the accessibility of this technology. And as far as our approach is, the cloud is probably the best place to do that. It’s certainly the easiest way to access the technology and likely the safest.
Jarek, you’ve moved DeepL’s business model from where it started, as translation. Now it’s getting into agentic AI, and you have agents on your platform that can execute tasks on behalf of your customers. Which I can imagine raises very different governance policy decisions that you have to make on behalf of your customers when you’re just translating versus when agents can act autonomously, particularly because you’re a global business and they can act autonomously across borders. How are you thinking about the policies and procedures for governance that you have to put in place in an agentic AI world that are different than perhaps you did in a language translation world?
I think generally, but also in the language space, it’s just like the stakes are becoming higher and higher. AI is becoming more and more powerful. And even if you look into translation, like a couple of years ago, DeepL would be translating your typical email to your customer. And that is important, of course. You want to look great in front of the customer. You want to be eloquent. You want to be able to connect with them, maybe like really on a human level when it comes to the language that this customer is speaking. And you’re enabling your business to basically become global very, very easily. But now what DeepL is translating, it’s plane maintenance records. It’s R&D documentation for new drugs that actually influences how those drugs are developed and whether they’re being approved by the FDA or not.
So these are highly critical use cases. And I think it has been mentioned that privacy and so on, it is just the table stakes, it’s just the beginning. I think creating a layer of trust into the outcomes of the AI, whether that’s translation, whether that’s agentic AI, so that those decisions are really following what the enterprise is expecting of the AI, that is really where kind of the battle is right now. And that is where both the governance aspect, the part that’s coming from the political side and from the governmental side, needs to obviously be included, but there’s also the aspect of how do the enterprises, how do our customers want to regulate the AI that is being deployed, and how flexible the products that we all are providing can be towards those very different approaches that we’re seeing across the world, and with different types of enterprises maybe even.
Each of you mentioned the concept of risk management in your comments, and I want to come back to the balance that Jay alluded to earlier between promoting innovation and balancing risks. And obviously there is a trade-off, it’s a sliding scale: the more you regulate risk, the less room there is for innovation. I want to ask each of our panelists, Jay, I’ll start with you, about how you’ve seen a flexible risk-based approach from government be the most effective, where you see that flexible approach still leave room for innovation, or the flip side to that, if you want to give any examples, where you’ve seen it go wrong, where a more prescriptive approach to regulation has denied you the opportunity to bring products or services to market or has generally been more of a challenge for industry, because a government didn’t get the balance right between managing risk and promoting innovation?
There are many facets of governance and risk. Take, for example, data privacy. Obviously, that’s one kind of factor. But potentially hacker attacks, from a cyber point of view, is a different kind of factor. We look at it more in terms of two things. One, making sure your data is not lost. So the data becomes very important. There’s a consumer end of data, but the bigger issue on the data side is enterprises. And you don’t try to treat all data the same way in the practical business world. I’ll give you an example. When I worked with General Electric, the CISO, a very smart guy, Larry Virginia, would say, when I tried to secure everything, I secured nothing.
So then he would give an example. He says, as a CISO, I need to protect the IP, or intellectual property, of my products. But my washers and dryers are out there. I don’t spend time trying to protect their IP at all. You can buy them in a store and figure it out. But I’m dead serious about protecting IP on my jet engine. That’s very important. Trying to just say all consumer data, all this data, it just starts creating issues. That’s why I also like to say compliance doesn’t mean security. In fact, when you work on compliance, all this stuff works through the government entities, pros, cons, and it takes a lot longer. And by the time it’s out there, the cyber and compliance needs have moved on.
So the stuff you put in place many, many times is old. In fact, when Zscaler came out with our zero trust cloud-based architecture, a lot of these regulators came in: wait a second, where is your firewall? What do you mean, firewall? We don’t use firewalls. We are anti-firewalls. And they said, no, no, no, wait a second, how can the banks use it if, you know, it’s not a firewall? When we went through certification for the federal government in the U.S., the certifying body first came asking about firewalls. It took us three months to educate them. So that’s why I think over-regulation, I really don’t like it. There needs to be a way of saying, what’s the impact of this thing on what kind of stuff? That’s the right approach. All data is not created equal. Trying to put the onus of securing all data gets hard, then classifying data gets hard. So these are not simple issues. AI makes it very hard. We don’t even fully understand how AI does what it does. So I think a flexible policy that evolves is a better thing, while keeping track of the most important data. And then beyond data, hackers too, that’s a big problem. We talked about agents. Today a user is the weakest link; tomorrow AI agents will be your weakest link, and they’ll be all over. They are maturing, they’ll come. Imagine an agent getting hacked or hijacked in your company, with access to all kinds of stuff.
So that’s where companies like Zscaler, we are focused on making sure our zero trust change can be extended to deal with agents, starting with understanding their identity, authorization, all those things. Those things are very important the way we look at it. Otherwise, business will shut down.
So Aparna Zoom brings some amazing innovations using AI to the platform that we’re all familiar with. It makes it a lot easier for us to do everything from transcribe meetings to pretend to be a cat when you’re in court. No, that’s not a – that’s a –
I was going to say it can summarize your meeting. It can take notes for you. It can send action items to your teams. It can calendar those action item follow -ups. It can give them deadlines. All done.
There it is. But I can imagine you’ve had some challenges around the world in that balance between innovation and risk management from governments. Can you either share a positive example of where that’s gone well in your mind, or if you want to, an example of where it hasn’t gone well, where consumers and businesses have been denied Zoom innovation because that balance isn’t struck? Or perhaps you can keep it at a higher level if you prefer.
There is a balance between innovation, our product team, innovate, innovate, innovate, and our governance team, security, privacy, et cetera, which is always thinking about that as well. And so how do you strike that balance? And I think I’ll start at the top level. It’s a sliding scale on many different fronts. But if you look at it like a layer cake or even a data stack, at the top level, it’s customer choice. So David was very appropriate when he said customer choice, but customer choice is different by the category of customer. If you are an enterprise and you have 200 people on an IT admin team or under the CIO, and you are buying Zoom and you have a giant security team and a giant compliance team, you’re going to be making choices for yourself.
I’m not going to tell HSBC what they’re going to do. They’re going to decide what they’re going to do. And we deliver the platform and we have toggles for them to decide what they want to deploy, what they don’t want to deploy, who they want to deploy it to. We make it very easy. So we provide a lot of choice. So the same platform services the Fortune One. The same platform also services my mother-in-law, who is on the free account and who is chatting with her friends and won’t upgrade. I tell her, please upgrade. She gets off, waits five minutes, gets back on, and that’s how they do it. So for her, it’s very different.
So for her, you have to mandate a few things. You can’t give your meeting ID to everybody. It cannot be on the top of the UI. You know, those are some basic things. You have to have waiting rooms. If you’re in a school environment, you have to have mandatory passcodes. These are the sorts of things, so that’s a sliding scale. I would say take it one level deeper. I think the biggest thing I have learned from working at Zoom, and in all honesty, I credit our founder for this, the biggest thing I’ve learned working at Zoom is everything goes back to the user experience. And our customers are not monoliths.
They don’t just want to take all the technology. They want to do it in a safe and secure way. They don’t want to be surprised. So you have to think, I am a user. I’m an end user. It doesn’t matter that I sell to Zscaler, thank you very much. I need to worry about how Jay Chaudhry’s engineer feels when he gets on Zoom. And that’s the user experience I’m going for. So if you are a user and you feel like, wait a second, I don’t really want, if I’m a finance person on Jay Chaudhry’s team and I say I don’t really want my meeting to be automatically transcribed and then spit into an AI engine because I’m worried, or if I’m a lawyer, I’m worried about attorney-client privilege, well, I need to give them the option to say I opt out of that.
I need to be able to give them choice. And I think that’s how I think about it. Every risk-based decision is you are a user. You’re not one kind of user. You have multiple types of users. How do you make it easy? How do you make it easy, at the very lowest common denominator, for them to trust you? And that’s really the answer that you go through.
That’s great. David, let’s go from different kinds of users to different kinds of products. You were the first on the panel to use the phrase risk-based approach, and nowhere is that more evident than Amazon’s wide range of products and services to your customers. I can imagine it’s a very different internal conversation about governance and risk when determining how AI is going to, on Amazon Prime, recommend my next series or show. Not a lot of risk there. But other Amazon products could have more risk to them. So on the sliding scale, and you also, you travel the world, quite literally now you’re doing it, talking to governments about that innovation versus risk management and the risk of getting that balance wrong.
How do you communicate that to governments and also make the internal product decisions that you need to around those issues?
Well, you sort of… kind of stole one of my talking points when I have some of these conversations, which is it does matter how this technology is used and where. It’s a different set of considerations when we think about what kind of protections or risks arise from an AI-assisted shopping assistant versus a tool we might make available to help doctors document how they’re treating patients and make it easier for people to prescribe medications. Those are two very different risk profiles. But if you start with a regulation that doesn’t differentiate between those, you’re going to inhibit innovation. You’re going to prevent adoption of really useful ways that this technology can be used. And so that’s…
You know, that’s the pitch I make when I get to talk… to people whose business it is to think about regulation. It is about risk. It’s about how the technology is used. And my point earlier was that we don’t really know yet how the technology is going to be used. When we see it, we can analyze it. I can’t, you know, and on that point generally, you know, there are cases where technology companies have made a decision to not bring certain types of technology into, say, Europe because of regulatory uncertainty. And typically those get worked through. But I can’t tell you how many conversations I’ve had internally where folks have come up with an idea or a product and our sort of internal mantra is we want to launch something everywhere all at once.
We want to serve customers. If we have convictions, something’s going to happen. If it’s good for customers, why just do it in one place? And sometimes the answer to that is, it’s too costly. It’s going to take more time. We can’t really figure out how this is going to fit within, you know, the regulatory scheme in a certain other jurisdiction, because they haven’t thought of it either. And so we’re going to wait. We’re just going to, you know, wait on that. We’ll launch it in this place first and we’ll see if it works. And then if it works, then we’ll think about, you know, the costs associated with scaling it globally. And so that’s a real-world issue that governments have to understand and deal with when they make decisions about how prescriptive their regulations are going to be, especially in the abstract.
And so those are the sorts of conversations I have. I think, you know, in the AI space, I think you can look at countries like Peru. You can look at countries like Japan that have proceeded cautiously. I think India has the same approach, and I’m very encouraged by the way India is approaching these issues. You can’t rule out regulation completely. And Amazon’s an advocate of regulation that mandates that people developing and deploying this technology do it responsibly. But we have to understand what we’re regulating before you can really pull the trigger. And so those are the, I think those types of examples are useful for people to keep in mind when they’re considering how to resolve that balance.
And the results of those conversations not going in the right direction, David, is that consumers or businesses might get denied the technology that their neighbors are enjoying. So, Jarek, I wanted to ask you, as the CEO of DeepL, in the process of expanding around the globe, are there examples that you can think of where you’ve had to make a go/no-go decision entering a particular country or launching a particular product, including your new agentic AI products, because of the regulatory environment or because of the way in which a country looks at AI? Or the flip side of that, if you want to take the positive, is are you attracted to a particular market because, as David said, it’s done the right thing, like Peru or Japan or even India is endeavoring to do, where they’re more likely to get DeepL service because of the decisions they’ve made, the approach they take to these AI governance decisions?
Yeah, Jason, let me maybe first start with a principle. I’m a scientist by heart, so I’m really excited about bringing the best possible technology to each and every one of our customers and users. I think they all deserve it. I think they all should be equipped with that. But yes, there are kind of some of those things that we need to take into account. And actually, quite often, those are not really location-based or country-based or regulation-based, but really also based on the use cases of those customers. AI can be incredibly powerful, but that power also demonstrates its possibilities in different ways in different applications. And going back to my example from earlier, like the translation of an email has just a different criticality grade than a translation of a patent application.
The execution of an agent in a particular environment versus in an enterprise environment has a different grade of complexity. But going back to kind of the regulation aspect of it, I think we’re lucky as a company to have grown in Europe, in kind of an environment which is maybe like slightly earlier on regulation than other places in the world. And I think that gives us an edge to be able to understand how to work with this regulation and how to prepare, and then also be very, very early in other markets, like you mentioned Colorado earlier, and be able to handle that complexity for our customers, really. Because most often it is our customers who do not understand this space.
We do. And we have to go all of the way to give them the possibility to figure this out for themselves, for their applications, for their use cases, and across a whole range of products. So in short, I think it can be managed, but it is really like part of the excellency of a company to be able to manage it together with the customer.
The last question that we have time for, I want to address to each of you, is a forward-looking question. It used to be possible to have conversations about policy outcomes years in advance. I think the best we can hope for is for me to ask this question in advance of Switzerland hosting the next AI Impact Summit, or whatever they choose to call it, next year at this time. So my question to all of you on the panel is: a year from now, if we are to gather, and something had happened in the AI governance, AI regulatory space over the course of that year that you’d like to see happen, and you were looking backwards to India and saying, I’m really glad that one thing happened, or that one thing changed, or this government or this international body did this thing over the course of the last year to really help unleash the innovation and power of AI in a secure way that we all want to see, what could that one thing be that you’re looking at?
And it can be something that you’re focused on in your business as well over the course of the next year that government can help make a reality. So, Jay, I’ll start with you with this question. Then we’ll go down. I’ll go down the panel to bring our time to a close together. What’s the one thing you’re hoping if we’re talking a year from now has happened in global AI governance that’s going to make everything that we’re talking about and excited about a huge success?
The AI train is moving at a pretty fast pace. It will keep on moving. Then you look at the things that could go wrong. That’s where governance comes in. I think there’s too much focus on data. There’s less focus on bad things that bad guys can do. I think probably the biggest issue will be, hey, today we hear all about these ransom attacks, ransomware. AI can make it so much easier. Bad guys are very motivated to make money. Today, when they do attack, they have to find your attack surface. They’re finding those IP addresses that are open to the Internet, those firewalls and VPNs and everything. AI, you can discover it in 30 seconds. AI can write beautiful emails for phishing.
as if they come from your CFO. Once you get in, AI agents can discover your whole network to figure out what those things are. It can bring those things down. So I think we need to focus more on making sure we can protect against those risks. I talked about AI agents going rogue. Those are one kind of risk. And then the second kind of risk government needs to worry about is nation-states trying to use AI to really have advantage, understanding, getting these backdoors planted and all that kind of stuff. I think if you’re sitting here next year and we’ve done enough in those areas, then we don’t have some of these things that blow up.
If they blow up, then government starts tightening things more and more, which doesn’t sometimes help. So proactive areas to secure it will be very, very important.
All right. So protecting against these threats so that government doesn’t overreact and stifle innovation as a result. Aparna, what’s your one thing that you hope for for next year?
You know, it really struck me in this impact summit, the focus on inclusivity, upskilling, skilling and upskilling people who wouldn’t otherwise have access to technology. And if you think about why we got started, we were founded because we wanted to provide free and open access to collaboration and have people from all walks of life connect. I think our founder had to travel to date his wife, you know, and didn’t want to see her only once every number of weeks. So, you know, it’s something powerful. In a year, I would like to actually see that happen. Now, it’s not, I think, completely altruistic. I do firmly believe that even enterprises who have more of a chance of adopting AI and gaining some of the efficiencies of AI, they need a market.
And the market is you, me, and all of us. And the more people in a village somewhere in a corner of India, even near – we were just talking about Karnataka in another meeting, in a village that has low bandwidth, et cetera, in Karnataka. If a farmer can adopt AI and can change their lives in successive generations, that is good for business. And so for me, progress on that. I still think it’s very – it’s all talk. But I love the idea. I love seeing a billboard where Prime Minister Modi is talking about inclusivity. That’s wonderful to hear. It’s good for business. Maybe it’s a bit altruistic, but I would think it would be good for Zoom.
I love it. AI lifting up more broadly the world. David?
I’ll take a much higher level approach. You know, I think there’s a sort of consensus around AI regulation that’s kind of yearning to get out. Like it’s sort of gelling a little bit. We saw it sort of in the Hiroshima agreements. We see it, you know, talked about in forums like this. You know, there is sort of an emerging consensus about how to approach this technology. In a responsible way, and I totally, again, agree violently with Aparna in adding the inclusiveness piece and commend the Prime Minister and India for making that a big part of the debate. But I think I would like to see countries around the world start to converge on this basic consensus.
It doesn’t mean that countries can’t have their own perspectives or sovereign outlooks, but there is sort of a movement toward an international standard, and there’s a parallel with the technical standards. There’s ISO 42001, which everybody can abide by and which gives people a common set of principles and a common set of technical standards they need to meet, so that we can all be more confident in the way we roll out this technology.
I love that. A move toward more global industry consensus -based standards to help govern all that we do, hopefully put government regulators out of business if we can all do it right. Jarek, you get to bring us home with your aspiration for us as we gather together next year in Switzerland.
Yeah, I think there’s place for those government regulators too. I would love, as you just explained, getting them all together and creating a framework. But I think there is a bigger role for AI in this world. I think there’s so many amazing humans across all of the continents of this world, and I would love to see in a year, and once again that goes back a little bit to DeepL’s mission, for them to be able to collaborate as much as they can, no matter where they sit geographically, no matter which language they speak, no matter what they do in their job, just giving the opportunity to each and everyone in every place of this world. And there’s amazing examples of cooperation between India and other countries, and strengthening that even more, and I think AI gives us even more possibilities to do that in the upcoming year. So maybe in Switzerland we’re going to be able to look at that and see, hey, in India we’ve just set the cornerstone of making this possible and making this world a better place.
I bet they will. You know, it was AI Action last year. Now it’s AI Impact. Hopefully it will be AI Collaboration or something of the sort next year. I love that image of everybody across borders, across geographies, across languages collaborating together. What a great discussion. I love how we were both philosophical and practical. I really appreciate all of you sharing your deep insight on these important AI governance issues. And I appreciate all of you being here in the audience to hear this discussion. Please join me in recognizing and thanking our terrific panelists. And please enjoy the rest of the summit. Thank you. Now we’ve got to get a picture. Are we going to take a picture?
We have to get a picture, yeah. We’re going to have to hang back behind there.
“Jason Oxman framed the AI industry’s dual challenge as managing risk while fostering global innovation and interoperability, urging governments to move beyond fragmented, nation‑centric rules toward coordinated AI governance that can support systems at scale.”
The Open Forum discussion highlights the need to address resource inequities, build global regulatory capacity, and coordinate multiple governance frameworks to avoid fragmentation while respecting national approaches, aligning with Oxman’s call for coordinated AI governance [S23].
“Panelists included Jay Chaudhry (CEO, Zscaler), Aparna Bawa (COO, Zoom) and David Zapolsky (Chief Global Affairs & Legal Officer, Amazon).”
The panel roster listed in the knowledge base confirms the participation of Jay Chaudhry, Aparna Bawa, and David Zapolsky in the AI governance discussion [S2].
“Jay Chaudhry referenced India’s five‑layer AI security model and argued that without a comparable security overlay the model could be abused, emphasizing the need for sovereignty‑respecting security controls.”
India’s layered approach to AI sovereignty, spanning software stacks, model development, orchestration, and applications, is documented as a strategic framework, supporting Chaudhry’s reference to a five-layer model and the need for security overlays [S79] and [S45].
“Aparna Bawa emphasized that cross‑border data flows are essential for Zoom’s global connectivity and that any restriction impedes citizens’ progress by throttling AI‑supporting infrastructure.”
Discussion on cross-border data flows stresses that data localization stifles businesses and that unrestricted flows are vital for services like Zoom, matching Bawa’s point [S82] and [S81].
“The COVID‑19 pandemic forced Zoom to shift from an enterprise‑only platform to a consumer‑facing service, leading to rapid deployment of default security controls such as waiting rooms and passcodes.”
Reports note that Zoom pivoted during the pandemic to a broader AI-first work platform and that security features like waiting rooms were added to protect users, providing context for Bawa’s statement [S84] and [S85].
“David Zapolsky cited Colorado’s early AI law as an example of premature, blanket regulation that creates costs and stalls adoption because ‘no one really knows how to apply it’.”
Colorado’s AI law, described as pioneering yet controversial, has faced criticism for its early implementation and unclear application, corroborating Zapolsky’s example [S89].
The panel shows strong convergence on four core themes: (1) the necessity of global coordination and shared principles to avoid fragmented AI governance; (2) the preference for flexible, risk‑based regulation tailored to specific use‑cases; (3) the imperative to embed security, trust and user‑centric controls into AI products from the outset; and (4) the importance of inclusive access and upskilling for underserved populations. These points cut across multiple domains—policy, technology, security and development—indicating a high level of consensus among industry leaders.
High consensus; the shared positions suggest that future AI governance initiatives are likely to prioritize international standards, risk‑based regulatory approaches, security‑by‑design, and inclusive deployment, providing a solid foundation for coordinated policy action.
The panel largely converged on the need for balanced, risk‑based AI governance that supports innovation and global interoperability. The main points of contention revolve around how much alignment is appropriate and the relative priority of security versus user experience. While all agree that over‑regulation can hinder progress, Jay stresses the dangers of excessive alignment and the necessity of layered security, whereas Aparna, David and Jarek advocate for common frameworks to keep cross‑border data and services flowing.
Moderate – disagreements are nuanced rather than outright oppositional. They reflect differing emphases (security vs usability, alignment vs over‑alignment) that could affect policy design, suggesting that future governance discussions will need to reconcile these perspectives to achieve both innovation and robust protection.
The discussion pivoted around the tension between global AI alignment and the need for flexibility. Early remarks by Jay and Aparna framed the problem of fragmented regulation versus innovation, which was sharpened by David’s concrete example of Colorado’s premature law. Subsequent comments introduced security (Jay), risk‑based principles (David), and user‑centric product design (Aparna, Jarek). Each of these insights redirected the conversation toward actionable frameworks—common high‑risk definitions, zero‑trust for AI agents, and inclusive access—while also warning of future threats that could trigger over‑regulation. Collectively, these pivotal comments moved the panel from abstract policy talk to concrete, multi‑dimensional solutions, culminating in a shared vision of international standards that balance sovereignty, security, and inclusive innovation.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.