Aligning AI Governance Across the Tech Stack ITI C-Suite Panel

20 Feb 2026 11:00h - 12:00h


Session at a glance

Summary, keypoints, and speakers overview

Summary

The panel opened by noting the challenge of managing AI risk while supporting global innovation and the need for governments to align their approaches [1-2]. Panelists agreed that fragmented national policies risk stifling cross-border AI services, so alignment is essential [12][15].


Jay Chaudhry warned that if every country imposed its own AI rules, multinational firms would face operational friction, yet excessive alignment could also hinder innovation [22-24]. He argued that too much compliance kills innovation and that a balanced, flexible approach is preferable [27-28].


Aparna Bawa emphasized that cross-border data flows underpin services like Zoom and AI, and restricting them would impede citizens’ progress [47-50]. She described the trade-off between protecting privacy/security and maintaining free data movement, calling for a basic, commonly understood framework [52-56]. Bawa also highlighted a partnership model where enterprises provide safeguards while users adopt responsible AI practices [106-108][121-128].


David Zapolsky echoed the importance of unrestricted flow of goods, information, and services for Amazon’s global operations, from e-commerce to satellite internet [58-61]. He cautioned that premature, blanket regulation creates uncertainty and costs, citing Colorado’s early AI law as an example of unclear implementation [64-68]. Zapolsky suggested focusing on high-risk uses, such as decisions affecting health or civil rights, and building common principles rather than a universal theory of AI regulation [66-68].


Jarek Kutylowski argued that DeepL’s global mission requires a transparent, harmonized governance layer that respects sovereignty yet enables consistent AI services worldwide [75-82]. He noted that as DeepL moves into agentic AI, the stakes rise and the company must embed trust and flexible controls to meet varied regulatory expectations [167-174][176-182].


All panelists concluded that developing international standards and inclusive, up-skilling initiatives will be key to unlocking AI’s benefits without over-regulation [364-368][390-393].


Keypoints


Major discussion points


Global alignment of AI governance is essential to avoid fragmentation and sustain innovation.


The moderator frames the need for coordinated policy across borders [1-2] and notes that AI “doesn’t stop at borders” [15-20]. Panelists echo this: Jay points out the chaos of 50-country rule-sets [22-28]; Aparna stresses that cross-border data flows are the lifeblood of services like Zoom and warns that heavy restrictions “impede their own citizens’ progress” [39-52]; David argues for “common principles” and a “high-risk” baseline rather than a “unified field theory” of regulation [58-68]; Jarek adds that a “common layer…with a right balance of protecting sovereignty” would benefit global users [75-82].


Finding the right balance between risk-based regulation and innovation is a recurring tension.


Jason warns that “acting too much…can stifle innovation” [30-34], while Jay observes that “when we start doing too much governance…we start killing innovations” [27-28]. David describes the danger of premature, blanket rules (e.g., Colorado’s early AI law) that create “costs…uncertainty and you inhibit innovation” [64-68]. Later, Jay stresses the need for “flexible policy that evolves” and cautions that “compliance doesn’t mean security” and that over-regulation can render controls obsolete [180-202]. David reinforces the point by differentiating risk profiles (shopping assistant vs. medical documentation) and urging regulators to “not…inhibit adoption of really useful ways” [281-286][288-295].


Security and trust are non-negotiable foundations, especially as AI agents become more powerful.


The moderator explicitly asks about the “trust and security conversation” [84-86]. Jay explains that AI can be “abused” through data-poisoning and other attacks, and argues for a security overlay across all five AI layers [87-95]. In the forward-looking segment he warns that “AI agents will be the weakest link” and could be hijacked, underscoring the need for identity, authorization, and robust zero-trust controls [340-358].


Enterprises and end-users share responsibility; product design must embed choice, education, and safeguards.


Aparna describes the partnership model: enterprises must provide “sufficient controls for the individual user” while users need basic AI hygiene (e.g., not feeding personal data into prompts) [102-130]. She later expands on how Zoom offers tiered controls, from enterprise admin toggles to consumer-level safety features, so that risk-based decisions can be made appropriately across diverse user contexts [226-272].


Upstream governance decisions (e.g., Amazon’s cloud services) shape downstream customer capabilities and must be built with security, data-ownership, and flexibility in mind.


David outlines Amazon’s “upstream” approach: a Bedrock platform that supplies over 100 models, keeps customer data private, embeds guardrails, and provides disclosures so enterprises can control outputs [137-160]. He also notes that any government-imposed barrier creates “friction” for Amazon’s globally interoperable services [58-63].


Overall purpose / goal of the discussion


The panel was convened to explore how governments worldwide can cooperate with industry to create a coherent, risk-aware AI governance framework that protects citizens, preserves security, and yet does not choke the rapid innovation needed for AI-driven global interoperability.


Tone of the conversation


The tone begins formally and forward-looking, emphasizing the strategic importance of alignment. As individual speakers share concrete experiences, it becomes more pragmatic and collaborative, highlighting real-world trade-offs and shared responsibilities. Toward the end, the mood turns hopeful and aspirational, focusing on inclusive growth, emerging international standards, and a vision of cross-border cooperation for the next summit. Throughout, the discussion remains constructive, balancing caution about over-regulation with optimism about coordinated standards.


Speakers

Jason Oxman – Moderator/Host; President & CEO, Information Technology Industry Council (ITI) [S7][S8]


Jay Chaudhry – CEO, Chairman, and Founder of Zscaler; security expert [S9][S11]


Aparna Bawa – Chief Operating Officer (COO) of Zoom [S4]


David Zapolsky – Chief Global Affairs and Legal Officer at Amazon [S2]


Jarek Kutylowski – CEO of DeepL (also referred to as Dr. Jarek Kutylowski) [S6][S5]


Additional speakers:


– None


Full session report

Comprehensive analysis and detailed insights

The panel opened with Jason Oxman framing the dual challenge for the AI industry: managing risk while fostering global innovation and interoperability, and urging governments to move beyond fragmented, nation-centric rules toward coordinated AI governance that can support systems at scale [1-2].


After a brief roll-call of the participants – Jay Chaudhry (CEO, Zscaler), Aparna Bawa (COO, Zoom), David Zapolsky (Chief Global Affairs & Legal Officer, Amazon) and Dr Jarek Kutylowski (CEO, DeepL) – the moderator asked each panelist why cross-jurisdictional alignment matters.


Jay Chaudhry warned that a multinational corporation operating in dozens of countries would be crippled if each market imposed its own AI regime, creating “a lot of issues” and “killing innovations” when governance becomes excessive [6-9]. He also referenced India’s five-layer AI model, noting that without a security overlay across those layers the model could be abused, underscoring the need to embed sovereignty-respecting security controls from the outset [10-12].


Aparna Bawa emphasized the importance of cross-border data flows for Zoom’s global connectivity and argued that any restriction “impedes their own citizens’ progress” by throttling the infrastructure that underpins AI services [13-16]. She added that the COVID-19 pandemic forced Zoom to shift from an enterprise-only platform to a consumer-facing service, prompting the rapid deployment of default security controls such as waiting rooms and passcodes to balance swift innovation with user safety [17-20].


David Zapolsky described how Amazon’s e-commerce, cloud, and satellite-Internet businesses rely on the free flow of goods, information, and open skies, and warned that government-imposed barriers generate friction and uncertainty. He illustrated this with Colorado’s early AI law, showing how premature, blanket regulation creates costs and stalls adoption because “no one really knows how to apply it” [21-24][28-31]. He also noted Amazon’s internal “launch-everywhere” mantra – the desire to roll out new AI features globally at once – which sometimes must be delayed due to regulatory uncertainty [25-27].


Jarek Kutylowski explained DeepL’s mission to enable multilingual communication through a “transparent, common layer” of governance that balances national sovereignty with shared norms, allowing the company to serve a truly global market while maintaining trust in AI outputs [32-35]. He added that growing up under Europe’s early AI regulation gave DeepL an “edge” in learning to work with regulatory requirements, informing its strategy for expansion into other markets [36-38].


The discussion then turned to security and trust as non-negotiable foundations. Jay Chaudhry argued that AI is “powerful but dangerous” and advocated a security overlay across all five layers of the AI stack to guard against data poisoning, rogue agents, and AI-enabled threats such as ransomware and nation-state misuse [39-45][46-48][49-52]. Aparna Bawa highlighted Zoom’s partnership model, noting that privacy, security, and user choice are embedded in the product, from enterprise-level toggles to consumer safeguards, and that users must also practice “basic AI hygiene,” for example by avoiding the inclusion of personal data in prompts [53-57].


David Zapolsky then described Amazon’s Bedrock platform, which offers over one hundred models while guaranteeing that “the data they use…stays their data.” The service includes built-in guardrails, content filtering, and transparent disclosures, shifting much of the downstream governance burden to customers in a secure, scalable environment [58-62].


Jarek Kutylowski discussed DeepL’s move into agentic AI, noting that the stakes have risen from simple email translation to high-impact tasks such as translating R&D documentation for drug approvals. He argued that trust in AI outcomes must be reinforced by transparent, adaptable governance and that providing customers with tools to manage risk themselves is a hallmark of a mature AI provider [63-68].


When the moderator asked how a flexible, risk-based approach might preserve both safety and progress, the panel converged on several points. The panelists agreed that over-regulation “kills innovation” and that governance must be evidence-based and use-case specific. David Zapolsky defined “high-risk” uses explicitly as “decisions that affect life, health or civil rights” and advocated a principle-based approach that first identifies such uses before tailoring safeguards [69-71][72-74]. Aparna Bawa echoed the need for a “basic level framework” that respects national sovereignty while providing clear, evidence-driven guidelines for developers [75-77].


Looking ahead one year, the panelists shared a common vision of inclusive, standards-driven AI. Jay Chaudhry called for up-skilling programs and configurable security controls that enable enterprises of all sizes to adopt AI safely [78-80]. Aparna Bawa stressed the importance of low-bandwidth access so that even a farmer in a Karnataka village can benefit from AI, linking inclusivity to market creation [81-84]. David Zapolsky highlighted the emerging international consensus around standards such as ISO 42001, which would provide “a common set of principles and a common set of technical standards” for global AI governance [85-88]. Jarek Kutylowski concluded that a global framework would facilitate seamless multilingual collaboration, the core of DeepL’s mission [89-91].


Collectively, the panel calls for a globally-aligned, risk-based AI governance framework that protects security, respects sovereignty, and enables inclusive innovation. [92-93]


Session transcript

Complete transcript of the session
Jason Oxman

The challenge of not just managing risk as an industry, but also doing so in a way that supports global innovation and interoperability. So today’s discussion, we’re very fortunate to have leaders from across the AI stack, if you will, who are here with us to discuss how governments can work in partnership with industry to align responsibilities, to reduce fragmentation, and to build trust in AI systems that are built for scale. We are very pleased to have with us some luminaries from across the tech ecosystem. Jay Chaudhry is the CEO of Zscaler. Aparna will be joining us in just a moment. David Zapolsky. I almost missed that. David Zapolsky, who made it, is the Chief Global Affairs and Legal Officer at Amazon.

And Dr. Jarek Kutylowski. How did I do there? Thank you, is the CEO of DeepL. So to set up the conversation, I wanted to ask each of our panelists to help us think through the AI governance conversation that’s taking place globally. So as we’ve seen here at the AI Impact Summit, there are efforts among global governments to align their approach, even though they may take different directions. Hi, Aparna. And as Aparna is now joining us, I will introduce Aparna Bawa, who is the chief operating officer of Zoom, which is not only a technology company, it is also a verb. And so thank you, Aparna, for being here with us today. So as we were getting ready to talk about AI governance conversations, it is absolutely the case that there is a need for governments around the world to align their approaches to AI governance, because, of course, technology doesn’t, by its very nature, want to stop at borders.

It wants to cross borders and unite people around the world. So I wanted to ask each of our esteemed panelists, and, Jay, I’ll start with you, for perhaps your philosophical perspective on how AI alignment can take place across governments. Why is it that alignment matters? And perhaps even share your perspective on what happens if that AI alignment breaks down and governments are going off in different directions and taking different approaches. Where do you see the biggest challenges around this idea of alignment of AI governance around the world? Jay, thank you.

Jay Chaudhry

Thank you. So we are a highly connected world. Imagine any large corporation that’s doing business in 50 countries. If each country has its own governance rules for using AI, and you’re using some systems locally, some systems globally, it’ll create a lot of issues. Some level of alignment is good, but over-alignment doesn’t help either. In fact, I have similar thoughts on governance too. Some level of governance is needed. When we start doing too much governance, too much compliance, we start killing innovations. So that’s personally my view.

Jason Oxman

No, it’s an important viewpoint, because there is this idea that governments need to act. They need to protect citizens. They need to ensure security. But acting too much, perhaps in advance, can stifle innovation. So, Aparna, I want to go to you with the same question. As we’re having this global AI governance conversation here at the AI Impact Summit, governments are going in different directions in many cases. This is the first time the conversation has taken place in the global south, so I think that’s a good thing for aligning governance approaches. So from where you sit, why is alignment across the AI governance ecosystem internationally so important, and what can happen when it doesn’t happen and goes wrong?

Aparna Bawa

I will say, just to start, as an Indian American and someone who has lived in India, and we talked about this this morning at a breakfast we were at, it is quite striking to me some of the haves and have-nots. Like even we were talking about this morning, for example, during COVID, how some countries were fighting for PPE and fighting for oxygen tanks. And, you know, we in California were stockpiling toilet paper. I mean, the contrast is so stark. And I remember during COVID thinking to myself, that doesn’t seem right. And so I do feel like countries should protect the rights of their citizens and should want to advance their economies. But it is a tradeoff.

And I think it’s very well put to say it’s a tradeoff. So, for example, Zoom: imagine you would not be able to connect with people globally if we did not have cross-border data flow. So when we’re talking about AI, you can talk about AI, but it’s no different at the data layer. We would not exist if we didn’t have cross-border data flows and free, unencumbered data flow. And when governments start putting more and more restrictions on them within their own countries, it impedes their own citizens’ progress. And so at some point, it becomes a tradeoff. Now, obviously, the requirements around privacy and security are table stakes. If you get on a Zoom meeting with someone, you want to know that the person on the other side is that person.

That is sort of table stakes. But I’m with Jay on this one. I think there’s a basic level framework that is necessary. To be honest, in the United States we live today with multiple state privacy frameworks. Is it great? No. Is it inefficient? Yes. There’s something in between, where you have a framework that is commonly understood, with a common set of norms and values. I also respect a right of sovereignty for a nation. So there has to be a balance that makes sense.

Jason Oxman

David, Amazon operates pretty much in every country on the planet, although I’m sure you can name a few that you’re not in yet. There’s a few, yeah, a small number. Can you share your view on how this AI governance conversation needs to have some, perhaps, unity to it?

David Zapolsky

Sure. And first of all, I’m going to try not to repeat Aparna’s view, because I basically agree with everything you just said. If you think about every one of Amazon’s business models: our stores, the way we’re able to help Indian small to medium-sized businesses export 20 billion to overseas markets, and we’re looking to take that to 80. If you look at the cloud, if you look at our entertainment business, if you look at the satellites that we’re launching to launch a global Internet service, every one of them depends on free flow of goods, free flow of information, open skies.

That’s just kind of the way we’ve designed the company, to be global and to have interoperable services. And so every time a government erects barriers to that, it creates friction. It creates potential problems. And I think the global trend towards more of that is concerning. With AI particularly, I think the danger of some of the regulation that we’ve seen around the world is that we all still don’t really know how it’s going to be used, where it’s going to be most effective, where it’s going to be dangerous. There’s a lot of theories about it. There’s a lot of fear, uncertainty, and doubt about that, a lot of science fiction. And I think the danger in regulation, before you really understand the technology or how it’s going to play out, is that you create costs.

You create uncertainty, and you inhibit innovation. You inhibit adoption. And that’s kind of what we’re seeing a couple of years into this large language model journey. There are parts of the world that were quick to regulate, and civil society was all over that: we’re going to regulate all these things, we’re going to come up with these theoretical constructs of high risk, low risk. And we don’t really know what that means in practice yet. And so what’s happening? Well, look at Colorado. Colorado was one of the first states out of the box with comprehensive AI regulation, which, by the way, isn’t bad in principle. But they don’t know how to apply it. No one really knows how to apply it. And I think you’re seeing some buyer’s regret: they put the implementation on hold; they want to figure out standards. I won’t even talk about the EU, but they’re pretty much in the same boat. They’re all looking for ways to not have to put the thing into practice, because they don’t really know how it’s going to play out. So I think what we need to do is step back and look for some common principles. What is a high-risk use?

What can we all agree are high-risk? Well, if you’re using a technology to make decisions that are going to affect the life, health, or civil rights of an individual, let’s talk about that. Are there laws that protect that already? Do we need to supplement them? Let’s work backwards from the harms we can see today and regulate there, versus trying to come up with the unified field theory of AI regulation, because that’s only going to slow us down.

Jason Oxman

Great. Jarek, we’ve been talking about unifying global governance approaches, making sure, one might say, that they all speak a common language. That’s what DeepL does. See what I did there? Your language AI platform is all about making sure everyone can communicate with each other regardless of the language they speak. From your perspective, you’re our European-headquartered representative here, but you do business around the world. What can you share with us about how AI governance conversations being unified across governments is important to DeepL?

Jarek Kutylowski

I truly believe that any successful technology needs to be inherently global. That holds both for the commercial models of the companies that we’re representing, but it also holds for the AI itself: just the access and the ability to reach the whole globe with what we are building. I think this creates the economies of scale on everything that we’re building. And when you are in AI, obviously you’re running very, very high R&D costs, and you have to be able to offset that with a huge customer base. So having a global market and being able to deploy to the whole world, and therefore also to fulfill the mission of our companies, whether it’s just enabling communication, maybe in the case of Zoom, or making sure that this communication can happen multilingually, as in the case of DeepL, that really depends on a framework that is transparent and on a framework that is maybe not too different in all of the parts of this world.

And therefore, having some common layer, having this right balance of protecting the sovereignty, and protecting maybe a slightly different approach and a slightly different mindset to certain topics like privacy, where we do have differences across the world, but doing that in a way that has a common understanding, that would be incredibly valuable. I think not only for the companies that we represent, but also really for our users and for our customers, who depend on the best possible solutions.

Jason Oxman

Jay, I want to come back to you because you are our resident security expert and sometimes doomsayer about what happens if we don’t include trust and security as part of the conversation. I’ve heard you remind members of the government of India, indeed, that although the five pillars are enormously valuable, if you don’t have security overlaying them, we’re all in trouble. Talk to us about how the trust and security conversation is still a vital component around all the excitement.

Jay Chaudhry

Yeah, I have said that AI is powerful, but AI is dangerous, because this technology can be abused. In India there’s a great focus on five layers, and the focus is about being sovereign, having everything that you can control. It starts with applications, then models underneath, and so on and so forth. While it’s good to have that sovereign stuff, imagine a bad guy can control all of that sovereign stuff sitting somewhere out there. Data poisoning can be done. All kinds of stuff can be done. So having a layer of security across all five layers becomes very important. So we should think about sovereignty not just in terms of this thing is sitting in my country, but also in terms of who can access it, who can do some of these things with it, which is often overlooked.

And also the adoption of AI is happening very fast. And it’s wonderful. And I’m not saying we should slow it down. I think we should embrace fast, but we should also start thinking about embracing cyber to make sure things are used securely, at the same pace.

Jason Oxman

And in order to make sure that security is part of the AI ecosystem, Aparna, I want to ask you about what we all have responsibility to be thinking about as users, what enterprises have a responsibility to be thinking about. You know, we’ve talked about governance from the policy perspective, but, of course, users and enterprises also have a responsibility around AI. And as the COO of Zoom, you look over both the public policy and business aspects of what you’re deploying. How does the conversation about what we all should be thinking about factor into product development and deployment conversations?

Aparna Bawa

It is a true partnership. And you know what? When Jay was talking, it resonated with me. When you work for a technology company, you’re not just working for a company: you want to develop technology and you want people to adopt it as fast as possible. You want them to be early adopters. It’s so exciting. In fact, in our company (you know, companies have lots of different functions), obviously our engineers, our developers, our product people are super early adopters. They’re first to take any sort of app that’s come out, whether it’s Cursor, etc., and use it in their day-to-day. And then there’s other people who have other day jobs. I mean, there’s finance people and the people people, the HR people. They have day jobs, and they’re learning AI at night, because they’re realizing that if I’m not on the AI bandwagon, I’m going to get left behind. And by the way, if you’re looking to develop apps, yes, you can focus on the tech applications, but the real secret that’s not getting a ton of attention, maybe a little bit of attention, is these non-technical roles that could be augmented with AI. So in that frame of mind, I think it’s a really important framework. When you work for that kind of technology company, it can be difficult to then start saying, but wait a minute, you need to slow down, because you need to make sure that your CI/CD work is still going, and it’s amplified because of the risks of AI: your security certifications, your red teaming, your privacy standards, all of that stuff is maintained.

I will tell you, the user plus the enterprise that is pushing out this technology, it’s a partnership. It is so important. The one thing that we learned during the pandemic: if you think about Zoom before the pandemic, it was an enterprise-focused company, a work-focused company. And basically, when the pandemic hit, we said, okay, all you consumers, we will just hand you a platform that we usually give to IT administrators. And what do IT administrators at our customers do? They decide whether to turn up the security and privacy controls and turn down usability, because it’s a tradeoff. It’s a definite tradeoff. They decide. We, in turn, just handed it to consumers, and you can’t do that.

Who decides? And we realized, okay, public schools, they don’t have IT administrators. They don’t know how to turn on waiting rooms. They don’t know how to, you know, hide the meeting invite. They don’t know how to do these kinds of things. You have an obligation as an enterprise to make sure that there are sufficient controls for the individual user, and it scales all the way up to the enterprise, and to maintain that level of flexibility. You have that obligation. But on the same side, I would say the user, to be smart, has to understand some basic levels. I’ll tell you an example. My kids use all the AI engines: ChatGPT, Claude. They use them all. And it is a conversation we have, to say: you don’t put all your information into your prompt, because if you put all your information in your prompt, it is going into that engine and it will train that engine.

On the flip side, we as an enterprise provider have made the statement, and we have made the policy decision, that we will not use our customer content to train AI. When I’m training my kids, I have to tell them, you can’t put your address into ChatGPT. You have to make sure that you’re safe in some way. So those are the kinds of things that you have to keep in mind. It’s a partnership between the user and the enterprise. And I think the enterprise obligation scales as you get down into the consumer use.

Jason Oxman

And I want to stay on this theme of training the user, if you will, whether they’re your children or a customer, because it is important for the tech industry to be mindful of the downstream. And, David, I want to come to you with this question. Amazon is, in a lot of ways, an upstream operator. You enable business and consumer customers on everything you do, from content to e-commerce to broadband in the future to your cloud customers. So how do you think about the upstream governance decisions that you’re making at Amazon and how they impact the downstream? How do you think about the downstream decisions or ways of operating that your customers are going to have to make as a result of those decisions you make at the Amazon level?

David Zapolsky

Well, we’re fortunate to have the scale to be able to serve enterprises in the cloud at the service layer. And so we have, you know, even before the current AI craze, a couple of decades of experience in thinking through what governance and security look like for our enterprise customers. And as we’ve moved into this, you know, newer age where there are AI services available, one of the best solutions that we could come up with is creating an environment within the cloud services that so many hundreds of thousands of enterprises already use, to give them access to models, not just our own. And we do our own models, and there’s upstream governance on those: you know, testing, making sure we correct for bias, the things that a responsible model builder will do.

But at this enterprise level and in the services (this is called Bedrock), we try to think through what customers are going to need. So we build in security. We build in the type of infrastructure that allows customers to scale up or down. We build in choice. Enterprises can choose from over 100 different models, open source and closed source, not just ours, but, you know, all of the leading models from all around the world. And so we try to create an environment, a platform, where enterprise customers can come to use this new technology: first of all, to get access to it without having to build their own servers and train their own models; and secondly, to do it in a way where they can rely on the security of the infrastructure.

The other thing that we will provide customers is that the data they use to employ those models, you know, stays their data. It doesn’t go to the model builders and it doesn’t go to us. So, you know, you can build that into the system. And then on top of that, given the way that enterprises are using this technology, we try to build as many tools as possible to put the control of how this technology is deployed into the hands of enterprises and users. And so, for instance, on the Bedrock platform, we provide guardrails that allow you, as an enterprise, to basically control what types of outputs the models are going to give you. Now, are they more toxic?

Are they less biased? Can you filter for certain types of content? We build those controls right into the interface so enterprises can have that control. We build disclosures into the types of services we offer, so that we provide some visibility and transparency: here’s how this thing is built, here’s what you should use it for, here’s what you probably shouldn’t use it for. And we provide those kinds of choices to consumers. So you have to think through the overall security in the system, in the environment, and the accessibility of this technology. As far as our approach goes, the cloud is probably the best place to do that. It’s certainly the easiest way to access the technology, and likely the safest.
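The guardrails described above (toxicity thresholds, blocked topics, content filters applied to model outputs before they reach users) can be sketched in miniature. This is a hypothetical illustration of the concept, not the actual Amazon Bedrock Guardrails API; every name and threshold here is invented:

```python
# Hypothetical guardrail sketch: an enterprise-configured policy that
# filters model outputs before they reach users. Names and thresholds
# are invented; this is not the Amazon Bedrock Guardrails API.

from dataclasses import dataclass, field

@dataclass
class GuardrailPolicy:
    blocked_topics: set = field(default_factory=set)  # topics to refuse outright
    max_toxicity: float = 0.5                         # 0.0 strict .. 1.0 permissive
    redact_terms: set = field(default_factory=set)    # strings masked in outputs

def apply_guardrails(output: str, toxicity_score: float,
                     detected_topics: set, policy: GuardrailPolicy) -> str:
    """Return the model output after applying the enterprise's policy."""
    if toxicity_score > policy.max_toxicity:
        return "[blocked: toxicity threshold exceeded]"
    if detected_topics & policy.blocked_topics:
        return "[blocked: disallowed topic]"
    for term in policy.redact_terms:
        output = output.replace(term, "[redacted]")
    return output

policy = GuardrailPolicy(blocked_topics={"legal advice"},
                         max_toxicity=0.3,
                         redact_terms={"Project Nova"})
```

With a policy like this, an output mentioning a protected term is returned with the term masked, while anything scoring above the toxicity threshold or touching a blocked topic is suppressed entirely; the point of the design is that the control sits with the enterprise, not the model builder.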

Jason Oxman

Jarek, DeepL’s business model has moved from where it started, as translation, into agentic AI, and you have agents on your platform that can execute tasks on behalf of your customers. I can imagine that raises very different governance policy decisions that you have to make on behalf of your customers when agents can act autonomously versus when you’re just translating, particularly because you’re a global business and they can act autonomously across borders. How are you thinking about the policies and procedures for governance that you have to put in place in an agentic AI world that are different than perhaps you did in a language translation world?

Jarek Kutylowski

I think generally, but also in the language space, the stakes are becoming higher and higher. AI is becoming more and more powerful. Even if you look at translation: a couple of years ago, DeepL would be translating your typical email to your customer. And that is important, of course. You want to look great in front of the customer. You want to be eloquent. You want to be able to connect with them, maybe really on a human level, in the language that this customer is speaking. And you’re enabling your business to become global very, very easily. But now what DeepL is translating is plane maintenance records. It’s R&D documentation for new drugs that actually influences how those drugs are developed and whether they’re approved by the FDA or not.

So these are highly critical use cases. And I think it has been mentioned that privacy is just the table stakes; it’s just the beginning. Creating a layer of trust in the outcomes of the AI, whether that’s translation or agentic AI, so that those decisions really follow what the enterprise is expecting of the AI, that is where the battle is right now. That is where the governance aspect coming from the political and governmental side obviously needs to be included. But there’s also the aspect of how our enterprise customers want to regulate the AI that is being deployed, and how flexible the products that we all provide can be towards the very different approaches that we’re seeing across the world, and maybe even across different types of enterprises.

Jason Oxman

Each of you mentioned the concept of risk management in your comments, and I want to come back to the balance that Jay alluded to earlier between managing risks and promoting innovation. Obviously there is a trade-off; it’s a sliding scale: the more you regulate risk, the less room there is for innovation. I want to ask each of our panelists, Jay, I’ll start with you, about where you’ve seen a flexible risk-based approach from government be the most effective, where that flexible approach still leaves room for innovation. Or, the flip side, if you want to give any examples: where you’ve seen it go wrong, where a more prescriptive approach to regulation has denied you the opportunity to bring products or services to market, or has generally been more of a challenge for industry, because a government didn’t get the balance right between managing risk and promoting innovation?

Jay Chaudhry

There are many facets of governance and risk. Take, for example, data privacy. Obviously, that’s one kind of factor. But hacker attacks, from a cyber point of view, are a different kind of factor. We look at it more in terms of two things. One, making sure your data is not lost. So the data becomes very important. There’s a consumer end of data, but the bigger issue on the data side is enterprises. And you don’t treat all data the same way in the practical business world. I’ll give you an example. When I worked with General Electric, the CISO, a very smart guy, Larry Virginia, would say: when I tried to secure everything, I secured nothing.

So then he would give an example. He says: as a CISO, I need to protect the IP, the intellectual property, of my products. But my washers and dryers are out there. I don’t spend time trying to protect their IP at all. You can buy them in a store and figure it out. But I’m dead serious about protecting the IP on my jet engine. That’s very important. Trying to just say all consumer data, all this data, must be treated the same just starts creating issues. That’s why I also like to say compliance doesn’t mean security. In fact, when you work on compliance, all of this works through government entities, pros and cons, and it takes a lot longer. And by the time it’s out there, the cyber and compliance needs have moved on.

So the stuff you put in place is, many times, already old. In fact, when Zscaler came out with our zero trust, cloud-based architecture, a lot of these regulators came in: wait a second, where is your firewall? What do you mean, firewall? We don’t use firewalls. We are anti-firewall. And they said, no, no, no, wait a second, how can the banks use it if it’s not a firewall? When we went through certification for the federal government in the U.S., the certifying body first came asking about firewalls. No. It took us three months to educate them. So that’s why I really don’t like over-regulation. There needs to be a way of asking: what’s the impact of this thing, on what kind of data? That’s the right approach. All data is not created equal. Trying to put the onus on securing all data gets hard; then classifying data gets hard. So these are not simple issues, and AI makes them very hard. We don’t even fully understand how AI does what it does. So I think a flexible policy that evolves is a better thing, while keeping track of the most important data. And then beyond data, hackers are a big problem too. We talked about agents. Today, a user is the weakest link; tomorrow, AI agents will be your weakest link, and they’ll be all over. They are maturing; they’ll come. Imagine an agent getting hacked or hijacked in your company, with access to all kinds of stuff.

So that’s where companies like Zscaler are focused: making sure our Zero Trust Exchange can be extended to deal with agents, starting with understanding their identity, authorization, all those things. Those things are very important in the way we look at it. Otherwise, business will shut down.
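The agent-focused zero trust controls described above (verify an agent’s identity first, then authorize each action explicitly, denying by default) can be sketched roughly as follows. This is an illustrative toy under invented identifiers, not Zscaler’s actual product API:

```python
# Illustrative toy of zero-trust checks for AI agents: verify identity,
# then authorize each requested scope explicitly, denying by default.
# All identifiers are hypothetical; this is not Zscaler's actual API.

KNOWN_AGENTS = {
    "agent-7f3": {"owner": "finance", "scopes": {"read:invoices"}},
}

def authorize(agent_id: str, token_valid: bool, scope: str) -> bool:
    """Allow an action only if the agent is registered, its credential
    verified, and the exact scope was explicitly granted."""
    agent = KNOWN_AGENTS.get(agent_id)
    if agent is None or not token_valid:
        return False                    # unknown or unauthenticated: deny
    return scope in agent["scopes"]     # deny-by-default on scope
```

The key design choice, matching the "agents as the weakest link" concern, is that an agent never inherits network-wide access: a hijacked agent can only exercise the scopes it was explicitly granted.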

Jason Oxman

So, Aparna, Zoom brings some amazing AI innovations to the platform that we’re all familiar with. It makes it a lot easier for us to do everything from transcribing meetings to pretending to be a cat when you’re in court. No, that’s not a – that’s a –

Aparna Bawa

I was going to say it can summarize your meeting. It can take notes for you. It can send action items to your teams. It can calendar those action item follow -ups. It can give them deadlines. All done.

Jason Oxman

There it is. But I can imagine you’ve had some challenges around the world in that balance between innovation. and risk management from governments. Can you either share a positive example of where that’s gone well in your mind, or if you want to, an example of where it hasn’t gone well, where consumers and businesses have been denied Zoom innovation because that balance isn’t struck? Or perhaps you can keep it at a higher level if you prefer.

Aparna Bawa

There’s a constant balance between innovation, with our product team wanting to innovate, innovate, innovate, and our governance team, security, privacy, et cetera, always thinking about that as well. So how do you strike that balance? I’ll start at the top level. It’s a sliding scale on many different fronts. But if you look at it like a layer cake, or even a data stack, the top level is customer choice. David was very appropriate when he said customer choice, but customer choice is different by the category of customer. If you are an enterprise and you have 200 people on an IT admin team or under the CIO, and you are buying Zoom and you have a giant security team and a giant compliance team, you’re going to be making choices for yourself.

I’m not going to tell HSBC what they’re going to do. They’re going to decide what they’re going to do. We deliver the platform, and we have toggles for them to decide what they want to deploy, what they don’t want to deploy, and who they want to deploy it to. We make it very easy. So we provide a lot of choice. The same platform services Fortune 1. The same platform also services my mother-in-law, who is on the free account, chatting with her friends, and won’t upgrade. I tell her, please upgrade. She gets off, waits five minutes, gets back on, and that’s how they do it. So for her, it’s very different.

So for her, you have to mandate a few things. You can’t give your meeting ID to everybody; it cannot be at the top of the UI. You have to have waiting rooms. If you’re in a school environment, you have to have mandatory passcodes. Those are some basic things. So that’s a sliding scale. Then take it one level deeper. The biggest thing I have learned from working at Zoom, and in all honesty I credit our founder for this, is that everything goes back to the user experience. And our customers are not monoliths.

They don’t want to take down all the technology. They want to use it in a safe and secure way. They don’t want to be surprised. So you have to think: I am an end user. It doesn’t matter that I sell to Zscaler, thank you very much. I need to worry about how Jay Chaudhry’s engineer feels when he gets on Zoom. That’s the user experience I’m going for. So if I’m a finance person on Jay Chaudhry’s team and I say I don’t really want my meeting to be automatically transcribed and then fed into an AI engine because I’m worried, or if I’m a lawyer worried about attorney-client privilege, well, I need to give them the option to opt out of that.

I need to be able to give them choice. And that’s how I think about it. Every risk-based decision starts with: you are a user, and not just one kind of user; you have multiple types of users. How do you make it easy, at the very lowest common denominator, for them to trust you? That’s really the answer that you work through.
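The tiered model walked through above, where enterprise admins set their own toggles, consumer and education accounts get mandated safe defaults, and individual users can still opt out of features like AI transcription, amounts to a simple settings-resolution step. A minimal sketch, with illustrative field names rather than Zoom’s actual configuration schema:

```python
# Hypothetical sketch of tiered settings resolution: platform mandates
# override admin choices for consumer/education accounts, and an
# individual user's opt-outs are applied last. Field names are
# illustrative, not Zoom's actual configuration schema.

MANDATED = {
    "consumer": {"waiting_room": True},
    "education": {"waiting_room": True, "passcode": True},
    # "enterprise" has no entry: admins decide everything themselves
}

def effective_settings(account_type: str, admin_choices: dict,
                       user_opt_outs: set) -> dict:
    """Resolve the settings a given user actually gets."""
    settings = dict(admin_choices)                   # start from admin toggles
    settings.update(MANDATED.get(account_type, {}))  # platform mandates win
    for feature in user_opt_outs:                    # user choice is applied last
        settings[feature] = False                    # e.g. opt out of AI transcription
    return settings
```

The ordering encodes the sliding scale: mandates protect the least sophisticated users, while individual opt-outs are honored last so no user is surprised by a feature they declined.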

Jason Oxman

That’s great. David, let’s go from different kinds of users to different kinds of products. You were the first on the panel to use the phrase risk-based approach, and nowhere is that more evident than in Amazon’s wide range of products and services for your customers. I can imagine it’s a very different internal conversation about governance and risk when determining how AI is going to recommend my next series or show on Amazon Prime. Not a lot of risk there. But other Amazon products could carry more risk. So on that sliding scale, and you also travel the world, quite literally, you’re doing it now, talking to governments about innovation versus risk management and the risk of getting that balance wrong.

How do you communicate that to governments and also make the internal product decisions that you need to around those issues?

David Zapolsky

Well, you sort of… kind of stole one of my talking points for these conversations, which is that it does matter how this technology is used, and where. It’s a different set of considerations when we think about what kind of protections or risks arise from an AI-assisted shopping assistant versus a tool we might make available to help doctors document how they’re treating patients and make it easier to prescribe medications. Those are two very different risk profiles. But if you start with a regulation that doesn’t differentiate between those, you’re going to inhibit innovation. You’re going to prevent adoption of really useful ways this technology can be used. And so that’s…

You know, that’s the pitch I make when I get to talk to people whose business it is to think about regulation. It is about risk. It’s about how the technology is used. And my point earlier was that we don’t really know yet how the technology is going to be used. When we see it, we can analyze it. And on that point generally, there are cases where technology companies have made a decision not to bring certain types of technology into, say, Europe because of regulatory uncertainty. Typically those get worked through. But I can’t tell you how many conversations I’ve had internally where folks have come up with an idea or a product, and our internal mantra is: we want to launch something everywhere, all at once.

We want to serve customers. If we have conviction that something’s good for customers, why just do it in one place? And sometimes the answer to that is: it’s too costly. It’s going to take more time. We can’t really figure out how this is going to fit within the regulatory scheme in a certain other jurisdiction, because they haven’t thought of it either. And so we’re going to wait. We’ll launch it in this place first and we’ll see if it works. And if it works, then we’ll think about the costs associated with scaling it globally. That’s a real-world issue that governments have to understand and deal with when they make decisions about how prescriptive their regulations are going to be, especially in the abstract.

And so those are the sorts of conversations I have. In the AI space, you can look at countries like Peru, or countries like Japan, that have proceeded cautiously. I think India has the same approach, and I’m very encouraged by the way India is approaching these issues. You can’t rule out regulation completely, and Amazon is an advocate of regulation that mandates that people developing and deploying this technology do it responsibly. But we have to understand what we’re regulating before we can really pull the trigger. And I think those types of examples are useful for people to keep in mind when they’re considering how to resolve that balance.

Jason Oxman

And the result of those conversations not going in the right direction, David, is that consumers or businesses might get denied the technology that their neighbors are enjoying. So, Jarek, I wanted to ask you, as the CEO of DeepL, in the process of expanding around the globe: are there examples you can think of where you’ve had to make a go/no-go decision on entering a particular country or launching a particular product, including your new agentic AI products, because of the regulatory environment or because of the way a country looks at it? Or the flip side, if you want to take the positive: are you attracted to a particular market because, as David said, it’s done the right thing, like Peru or Japan, or even what India is endeavoring to do, where they’re more likely to get DeepL service because of the decisions they’ve made, the approach they take to these AI governance decisions?

Jarek Kutylowski

Yeah, Jason, let me maybe first start with a principle. I’m a scientist at heart, so I’m really excited about bringing the best possible technology to each and every one of our customers and users. I think they all deserve it; I think they all should be equipped with it. But yes, there are some things we need to take into account. And actually, quite often those are not really location-based or country-based or regulation-based, but based on the use cases of those customers. AI can be incredibly powerful, but that power demonstrates itself in different ways in different applications. Going back to my example from earlier: the translation of an email has a different criticality grade than the translation of a patent application.

The execution of an agent in a particular environment versus in an enterprise environment has a different grade of complexity. But going back to the regulation aspect of it, I think we’re lucky as a company to have grown in Europe, in an environment which is maybe slightly earlier on regulation than other places in the world. And I think that gives us an edge: being able to understand how to work with this regulation and how to prepare, and then also being very, very early in other markets, like Colorado, which you mentioned earlier, and being able to handle that complexity for our customers, really. Because most often it is our customers who do not understand this space.

We do. And we have to go all the way to give them the possibility to figure this out for themselves, for their applications, for their use cases, and across a whole range of products. So in short, I think it can be managed, but it is really part of the excellence of a company to be able to manage it together with the customer.
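The point above, that governance should follow the criticality of the use case (an email translation versus a patent application or drug R&D document), amounts to a risk-tiering step before any AI call. A minimal hypothetical sketch, with invented tiers and categories:

```python
# Hypothetical sketch of grading AI use cases by criticality before
# deciding which controls apply. Tiers and categories are invented
# for illustration; real deployments would classify far more finely.

RISK_TIERS = {
    "low": {"email", "chat_message"},                       # everyday text
    "high": {"patent", "drug_rnd_doc", "plane_maintenance"},  # regulated/critical
}

def risk_tier(use_case: str) -> str:
    """Return the criticality tier for a use case; unknown cases
    default to the cautious 'high' tier (deny-by-default mindset)."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "high"
```

A gate like this is one way a vendor could let each enterprise attach its own review or opt-out requirements to high-criticality work while leaving low-stakes translation frictionless.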

Jason Oxman

The last question we have time for, which I want to address to each of you, is a forward-looking question. It used to be possible to have conversations about policy outcomes years in advance. I think the best we can hope for is for me to ask this question in advance of Switzerland hosting the next AI Impact Summit, or whatever they choose to call it, at this time next year. So my question to all of you on the panel is: a year from now, if we were to gather, and something had happened in the AI governance and regulatory space over the course of that year that you’d like to see happen, and you were looking back to India saying, I’m really glad that one thing happened, or that one thing changed, or this government or this international body did this thing over the course of the last year to really help unleash the innovation and power of AI in the secure way that we all want to see, what could that one thing be?

And it can be something that you’re focused on in your business as well over the course of the next year that government can help make a reality. So, Jay, I’ll start with you with this question. Then we’ll go down. I’ll go down the panel to bring our time to a close together. What’s the one thing you’re hoping if we’re talking a year from now has happened in global AI governance that’s going to make everything that we’re talking about and excited about a huge success?

Jay Chaudhry

The AI train is moving at a pretty fast pace, and it will keep on moving. Then you look at the things that could go wrong; that’s where governance comes in. I think there’s too much focus on data and less focus on the bad things that bad guys can do. Probably the biggest issue: today we hear all about these ransomware attacks. AI can make them so much easier, and bad guys are very motivated to make money. Today, when they attack, they have to find your attack surface, finding those IP addresses that are open to the Internet, those firewalls and VPNs and everything. With AI, you can discover it in 30 seconds. AI can write beautiful phishing emails,

as if they come from your CFO. Once they get in, AI agents can discover your whole network to figure out where everything is, and they can bring those things down. So I think we need to focus more on making sure we can protect against those risks. I talked about AI agents going rogue; those are one kind of risk. The second kind of risk governments need to worry about is nation states trying to use AI to gain advantage: gathering understanding, getting backdoors planted, and all that kind of stuff. I hope that if we’re sitting here next year, we’ve done enough in those areas that we don’t have some of these things blow up.

If they blow up, then government starts tightening things more and more, which sometimes doesn’t help. So proactive efforts to secure these areas will be very, very important.

Jason Oxman

All right. So protecting against these threats so that government doesn’t overreact and stifle innovation as a result. Aparna, what’s your one thing that you hope for for next year?

Aparna Bawa

You know, it really struck me at this Impact Summit, the focus on inclusivity, on skilling and upskilling people who wouldn’t otherwise have access to technology. And if you think about why we got started: we were founded because we wanted to provide free and open access to collaboration and have people from all walks of life connect. I think our founder had to travel to date his wife, you know, and didn’t get to see her more than once every number of weeks. So, you know, it’s something powerful. In a year, I would like to actually see that happen. Now, it’s not, I think, completely altruistic. I do firmly believe that even enterprises, who have more of a chance of adopting AI and gaining some of its efficiencies, need a market.

And the market is you, me, and all of us. The more people the better: in a village somewhere in a corner of India, we were just talking about Karnataka in another meeting, in a village that has low bandwidth, et cetera, if a farmer can adopt AI and it can change their lives across successive generations, that is good for business. And so for me, progress on that. I still think it’s very much all talk. But I love the idea. I love seeing a billboard where Prime Minister Modi is talking about inclusivity. That’s wonderful to hear. It’s good for business. Maybe it’s a bit altruistic, but I think it would be good for Zoom.

Jason Oxman

I love it. AI lifting up more broadly the world. David?

David Zapolsky

I’ll take a much higher-level approach. You know, I think there’s a sort of consensus around AI regulation that’s kind of yearning to get out; it’s gelling a little bit. We saw it in the Hiroshima agreements. We see it talked about in forums like this. There is an emerging consensus about how to approach this technology in a responsible way, and I totally, again, agree violently with Aparna in adding the inclusiveness piece, and commend the Prime Minister and India for making that a big part of the debate. But I would like to see countries around the world start to converge on this basic consensus.

It doesn’t mean that countries can’t have their own perspectives or sovereign outlooks, but there is a movement toward an international standard. And there’s a parallel with the technical standards: there’s ISO 42001, which everybody can abide by, giving people a common set of principles and a common set of technical standards they need to meet, so that we can all be more confident in the way we roll out this technology.

Jason Oxman

I love that. A move toward more global, industry-consensus-based standards to help govern all that we do, hopefully putting government regulators out of business if we can all do it right. Jarek, you get to bring us home with your aspiration for when we gather together next year in Switzerland.

Jarek Kutylowski

Yeah, I think there’s a place for those government regulators too. I would love, as you just explained, getting them all together and creating a framework. But I think there is a bigger role for AI in this world. There are so many amazing humans across all the continents of this world, and I would love to see, in a year, and once again this goes back a little bit to DeepL’s mission, for them to be able to collaborate as much as they can, no matter where they sit geographically, no matter which language they speak, no matter what they do in their job. Just giving that opportunity to each and every one, in every place of this world. There are amazing examples of cooperation between India and other countries, and of strengthening that even more, and I think AI gives us even more possibilities to do that in the upcoming year. So maybe in Switzerland we’ll be able to look back and see: hey, in India we set the cornerstone of making this possible and making this world a better place.

Jason Oxman

I bet they will. You know, it was AI Action last year; now it’s AI Impact. Hopefully it will be AI Collaboration, or something of the sort, next year. I love that image of everybody, across borders, geographies, and languages, collaborating together. What a great discussion. I love how we were both philosophical and practical. I really appreciate all of you sharing your deep insight on these important AI governance issues, and I appreciate all of you in the audience being here to hear this discussion. Please join me in recognizing and thanking our terrific panelists, and please enjoy the rest of the summit. Thank you. Now we’ve got to get a picture. Are we going to take a picture?

We have to get a picture, yeah. We’re going to have to hang back behind there.

Related Resources: Knowledge base sources related to the discussion topics (19)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“Jason Oxman framed the AI industry’s dual challenge as managing risk while fostering global innovation and interoperability, urging governments to move beyond fragmented, nation‑centric rules toward coordinated AI governance that can support systems at scale.”

The Open Forum discussion highlights the need to address resource inequities, build global regulatory capacity, and coordinate multiple governance frameworks to avoid fragmentation while respecting national approaches, aligning with Oxman’s call for coordinated AI governance [S23].

Confirmed (high)

“Panelists included Jay Chaudhry (CEO, Zscaler), Aparna Bawa (COO, Zoom) and David Zapolsky (Chief Global Affairs & Legal Officer, Amazon).”

The panel roster listed in the knowledge base confirms the participation of Jay Chaudhry, Aparna Bawa, and David Zapolsky in the AI governance discussion [S2].

Confirmed (medium)

“Jay Chaudhry referenced India’s five‑layer AI security model and argued that without a comparable security overlay the model could be abused, emphasizing the need for sovereignty‑respecting security controls.”

India’s layered approach to AI sovereignty, focusing on software stacks, model development, orchestration, and applications, is documented as a strategic framework, supporting Chaudhry’s reference to a five-layer model and the need for security overlays [S79] and [S45].

Confirmed (high)

“Aparna Bawa emphasized that cross‑border data flows are essential for Zoom’s global connectivity and that any restriction impedes citizens’ progress by throttling AI‑supporting infrastructure.”

Discussion on cross-border data flows stresses that data localization stifles businesses and that unrestricted flows are vital for services like Zoom, matching Bawa’s point [S82] and [S81].

Additional Context (medium)

“The COVID‑19 pandemic forced Zoom to shift from an enterprise‑only platform to a consumer‑facing service, leading to rapid deployment of default security controls such as waiting rooms and passcodes.”

Reports note that Zoom pivoted during the pandemic to a broader AI-first work platform and that security features like waiting rooms were added to protect users, providing context for Bawa’s statement [S84] and [S85].

Confirmed (medium)

“David Zapolsky cited Colorado’s early AI law as an example of premature, blanket regulation that creates costs and stalls adoption because ‘no one really knows how to apply it’.”

Colorado’s AI law, described as pioneering yet controversial, has faced criticism for its early implementation and unclear application, corroborating Zapolsky’s example [S89].

External Sources (90)
S1
https://dig.watch/event/india-ai-impact-summit-2026/aligning-ai-governance-across-the-tech-stack-iti-c-suite-panel — The challenge of not just managing risk as an industry, but also doing so in a way that supports global innovation and i…
S2
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — -David Zapolsky: Chief Global Affairs and Legal Officer at Amazon
S3
https://dig.watch/event/india-ai-impact-summit-2026/aligning-ai-governance-across-the-tech-stack-iti-c-suite-panel — And Dr. Jarek Kutylowski. How did I do there? Thank you, is the CEO of DeepL. So to set up the conversation, I wanted to…
S4
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — -Aparna Bawa: Chief Operating Officer (COO) of Zoom
S5
The Role of Government and Innovators in Citizen-Centric AI — – Arthur Mensch- Jarek Kutylowski
S6
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — – Jarek Kutylowski envisioned enhanced global collaboration that transcends language and geographic barriers And Dr. Ja…
S7
Driving U.S. Innovation in Artificial Intelligence — 7. Jason Oxman – President & CEO, Information Technology Industry Council 8. Julia Stoyanovich – Associate Professor, De…
S8
Agentic AI in Focus Opportunities Risks and Governance — -Jason Oxman- Moderator/Host, appears to be with ITI (Information Technology Industry Council)
S9
Cutting through Cyber Complexity / DAVOS 2025 — – Jay Chaudhry: CEO, Chairman, and Founder of Zscaler 3. Zero Trust Architecture: Jay Chaudhry, CEO of Zscaler, argued …
S10
https://dig.watch/event/india-ai-impact-summit-2026/aligning-ai-governance-across-the-tech-stack-iti-c-suite-panel — The challenge of not just managing risk as an industry, but also doing so in a way that supports global innovation and i…
S11
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — -Jay Chaudhry: CEO of Zscaler (security expert)
S12
Panel Discussion Data Sovereignty India AI Impact Summit — “So I think the takeaway is that as far as the infrastructure layer is concerned, as in sovereignty in compute is not on…
S13
Discussion Report: Sovereign AI in Defence and National Security — This comment shifts the discussion from narrow military applications to a comprehensive view of national resilience, inf…
S14
Cyberattacked: Who do you call? — Individual usersare often the weakest link in cybersecurity protection. More simple ‘cyber hygiene’ measures are needed …
S15
‘Operation Ghost Click’: Cyberzombies in the real world — The law is not enough. As always, the humans are the weakest link – almost every cyberattack has users’ ignorance and ne…
S16
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Moreover, AI is seen as a potential threat that can lead to new-age digital conflicts. The supporting evidence presents …
S17
Operationalizing data free flow with trust | IGF 2023 WS #197 — Lastly, Narayan from Nepal proposed the need for common regulations and collaborations to address privacy, security, and…
S18
Rule of Law for Data Governance | IGF 2023 Open Forum #50 — Additionally, the analysis underscores the importance of harmonizing and aligning laws to facilitate cross-border data f…
S19
Unlocking Trust and Safety to Preserve the Open Internet | IGF 2023 Open Forum #129 — Assessments are tailored based on risk, such as user volume, and the company’s product features
S20
Design Beyond Deception: A Manual for Design Practitioners | IGF 2023 Launch / Award Event #169 — Cristiana Santos:The first time in a decision we suggest that along with this DPA other enforcers name and publicize vio…
S21
https://dig.watch/event/india-ai-impact-summit-2026/building-the-next-wave-of-ai_-responsible-frameworks-standards — And I think the second point we should think about is I think the human state of mind works well in default versus optio…
S22
WS #100 Integrating the Global South in Global AI Governance — Roeske Martin: Thank you, Fadi, both for having us here and for your great partnership in this research that we’ve do…
S23
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S24
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Both speakers agree that stakeholder engagement is essential in complementing global governance. Galia specifically high…
S25
Secure Finance Risk-Based AI Policy for the Banking Sector — The discussion revealed several unresolved tensions, particularly the fundamental disagreement between risk-based and em…
S26
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — Legal and regulatory | Economic References to financial crises being born from misled or dangerous financial innovat…
S27
State of Play: AI Governance / DAVOS 2025 — The discussion highlighted tensions between regulation and innovation. While some advocated for light-touch governance t…
S28
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S29
Dynamic Coalition Collaborative Session — Matthias Hudobnik: Thanks a lot. Yeah, it’s a pleasure to be here at the Internet Governance Forum. I’m excited to contr…
S30
Regional experiences on the governance of emerging technologies NRI Collaborative Session — Chin Lin: Okay, I think to answer this question, we have to know that to set up a user-centric deployment is a collapsib…
S31
The Future of Digital Agriculture: Process for Progress — Technologies must be easily accessible, economically viable for the lowest-income groups, relevant to the context, and s…
S32
AI That Empowers Safety Growth and Social Inclusion in Action — And so looking at how we can put in place practical safeguards that ensure that AI works for people, not only in advance…
S33
The US National Cybersecurity Strategy — We must begin to shift liability onto those entities that fail to take reasonable precautions to secure their software w…
S34
Data Governance in the Context of Emerging Technologies: Promoting Human-Centred and Development-Oriented Societies   — In the context of this data-driven economy, the governance of this key asset should be tackled in a multilayered way. On…
S35
Ministerial Roundtable — Careful understanding of opportunities for cultural and language aspects is important, requiring upskilling and knowledg…
S36
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Diana Nyakundi:Yes, I think to some extent you are a bit too hopeful. because I would say we are currently making demand…
S37
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And this requires proactive and coherent policy responses. First, people must be at the center of AI strategy, as we hea…
S38
CSTD open consultation on WSIS+20 — The analysis also recognizes the digital divide and the importance of bridging it. Inclusivity in ICT access, particular…
S39
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — Thank you. So we are a highly connected world. Imagine any large corporation that’s doing business in 50 countries. If e…
S40
General notices • Algemene Kennisgewings — Large-scale initiatives require a high level of political and organisational leadership, supported by financ…
S41
(Plenary segment & Closing) Summit of the Future – General Assembly, 6th plenary meeting, 79th session — The level of disagreement among speakers is moderate. While there are differences in approach and emphasis, most speaker…
S42
Agentic AI in Focus Opportunities Risks and Governance — “These standards -setting organizations are now very, very deep into sort of developing these same standards on agentic….
S43
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Anastasiya Kozakova:Thank you very much. It’s a pleasure to be here. I represent the civil society organization. I work …
S44
WS #31 Cybersecurity in AI: balancing innovation and risks — Sergio Mayo Macias: Yes, thank you. Thank you, Gladys. Well, actually, the AI environment in Europe is known and has …
S45
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — This observation provides crucial historical context showing how trust requirements have fundamentally changed as AI mov…
S46
WEF Business Engagement Session: Safety in Innovation – Building Digital Trust and Resilience — Beyond safety by design, companies need governance from design embedded at every stage from ideation through deployment …
S47
Shaping the Future AI Strategies for Jobs and Economic Development — The emphasis on collaboration over displacement provides a framework for managing workforce transitions while capturing …
S48
From principles to practice: Governing advanced AI in action — Sasha Rubel: It’s not an afterthought. I love that. Safety is the foundation and not an afterthought. It’s again one of …
S49
Cognitive Vulnerabilities: Why Humans Fall for Cyber Attacks — Therefore, application designers should aim to strike a balance between ensuring the security of transactions and provid…
S50
Clear-Eyed about Crypto — Prioritizing end user experience and choice is fundamental and should not be overlooked. Users should have the freedom t…
S51
Better governance for fairer digital markets: unlocking the innovation potential and leveling the playing field (UNCTAD) — In conclusion, the analysis highlights different perspectives on the impact of regulation on the tech industry. While le…
S52
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S53
WS #172 Regulating AI and Emerging Risks for Children’s Rights — Global cooperation and dialogue is needed to build common frameworks
S54
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — The panelists stressed the need for harmonized global regulations to avoid fragmentation and ensure interoperability acr…
S55
WS #179 Privacy Preserving Interoperability and the Fediverse — Claybaugh contends that federated platforms must recognize and accommodate different user sophistication levels, from te…
S56
How IS3C is going to make the Internet more secure and safer | IGF 2023 — In conclusion, the analysis emphasizes the importance of a comprehensive security by design approach, collaborative effo…
S57
UNCTAD E-Commerce Week — In the G7 ICT Priorities: Technology, Innovation and the Global Economy session, the important role of ICT policy for the …
S58
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S59
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — Chaudhry warns that if each nation imposes its own AI rules, companies operating across borders will face fragmented com…
S60
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — The panelists stressed the need for harmonized global regulations to avoid fragmentation and ensure interoperability acr…
S61
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Both speakers agree that stakeholder engagement is essential in complementing global governance. Galia specifically high…
S62
Secure Finance Risk-Based AI Policy for the Banking Sector — The discussion revealed several unresolved tensions, particularly the fundamental disagreement between risk-based and em…
S63
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — Legal and regulatory | Economic References to financial crises being born from misled or dangerous financial innovat…
S64
State of Play: AI Governance / DAVOS 2025 — The discussion highlighted tensions between regulation and innovation. While some advocated for light-touch governance t…
S65
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Technology is moving at an incredibly fast pace, and this rapid advancement is seen in various sectors such as AI, semic…
S66
Agentic AI in Focus Opportunities Risks and Governance — “These standards -setting organizations are now very, very deep into sort of developing these same standards on agentic….
S67
Responsible AI for Children Safe Playful and Empowering Learning — “safety, privacy, these are absolutely foundational and non‑negotiable as we’ve seen on the LEGO education side and simi…
S68
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S69
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — But the trust in these systems have to be built over time, and they don’t come without some assurance being put in place…
S70
AI That Empowers Safety Growth and Social Inclusion in Action — And so looking at how we can put in place practical safeguards that ensure that AI works for people, not only in advance…
S71
Regional experiences on the governance of emerging technologies NRI Collaborative Session — Chin Lin: Okay, I think to answer this question, we have to know that to set up a user-centric deployment is a collapsib…
S72
Opening Ceremony — Innovation must be guided by responsibility, with safety and privacy designed into products from the start
S73
WS #179 Navigating Online Safety for Children and Youth — There is a need for both technical solutions (safety by design) and education/awareness initiatives
S74
The US National Cybersecurity Strategy — We must begin to shift liability onto those entities that fail to take reasonable precautions to secure their software w…
S75
Reviewing Global Governance Capacity Development and Identifying Opportunities for Collaboration — The global cloud computing market is accelerating. Companies are increasingly looking at cloud computing as a vi…
S76
Acknowledgements — Governance for cloud computing refers to the system by which the provision and use of cloud services are directed and co…
S77
How AI Drives Innovation and Economic Growth — The tone was notably optimistic yet pragmatic, described as representing “hope” rather than the “fear” that characterize…
S78
Technology Regulation and AI Governance Panel Discussion — Joel Kaplan emphasized the importance of maintaining regulatory environments that support AI development through access …
S79
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S80
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — He advocated for a layered approach to sovereignty, focusing on controlling critical chokepoints whilst accepting strate…
S81
African approaches to Cross-border Data Flows (GIZ) — Another area of concern was the impact of the e-commerce joint statement initiative on Africa’s data privacy efforts and…
S82
Open Forum #14 Data Without Borders? Navigating Policy Impacts in Africa — 4. Cross-Border Data Flows and Trade Paul Baker: Okay, thank you. Just quickly, I think that we have to be practical, …
S83
Advancing digital inclusion and human-rights:ROAM-X approach | IGF 2023 — Grace Githaiga:I think I want to be very brief. When we looked at the rights, and this is our first review, because we d…
S84
From video to AI: Zoom’s next chapter — Zoom, once synonymous with video conferencing during the pandemic, is pivoting to redefine itself as an ‘AI-first work pla…
S85
AI@UN: Navigating the tightrope between innovation and impartiality — The COVID-19 pandemic prompted a second shift—from physical to online meetings. While UN buildings ensure security and i…
S86
Software.gov — In conclusion, Doreen Bogdan-Martin emphasizes the importance of GovStack as an efficient and reusable tool for implemen…
S87
E-commerce in the WTO: the next arena of Internet policy discussions? — Regulatory frameworks on privacyare key to protecting personal information and enhancing trust in e-commerce, according …
S88
WS #278 Digital Solidarity & Rights-Based Capacity Building — Jennifer Bachus: Thanks to all of you. So, we’re going to go to an interactive discussion now in what’s a little uncon…
S89
Colorado’s AI law under review amid budget crisis — Colorado lawmakersface a dual challengeas they return to the State Capitol on 21 August for a special session: closing a…
S90
Keynotes — O’Flaherty acknowledges that the regulatory work is not finished and that current regulatory models will likely be insuf…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Jay Chaudhry
6 arguments · 142 words per minute · 1116 words · 469 seconds
Argument 1
Need for balanced alignment; over‑alignment stifles innovation
EXPLANATION
Jay argues that while some degree of alignment among governments is necessary, excessive alignment can hinder innovation. He stresses that too much governance and compliance can kill the pace of technological progress.
EVIDENCE
He notes that in a highly connected world, some level of alignment is good, but over-alignment does not help, and that excessive governance and compliance kill innovation [24-28].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Over-regulation is flagged as a barrier to innovation, and alignment is described as necessary but best kept moderate, as discussed in the panel summary [S2] and the Davos commentary on over-regulation [S9].
MAJOR DISCUSSION POINT
Alignment vs innovation
DISAGREED WITH
Aparna Bawa, David Zapolsky, Jarek Kutylowski
Argument 2
Compliance ≠ security; flexible, evolving policy needed; over‑regulation stalls progress
EXPLANATION
Jay distinguishes compliance from true security, asserting that meeting compliance requirements does not guarantee protection. He calls for flexible, evolving policies rather than rigid, prescriptive regulations that can delay or block innovation.
EVIDENCE
He explains that compliance does not equal security, that over-regulation creates outdated controls, and that a flexible policy that evolves with technology is needed, citing examples from his experience with Zscaler and federal certification processes [180-202].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for flexible, evolving policies rather than rigid compliance mandates is highlighted in the governance-innovation tension report [S2].
MAJOR DISCUSSION POINT
Compliance vs security
AGREED WITH
Aparna Bawa, David Zapolsky
Argument 3
AI can be weaponized; security overlay across all layers required; sovereignty alone insufficient
EXPLANATION
Jay warns that AI’s power can be abused, making security essential at every layer of the stack. He argues that sovereignty of data or models is not enough unless access and usage are also secured.
EVIDENCE
He describes scenarios such as data poisoning and malicious control of sovereign AI stacks, emphasizing the need for security across all five layers and noting that sovereignty must include who can access the system [87-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s potential as a weapon and the necessity for layered security are underscored in the AI-driven cyber-defense briefing [S16].
MAJOR DISCUSSION POINT
AI weaponization and layered security
AGREED WITH
Jason Oxman, David Zapolsky, Aparna Bawa, Jarek Kutylowski
DISAGREED WITH
Aparna Bawa
Argument 4
Users are the weakest link; need identity and authorization controls for AI agents
EXPLANATION
Jay points out that users, especially AI agents, can become the weakest security link if not properly managed. He stresses the importance of identity, authorization, and control mechanisms for AI agents to prevent hijacking.
EVIDENCE
He mentions that AI agents could be hacked or hijacked, gaining access to corporate resources, and that Zscaler is developing zero-trust controls to manage agent identity and authorization [209-212].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
User weakness in cybersecurity and the importance of identity controls are documented in the user-weakest-link analysis [S14] and the social-engineering case study [S15].
MAJOR DISCUSSION POINT
User/agent security risk
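The identity-and-authorization model Jay describes can be illustrated with a minimal zero-trust sketch. All names here are illustrative assumptions, not Zscaler's actual product API: the idea is simply that an agent is denied by default and may act only within scopes its owner explicitly granted.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """Verified identity for an AI agent (illustrative, not a real product schema)."""
    agent_id: str
    owner: str
    allowed_scopes: set = field(default_factory=set)

def authorize(agent: AgentIdentity, requested_scope: str) -> bool:
    """Zero-trust check: deny by default, grant only explicitly scoped access."""
    return requested_scope in agent.allowed_scopes

# Example: an agent scoped to read calendars cannot reach payroll data,
# so a hijacked agent's blast radius stays limited to its granted scopes.
agent = AgentIdentity("agent-42", "alice", {"calendar:read"})
print(authorize(agent, "calendar:read"))   # scope was granted
print(authorize(agent, "payroll:read"))    # scope was never granted
```

The deny-by-default posture is the point: even a compromised agent can only exercise the scopes it was issued, which is the "identity and authorization controls" argument in miniature.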
Argument 5
Governments should focus on AI‑enabled threats (ransomware, nation‑state misuse) to avoid over‑regulation
EXPLANATION
Jay urges governments to prioritize protecting against AI‑driven cyber threats rather than imposing blanket regulations that could stifle innovation. He highlights ransomware, AI‑generated phishing, and nation‑state exploitation as key risks.
EVIDENCE
He outlines how AI can accelerate ransomware attacks, generate convincing phishing emails, and be used by nation-states, arguing that proactive security focus will prevent reactionary over-regulation [340-360].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-enabled cyber threats such as ransomware and nation-state misuse are highlighted as priority risks in the AI-driven cyber-defense report [S16].
MAJOR DISCUSSION POINT
Prioritising AI‑driven cyber threats
Argument 6
Security must keep pace with rapid AI adoption; cyber safeguards should be embedded as AI scales.
EXPLANATION
Jay argues that while AI adoption should be fast, security measures need to be introduced simultaneously to prevent abuse, emphasizing a parallel track of cyber protection alongside AI rollout.
EVIDENCE
He states “I think we should embrace fast, but we should also start thinking about embracing cyber to make sure things are used securely at the same pace” [96-97].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel stresses that security must evolve alongside fast AI adoption to avoid stifling progress [S2].
MAJOR DISCUSSION POINT
Synchronizing security with AI speed
Aparna Bawa
6 arguments · 180 words per minute · 1935 words · 643 seconds
Argument 1
Cross‑border data flows essential; fragmented rules impede progress; call for common framework
EXPLANATION
Aparna stresses that global services like Zoom rely on unrestricted cross‑border data flows, and that fragmented national regulations hinder both business and citizen progress. She calls for a basic, shared framework that balances sovereignty with free data movement.
EVIDENCE
She cites Zoom’s dependence on cross-border data for global connectivity and argues that increasing national restrictions impede citizens’ progress, while also acknowledging privacy and security as table-stakes [47-51].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The critical role of unrestricted cross-border data flows for global services like Zoom is emphasized in the summit remarks [S1] and calls for harmonised regulations are made in the IGF data-flow discussions [S17].
MAJOR DISCUSSION POINT
Data flow and regulatory fragmentation
AGREED WITH
Jason Oxman, David Zapolsky, Jarek Kutylowski
Argument 2
Enterprises must embed security, privacy, and user controls; partnership between provider and user essential
EXPLANATION
Aparna describes a partnership model where both the enterprise and the end‑user share responsibility for secure AI use. She emphasizes embedding security, privacy, and clear user controls into products from the start.
EVIDENCE
She notes that security certifications, privacy standards, and red-team testing must be maintained, and that the enterprise-user partnership is vital for safe AI deployment [102-108] and further elaborates on obligations to provide sufficient controls for all user types [119-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of an enterprise-user partnership for secure AI deployment is noted in the governance panel summary [S2] and reinforced by the trust-and-safety assessment framework [S19].
MAJOR DISCUSSION POINT
Enterprise‑user security partnership
AGREED WITH
Jay Chaudhry, Jason Oxman, David Zapolsky, Jarek Kutylowski
Argument 3
Tiered controls and user choice enable risk decisions tailored to user type; preserve user experience
EXPLANATION
Aparna explains that Zoom offers configurable security and privacy settings so that different user groups—enterprises, schools, individual consumers—can choose the level of protection that fits their needs without sacrificing usability.
EVIDENCE
She describes how Zoom provides toggles for security features, differentiates between enterprise and consumer accounts, and ensures mandatory controls (e.g., waiting rooms, passcodes) for higher-risk environments while keeping the experience smooth for casual users [230-270].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Risk-based, configurable security controls that respect user experience are discussed in the trust-and-safety assessment report [S19].
MAJOR DISCUSSION POINT
Granular user‑centric risk controls
AGREED WITH
David Zapolsky, Jay Chaudhry
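The tiered-controls model Aparna describes can be sketched as a defaults table plus an override rule. The tier names and toggles below are illustrative assumptions, not Zoom's actual configuration schema: higher-risk tiers get mandatory protections, while consumer users keep the freedom to adjust.

```python
# Hypothetical default-settings table; tiers and toggles are illustrative.
TIER_DEFAULTS = {
    "enterprise": {"waiting_room": True, "passcode": True, "user_can_disable": False},
    "education":  {"waiting_room": True, "passcode": True, "user_can_disable": False},
    "consumer":   {"waiting_room": True, "passcode": False, "user_can_disable": True},
}

def effective_settings(tier: str, overrides: dict) -> dict:
    """Apply user overrides only where the tier permits them."""
    base = dict(TIER_DEFAULTS[tier])
    if base.pop("user_can_disable"):
        base.update(overrides)
    return base

# A consumer may turn off the waiting room; an enterprise user may not.
print(effective_settings("consumer", {"waiting_room": False}))
print(effective_settings("enterprise", {"waiting_room": False}))
```

The usability argument lives in the override rule: casual users get choice, while mandatory controls in higher-risk environments cannot be toggled away.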
Argument 4
Users need education; enterprises must not use customer data for training; provide opt‑out mechanisms
EXPLANATION
Aparna highlights the need to educate users—especially younger ones—about safe AI interactions and asserts that Zoom will not use customer content to train its models, offering opt‑out options where appropriate.
EVIDENCE
She recounts teaching her children not to share personal information with AI engines and notes Zoom’s policy of not using customer content for model training, stressing the importance of user awareness and opt-out capabilities [124-128].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
User education as a defence against cyber threats and the need for opt-out mechanisms are highlighted in the user-weakest-link study [S14] and the social-engineering case analysis [S15].
MAJOR DISCUSSION POINT
User education and data privacy
Argument 5
Aim for AI access in low‑bandwidth regions; inclusive upskilling benefits business and society
EXPLANATION
Aparna envisions AI tools reaching underserved, low‑bandwidth areas, arguing that upskilling and inclusive access create both social benefits and new market opportunities for enterprises.
EVIDENCE
She references the summit’s focus on inclusivity, mentions villages in Karnataka with limited bandwidth, and argues that enabling farmers with AI can generate multi-generational benefits while also expanding Zoom’s market [364-374].
MAJOR DISCUSSION POINT
Inclusive AI deployment
AGREED WITH
David Zapolsky
Argument 6
Zoom’s product development prioritizes user experience, using configurable controls to balance security, privacy and usability.
EXPLANATION
Aparna explains that Zoom designs features around how users actually work, offering toggles and defaults that let enterprises and individual users choose appropriate security levels without sacrificing the overall experience.
EVIDENCE
She says “everything goes back to the user experience… they don’t want to take down all the technology… they want to do it in a safe and secure way” [252-257] and describes the platform’s granular toggles for waiting rooms, passcodes, and other controls that adapt to different user types [230-270].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The trade-off between regulation and user experience in Zoom’s design is described in the panel commentary on user-centric product development [S1].
MAJOR DISCUSSION POINT
User‑centric product design versus compliance‑first approaches
David Zapolsky
6 arguments · 169 words per minute · 1827 words · 645 seconds
Argument 1
Free flow of goods and information critical; over‑regulation creates friction; propose common high‑risk principles
EXPLANATION
David argues that Amazon’s global operations depend on the free movement of goods, data, and services, and that government barriers create friction. He suggests developing shared high‑risk principles rather than detailed, premature regulations.
EVIDENCE
He describes Amazon’s reliance on free flow of goods, information, and open skies across its stores, cloud, entertainment, and satellite services, and warns that each new barrier adds friction; he then calls for common high-risk principles based on real harms rather than speculative rules [60-63] and [67-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of free-flow of data and goods for global operations and the call for high-risk principles are echoed in the cross-border data-flow discussion [S1] and the alignment-innovation tension report [S2].
MAJOR DISCUSSION POINT
Global trade and AI risk principles
AGREED WITH
Jason Oxman, Aparna Bawa, Jarek Kutylowski
Argument 2
Regulation must be use‑case specific; one‑size‑fits‑all harms innovation; internal product decisions weigh risk vs rollout
EXPLANATION
David stresses that AI regulations need to differentiate between use‑cases, as the risk profile of a shopping assistant differs from a medical documentation tool. He explains how Amazon balances product rollout decisions with regulatory uncertainty.
EVIDENCE
He explains that AI applications have varied risk profiles, and that blanket regulation would inhibit innovation; he also details internal discussions about launching products globally versus waiting for regulatory clarity, citing examples like AI-assisted shopping versus clinical documentation [281-306].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A risk-based, use-case-specific regulatory approach is advocated to avoid stifling innovation in the governance-innovation balance summary [S2].
MAJOR DISCUSSION POINT
Use‑case‑driven regulation
AGREED WITH
Jay Chaudhry, Jason Oxman, Jarek Kutylowski
Argument 3
Build security, guardrails, and data ownership into cloud services (Bedrock); give enterprises direct control
EXPLANATION
David outlines how Amazon’s Bedrock platform embeds security, model‑level guardrails, and ensures that customer data remains owned by the customer, giving enterprises tools to control AI outputs and usage.
EVIDENCE
He describes Bedrock’s security architecture, the ability for customers to select from over 100 models, the guarantee that data stays with the customer, and built-in guardrails for toxicity, bias, and content filtering, along with disclosures for transparency [140-159].
MAJOR DISCUSSION POINT
Secure, customer‑controlled AI cloud services
AGREED WITH
Jay Chaudhry, Jason Oxman, Aparna Bawa, Jarek Kutylowski
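The guardrail idea David outlines can be illustrated with a simple post-generation filter. This is not the actual Amazon Bedrock API — the categories, terms, and function names are hypothetical — but it shows the pattern: the enterprise configures which categories to enforce, and outputs matching them are blocked before reaching the user.

```python
# Illustrative output guardrail; categories and terms are placeholders.
BLOCKLISTS = {
    "toxicity": {"slur_example"},          # placeholder term for illustration
    "pii":      {"ssn:", "credit card"},   # placeholder PII markers
}

def apply_guardrails(text: str, enabled: list) -> tuple:
    """Return (allowed, text_or_refusal) after checking enabled categories."""
    lowered = text.lower()
    for category in enabled:
        if any(term in lowered for term in BLOCKLISTS[category]):
            return False, f"[blocked by {category} guardrail]"
    return True, text

# The enterprise, not the model vendor, decides which categories apply.
ok, out = apply_guardrails("Your credit card ends in 1234", ["pii"])
print(ok, out)
```

Real guardrail systems classify with models rather than term lists, but the control-plane shape is the same: filtering is a configurable layer the customer owns, which is the "direct control" point in the argument above.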
Argument 4
Provide tools, disclosures, and controls so enterprises can self‑govern AI use
EXPLANATION
David emphasizes that Amazon equips enterprises with practical tools—guardrails, disclosures, and configurable controls—so they can manage AI responsibly without waiting for external regulation.
EVIDENCE
He notes that Bedrock includes guardrails, disclosure statements, and interfaces that let enterprises filter outputs, manage bias, and maintain visibility into model behavior, thereby enabling self-governance [153-159].
MAJOR DISCUSSION POINT
Enterprise self‑governance tools
AGREED WITH
Aparna Bawa, Jay Chaudhry
Argument 5
Converge on international consensus and standards (e.g., ISO 42001) to harmonize regulation
EXPLANATION
David calls for a global consensus on AI regulation, suggesting that an international standard such as ISO 42001 could provide common principles and technical requirements, allowing countries to retain sovereignty while aligning on core safeguards.
EVIDENCE
He references emerging consensus in forums such as the G7 Hiroshima AI Process and proposes an ISO standard that would give everyone a common set of principles and technical standards [390-392].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for common international regulatory frameworks and standards to reduce fragmentation are made in the IGF discussion on harmonising data governance [S17] and [S18].
MAJOR DISCUSSION POINT
International AI standards
Argument 6
Regulation should focus on high‑risk AI applications that affect life, health, or civil rights rather than blanket rules.
EXPLANATION
David proposes that policymakers target AI uses with the greatest potential harm—those influencing fundamental rights—and align regulation with existing protections, avoiding over‑broad mandates that could stifle innovation.
EVIDENCE
He says “if you’re using a technology to make decisions that’s going to affect the life, health, or civil rights of an individual… are there laws that protect that already? do we need to supplement them?” [67-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Targeting high-impact AI threats rather than blanket regulation aligns with the AI-driven cyber-defense briefing that stresses focusing on the most dangerous uses [S16].
MAJOR DISCUSSION POINT
Targeted regulation of high‑risk AI
Jarek Kutylowski
6 arguments · 159 words per minute · 1076 words · 403 seconds
Argument 1
Global market requires transparent, similar frameworks; balance sovereignty with shared norms
EXPLANATION
Jarek argues that for AI‑driven companies operating worldwide, a transparent and relatively uniform regulatory framework is essential, while still respecting national sovereignty.
EVIDENCE
He states that successful technology needs a transparent framework that is not too different across regions, and that a balance between sovereignty and common norms is valuable [79-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for transparent, comparable regulatory frameworks while respecting sovereignty is highlighted in the alignment-innovation panel [S2] and the IGF consensus on common regulations [S17].
MAJOR DISCUSSION POINT
Transparent global AI framework
AGREED WITH
Jason Oxman, Aparna Bawa, David Zapolsky
DISAGREED WITH
Jay Chaudhry, Aparna Bawa, David Zapolsky
Argument 2
Different use‑cases have varying risk grades; governance must adapt to application context
EXPLANATION
Jarek points out that the criticality of AI outcomes varies widely—from casual email translation to patent‑level documentation—so governance must be calibrated to the specific application’s risk level.
EVIDENCE
He contrasts low-risk email translation with high-risk patent translation and agent execution in enterprise settings, emphasizing the need for differentiated governance based on use-case [323-325].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A risk-graded governance model that varies by use-case is discussed in the balanced governance report [S2].
MAJOR DISCUSSION POINT
Risk‑graded governance
AGREED WITH
Jay Chaudhry, David Zapolsky, Jason Oxman
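Jarek's risk-graded model can be sketched as a mapping from use-case to review tier. The use-case labels, tiers, and controls below are illustrative assumptions drawn from his examples (casual email translation vs. patent translation or agent execution), not DeepL's actual governance scheme.

```python
# Minimal sketch of risk-graded governance; categories are illustrative.
RISK_TIERS = {
    "casual_email_translation": "low",
    "patent_translation":       "high",
    "agent_executes_action":    "high",
}

CONTROLS = {
    "low":  ["automated QA"],
    "high": ["human review", "audit log", "rollback plan"],
}

def required_controls(use_case: str) -> list:
    """Unknown use-cases default to the strictest tier."""
    return CONTROLS[RISK_TIERS.get(use_case, "high")]

print(required_controls("casual_email_translation"))
print(required_controls("patent_translation"))
```

Defaulting unknown use-cases to the strictest tier mirrors the calibration argument: governance effort scales with the criticality of the outcome rather than applying one blanket rule.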
Argument 3
Trust in AI outcomes critical; governance must ensure reliable, safe behavior for high‑impact tasks
EXPLANATION
Jarek stresses that for high‑impact AI uses, users must trust the outcomes, requiring governance that guarantees reliability, safety, and alignment with enterprise expectations.
EVIDENCE
He notes that trust in AI results is essential, especially for critical tasks, and that governance must provide a common understanding of high-risk uses, linking back to earlier points about transparency and shared norms [321-326].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ensuring trustworthy AI outcomes through risk-based governance is a focus of the trust-and-safety assessment framework [S19].
MAJOR DISCUSSION POINT
Ensuring trustworthy AI
AGREED WITH
Jay Chaudhry, Jason Oxman, David Zapolsky, Aparna Bawa
Argument 4
Companies must help customers navigate regulations and select appropriate AI usage
EXPLANATION
Jarek says that DeepL’s role includes guiding customers through complex regulatory landscapes, helping them choose suitable AI applications, and managing the associated risks.
EVIDENCE
He explains that DeepL often assists customers who lack regulatory expertise, providing them with the ability to determine appropriate use-cases and manage compliance across diverse markets [327-329].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The private sector’s role in guiding customers through complex regulatory landscapes is emphasized in the IGF private-sector collaboration briefing [S22].
MAJOR DISCUSSION POINT
Customer guidance on AI regulation
Argument 5
Promote worldwide collaboration enabling multilingual communication; regulatory framework to support global cooperation
EXPLANATION
Jarek envisions AI facilitating global collaboration by breaking language barriers, and calls for regulatory frameworks that enable such cross‑border cooperation while respecting local contexts.
EVIDENCE
He describes DeepL’s mission to let anyone collaborate regardless of language or geography, and expresses hope that future regulatory frameworks will support this vision, citing examples of cooperation between India and other countries [316-320].
MAJOR DISCUSSION POINT
Global multilingual collaboration
Argument 6
Early exposure to EU AI regulation gives DeepL a competitive advantage in handling compliance globally.
EXPLANATION
Jarek notes that being headquartered in Europe, where regulatory frameworks arrived earlier, allows DeepL to develop expertise and processes that can be leveraged when entering other markets, turning regulatory pressure into a strategic benefit.
EVIDENCE
He remarks “we’re lucky as a company to have grown in Europe in kind of an environment which is maybe like slightly earlier on regulation than other places… gives us an edge to be able to understand how to work with this regulation” [326-327].
MAJOR DISCUSSION POINT
Leveraging early regulation for competitive advantage
Jason Oxman
6 arguments · 158 words per minute · 2190 words · 829 seconds
Argument 1
Risk management must be balanced with innovation and interoperability.
EXPLANATION
Oxman stresses that while managing AI‑related risks is essential, governments and industry must do so in a way that does not choke global innovation or the ability of systems to interoperate across borders.
EVIDENCE
He opens the session by saying “The challenge of not just managing risk as an industry, but also doing so in a way that supports global innovation and interoperability” [1] and later notes that “They need to protect citizens. They need to ensure security. But acting too much, perhaps in advance, can stifle innovation” [30-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balancing risk management with innovation while preserving interoperability is a central theme in the governance-innovation tension summary [S2].
MAJOR DISCUSSION POINT
Balancing risk and innovation
Argument 2
Governments need coordinated alignment to avoid fragmentation and to build trust.
EXPLANATION
He argues that AI technologies naturally cross borders, so fragmented national rules create inefficiencies; coordinated global approaches reduce fragmentation and foster trust among stakeholders.
EVIDENCE
Oxman states “there is a need for governments around the world to align their approaches to AI governance, because, of course, technology doesn’t, by its very nature, want to stop at borders” [15] and earlier frames the panel’s purpose as helping “governments… to reduce fragmentation, and to build trust in AI systems” [2].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for coordinated global AI governance to reduce fragmentation and build trust is highlighted in the alignment-innovation panel [S2] and the IGF call for common regulations [S17].
MAJOR DISCUSSION POINT
Global AI governance alignment
AGREED WITH
Aparna Bawa, David Zapolsky, Jarek Kutylowski
Argument 3
Trust and security must be embedded as core components of any AI rollout.
EXPLANATION
Oxman highlights that without a strong security overlay, the excitement around AI can lead to vulnerable deployments, making trust a non‑negotiable element of policy and product design.
EVIDENCE
He asks the panel “Talk to us about how the trust and security conversation is still a vital component around all the excitement” [84-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Embedding trust and security into AI deployments is a key recommendation in the trust-and-safety assessment report [S19].
MAJOR DISCUSSION POINT
Importance of security and trust
Argument 4
Upstream governance decisions by platform providers shape downstream user behavior and must be considered.
EXPLANATION
He points out that the policies Amazon adopts at the platform level affect how downstream enterprises and consumers can use AI, urging a holistic view of governance that includes upstream impacts.
EVIDENCE
Oxman asks David “how do you think about the upstream governance decisions that you’re making at Amazon and how they impact the downstream?” [131-136].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The impact of upstream platform governance on downstream usage is discussed in the zero-trust architecture commentary, which stresses upstream policy effects [S9].
MAJOR DISCUSSION POINT
Impact of upstream governance on downstream stakeholders
Argument 5
Agentic AI introduces new governance challenges beyond traditional translation services.
EXPLANATION
He notes that moving from simple translation to autonomous AI agents raises distinct policy questions, especially when those agents operate globally and make decisions without human oversight.
EVIDENCE
Oxman asks Jarek “How are you thinking about the policies and procedures for governance that you have to put in place in an agentic AI world that are different than perhaps you did in a language translation world?” [163-166].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emergence of AI-driven autonomous agents as new security challenges is highlighted in the AI-driven cyber-defense briefing [S16].
MAJOR DISCUSSION POINT
Governance of autonomous AI agents
Argument 6
Flexible, risk‑based regulation is essential; overly prescriptive rules can block innovation.
EXPLANATION
He solicits examples of where a flexible, risk‑based approach helped and where a rigid regulatory stance prevented product launches, indicating his belief that adaptability in regulation is key to fostering AI progress.
EVIDENCE
Oxman says “how you’ve seen a flexible risk-based approach from government be the most effective… where a more prescriptive approach… denied you the opportunity to bring products or services to market” [179-180].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A flexible, risk-based regulatory approach is advocated as more effective than prescriptive rules in the governance-innovation panel summary [S2].
MAJOR DISCUSSION POINT
Need for flexible risk‑based regulation
Agreements
Agreement Points
Global coordination and common frameworks are needed to avoid fragmentation and build trust across AI governance.
Speakers: Jason Oxman, Aparna Bawa, David Zapolsky, Jarek Kutylowski
Governments need coordinated alignment to avoid fragmentation and to build trust.
Cross‑border data flows essential; fragmented rules impede progress; call for common framework
Free flow of goods and information critical; over‑regulation creates friction; propose common high‑risk principles
Global market requires transparent, similar frameworks; balance sovereignty with shared norms
All four panelists stress that AI technologies cross borders, so governments should align policies, maintain free data flows, and adopt shared high-risk principles or transparent frameworks to reduce fragmentation and foster trust [15-17][30-34][47-51][60-63][67-68][79-82].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus aligns with calls for harmonised global AI regulations to prevent fragmentation, as highlighted in WS #145 on reviving trust and WS #172 on children’s rights [S53], and reflects the need for cross-jurisdictional alignment discussed by the ITI C-Suite panel [S39] and the high-level consensus on AI governance [S52].
Regulation should be flexible and risk‑based rather than prescriptive; one‑size‑fits‑all rules hinder innovation.
Speakers: Jay Chaudhry, David Zapolsky, Jason Oxman, Jarek Kutylowski
Compliance ≠ security; flexible, evolving policy needed; over‑regulation stalls progress
Regulation must be use‑case specific; one‑size‑fits‑all harms innovation; internal product decisions weigh risk vs rollout
Flexible, risk‑based regulation is essential; overly prescriptive rules can block innovation
Different use‑cases have varying risk grades; governance must adapt to application context
Jay, David, Jason and Jarek all argue that AI rules need to adapt to specific use-cases and evolve with technology; rigid, blanket regulations would slow or block innovation [180-202][281-306][179-180][323-325].
POLICY CONTEXT (KNOWLEDGE BASE)
The preference for risk-based, flexible regulation mirrors critiques of overly prescriptive regimes such as the EU AI Act and supports principle-based frameworks that enable innovation while managing risk [S44], and echoes UNCTAD’s analysis on proportional regulation for SMEs [S51].
Security, trust and user protection must be embedded in AI systems from the start.
Speakers: Jay Chaudhry, Jason Oxman, David Zapolsky, Aparna Bawa, Jarek Kutylowski
AI can be weaponized; security overlay across all layers required; sovereignty alone insufficient
Trust and security must be embedded as core components of any AI rollout
Build security, guardrails, and data ownership into cloud services (Bedrock); give enterprises direct control
Enterprises must embed security, privacy, and user controls; partnership between provider and user essential
Trust in AI outcomes critical; governance must ensure reliable, safe behavior for high‑impact tasks
All five speakers highlight that AI deployments need strong security and trust mechanisms (layered safeguards, built-in guardrails, and clear user-provider partnerships) to prevent abuse and maintain confidence [87-95][84-86][140-159][102-108][321-326].
POLICY CONTEXT (KNOWLEDGE BASE)
Embedding security and trust from inception is a core tenet of security-by-design, echoed in discussions on agentic AI where security underpins trust [S42], safety-by-design in AI governance [S48], and the broader push for safety to be built into systems rather than added later [S45], [S46].
Providing tiered controls and giving customers choice enables risk‑appropriate use while preserving user experience.
Speakers: Aparna Bawa, David Zapolsky, Jay Chaudhry
Tiered controls and user choice enable risk decisions tailored to user type; preserve user experience
Provide tools, disclosures, and controls so enterprises can self‑govern AI use
Compliance ≠ security; flexible, evolving policy needed; over‑regulation stalls progress
Aparna describes Zoom’s configurable security toggles, David outlines Bedrock’s guardrails and disclosures, and Jay stresses that compliance alone is insufficient; together they advocate user-centric, choice-driven risk management [230-270][153-159][180-202].
POLICY CONTEXT (KNOWLEDGE BASE)
Tiered controls and user choice are advocated to balance risk management with usability, as seen in recommendations for user-centric design that preserve experience while ensuring security [S50], and in access-management guidance that stresses both protection and usability [S49]; UNCTAD also stresses user control and choice as a fairness principle [S51].
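The tiered-controls pattern the panelists describe can be sketched as a settings model: an administrator picks a default profile, and individual toggles can still be overridden, preserving user choice. The profile names, toggle names, and `effective_settings` helper are hypothetical illustrations, not any vendor's actual configuration surface:

```python
# Hypothetical sketch of tiered, user-configurable AI controls.
# Each profile sets conservative defaults; per-user overrides are
# allowed only for toggles the profile actually exposes.
DEFAULT_PROFILES = {
    "consumer":   {"ai_summaries": True,  "data_for_training": False,
                   "recording_transcripts": True},
    "enterprise": {"ai_summaries": True,  "data_for_training": False,
                   "recording_transcripts": False},
    "regulated":  {"ai_summaries": False, "data_for_training": False,
                   "recording_transcripts": False},
}

def effective_settings(profile, overrides=None):
    """Start from the profile defaults, then apply per-user overrides.

    Unknown toggle names are ignored, so overrides cannot silently
    enable features the profile never defined.
    """
    settings = dict(DEFAULT_PROFILES[profile])
    for key, value in (overrides or {}).items():
        if key in settings:
            settings[key] = value
    return settings
```

Under this layering, a regulated-industry tenant gets the strictest defaults while a consumer keeps a richer experience, which matches the panel's point that choice-driven controls let different user groups manage risk without a one-size-fits-all product.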
Inclusive AI access and upskilling for underserved communities are essential and also create market opportunities.
Speakers: Aparna Bawa, David Zapolsky
Aim for AI access in low‑bandwidth regions; inclusive upskilling benefits business and society
“I totally, again, agree violently with Aparna in adding the inclusiveness piece…”
Both Aparna and David highlight the importance of bringing AI tools to low-bandwidth, rural areas and upskilling users, noting that such inclusivity benefits both society and business models [364-374][389-390].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of inclusive AI and skills development is reflected in the Ministerial Roundtable on cultural and language aspects [S35], the AI Impact Summit’s call for lifelong learning and social protection [S37], and WSIS+20’s emphasis on bridging the digital divide for equitable development [S38].
Similar Viewpoints
Both argue that compliance checks do not guarantee security and that AI regulation must be adaptable to specific use‑cases to avoid stifling innovation [180-202][281-306].
Speakers: Jay Chaudhry, David Zapolsky
Compliance ≠ security; flexible, evolving policy needed; over‑regulation stalls progress
Regulation must be use‑case specific; one‑size‑fits‑all harms innovation; internal product decisions weigh risk vs rollout
Both emphasize the need for a common, transparent regulatory framework that respects sovereignty while enabling seamless cross‑border data and service flows [47-51][79-82].
Speakers: Aparna Bawa, Jarek Kutylowski
Cross‑border data flows essential; fragmented rules impede progress; call for common framework
Global market requires transparent, similar frameworks; balance sovereignty with shared norms
Both recognize that decisions made at the platform (upstream) level directly affect how downstream enterprises and users can safely adopt AI services [131-136][137-160].
Speakers: Jason Oxman, David Zapolsky
Upstream governance decisions by platform providers shape downstream user behavior and must be considered
Build security, guardrails, and data ownership into cloud services (Bedrock); give enterprises direct control
Unexpected Consensus
Security must be built into user experience and education, despite differing primary focuses.
Speakers: Jay Chaudhry, Aparna Bawa
Users are the weakest link; need identity and authorization controls for AI agents
Enterprises must embed security, privacy, and user controls; partnership between provider and user essential
Users need education; enterprises must not use customer data for training; provide opt‑out mechanisms
Jay, a security-focused executive, and Aparna, a product-experience leader, both stress that security cannot be an afterthought; it must be integrated into the user interface, user education, and the partnership model, an alignment that is not obvious given their different domains [87-95][102-108][124-130].
POLICY CONTEXT (KNOWLEDGE BASE)
Integrating security into user experience and education aligns with IGF 2023’s comprehensive security-by-design approach that includes user empowerment and awareness [S56], and with literature on balancing security controls with user-friendly design [S49].
Overall Assessment

The panel shows strong convergence on four core themes: (1) the necessity of global coordination and shared principles to avoid fragmented AI governance; (2) the preference for flexible, risk‑based regulation tailored to specific use‑cases; (3) the imperative to embed security, trust and user‑centric controls into AI products from the outset; and (4) the importance of inclusive access and upskilling for underserved populations. These points cut across multiple domains—policy, technology, security and development—indicating a high level of consensus among industry leaders.

High consensus; the shared positions suggest that future AI governance initiatives are likely to prioritize international standards, risk‑based regulatory approaches, security‑by‑design, and inclusive deployment, providing a solid foundation for coordinated policy action.

Differences
Different Viewpoints
Extent of government alignment versus over‑alignment
Speakers: Jay Chaudhry, Aparna Bawa, David Zapolsky, Jarek Kutylowski
Need for balanced alignment; over‑alignment stifles innovation
Cross‑border data flows essential; fragmented rules impede progress; call for common framework
Free flow of goods and information critical; over‑regulation creates friction; propose common high‑risk principles
Global market requires transparent, similar frameworks; balance sovereignty with shared norms
Jay warns that too much alignment or governance can kill innovation and that over-alignment is unhelpful [24-28]. In contrast, Aparna stresses the need for a basic shared framework to keep cross-border data flows open, David argues that free flow of goods and data is essential and calls for common high-risk principles, while Jarek emphasizes a transparent, comparable regulatory layer that respects sovereignty [47-51][58-63][67-68][79-82].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between necessary coordination and the risk of over-alignment is discussed in the ITI C-Suite panel on cross-jurisdictional AI governance [S39] and in WS #145’s debate on harmonised regulations versus national sovereignty [S54].
Security emphasis versus user‑experience and choice
Speakers: Jay Chaudhry, Aparna Bawa
AI can be weaponized; security overlay across all layers required; sovereignty alone insufficient
Zoom’s product development prioritizes user experience, using configurable controls to balance security, privacy and usability
Jay stresses that AI systems need a security overlay at every layer to prevent abuse, arguing that security must keep pace with rapid AI adoption [87-95][96-97]. Aparna, while acknowledging security, argues that Zoom’s design centers on user experience, offering tiered controls and choice so different user groups can maintain usability while managing risk [252-257][230-270].
POLICY CONTEXT (KNOWLEDGE BASE)
This trade-off is highlighted in analyses of cognitive vulnerabilities that stress balancing strong security with a seamless user experience [S49] and in discussions on user-experience design that must accommodate varying user sophistication [S55].
Unexpected Differences
Perception of AI agents as a security threat versus focus on trust in AI outcomes
Speakers: Jay Chaudhry, Jarek Kutylowski
Users are the weakest link; need identity and authorization controls for AI agents
Trust in AI outcomes critical; governance must ensure reliable, safe behavior for high‑impact tasks
Jay highlights that AI agents can be hijacked and become the weakest security link, calling for zero-trust identity and authorization controls for agents [209-212]. Jarek, while discussing trust, focuses on the reliability of AI results for high-impact uses and does not address agent-level security, indicating a differing risk perception [321-326].
POLICY CONTEXT (KNOWLEDGE BASE)
The debate mirrors viewpoints that security of AI agents is a foundational layer for building trust [S42] and broader conversations on moving safety from an afterthought to an integral part of AI outcomes [S48].
Overall Assessment

The panel largely converged on the need for balanced, risk‑based AI governance that supports innovation and global interoperability. The main points of contention revolve around how much alignment is appropriate and the relative priority of security versus user experience. While all agree that over‑regulation can hinder progress, Jay stresses the dangers of excessive alignment and the necessity of layered security, whereas Aparna, David and Jarek advocate for common frameworks to keep cross‑border data and services flowing.

Moderate – disagreements are nuanced rather than outright oppositional. They reflect differing emphases (security vs usability, alignment vs over‑alignment) that could affect policy design, suggesting that future governance discussions will need to reconcile these perspectives to achieve both innovation and robust protection.

Partial Agreements
All three agree that regulation should be flexible and risk‑based, tailored to specific AI applications, rather than blanket rules. Jay calls for evolving policies and warns that compliance alone does not equal security [180-202]. David stresses use‑case‑specific regulation and the danger of one‑size‑fits‑all approaches [281-306]. Jarek highlights that AI tasks have different risk grades and governance must be calibrated accordingly [323-325].
Speakers: Jay Chaudhry, David Zapolsky, Jarek Kutylowski
Need for flexible, evolving policy rather than rigid, prescriptive regulation
Regulation must be use‑case specific; one‑size‑fits‑all harms innovation
Different use‑cases have varying risk grades; governance must adapt to application context
Both emphasize that unrestricted cross‑border flows of data and services are vital for global digital platforms and that fragmented national rules create friction. Aparna points to Zoom’s reliance on global data flows [47-51], while David describes Amazon’s dependence on free flow of goods, data and services across borders [58-63].
Speakers: Aparna Bawa, David Zapolsky
Cross‑border data flows essential; fragmented rules impede progress; call for common framework
Free flow of goods and information critical; over‑regulation creates friction; propose common high‑risk principles
Takeaways
Key takeaways
Global AI governance needs a balanced alignment: some common principles are essential, but over‑alignment or overly prescriptive rules can stifle innovation.
Cross‑border data flows and the free movement of goods and information are critical for AI services; fragmented national rules create friction.
A risk‑based, use‑case‑specific regulatory approach is more effective than a one‑size‑fits‑all model.
Security and trust must be embedded across all layers of AI systems; sovereignty alone is insufficient without safeguards against malicious use.
Enterprises and end‑users share responsibility: providers must build privacy, security, and opt‑out mechanisms, while users need education on safe AI usage.
Future progress hinges on inclusive access, upskilling in low‑bandwidth regions, and the development of international standards (e.g., ISO 42001) to harmonise governance.
Resolutions and action items
Propose the creation of a common, high‑risk principle framework that can be adopted internationally (suggested by David Zapolsky).
Encourage governments to focus regulatory efforts on AI‑enabled threats (ransomware, nation‑state misuse) rather than broad pre‑emptive bans (suggested by Jay Chaudhry).
Implement tiered controls and user‑choice mechanisms within products to allow different risk tolerances (highlighted by Aparna Bawa).
Embed security guardrails, data‑ownership guarantees, and model‑output controls into cloud AI services (e.g., Amazon Bedrock) (outlined by David Zapolsky).
Promote global upskilling and inclusive AI access, especially in low‑bandwidth regions (advocated by Aparna Bawa).
Work toward an international consensus and standards such as ISO 42001 to provide a common technical and governance baseline (suggested by David Zapolsky).
Unresolved issues
Specific mechanisms for achieving global regulatory alignment and how to operationalise the proposed high‑risk principle framework.
How to reconcile differing privacy and data‑protection regimes while maintaining a common governance layer.
Concrete processes for ensuring AI agents are not hijacked or used maliciously across diverse jurisdictions.
Details on how governments can support inclusive AI adoption without imposing burdensome compliance on innovators.
The timeline and governance structure for developing and adopting international standards like ISO 42001.
Suggested compromises
Adopt a flexible, evolving policy that sets a basic common layer of norms while allowing sovereign variations for privacy and other local concerns.
Use a risk‑based approach that distinguishes high‑risk applications (e.g., decisions affecting health, civil rights) from low‑risk ones, applying stricter controls only where needed.
Provide configurable product features (security toggles, privacy settings, opt‑out options) so enterprises and individual users can tailor risk levels to their context.
Balance the need for security overlays with innovation speed by integrating security into the development pipeline rather than adding it as a later compliance hurdle.
Thought Provoking Comments
“If each country has its own governance rules for AI, a large corporation operating in 50 countries will face a lot of issues. Some alignment is good, but over‑alignment kills innovation.”
He succinctly framed the core tension between fragmented regulation and the need for enough flexibility to keep innovation alive, setting the stage for the entire governance debate.
His point prompted the panel to explore the balance between necessary oversight and stifling compliance, leading directly to Aparna’s discussion of cross‑border data flows and David’s warning about premature regulation.
Speaker: Jay Chaudhry
“Zoom would not exist without cross‑border data flows. When governments add more restrictions, they impede their own citizens’ progress. There needs to be a basic, commonly understood framework, but also respect for national sovereignty.”
She linked AI governance to a concrete, everyday technology (Zoom) and highlighted the trade‑off between security/privacy and the economic benefits of data fluidity, grounding the abstract debate in real‑world impact.
Her remarks expanded the conversation from high‑level policy to practical product implications, prompting David to echo the importance of free flow of goods and information for Amazon’s global services.
Speaker: Aparna Bawa
“Regulation before we understand how AI will be used creates costs, uncertainty, and inhibits adoption. Example: Colorado’s comprehensive AI law is well‑intentioned but nobody knows how to apply it yet.”
He introduced a concrete case study showing the pitfalls of premature, overly prescriptive regulation, reinforcing the need for a risk‑based, evidence‑driven approach.
This example shifted the discussion toward concrete policy failures, encouraging Jay to discuss the dangers of blanket compliance and prompting Jarek to stress the need for transparent, common frameworks.
Speaker: David Zapolsky
“Security must overlay every layer of AI – from the sovereign infrastructure down to the models. AI agents could become the weakest link if we ignore who can access and control them.”
He added a security dimension to the governance conversation, emphasizing that sovereignty isn’t just geographic but also about access control, and warned of emerging threats like rogue AI agents.
This prompted Aparna to talk about the partnership between users and enterprises in managing risk, and led the panel to consider future‑focused threats beyond data privacy.
Speaker: Jay Chaudhry
“We need a common set of principles: define ‘high‑risk’ uses (e.g., decisions affecting life, health, civil rights) and regulate those, rather than trying to create a unified theory of AI regulation.”
He offered a pragmatic, principle‑based roadmap for global alignment, moving the dialogue from abstract alignment to actionable criteria.
His suggestion steered the conversation toward concrete policy levers, influencing Jarek’s call for transparent, globally consistent frameworks and setting up the later discussion on risk‑based product decisions.
Speaker: David Zapolsky
“AI agents can be hijacked and cause massive damage; we must extend zero‑trust architectures to cover identity, authorization, and control of AI agents.”
He highlighted a novel, technical risk that many regulators may overlook, expanding the scope of governance to include operational security of autonomous agents.
This deepened the technical layer of the debate, leading Aparna to discuss how Zoom embeds user‑level controls and prompting Jarek to note the higher stakes of agentic AI in critical domains like drug development.
Speaker: Jay Chaudhry
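Extending zero-trust to AI agents, as Jay urges, means every agent action is checked against a verified identity and an explicit, deny-by-default policy. The sketch below is a minimal illustration under that assumption; the `AgentIdentity` type, scope names, and `authorize` function are hypothetical, not a description of any existing product:

```python
# Minimal zero-trust sketch for AI agents: an action succeeds only if
# it appears in an explicit policy AND the agent holds the matching
# scope. Everything else is denied by default.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    scopes: frozenset  # capabilities granted at registration time

# Explicit allow-list: action -> scope required to perform it.
POLICY = {
    "read_calendar": "calendar.read",
    "send_email": "email.send",
    "transfer_funds": "finance.write",
}

def authorize(agent: AgentIdentity, action: str) -> bool:
    """Deny by default: unknown actions and missing scopes both fail."""
    required = POLICY.get(action)
    return required is not None and required in agent.scopes

# An agent registered with a single narrow capability.
mail_agent = AgentIdentity("mail-bot-01", frozenset({"email.send"}))
```

The deny-by-default stance is the key design choice: a hijacked agent that attempts an action outside its registered scopes (or any action the policy never named) is blocked, which is how zero-trust limits the blast radius Jay warns about.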
“The biggest issue will be AI‑enabled ransomware and nation‑state use of AI. If we don’t address these proactively, governments will over‑react with tighter rules that could stifle innovation.”
He projected a future threat landscape, linking security failures to potential regulatory backlash, thereby connecting short‑term technical safeguards with long‑term policy outcomes.
This forward‑looking warning reframed the conversation toward preventive security measures as a way to preserve regulatory flexibility, influencing the final aspirations of other panelists.
Speaker: Jay Chaudhry (closing forward‑looking question)
“Inclusivity and upskilling are essential – AI should reach villages with low bandwidth so farmers can benefit. If governments champion this, it creates markets for enterprises and lifts societies.”
She shifted the focus from corporate risk to societal impact, emphasizing that governance should enable broad access, not just protect elite users.
Her comment broadened the narrative to social equity, prompting David to speak about global consensus and ISO‑style standards that can support inclusive deployment.
Speaker: Aparna Bawa (forward‑looking question)
“We need an emerging international consensus, similar to technical standards like ISO 42001, that gives a common set of principles while allowing sovereign nuances.”
He proposed a concrete mechanism—international standards—to reconcile global alignment with national sovereignty, offering a tangible path forward.
This crystallized the earlier abstract calls for alignment into a specific solution, influencing Jarek’s optimism about a unified framework and wrapping up the discussion with a clear actionable vision.
Speaker: David Zapolsky (forward‑looking question)
“DeepL’s mission is to let anyone collaborate regardless of language or geography; governance should enable that, not hinder it. We must give customers tools to manage risk themselves.”
He tied the company’s core purpose to the governance debate, emphasizing user empowerment and the need for flexible, customer‑centric controls in a global context.
His perspective reinforced the theme of user choice introduced by Aparna and highlighted the practical side of implementing governance, rounding out the discussion with a focus on product design.
Speaker: Jarek Kutylowski
Overall Assessment

The discussion pivoted around the tension between global AI alignment and the need for flexibility. Early remarks by Jay and Aparna framed the problem of fragmented regulation versus innovation, which was sharpened by David’s concrete example of Colorado’s premature law. Subsequent comments introduced security (Jay), risk‑based principles (David), and user‑centric product design (Aparna, Jarek). Each of these insights redirected the conversation toward actionable frameworks—common high‑risk definitions, zero‑trust for AI agents, and inclusive access—while also warning of future threats that could trigger over‑regulation. Collectively, these pivotal comments moved the panel from abstract policy talk to concrete, multi‑dimensional solutions, culminating in a shared vision of international standards that balance sovereignty, security, and inclusive innovation.

Follow-up Questions
How can governments define and agree on common principles to identify ‘high‑risk’ AI uses across jurisdictions?
David highlighted the need to work backwards from observable harms to define high‑risk AI, noting current uncertainty hampers regulation and innovation.
Speaker: David Zapolsky
What flexible, risk‑based regulatory frameworks can keep pace with rapid AI development without stifling innovation?
Jay argued that over‑regulation slows progress and called for policies that evolve with AI’s unknown behaviors.
Speaker: Jay Chaudhry
How can a security overlay be effectively implemented across the five layers of AI sovereignty to prevent data poisoning and rogue agents?
Jay warned that sovereign AI stacks can be vulnerable if not protected at each layer, emphasizing the need for comprehensive security measures.
Speaker: Jay Chaudhry
What methods can be used to assess and mitigate the impact of AI agents being hijacked or misused within enterprise environments?
He noted the emerging risk of AI agents becoming the weakest link, requiring new security controls and identity/authorization mechanisms.
Speaker: Jay Chaudhry
How can cross‑border data flows be balanced with privacy and security requirements to support global AI innovation?
Aparna stressed that unrestricted data movement is essential for AI, but must be reconciled with national privacy and security norms.
Speaker: Aparna Bawa
What strategies can ensure inclusive AI access and upskilling for users in low‑bandwidth or underserved regions (e.g., rural India)?
Aparna highlighted the importance of democratizing AI benefits and the challenge of delivering technology where connectivity is limited.
Speaker: Aparna Bawa
What should an international AI standards framework (e.g., ISO 42001) encompass to provide a common set of principles and technical specifications?
David advocated for a converging global consensus on standards to guide responsible AI deployment and reduce regulatory fragmentation.
Speaker: David Zapolsky
How can companies like DeepL design governance policies for agentic AI that satisfy diverse regulatory regimes while maintaining global interoperability?
Jarek discussed the shift from translation to autonomous agents, raising the need for adaptable governance that meets varying country requirements.
Speaker: Jarek Kutylowski
What best practices can be established for giving end‑users granular control over AI features (e.g., transcription, data usage) to protect privacy and comply with regulations?
Aparna described the necessity of user-level opt-outs and controls to balance functionality with legal and ethical obligations.
Speaker: Aparna Bawa
How can cloud providers embed transparent guardrails and disclosures in AI services (like Amazon Bedrock) to enable enterprises to manage risk globally?
David outlined Amazon’s approach to security, data ownership, and content filtering, suggesting a need for standardized, globally applicable safeguards.
Speaker: David Zapolsky
What metrics or research approaches can quantify the impact of AI regulation on innovation and market entry timelines?
Several panelists noted that over‑regulation delays product launches, indicating a need for empirical studies on regulatory effects.
Speaker: Multiple (Jay Chaudhry, David Zapolsky, Aparna Bawa)
How can governments differentiate regulatory requirements based on AI application domains (e.g., consumer recommendation vs. medical documentation) to avoid blanket restrictions?
David emphasized that risk varies by use case and that undifferentiated rules could hinder beneficial AI applications.
Speaker: David Zapolsky

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.