Aligning AI Governance Across the Tech Stack ITI C-Suite Panel

20 Feb 2026 11:00h - 12:00h


Session at a glance

Summary

This discussion at the AI Impact Summit focused on global AI governance and the need for international alignment in regulating artificial intelligence while preserving innovation. The panel, moderated by Jason Oxman, included technology leaders from Zscaler, Zoom, Amazon, and DeepL, who explored how governments can work with industry to manage AI risks without stifling technological progress.


A central theme emerged around finding the right balance between governance and innovation. Jay Chaudhry from Zscaler emphasized that while some level of governance is necessary, over-regulation can kill innovation, particularly when compliance requirements become outdated by the time they’re implemented. Aparna Bawa from Zoom highlighted the importance of cross-border data flows for global connectivity, noting that excessive restrictions ultimately harm a country’s own citizens and economic progress.


David Zapolsky from Amazon stressed the dangers of regulating AI before fully understanding the technology, pointing to examples like Colorado and the EU where early regulations have led to implementation delays and buyer’s remorse. He advocated for a risk-based approach that focuses on high-risk use cases affecting life, health, or civil rights rather than attempting comprehensive AI regulation.


The panelists discussed the shared responsibility between technology companies and users in ensuring AI safety. They emphasized the importance of providing enterprise customers with security controls, transparency, and choice in how AI systems are deployed. Jarek Kutylowski from DeepL noted how AI applications are becoming increasingly critical, from translating maintenance records to drug development documentation.


Looking forward, the participants expressed hope for greater international consensus on AI standards, increased focus on cybersecurity threats, and ensuring AI benefits reach underserved populations globally. The discussion concluded with a vision of AI enabling better global collaboration across borders, languages, and geographies.


Key points

Major Discussion Points:

Global AI Governance Alignment: The critical need for international coordination on AI regulation to avoid fragmentation that could hinder innovation and cross-border collaboration. Panelists emphasized that technology naturally transcends borders, and misaligned governance approaches create operational challenges for global companies.


Balancing Innovation with Risk Management: The ongoing tension between protecting citizens through regulation versus stifling innovation through over-regulation. Multiple panelists warned against premature, overly prescriptive regulations before fully understanding AI technology’s capabilities and applications.


Security and Trust as Foundational Elements: The importance of embedding security measures across all AI systems, with particular emphasis on protecting against cyber threats, data poisoning, and the emerging risks of AI agents being compromised or going rogue.


Risk-Based Regulatory Approaches: The need for flexible, context-sensitive governance that differentiates between high-risk and low-risk AI applications rather than applying blanket regulations. Examples included distinguishing between AI used for entertainment recommendations versus medical decision-making.


Inclusive AI Development and Global Collaboration: The aspiration for AI benefits to reach underserved populations worldwide, with emphasis on upskilling, cross-border collaboration, and ensuring AI serves as a tool for global equity rather than widening existing divides.


Overall Purpose:

The discussion aimed to explore how governments and industry can work together to create aligned, effective AI governance frameworks that promote innovation while managing risks. The conversation sought to identify best practices for international cooperation on AI regulation and examine the responsibilities of both technology companies and users in ensuring safe AI deployment.


Overall Tone:

The discussion maintained a collaborative and constructive tone throughout, with panelists generally agreeing on core principles while offering diverse perspectives from their respective industries. The conversation was forward-looking and optimistic, emphasizing partnership between government and industry rather than adversarial regulation. The tone remained professional yet accessible, with panelists sharing practical examples and personal anecdotes to illustrate complex policy concepts. There was a consistent thread of cautious optimism about AI’s potential while acknowledging legitimate concerns about security and governance challenges.


Speakers

Jason Oxman: Moderator/Host of the discussion


Aparna Bawa: Chief Operating Officer (COO) of Zoom


David Zapolsky: Chief Global Affairs and Legal Officer at Amazon


Jay Chaudhry: CEO of Zscaler (security expert)


Jarek Kutylowski: CEO of DeepL (language AI platform)


Additional speakers:


None – all speakers mentioned in the transcript are included in the speaker list above.


Full session report

This comprehensive discussion at the AI Impact Summit brought together technology leaders from across the AI ecosystem to address one of the most pressing challenges facing the industry: how to achieve effective global AI governance that manages risks in a way that supports global innovation and interoperability. The panel, moderated by Jason Oxman, featured Jay Chaudhry (CEO of Zscaler), Aparna Bawa (COO of Zoom), David Zapolsky (Chief Global Affairs and Legal Officer at Amazon), and Dr. Jarek Kutylowski (CEO of DeepL), representing diverse perspectives from cybersecurity, communications, cloud infrastructure, and language AI sectors.


The Central Challenge: Balancing Governance with Innovation

The discussion opened with a fundamental tension that permeated the entire conversation: the need for governments to protect citizens through AI regulation whilst avoiding the stifling of technological innovation. Jay Chaudhry established the philosophical foundation by arguing that whilst some level of governance is necessary, excessive regulation can be counterproductive. His key insight that “compliance doesn’t mean security” challenged conventional thinking about regulatory approaches, highlighting how bureaucratic compliance processes often lag behind rapidly evolving technological threats and needs.


This perspective was reinforced by David Zapolsky’s concrete examples of regulatory overreach, particularly Colorado’s AI law, which was enacted but subsequently put on hold because regulators couldn’t determine how to apply it practically. He emphasised the need to understand “what we’re regulating before you can really pull the trigger”; in a similar vein, the panel noted the three-month process required just to educate federal certifiers about Zscaler’s firewall-free approach. Zapolsky also referenced the European Union’s struggles with AI regulation, where policymakers are “looking for ways to not have to put the thing into practice because they don’t really know how it’s going to play out.”


Global Interoperability as a Business Imperative

A critical theme that emerged was the absolute necessity of cross-border data flows and global interoperability for modern AI systems to function effectively. Aparna Bawa provided a compelling illustration through Zoom’s experience, noting that “we would not exist if we didn’t have cross-border data flows and free unencumbered data flow.” Her observation that restrictions on data flows ultimately “impede their own citizens’ progress” reframed the governance debate from a purely protective stance to one considering economic opportunity costs.


This perspective was echoed across all panellists, with each describing how their business models fundamentally depend on global operations. David Zapolsky explained that Amazon’s diverse services—from e-commerce to cloud computing to satellite internet—all “depend on free flow of goods, free flow of information, open skies.” Jarek Kutylowski, drawing on his background as a scientist, added the economic dimension, noting that AI companies require global market access to achieve the economies of scale necessary to offset high R&D costs, making fragmented governance approaches particularly damaging to innovation.


Risk-Based Approaches: Moving Beyond One-Size-Fits-All Regulation

Perhaps the most sophisticated aspect of the discussion was the consensus around risk-based governance approaches that differentiate between various AI applications and use cases. Jay Chaudhry provided a memorable example from his experience with General Electric’s CISO Larry Virginia, who explained: “when I tried to secure everything, I secured nothing.” The distinction between protecting intellectual property for jet engines versus washing machines illustrated how different assets require different levels of protection—a principle that applies equally to AI governance.


David Zapolsky expanded this concept by arguing for regulation that focuses on “high-risk use cases” affecting “life, health, or civil rights” rather than attempting to create a “unified field theory of AI regulation.” This approach acknowledges that AI used for entertainment recommendations presents vastly different risks than AI used for medical decision-making or financial services.


Jarek Kutylowski provided concrete examples of this risk differentiation in practice, noting how DeepL’s translation services have evolved from handling routine emails to translating critical documents like patent applications and “R&D documentation for new drugs that actually influences how those drugs are developed and whether they’re being approved by the FDA.” This evolution demonstrates how the same underlying technology can have dramatically different risk profiles depending on its application.


Jay Chaudhry also referenced India’s “five pillars” approach to AI governance, recommending that security should serve as an overlay across all regulatory layers rather than being treated as a separate concern.


Security as a Foundational Layer

The discussion revealed sophisticated thinking about AI security that goes beyond traditional data protection concerns. Jay Chaudhry introduced forward-looking security challenges, warning that “tomorrow AI agents will be your weakest link” as they become more autonomous and widespread. His concern about agents being “hacked or hijacked in your company with access to all kinds of stuff” highlighted emerging vulnerabilities that current governance frameworks haven’t yet addressed.


Chaudhry specifically warned about data poisoning attacks and emphasised the need for zero trust architecture to extend to AI agents. The security discussion also encompassed broader threats, including the potential for bad actors to use AI to enhance cyberattacks by enabling rapid discovery of attack surfaces, creating convincing phishing emails, and automating network reconnaissance. These concerns extend to nation-state actors who might use AI for espionage or to plant backdoors in systems, representing a category of risk that requires international cooperation to address effectively.


Enterprise and User Responsibility: A Nuanced Partnership Model

One of the most sophisticated aspects of the discussion was the exploration of shared responsibility between technology companies, enterprise customers, and individual users. Aparna Bawa drew on Zoom’s pandemic experience to illustrate this complexity, explaining how the company had to adapt from serving primarily enterprise customers with sophisticated IT administrators to suddenly supporting individual consumers and institutions like public schools without technical expertise.


This experience led to insights about graduated responsibility models. For enterprise customers with sophisticated IT teams, companies can provide extensive controls and let customers make their own security and privacy decisions. However, for individual consumers or less technically sophisticated organisations, companies have greater responsibility to implement appropriate default protections. As Bawa explained, companies have “an obligation as an enterprise to make sure that there’s sufficient controls for the individual user and it scales all the way up to the enterprise.”


Bawa also shared her approach to user education, noting how she teaches her children not to put personal information into ChatGPT prompts, illustrating how user experience philosophy drives security decisions at the individual level.


David Zapolsky described Amazon’s approach through their Bedrock platform, which provides enterprise customers with access to over 100 different AI models whilst building in security infrastructure, guardrails, and transparency tools. This approach puts “control of how this technology is deployed into the hands of enterprises and users” whilst ensuring that customer data remains under customer control rather than being used to train models.


The Vision for Inclusive AI Development

The discussion concluded with forward-looking perspectives on AI’s potential for global inclusion and collaboration. Aparna Bawa, identifying herself as Indian American, emphasised the importance of ensuring AI benefits reach underserved populations, noting that “even enterprises who have more of a chance of adopting AI and gaining some of the efficiencies of AI, they need a market.” Her vision extended to farmers in rural Karnataka who could use AI to transform their lives and create opportunities for successive generations.


Jarek Kutylowski articulated perhaps the most aspirational vision, hoping to see AI enable collaboration among “amazing humans across all of the continents of this world” regardless of geography, language, or professional background. Drawing on his scientific background, he emphasised the critical importance of trust in AI outcomes, particularly in high-stakes applications.


Emerging Consensus and Future Directions

Despite representing different sectors and business models, the panellists demonstrated remarkable consensus on several key principles. They agreed that effective AI governance requires flexible, risk-based approaches rather than prescriptive regulations. They emphasised the critical importance of maintaining global interoperability whilst building in appropriate safeguards. Most significantly, they converged on a vision of governance as a partnership between governments, industry, and users rather than a top-down regulatory approach.


Looking forward to the next year and the upcoming summit in Switzerland, each panellist articulated specific priorities:


– Jay Chaudhry emphasised the urgent need to protect against AI-enhanced cyber threats to prevent government overreaction that could stifle innovation


– Aparna Bawa focused on making progress on inclusivity and upskilling, ensuring AI benefits reach underserved populations globally


– David Zapolsky expressed optimism about international convergence on basic governance approaches whilst respecting national sovereignty, noting there’s “sort of a consensus around AI regulation that’s kind of yearning to get out”


– Jarek Kutylowski envisioned enhanced global collaboration that transcends language and geographic barriers


The discussion revealed that whilst the challenges of AI governance are complex and multifaceted, there are practical pathways forward that can balance innovation with appropriate risk management. The key lies in developing sophisticated, flexible approaches that can evolve with the technology whilst maintaining focus on protecting citizens and enabling global collaboration. The panellists’ insights suggest that the most effective governance frameworks will emerge from continued dialogue between industry and government, grounded in practical experience and focused on outcomes rather than theoretical constructs.


This conversation at the AI Impact Summit demonstrated that whilst perfect solutions may not exist, there is substantial common ground among industry leaders about the principles and approaches that can guide effective AI governance in an increasingly connected world.


Session transcript

Jason Oxman

The challenge of not just managing risk as an industry, but also doing so in a way that supports global innovation and interoperability. So today’s discussion, we’re very fortunate to have leaders from across the AI stack, if you will, who are here with us to discuss how governments can work in partnership with industry, if you will, to align responsibilities, to reduce fragmentation, and to build trust in AI systems that are built for scale. We are very pleased to have with us some luminaries from across the tech ecosystem. Jay Chaudhry is the CEO of Zscaler. Aparna will be joining us in just a moment. David Zapolsky. I almost missed that. David Zapolsky, who made it, is the Chief Global Affairs and Legal Officer at Amazon.

And Dr. Jarek Kutylowski. How did I do there? Thank you. Jarek is the CEO of DeepL. So to set up the conversation, I wanted to ask each of our panelists to help us think through the AI governance conversation that’s taking place globally. So as we’ve seen here at the AI Impact Summit, there are efforts among global governments to align their approach, even though they may take different directions. Hi, Aparna. And as Aparna is now joining us, I will introduce Aparna Bawa, who is the chief operating officer of Zoom, which is not only a technology company, it is also a verb. And so thank you, Aparna, for being here with us today. So as we were getting ready to talk about AI governance conversations, it is absolutely the case that there is a need for governments around the world to align their approaches to AI governance, because, of course, technology doesn’t, by its very nature, want to stop at borders.

It wants to cross borders and unite people around the world. So I wanted to ask each of our esteemed panelists, and, Jay, I’ll start with you, for perhaps your philosophical perspective on how AI alignment can take place across governments. Why is it that that alignment matters? And perhaps even share your perspective on what happens if that AI alignment breaks down and governments are going off in different directions and taking different approaches. Where do you see the biggest challenges around this idea of alignment of AI governance around the world? Jay, thank you.

Jay Chaudhry

Thank you. So we are a highly connected world. Imagine any large corporation that’s doing business in 50 countries. If each country has its own governance rules and all, but using AI, and you’re using some systems locally, some systems globally, it’ll create a lot of issues. Some level of alignment is good, but over-alignment doesn’t help either. In fact, I have similar thoughts on governance too. Some level of governance is needed. When we start doing too much governance, too much compliance, we start killing innovation. So that’s personally my view.

Jason Oxman

It’s an important viewpoint, because there is this idea that governments need to act. They need to protect citizens. They need to ensure security. But acting too much, perhaps in advance, can stifle innovation. So, Aparna, I want to go to you with the same question. As we’re having this global AI governance conversation here at the AI Impact Summit, governments are going in different directions in many cases. This is the first time the conversation has taken place in the global south, so I think that’s a good thing for aligning governance approaches. So from where you sit, why is alignment across the AI governance ecosystem internationally so important, and what can happen when it doesn’t happen and goes wrong?

Aparna Bawa

I will say, just to start, as an Indian American and someone who has lived in India, and we talked about this this morning at a breakfast we were at, it is quite striking to me, some of the haves and have-nots. Like even this morning we were talking about, for example, during COVID, how some countries were fighting for PPE and fighting for oxygen tanks. And, you know, we in California were stockpiling toilet paper. I mean, the contrast is so stark. And I remember during COVID thinking to myself, that doesn’t seem right. And so I do feel like countries should protect the rights of their citizens and should want to advance their economies. But it is a tradeoff.

And I think it’s very well put to say it’s a tradeoff. So, for example, Zoom: imagine you would not be able to connect with people globally if we did not have cross-border data flow. So when we’re talking about AI, you can talk about AI, but it’s no different at the data layer. We would not exist if we didn’t have cross-border data flows and free, unencumbered data flow. And when governments start putting more and more restrictions on them within their own countries, it impedes their own citizens’ progress. And so at some point, it becomes a tradeoff. Now, obviously, the requirements around privacy and security are table stakes. If you get on a Zoom meeting with someone, you want to know that the person on the other side is that person.

That is sort of table stakes. But I’m with Jay on this one. I think there’s a basic level framework that is necessary. To be honest, we live today with multiple, in the United States we live with multiple states’ privacy frameworks. And is it great? No. Is it inefficient? Yes. There’s something in between, where you have a framework that is commonly understood, with a common set of norms and values. I also respect a right of sovereignty for a nation. So there has to be a balance that

Jason Oxman

makes sense. David, Amazon operates pretty much in every country on the planet, although I’m sure you can name a few that you’re not in yet. There’s a few, yeah, there’s a few, a small number. Can you share your view on how this AI governance conversation needs to have perhaps some unity to it?

David Zapolsky

Sure. And first of all, I’m going to try not to repeat Aparna’s view, because I basically agree with everything you just said. If you think about every one of Amazon’s business models: our stores, the way we’re able to export 20 billion of Indian small to medium-sized businesses to overseas markets, and we’re looking to take that to 80. If you look at the cloud, if you look at our entertainment business, if you look at the satellites that we’re launching to launch a global Internet service, every one of them depends on free flow of goods, free flow of information, open skies.

That’s just kind of the way we’ve designed the company, to be global and to have interoperable services. And so every time a government erects barriers to that, it creates friction. It creates potential problems. And I think the global trend towards more of that is concerning. With AI particularly, I think the danger of some of the regulation that we’ve seen around the world is that we all still don’t really know how it’s going to be used, where it’s going to be most effective, where it’s going to be dangerous. There’s a lot of theories about it. There’s a lot of fear, uncertainty, and doubt about that, a lot of science fiction. And I think the danger in regulation, before you really understand the technology or how it’s going to play out, is that you create costs.

You create uncertainty, and you inhibit innovation, you inhibit adoption. And that’s kind of what we’re seeing a couple years into this large language model journey. There are parts of the world that were quick to regulate, and civil society was all over that: we’re going to regulate all these things, we’re going to come up with these theoretical constructs of high risk, low risk. And we don’t really know what that means in practice yet. And so what’s happening? Well, look at Colorado. Colorado was one of the first states out of the box with comprehensive AI regulation, which, by the way, isn’t bad in principle, but they don’t know how to apply it. No one really knows how to apply it, and I think you’re seeing some buyer’s regret. They put the implementation on hold; they want to figure out standards. I won’t even talk about the EU, but they’re pretty much in the same boat. They’re all looking for ways to not have to put the thing into practice, because they don’t really know how it’s going to play out. So I think what we need to do is step back, look for some common principles. What is a high-risk use?

What can we all agree are high risk? Well, if you’re using a technology to make decisions that’s going to affect the life, health, or civil rights of an individual, let’s talk about that. Are there laws that protect that already? Do we need to supplement them? Let’s work backwards from the harms we can see today and regulate there, versus trying to come up with the unified field theory of AI regulation, because that’s only going to slow us down.

Jason Oxman

Great. Jarek, we’ve been talking about unifying global governance approaches, making sure one might say that they all speak a common language. That’s what DeepL does. See what I did there? Your language AI platform is all about making sure everyone can communicate with each other regardless of the language they speak. From your perspective, you’re our European-headquartered representative here, but you do business around the world. What can you share with us about how AI governance conversations being unified across governments is important to DeepL?

Jarek Kutylowski

I truly believe that any successful technology needs to be inherently global. That holds both for the commercial models of the companies that we’re representing, but it also holds for the AI: just the access and the ability to reach the whole globe with what we are building. I think this creates the economies of scale on everything that we’re building. And when you are in AI, obviously you’re running very, very high R&D costs, and you have to be able to offset that with a huge customer base. So having a global market and being able to deploy to the whole world, and therefore also to fulfill the mission of our companies, whether it’s just enabling communication, maybe in the case of Zoom, or making sure that this communication can happen multilingually, as in the case of DeepL, that really depends on a framework that is transparent and on a framework that is maybe not too different in all of the parts of this world.

And therefore, having some common layer, having this right balance of protecting the sovereignty, and protecting maybe a slightly different approach and slightly different mindset to certain topics like privacy, where we do have differences across the world, but doing that in a way that has a common understanding, that would be incredibly valuable. I think not only for the companies that we represent, but also really for our users and for our customers, who depend on the best possible solutions.

Jason Oxman

Jay, I want to come back to you because you are our resident security expert and sometimes doomsayer about what happens if we don’t include trust and security as part of the conversation. I’ve heard you remind members of the government of India, indeed, that although the five pillars are enormously valuable, if you don’t have security overlaying them, we’re all in trouble. Talk to us about how the trust and security conversation is still a vital component around all the excitement.

Jay Chaudhry

Yeah, I have said that AI is powerful, but AI is dangerous, because this technology can be abused. In India there’s a great focus on five layers, and the focus is about being sovereign, having everything that you can control. It starts with application, then models underneath, and so on and so forth. While it’s good to have that sovereign stuff, imagine a bad guy can control all of that sovereign stuff sitting somewhere out there. Data poisoning can be done. All kinds of stuff can be done. So having a layer of security across all five layers becomes very important. So we should think about sovereignty not just in terms of this thing is sitting in my country, but also in terms of who can access it, who can do some of these things with it, which is often overlooked.

And also, the adoption of AI is happening very fast. And it’s wonderful, and I’m not saying we should slow it down. I think we should embrace fast, but we should also start thinking about embracing cyber, to make sure things are used securely at the same pace.

Jason Oxman

And in order to make sure that security is part of the AI ecosystem, Aparna, I want to ask you about what we all have responsibility to be thinking about as users, what enterprises have a responsibility to be thinking about. You know, we’ve talked about governance from the policy perspective, but, of course, users and enterprises also have a responsibility around AI. And as the COO of Zoom, you look over both the public policy and business aspects of what you’re deploying. How does the conversation about what we all should be thinking about factor into product development and deployment conversations?

Aparna Bawa

It is a true partnership. And you know what, when Jay was talking, it resonated with me. When you work for a technology company, you’re not just working for a company; you want to develop technology and you want people to adopt it as fast as possible. You want them to be early adopters. It’s so exciting. In fact, in our company, you know, companies have lots of different functions. Obviously our engineers, our developers, our product people, they’re super early adopting. They’re the first to take any sort of app that’s come out, whether it’s Cursor, etc., and use it in their day-to-day. And then there’s other people who have other day jobs. I mean, there’s finance people and the people people, the HR people. They have day jobs, and they’re learning AI at night, because they’re realizing that if I’m not on the AI bandwagon, I’m going to get left behind. And by the way, if you’re looking to develop apps, yes, you can focus on sort of the tech applications, but the secret that’s not getting a ton of attention, maybe a little bit of attention, is these non-technical roles that could be augmented with AI. So in that frame of mind, I think it’s a really important framework. When you work for that kind of technology company, it can be difficult to then start saying, but wait a minute, you need to slow down, because you need to make sure that your CI/CD work is still going, and it’s amplified because of the risks of AI: your security certifications, your red teaming, your privacy standards, all of that stuff is maintained.

I will tell you, the user plus the enterprise that is pushing out this technology, it’s a partnership. It is so important. The one thing that we learned during the pandemic: if you think about Zoom before the pandemic, it was an enterprise-focused company, a work-focused company. And basically, when the pandemic hit, we said, okay, all you consumers, we will just hand you a platform that we usually give to IT administrators. And what do IT administrators at our customers do? They decide whether to turn up the security and privacy controls and turn down usability, because it’s a tradeoff. It’s a definite tradeoff. They decide. We, in turn, just handed it to consumers. And you can’t do that.

Who decides? And we realized, okay, public schools, they don’t have IT administrators. They don’t know how to turn on waiting rooms. They don’t know how to, you know, hide the meeting invite. They don’t know how to do these kinds of things. You have an obligation as an enterprise to make sure that there’s sufficient controls for the individual user, and it scales all the way up to the enterprise, and to maintain that level of flexibility. You have that obligation. But on the same side, I would say the user, to be smart, has to understand some basic levels. I’ll give you an example. My kids use all the AI engines: ChatGPT, Claude. They use it all. And it is a conversation we have, to say, you don’t put all your information into your prompt, because if you put all your information in your prompt, it is going into that engine and it will train that engine.

On the flip side, we as an enterprise provider have made the statement, and we have made the policy decision, that we will not use our customer content as training data. When I’m training my kids, I have to tell them, you can’t put your address into ChatGPT. You have to make sure that you’re safe in some way. So those are the kinds of things that you have to keep in mind. It’s a partnership between the user and the enterprise. And I think the enterprise obligation scales as you get down into consumer use.

Jason Oxman

And I want to stay on this theme of training the user, if you will, whether they’re your children or a customer, because it is important for the tech industry to be mindful of the downstream. And, David, I want to come to you with this question. Amazon is, in a lot of ways, an upstream operator. You enable business and consumer customers on everything you do, from content to e -commerce to broadband in the future to your cloud customers. So how do you think about the upstream governance decisions that you’re making at Amazon and how they impact the downstream? How do you think about the downstream decisions or ways of operating that your customers are going to have to make as a result of those decisions you make at the Amazon level?

David Zapolsky

Well, we’re fortunate to have the scale to be able to serve enterprises in the cloud at the service layer. And so, even before the current AI craze, we had a couple of decades of experience in thinking through what governance and security look like for our enterprise customers. As we’ve moved into this newer age where AI services are available, one of the best solutions we could come up with is creating an environment within the cloud services that so many hundreds of thousands of enterprises already use, to give them access to models, not just our own. We do build our own models, and there’s upstream governance on those: testing, correcting for bias, the things that a responsible model builder will do.

At this enterprise level, the service is called Bedrock. We try to think through what customers are going to need. So we build in security. We build in the type of infrastructure that allows customers to scale up or down. We build in choice. Enterprises can choose from over 100 different models, open source and closed source, not just ours, but all of the leading models from all around the world. And so we try to create an environment, a platform, where enterprise customers can come to use this new technology: first of all, to get access to it without having to build their own servers and train their own models, and secondly, to do it in a way where they can rely on the security of the infrastructure.

The other thing that we provide customers is that the data they use to employ those models stays their data. It doesn’t go to the model builders and it doesn’t go to us. You can build that into the system. And then on top of that, given the way that enterprises are using this technology, we try to build as many tools as possible to put the control of how this technology is deployed into the hands of enterprises and users.

So, for instance, on the Bedrock platform, we provide guardrails that allow you, as an enterprise, to control what types of outputs the models are going to give you. Are they too toxic? Are they biased? Can you filter for certain types of content? We build those controls right into the interface so enterprises can have that control. We build disclosures into the types of services that we offer, so that we provide some visibility and transparency: here’s how this thing is built, here’s what you should use it for, here’s what you probably shouldn’t use it for. And we provide those kinds of choices to consumers. So you have to think through the overall security in the system, in the environment, and the accessibility of this technology. As far as our approach goes, the cloud is probably the best place to do that. It’s certainly the easiest way to access the technology, and likely the safest.

Jason Oxman

Jarek, you’ve moved DeepL’s business model: it started as translation, and now it’s getting into agentic AI, and you have agents on your platform that can execute tasks on behalf of your customers. I can imagine that raises very different governance and policy decisions that you have to make on behalf of your customers when agents can act autonomously versus when you’re just translating, particularly because you’re a global business and they can act autonomously across borders. How are you thinking about the policies and procedures for governance that you have to put in place in an agentic AI world that are different than perhaps they were in a language-translation world?

Jarek Kutylowski

I think generally, but also in the language space, the stakes are becoming higher and higher. AI is becoming more and more powerful. Even if you look at translation: a couple of years ago, DeepL would be translating your typical email to your customer. And that is important, of course. You want to look great in front of the customer. You want to be eloquent. You want to be able to connect with them, maybe really on a human level, in the language that this customer speaks. And you’re enabling your business to become global very, very easily. But now what DeepL is translating is plane maintenance records. It’s R&D documentation for new drugs that actually influences how those drugs are developed and whether they’re approved by the FDA or not.

So these are highly critical use cases. And I think it has been mentioned that privacy is just the table stakes; it’s just the beginning. Creating a layer of trust in the outcomes of the AI, whether that’s translation or agentic AI, so that those decisions really follow what the enterprise is expecting of the AI, that is really where the battle is right now. And that is where the governance aspect coming from the political side and from the governmental side obviously needs to be included. But there’s also the aspect of how our enterprise customers want to regulate the AI that is being deployed, and how flexible the products that we all are providing can be toward those very different approaches that we’re seeing across the world, and maybe even across different types of enterprises.

Jason Oxman

Each of you mentioned the concept of risk management in your comments, and I want to come back to the balance that Jay alluded to earlier between risk management and promoting innovation. Obviously there is a trade-off; it’s a sliding scale: the more you regulate risk, the less room there is for innovation. I want to ask each of our panelists, Jay, I’ll start with you, about where you’ve seen a flexible risk-based approach from government be the most effective, where that flexible approach still leaves room for innovation. Or, on the flip side, if you want to give any examples: where you’ve seen it go wrong, where a more prescriptive approach to regulation has denied you the opportunity to bring products or services to market, or has generally been more of a challenge for industry, because a government didn’t get the balance right between managing risk and promoting innovation?

Jay Chaudhry

There are many facets of governance and risk. Take, for example, data privacy. Obviously, that’s one kind of factor. But hacker attacks, from a cyber point of view, are a different kind of factor. We look at it in terms of two things. One is making sure your data is not lost. So the data becomes very important. There’s a consumer end of data, but the bigger issue on the data side is enterprises. And you don’t treat all data the same way in the practical business world. I’ll give you an example. When I worked with General Electric, the CISO, a very smart guy, Larry Virginia, would say: when I tried to secure everything, I secured nothing.

So then he would give an example. He says: as a CISO, I need to protect the IP, or intellectual property, of my products. But my washers and dryers are out there. I don’t spend time trying to protect their IP at all. You can buy them in a store and figure them out. But I’m dead serious about protecting the IP on my jet engine. That’s very important. Trying to just say all consumer data, all this data, just starts creating issues. That’s why I also like to say compliance doesn’t mean security. In fact, when you work on compliance, all of this works through the government entities, pros and cons, and it takes a lot longer. And by the time it’s out there, the cyber and compliance needs have moved on.

So the stuff you put in place is, many times, already old. In fact, when Zscaler came out with our zero trust cloud-based architecture, a lot of these regulators came in and said, wait a second, where is your firewall? What do you mean, firewall? We don’t use firewalls. We are anti-firewall. And they said, no, no, no, wait a second, the banks can’t use it if it’s not a firewall. When we went through certification for the federal government in the U.S., the certifying body first asked about firewalls. It took us three months to educate them. So that’s why I really don’t like over-regulation. There needs to be a way of asking: what’s the impact of this thing, on what kind of stuff? That’s the right approach. All data is not created equal. Trying to put the onus on securing all data gets hard; then classifying data gets hard. So these are not simple issues, and AI makes them very hard. We don’t even fully understand how AI does what it does. So I think a flexible policy that evolves is a better thing, while keeping track of the most important data. And then beyond data, hackers are a big problem too. We talked about agents today. A user is the weakest link today. Tomorrow, AI agents will be your weakest link, and they’ll be all over. They are maturing; they’ll come. Imagine an agent getting hacked or hijacked in your company, with access to all kinds of stuff.

So that’s where companies like Zscaler are focused on making sure our Zero Trust Exchange can be extended to deal with agents, starting with understanding their identity, authorization, all those things. Those things are very important, the way we look at it. Otherwise, business will shut down.

Jason Oxman

So, Aparna, Zoom brings some amazing AI innovations to the platform that we’re all familiar with. It makes it a lot easier for us to do everything from transcribing meetings to pretending to be a cat when you’re in court. No, that’s not a – that’s a –

Aparna Bawa

I was going to say it can summarize your meeting. It can take notes for you. It can send action items to your teams. It can calendar those action item follow -ups. It can give them deadlines. All done.

Jason Oxman

There it is. But I can imagine you’ve had some challenges around the world in that balance between innovation and risk management from governments. Can you share either a positive example of where that’s gone well, in your mind, or, if you want to, an example of where it hasn’t gone well, where consumers and businesses have been denied Zoom innovation because that balance wasn’t struck? Or perhaps you can keep it at a higher level if you prefer.

Aparna Bawa

There is always a balance between innovation, where our product team wants to innovate, innovate, innovate, and governance, where our security and privacy teams and so on are always thinking about risk as well. So how do you strike that balance? I’ll start at the top level. It’s a sliding scale on many different fronts. But if you look at it like a layer cake, or even a data stack, the top level is customer choice. David was very apt when he said customer choice, but customer choice differs by the category of customer. If you are an enterprise and you have 200 people on an IT admin team under the CIO, and you are buying Zoom, and you have a giant security team and a giant compliance team, you’re going to be making choices for yourself.

I’m not going to tell HSBC what they’re going to do. They’re going to decide what they’re going to do. We deliver the platform, and we have toggles for them to decide what they want to deploy, what they don’t want to deploy, and who they want to deploy it to. We make it very easy. So we provide a lot of choice. The same platform serves the Fortune One. The same platform also serves my mother-in-law, who is on the free account, chatting with her friends, and won’t upgrade. I tell her, please upgrade. She gets off, waits five minutes, gets back on, and that’s how they do it. So for her, it’s very different.

So for her, you have to mandate a few things. You can’t give your meeting ID to everybody. It cannot be at the top of the UI. Those are some basic things. You have to have waiting rooms. If you’re in a school environment, you have to have mandatory passcodes. Those are the sorts of things, so that’s a sliding scale. I would say, take it one level deeper. The biggest thing I have learned from working at Zoom, and in all honesty, I credit our founder for this, is that everything goes back to the user experience. And our customers are not monoliths.

They don’t just want to take on all the technology. They want to do it in a safe and secure way. They don’t want to be surprised. So you have to think: I am an end user. It doesn’t matter that I sell to Zscaler, thank you very much. I need to worry about how Jay Chaudhry’s engineer feels when he gets on Zoom. That’s the user experience I’m going for. So if I’m a finance person on Jay Chaudhry’s team and I say, I don’t really want my meeting to be automatically transcribed and then fed into an AI engine, because I’m worried, or if I’m a lawyer worried about attorney-client privilege, well, I need to give them the option to opt out of that.

I need to be able to give them choice. And that’s how I think about it. Every risk-based decision starts with: you are a user. And you’re not one kind of user; you have multiple types of users. How do you make it easy? How do you make it easy, at the very lowest common denominator, for them to trust you? That’s really the answer you work through.

Jason Oxman

That’s great. David, let’s go from different kinds of users to different kinds of products. You were the first on the panel to use the phrase risk-based approach, and nowhere is that more evident than in Amazon’s wide range of products and services for your customers. I can imagine it’s a very different internal conversation about governance and risk when determining how AI is going to recommend my next series or show on Amazon Prime. Not a lot of risk there. But other Amazon products could carry more risk. So on that sliding scale, and you also travel the world, quite literally, as you’re doing now, talking to governments about that balance of innovation versus risk management and the risk of getting that balance wrong.

How do you communicate that to governments and also make the internal product decisions that you need to around those issues?

David Zapolsky

Well, you sort of stole one of my talking points for when I have some of these conversations, which is that it does matter how this technology is used, and where. It’s a different set of considerations when we think about what kinds of protections or risks arise from an AI-assisted shopping assistant versus a tool we might make available to help doctors document how they’re treating patients and make it easier to prescribe medications. Those are two very different risk profiles. But if you start with a regulation that doesn’t differentiate between those, you’re going to inhibit innovation. You’re going to prevent adoption of really useful ways this technology can be used. And that’s the pitch I make when I get to talk to people whose business it is to think about regulation.

It is about risk. It’s about how the technology is used. And my point earlier was that we don’t really know yet how the technology is going to be used. When we see it, we can analyze it. On that point generally, there are cases where technology companies have made a decision not to bring certain types of technology into, say, Europe because of regulatory uncertainty. And typically those get worked through. But I can’t tell you how many conversations I’ve had internally where folks have come up with an idea or a product, and our internal mantra is we want to launch everything everywhere all at once. We want to serve customers.

If we have conviction that something’s good for customers, why just do it in one place? And sometimes the answer to that is: it’s too costly. It’s going to take more time. We can’t really figure out how this is going to fit within the regulatory scheme in a certain other jurisdiction, because they haven’t thought of it either. And so we’re going to wait. We’re just going to wait on that. We’ll launch it in this place first, and we’ll see if it works. And if it works, then we’ll think about the costs associated with scaling it globally. So that’s a real-world issue that governments have to understand and deal with when they make decisions about how prescriptive their regulations are going to be, especially in the abstract.

And so those are the sorts of conversations I have. In the AI space, you can look at countries like Peru. You can look at countries like Japan that have proceeded cautiously. I think India has the same approach, and I’m very encouraged by the way India is approaching these issues. You can’t rule out regulation completely, and Amazon is an advocate of regulation that mandates that people developing and deploying this technology do it responsibly. But we have to understand what we’re regulating before we can really pull the trigger. And so I think those types of examples are useful for people to keep in mind when they’re considering how to resolve that balance.

Jason Oxman

And the result of those conversations not going in the right direction, David, is that consumers or businesses might be denied the technology that their neighbors are enjoying. So, Jarek, I wanted to ask you, as the CEO of DeepL: in the process of expanding around the globe, are there examples you can think of where you’ve had to make a go/no-go decision about entering a particular country or launching a particular product, including your new agentic AI products, because of the regulatory environment or because of the way a country looks at these issues? Or, on the flip side, if you want to take the positive: are you attracted to a particular market because, as David said, it’s done the right thing, like Peru or Japan, or even India as it’s endeavoring to do, where people are more likely to get DeepL’s service because of the decisions those governments have made, the approach they take to these AI governance decisions?

Jarek Kutylowski

Yeah, Jason, let me maybe first start with a principle. I’m a scientist at heart, so I’m really excited about bringing the best possible technology to each and every one of our customers and users. I think they all deserve it. I think they all should be equipped with it. But yes, there are some of those things that we need to take into account. And actually, quite often, those are not really location-based or country-based or regulation-based, but based on the use cases of those customers. AI can be incredibly powerful, but that power demonstrates itself in different ways in different applications. Going back to my example from earlier: the translation of an email has a different criticality grade than the translation of a patent application.

The execution of an agent in a particular environment versus in an enterprise environment has a different grade of complexity. But going back to the regulation aspect of it: I think we’re lucky as a company to have grown up in Europe, in an environment that is maybe slightly earlier on regulation than other places in the world. And I think that gives us an edge: to be able to understand how to work with this regulation and how to prepare, and then also to be very, very early in other markets, like Colorado, which you mentioned earlier, and to be able to handle that complexity for our customers, really. Because most often it is our customers who do not understand this space.

We do. And we have to go all of the way to give them the possibility to figure this out for themselves, for their applications, for their use cases, and across a whole range of products. So, in short, I think it can be managed, but it is really part of the excellence of a company to be able to manage it together with the customer.

Jason Oxman

The last question that we have time for, which I want to address to each of you, is a forward-looking question. It used to be possible to have conversations about policy outcomes years in advance. I think the best we can hope for is for me to ask this question in advance of Switzerland hosting the next AI Impact Summit, or whatever they choose to call it, next year at this time. So my question to all of you on the panel is this: a year from now, if we were to gather, and something had happened in the AI governance and regulatory space over the course of that year that you’d like to see happen, and you were looking back to India and saying, I’m really glad that one thing happened, or that one thing changed, or that this government or this international body did this thing over the course of the last year to really help unleash the innovation and power of AI in the secure way that we all want to see, what would that one thing be?

And it can be something you’re focused on in your business over the course of the next year that government can help make a reality. So, Jay, I’ll start with you, and then we’ll go down the panel to bring our time to a close together. What’s the one thing you’re hoping, if we’re talking a year from now, has happened in global AI governance that’s going to make everything we’re talking about and excited about a huge success?

Jay Chaudhry

The AI train is moving at a pretty fast pace, and it will keep on moving. Then you look at the things that could go wrong; that’s where governance comes in. I think there’s too much focus on data, and less focus on the bad things that bad guys can do. I think probably the biggest issue will be this: today we hear all about these ransom attacks, ransomware. AI can make them so much easier. Bad guys are very motivated to make money. Today, when they attack, they have to find your attack surface. They’re finding those IP addresses that are open to the Internet, those firewalls and VPNs and everything. With AI, you can discover it in 30 seconds. AI can write beautiful phishing emails,

as if they come from your CFO. Once you get in, AI agents can discover your whole network to figure out what those things are, and they can bring those things down. So I think we need to focus more on making sure we can protect against those risks. I talked about AI agents going rogue; those are one kind of risk. And the second kind of risk government needs to worry about is nation-states trying to use AI to really gain advantage: understanding, getting these backdoors planted, and all that kind of stuff. I think, if we’re sitting here next year, I hope we’ve done enough in those areas that we don’t have some of these things blow up.

If they blow up, then government starts tightening things more and more, which doesn’t always help. So proactive work to secure those areas will be very, very important.

Jason Oxman

All right. So protecting against these threats so that government doesn’t overreact and stifle innovation as a result. Aparna, what’s your one thing that you hope for for next year?

Aparna Bawa

You know, it really struck me at this Impact Summit, the focus on inclusivity, on skilling and upskilling people who wouldn’t otherwise have access to technology. And if you think about why we got started: we were founded because we wanted to provide free and open access to collaboration and have people from all walks of life connect. I think our founder had to travel to date his wife, you know, and didn’t want to see her only once every several weeks. So, you know, it’s something powerful. In a year, I would like to actually see that happen. Now, it’s not, I think, completely altruistic. I do firmly believe that even enterprises who have more of a chance of adopting AI and gaining some of the efficiencies of AI need a market.

And the market is you, me, and all of us. And the more people, in a village somewhere in a corner of India. We were just talking about Karnataka in another meeting: in a village in Karnataka that has low bandwidth, et cetera, if a farmer can adopt AI and change their life across successive generations, that is good for business. And so for me, progress on that. I still think it’s very much all talk, but I love the idea. I love seeing a billboard where Prime Minister Modi is talking about inclusivity. That’s wonderful to hear. It’s good for business. Maybe it’s a bit altruistic, but I think it would be good for Zoom.

Jason Oxman

I love it. AI lifting up more broadly the world. David?

David Zapolsky

I’ll take a much higher-level approach. You know, I think there’s a sort of consensus around AI regulation that’s yearning to get out; it’s gelling a little bit. We saw it in the Hiroshima agreements. We see it talked about in forums like this. There is an emerging consensus about how to approach this technology in a responsible way, and I totally, again, agree violently with Aparna on adding the inclusiveness piece, and I commend the Prime Minister and India for making that a big part of the debate. But I would like to see countries around the world start to converge on this basic consensus.

It doesn’t mean that countries can’t have their own perspectives or sovereign outlooks, but there is a movement toward an international standard. And there’s a parallel with technical standards: there’s ISO 42001, which everybody can abide by, giving people a common set of principles and a common set of technical standards to meet, so that we can all be more confident in the way we roll out this technology.

Jason Oxman

I love that. A move toward more global, industry-consensus-based standards to help govern all that we do, and hopefully put government regulators out of business if we can all do it right. Jarek, you get to bring us home with your aspiration for when we gather next year in Switzerland.

Jarek Kutylowski

Yeah, I think there’s a place for those government regulators too. I would love, as you just described, getting them all together and creating a framework. But I think there is a bigger role for AI in this world. There are so many amazing humans across all of the continents of this world, and I would love to see, in a year, and once again this goes back a little bit to DeepL’s mission, for them to be able to collaborate as much as they can, no matter where they sit geographically, no matter which language they speak, no matter what they do in their job: just giving that opportunity to each and every one, in every place of this world. There are amazing examples of cooperation between India and other countries, and of strengthening that even more, and I think AI gives us even more possibilities to do that in the upcoming year. So maybe in Switzerland we’re going to be able to look back and see: hey, in India we set the cornerstone of making this possible and making this world a better place.

Jason Oxman

I bet they will. You know, it was AI Action last year. Now it’s AI Impact. Hopefully it will be AI Collaboration, or something of the sort, next year. I love that image of everybody collaborating across borders, across geographies, across languages. What a great discussion. I love how we were both philosophical and practical. I really appreciate all of you sharing your deep insight on these important AI governance issues. And I appreciate all of you being here in the audience to hear this discussion. Please join me in recognizing and thanking our terrific panelists, and please enjoy the rest of the summit. Thank you. Now we’ve got to get a picture. Are we going to take a picture?

We have to get a picture, yeah. We’re going to have to hang back behind there.

Jay Chaudhry

Speech speed

142 words per minute

Speech length

1116 words

Speech time

469 seconds

Global Alignment & Unified AI Governance

Explanation

Chaudhry warns that if each nation imposes its own AI rules, companies operating across borders will face fragmented compliance and operational difficulties. A coordinated global approach is needed to avoid these inefficiencies.


Evidence

“If each country has its own governance rules and all but using AI, and you’re using some systems locally, some systems globally, it’ll create a lot of issues.” [5].


Major discussion point

Global Alignment & Unified AI Governance


Topics

Artificial intelligence | Internet governance | The enabling environment for digital development


Balancing Regulation with Innovation

Explanation

Chaudhry argues that overly strict compliance regimes stifle creativity and slow down AI progress, calling for flexible, risk‑based policies that evolve with the technology.


Evidence

“When we start doing too much governance, too much compliance, we start killing innovations.” [54].


Major discussion point

Balancing Regulation (Risk Management) with Innovation


Topics

The enabling environment for digital development | Artificial intelligence | Building confidence and security in the use of ICTs


Trust, Security, and Sovereignty in AI Systems

Explanation

Chaudhry highlights that AI’s power makes it a target for abuse and nation‑state exploitation, underscoring the need for security layers across all levels to protect data and systems.


Evidence

“but AI is dangerous because this technology can be abused.” [26]. “And then the second kind of risks government needs to worry about is nation states trying to use AI to really have advantage, understanding, getting these backdoors planted and all that kind of stuff.” [21].


Major discussion point

Trust, Security, and Sovereignty in AI Systems


Topics

Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society | Data governance


Future Aspirations: Emerging AI‑Enabled Threats

Explanation

Chaudhry cautions that ransomware and rogue AI agents could become major security challenges, urging proactive governance to mitigate these risks before they hinder innovation.


Evidence

“I think probably the biggest issue will be, hey, today we hear all about these ransom attacks, ransomware.” [149].


Major discussion point

Future Aspirations: Inclusivity, Standards, and International Consensus


Topics

Building confidence and security in the use of ICTs | Artificial intelligence



Aparna Bawa

Speech speed

180 words per minute

Speech length

1935 words

Speech time

643 seconds

Global Alignment & Unified AI Governance

Explanation

Bawa stresses that cross‑border data flows are essential for global AI services; without them, worldwide collaboration and connectivity would collapse.


Evidence

“imagine you would not be able to connect with people globally if we did not have cross-border data flow.” [16]. “But we would not exist if we didn’t have cross-border data flows and free unencumbered data flow.” [18].


Major discussion point

Global Alignment & Unified AI Governance


Topics

Data governance | Artificial intelligence | Closing all digital divides


Balancing Regulation with Innovation

Explanation

Bawa points out that privacy and security requirements are necessary but create a trade‑off with usability, implying regulation must be balanced to keep innovation alive.


Evidence

“They decide whether to turn up the security and privacy controls, turn down usability because it’s a tradeoff.” [66]. “It’s a definite tradeoff.” [67].


Major discussion point

Balancing Regulation (Risk Management) with Innovation


Topics

The enabling environment for digital development | Human rights and the ethical dimensions of the information society | Artificial intelligence


Trust, Security, and Sovereignty in AI Systems

Explanation

Bawa notes that privacy and security are “table stakes” for AI deployments, emphasizing that robust controls are required to maintain user trust.


Evidence

“Now, obviously, the requirements around privacy and security are table stakes.” [64].


Major discussion point

Trust, Security, and Sovereignty in AI Systems


Topics

Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society


Enterprise Responsibility & User Partnership

Explanation

She frames AI adoption as a partnership where enterprises provide safeguards and users stay informed, ensuring shared responsibility across the ecosystem.


Evidence

“It’s a partnership between the user and the enterprise.” [99]. “You have an obligation as an enterprise to make sure that there’s sufficient controls for the individual user and it scales all the way up to the enterprise and maintain that level of flexibility.” [97].


Major discussion point

Enterprise Responsibility & User Partnership (Upstream/Downstream Governance)


Topics

Data governance | Human rights and the ethical dimensions of the information society | Capacity development


Future Aspirations: Inclusivity and Upskilling

Explanation

Bawa envisions AI lifting underserved populations by providing upskilling and access, aiming for a more inclusive digital future.


Evidence

“I think the focus on inclusivity, upskilling, skilling and upskilling people who wouldn’t otherwise have access to technology.” [128].


Major discussion point

Future Aspirations: Inclusivity, Standards, and International Consensus


Topics

Closing all digital divides | Capacity development | Social and economic development



David Zapolsky

Speech speed

169 words per minute

Speech length

1827 words

Speech time

645 seconds

Global Alignment & Unified AI Governance

Explanation

Zapolsky warns that governments must grasp the real‑world impact of AI before crafting prescriptive rules, otherwise they risk creating costly, ineffective regulations.


Evidence

“And so that’s a real world issue that governments have to understand and deal with when they make decisions about how prescriptive their regulations are going to be, especially in the abstract.” [7]. “And I think the danger in regulation, before you really understand the technology or how it’s going to play out, is that you create costs.” [11].


Major discussion point

Global Alignment & Unified AI Governance


Topics

Artificial intelligence | The enabling environment for digital development | Data governance


Balancing Regulation with Innovation

Explanation

He argues that blanket regulations that do not differentiate use‑cases will choke innovation, advocating for high‑risk‑focused rules and common principles.


Evidence

“But if you start with a regulation that doesn’t differentiate between those, you’re going to inhibit innovation.” [52]. “well, look at Colorado… they don’t know how to apply it… we need to step back look for some common principles what is a high risk use?” [76].


Major discussion point

Balancing Regulation (Risk Management) with Innovation


Topics

The enabling environment for digital development | Artificial intelligence | Human rights and the ethical dimensions of the information society


Trust, Security, and Sovereignty in AI Systems

Explanation

Zapolsky describes Amazon’s “guardrails” that let enterprises control model outputs and keep customer data private, building trust through built‑in security controls.


Evidence

“we provide guardrails that allow you, as an enterprise, to basically control what types of outputs the models are going to give you.” [103]. “the data they use to employ those models, you know, stays their data.” [104]. “We build those controls right into the interface so enterprises can have that control.” [86].


Major discussion point

Trust, Security, and Sovereignty in AI Systems


Topics

Building confidence and security in the use of ICTs | Data governance | Artificial intelligence


Enterprise Responsibility & User Partnership

Explanation

He highlights Amazon’s upstream governance—testing models, correcting bias, and providing enterprises with controls—enabling downstream customers to adopt AI safely.


Evidence

“there’s upstream governance on those, you know, testing, making sure there’s, you know, we correct for bias, the things that a responsible model builder will do.” [25]. “We build those controls right into the interface so enterprises can have that control.” [86].


Major discussion point

Enterprise Responsibility & User Partnership (Upstream/Downstream Governance)


Topics

Data governance | Artificial intelligence | The enabling environment for digital development


Future Aspirations: International Standards

Explanation

Zapolsky calls for convergence on an ISO‑like framework (ISO 42001) to give a common set of principles while allowing sovereign nuances, fostering global confidence in AI roll‑outs.


Evidence

“There’s ISO 42001, which everybody can abide by and give people a common set of principles and a common set of technical standards they need to make so that we can all be more confident in the way we roll out this technology.” [138]. “There is sort of an emerging consensus about how to approach this technology.” [141].


Major discussion point

Future Aspirations: Inclusivity, Standards, and International Consensus


Topics

Artificial intelligence | The enabling environment for digital development | Data governance



Jarek Kutylowski

Speech speed

159 words per minute

Speech length

1076 words

Speech time

403 seconds

Global Alignment & Unified AI Governance

Explanation

Kutylowski stresses the need for a common layer that balances sovereignty with shared privacy approaches, recognizing differing national mindsets.


Evidence

“And therefore, having some common layer, having this right balance of… of protecting the sovereignty and… And protecting maybe like a slightly different approach and slightly different mindset to certain topics like privacy, where we do have differences across the world.” [6].


Major discussion point

Global Alignment & Unified AI Governance


Topics

Artificial intelligence | Data governance | The enabling environment for digital development


Balancing Regulation with Innovation

Explanation

He notes that companies can manage regulatory complexity together with customers, turning compliance into a competitive advantage rather than a barrier.


Evidence

“So in short, I think it can be managed, but it is really like part of the excellency of a company to be able to manage it together with the customer.” [13].


Major discussion point

Balancing Regulation (Risk Management) with Innovation


Topics

The enabling environment for digital development | Artificial intelligence


Trust, Security, and Sovereignty in AI Systems

Explanation

Kutylowski advocates building a layer of trust into AI outcomes—whether translation or agentic AI—so that results align with enterprise expectations.


Evidence

“And I think it has been mentioned that like privacy and… it is just the table stakes it’s just the beginning I think creating a layer of trust into the outcomes of the AI whether that’s translation whether that’s agentic AI that those decisions are really following what the enterprise is expecting of the AI…” [20].


Major discussion point

Trust, Security, and Sovereignty in AI Systems


Topics

Building confidence and security in the use of ICTs | Artificial intelligence | Data governance


Enterprise Responsibility & User Partnership

Explanation

He emphasizes giving customers granular control over AI applications, enabling them to meet diverse regulatory and risk profiles across products.


Evidence

“And we have to go all of the way to give them the possibility to figure this out for themselves, for their applications, for their use cases, and across a whole range of products.” [121].


Major discussion point

Enterprise Responsibility & User Partnership (Upstream/Downstream Governance)


Topics

Data governance | Artificial intelligence | Capacity development


Future Aspirations: Global Collaboration & Inclusivity

Explanation

Kutylowski envisions AI enabling worldwide collaboration among diverse peoples, breaking language barriers and fostering cooperation across continents.


Evidence

“I think there are so many amazing humans across all of the continents of this world … AI gives us even more possibilities to do that in the upcoming year…” [27]. “I would love, as you just explained, getting them all together and creating a framework.” [140].


Major discussion point

Future Aspirations: Inclusivity, Standards, and International Consensus


Topics

Closing all digital divides | Artificial intelligence | Capacity development



Jason Oxman

Speech speed

158 words per minute

Speech length

2190 words

Speech time

829 seconds

Global Alignment & Unified AI Governance

Explanation

Oxman frames AI governance as a cross‑border issue, urging governments to align their approaches to prevent fragmentation and enable seamless technology flow.


Evidence

“there is a need for governments around the world to align their approaches to AI governance, because, of course, technology doesn’t, by its very nature, want to stop at borders.” [17]. “why is alignment across the AI governance ecosystem internationally so important, and what can happen when it doesn’t happen and goes wrong?” [22]. “Where do you see the biggest challenges around this idea of alignment of AI governance around the world?” [28].


Major discussion point

Global Alignment & Unified AI Governance


Topics

Artificial intelligence | Internet governance | The enabling environment for digital development


Agreements

Agreement points

Balance between governance and innovation is crucial

Speakers

– Jay Chaudhry
– Aparna Bawa
– David Zapolsky

Arguments

Some level of alignment is needed but over-alignment can kill innovation


Cross-border data flows are essential for global connectivity and business operations


Premature regulation before understanding technology creates costs and inhibits adoption


Summary

All speakers agree that while some level of AI governance is necessary, excessive or premature regulation can stifle innovation and harm business operations. They emphasize finding the right balance between protection and enabling technological advancement.


Topics

Artificial intelligence | The enabling environment for digital development


Risk-based approaches should differentiate between use cases

Speakers

– Jay Chaudhry
– David Zapolsky
– Jarek Kutylowski

Arguments

Data classification and protection strategies must be proportional to actual business value


Upstream governance decisions impact downstream customer capabilities and choices


Risk assessment must differentiate between use cases and applications


Summary

Speakers agree that AI governance should adopt risk-based approaches that treat different applications according to their actual risk levels rather than applying uniform regulations across all AI uses.


Topics

Artificial intelligence | The enabling environment for digital development


Global interoperability and cross-border operations are essential

Speakers

– Jay Chaudhry
– Aparna Bawa
– David Zapolsky
– Jarek Kutylowski

Arguments

Some level of alignment is needed but over-alignment can kill innovation


Cross-border data flows are essential for global connectivity and business operations


Free flow of goods and information is fundamental to global business models


Global market access creates necessary economies of scale for AI development


Summary

All panelists agree that their business models depend on global operations and that barriers to cross-border data flows and international cooperation create significant challenges for innovation and service delivery.


Topics

The digital economy | Data governance | Artificial intelligence


Security and trust are foundational requirements

Speakers

– Jay Chaudhry
– Aparna Bawa
– David Zapolsky
– Jarek Kutylowski

Arguments

Security layer across all AI components is essential to prevent abuse and data poisoning


Enterprise and user partnership is crucial for maintaining security standards


Cloud platforms can provide secure infrastructure and enterprise controls for AI deployment


Trust in AI outcomes becomes critical as applications move to high-stakes use cases


Summary

All speakers emphasize that security, privacy, and trust are table stakes for AI deployment, though they should be implemented in ways that don’t unnecessarily restrict innovation or usability.


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Similar viewpoints

Both emphasize the importance of providing customers with choices and controls at the platform level, allowing different types of users (from enterprises to consumers) to make appropriate security and privacy decisions for their specific needs.

Speakers

– Aparna Bawa
– David Zapolsky

Arguments

Customer choice and user experience should drive governance decisions at different scales


Cloud platforms can provide secure infrastructure and enterprise controls for AI deployment


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Both speakers share a vision of AI as a tool for global inclusion and collaboration, breaking down barriers of geography, language, and access to enable broader participation in the digital economy.

Speakers

– Aparna Bawa
– Jarek Kutylowski

Arguments

AI should enable inclusivity and access for underserved populations globally


AI should enable global collaboration across languages and geographies


Topics

Closing all digital divides | Artificial intelligence | Social and economic development


Both argue against premature or overly rigid regulation, emphasizing that governance frameworks need to be flexible and evolve with understanding of the technology rather than being based on theoretical constructs.

Speakers

– Jay Chaudhry
– David Zapolsky

Arguments

Flexible policies that evolve are better than rigid compliance frameworks


Premature regulation before understanding technology creates costs and inhibits adoption


Topics

Artificial intelligence | The enabling environment for digital development


Unexpected consensus

Enterprise responsibility for user education and protection

Speakers

– Jay Chaudhry
– Aparna Bawa
– Jarek Kutylowski

Arguments

Data classification and protection strategies must be proportional to actual business value


Enterprise and user partnership is crucial for maintaining security standards


Companies must manage regulatory complexity on behalf of customers across different markets


Explanation

Despite coming from different sectors (security, communications, translation), all three speakers unexpectedly agreed that companies have significant responsibility to educate and protect users, with the level of protection scaling based on user sophistication. This represents a notable consensus on corporate responsibility beyond just providing technology.


Topics

Building confidence and security in the use of ICTs | Capacity development


Need for international standards and consensus while respecting sovereignty

Speakers

– David Zapolsky
– Jarek Kutylowski

Arguments

International consensus and standards should emerge while respecting sovereignty


Global market access creates necessary economies of scale for AI development


Explanation

Both speakers, despite representing very different business models (cloud infrastructure vs. language AI), converged on the need for international standards that respect national sovereignty while enabling global operations. This consensus bridges the tension between globalization and national control.


Topics

Artificial intelligence | The enabling environment for digital development


Overall assessment

Summary

The speakers demonstrated remarkable consensus across several key areas: the need to balance governance with innovation, the importance of risk-based approaches to regulation, the necessity of global interoperability for business operations, and the foundational role of security and trust. They also agreed on the responsibility of enterprises to educate and protect users, and the vision of AI as a tool for global inclusion and collaboration.


Consensus level

High level of consensus with significant implications for AI governance policy. The agreement among industry leaders from different sectors (security, communications, cloud services, language AI) suggests these principles could form the basis for industry-wide standards and government policy frameworks. The consensus particularly strengthens arguments for flexible, risk-based regulation over prescriptive approaches, and emphasizes the importance of maintaining global interoperability while building in appropriate safeguards.


Differences

Different viewpoints

Level of governance and regulation needed

Speakers

– Jay Chaudhry
– David Zapolsky

Arguments

Some level of governance is needed. When we start doing too much governance, too much compliance, we start killing innovations


Premature regulation before understanding technology creates costs and inhibits adoption


Summary

While both speakers agree excessive regulation is harmful, Jay focuses on the balance between governance and innovation from a compliance perspective, while David emphasizes the danger of regulating before understanding the technology’s applications.


Topics

Artificial intelligence | The enabling environment for digital development


Primary security focus areas

Speakers

– Jay Chaudhry
– David Zapolsky

Arguments

Focus should be on protecting against AI-enabled cyber threats to prevent regulatory overreaction


Cloud platforms can provide secure infrastructure and enterprise controls for AI deployment


Summary

Jay emphasizes protecting against external threats and bad actors using AI for attacks, while David focuses on building secure infrastructure and enterprise controls within cloud platforms.


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Unexpected differences

Enterprise versus consumer responsibility models

Speakers

– Aparna Bawa
– David Zapolsky

Arguments

Enterprise and user partnership is crucial for maintaining security standards


Cloud platforms can provide secure infrastructure and enterprise controls for AI deployment


Explanation

While both work for major technology platforms, they have different approaches to user responsibility – Aparna emphasizes a partnership model where both enterprises and users share responsibility, while David focuses more on providing enterprise customers with tools and letting them make their own decisions.


Topics

Building confidence and security in the use of ICTs | Artificial intelligence | Human rights and the ethical dimensions of the information society


Overall assessment

Summary

The speakers showed remarkable consensus on major principles but differed on implementation approaches and emphasis areas.


Disagreement level

Low to moderate disagreement level. The speakers largely agreed on fundamental principles like the need for global coordination, risk-based approaches, and avoiding over-regulation. Disagreements were primarily about emphasis and implementation methods rather than core objectives. This suggests a strong foundation for collaborative AI governance approaches, with room for different strategies that complement rather than conflict with each other.


Partial agreements

Partial agreements

All speakers agree that global coordination and interoperability are essential for AI development and deployment, but they differ on the mechanisms – Jay emphasizes avoiding over-alignment, Aparna focuses on data flows, David on free movement of goods/information, and Jarek on market access for economies of scale.

Speakers

– Jay Chaudhry
– Aparna Bawa
– David Zapolsky
– Jarek Kutylowski

Arguments

Some level of alignment is needed but over-alignment can kill innovation


Cross-border data flows are essential for global connectivity and business operations


Free flow of goods and information is fundamental to global business models


Global market access creates necessary economies of scale for AI development


Topics

Artificial intelligence | The enabling environment for digital development | The digital economy


All three speakers agree that risk-based approaches should differentiate between different types of data and applications, but they focus on different aspects – Jay on data classification by business value, David on providing customer choice and controls, and Jarek on use case criticality.

Speakers

– Jay Chaudhry
– David Zapolsky
– Jarek Kutylowski

Arguments

Data classification and protection strategies must be proportional to actual business value


Upstream governance decisions impact downstream customer capabilities and choices


Risk assessment must differentiate between use cases and applications


Topics

Artificial intelligence | Building confidence and security in the use of ICTs | Data governance


Both speakers envision AI as a tool for global inclusion and collaboration, but Aparna focuses on economic empowerment and market development while Jarek emphasizes language barriers and cross-cultural cooperation.

Speakers

– Aparna Bawa
– Jarek Kutylowski

Arguments

AI should enable inclusivity and access for underserved populations globally


AI should enable global collaboration across languages and geographies


Topics

Artificial intelligence | Closing all digital divides | Social and economic development




Takeaways

Key takeaways

Global AI governance requires balance between alignment and sovereignty – some coordination is necessary but over-alignment can stifle innovation


Cross-border data flows and interoperability are essential for AI systems to function effectively and create necessary economies of scale


Risk-based regulation should differentiate between use cases rather than applying blanket rules – regulating before understanding technology creates unnecessary barriers


Security must be built into AI systems across all layers to prevent abuse, data poisoning, and nation-state attacks


Enterprise-user partnership is critical – companies must provide appropriate controls while users need basic security awareness


Customer choice and flexibility should drive governance decisions, with different controls for different user types (enterprise vs consumer)


AI governance should focus on enabling global collaboration, inclusivity, and access to underserved populations


International consensus on AI standards is emerging and should be fostered while respecting national sovereignty


Resolutions and action items

Industry should work toward developing international consensus-based standards (similar to ISO 42001) for AI governance


Focus regulatory efforts on protecting against AI-enabled cyber threats to prevent future overreaction


Prioritize inclusivity and upskilling initiatives to ensure AI benefits reach underserved populations globally


Develop flexible, evolving policies rather than rigid compliance frameworks


Build security layers across all AI components and extend zero-trust architectures to AI agents


Unresolved issues

How to achieve the right balance between innovation and risk management across different jurisdictions


Lack of clear understanding of how AI technology will ultimately be used and deployed


Uncertainty about how to practically implement and enforce AI regulations (as seen in Colorado and EU examples)


Challenge of data classification and determining appropriate protection levels for different types of information


How to manage AI agents as potential security vulnerabilities as they become more autonomous


Coordination between different national approaches while maintaining sovereignty


Suggested compromises

Implement flexible, risk-based approaches that can evolve with technology rather than prescriptive regulations


Focus on regulating high-risk use cases (affecting life, health, civil rights) while allowing innovation in lower-risk applications


Develop common frameworks with shared principles while allowing countries to maintain their own perspectives


Create sliding scale governance based on user type and use case criticality


Establish basic security and privacy standards as ‘table stakes’ while providing flexibility for additional controls


Build customer choice into AI systems so enterprises and users can adjust security/privacy controls based on their needs


Thought provoking comments

Some level of governance is needed. When we start doing too much governance, too much compliance, we start killing innovations… compliance doesn’t mean security. In fact, when you work on compliance, all this thing works through the government entities, pros, cons, and it takes a lot longer. And by the time it’s out there, the cyber and compliance needs have moved on.

Speaker

Jay Chaudhry


Reason

This comment fundamentally reframes the governance debate by distinguishing between effective governance and bureaucratic compliance, while highlighting the temporal mismatch between regulatory processes and rapidly evolving technology threats. It challenges the assumption that more regulation equals better security.


Impact

This set the philosophical foundation for the entire discussion, establishing the central tension between innovation and regulation. It influenced subsequent speakers to adopt more nuanced positions about risk-based approaches rather than blanket regulatory solutions, and became a recurring theme throughout the conversation.


We would not exist if we didn’t have cross-border data flows and free unencumbered data flow. And when governments start putting more and more restrictions on them within their own countries, it impedes their own citizens’ progress… there’s something in between where you have a framework that is commonly understood with common set of norms and values.

Speaker

Aparna Bawa


Reason

This comment powerfully illustrates the real-world consequences of fragmented governance by using Zoom’s pandemic experience as a concrete example. It moves the discussion from abstract policy concepts to tangible impacts on citizens’ lives and economic opportunities.


Impact

This shifted the conversation toward practical examples and user-centric thinking. It prompted other panelists to provide specific examples from their own companies and influenced the discussion to focus more on balancing sovereignty with interoperability rather than just debating regulation levels.


The danger in regulation, before you really understand the technology or how it’s going to play out, is that you create costs, you create uncertainty and you inhibit innovation… look at Colorado – they put the implementation on hold they want to figure out standards… they’re all looking for ways to not have to put the thing into practice because they don’t really know how it’s going to play out.

Speaker

David Zapolsky


Reason

This comment provides concrete evidence of regulatory failure in real-time, showing how premature regulation can backfire. The Colorado example demonstrates that even well-intentioned regulation can become counterproductive when it outpaces understanding of the technology.


Impact

This comment significantly strengthened the case for measured, risk-based approaches and influenced the discussion to focus more on understanding use cases before regulating. It provided empirical support for the theoretical concerns raised earlier and helped shift the conversation toward more practical, outcome-focused governance approaches.


When I worked with General Electric, the CISO would say, when I tried to secure everything, I secured nothing… I need to protect IP on my jet engine. That’s very important… But my washers and dryers are out there. I don’t spend time trying to protect its IP at all.

Speaker

Jay Chaudhry


Reason

This analogy brilliantly illustrates the concept of risk-based differentiation in a way that’s immediately understandable. It challenges the one-size-fits-all approach to both security and regulation by showing how different assets require different levels of protection.


Impact

This analogy became a touchstone for the rest of the discussion, with other panelists referencing similar concepts of differentiated risk approaches. It helped crystallize the argument for nuanced, use-case-specific governance rather than blanket regulations.


It is so important. The one thing that we learned during the pandemic… we realize, okay, public schools, they don’t have IT administrators. They don’t know how to turn on waiting rooms… You have an obligation as an enterprise to make sure that there’s sufficient controls for the individual user and it scales all the way up to the enterprise… It’s a partnership between the user and the enterprise.

Speaker

Aparna Bawa


Reason

This comment introduces the crucial concept of shared responsibility and graduated user protection based on technical sophistication. It shows how the pandemic revealed gaps in the traditional enterprise-focused security model when technology suddenly needed to serve diverse user bases.


Impact

This fundamentally shifted the discussion from a government-industry dynamic to a three-way partnership including users. It influenced subsequent comments about user education, enterprise responsibility, and the need for flexible, scalable governance approaches that can adapt to different user contexts.


Tomorrow AI agents will be your weakest link and they’ll be all over… Imagine an agent getting hacked or hijacked in your company with access to all kinds of stuff… having a layer of security across all five layers becomes very important.

Speaker

Jay Chaudhry


Reason

This comment introduces a forward-looking security concern that most governance discussions haven’t yet addressed – the vulnerability of AI agents themselves. It extends the security conversation beyond data protection to the integrity of autonomous systems.


Impact

This comment introduced a new dimension to the governance discussion, moving beyond current AI risks to emerging threats. It influenced the conversation to consider not just how AI is governed today, but how governance frameworks need to evolve for more autonomous AI systems.


Overall assessment

These key comments fundamentally shaped the discussion by establishing a sophisticated framework that moved beyond simple pro-regulation vs. anti-regulation positions. Jay Chaudhry’s early distinction between governance and compliance set the intellectual foundation, while Aparna Bawa’s concrete examples grounded the discussion in real-world consequences. David Zapolsky’s regulatory failure examples provided empirical support for measured approaches, and the collective emphasis on risk-based, use-case-specific governance created a nuanced consensus. The discussion evolved from abstract policy debates to practical frameworks for balancing innovation, security, and user protection across diverse global contexts. The panelists’ insights collectively argued for flexible, partnership-based governance that can adapt to rapidly evolving technology while maintaining appropriate safeguards.


Follow-up questions

How can we develop common international standards for AI governance while respecting national sovereignty?

Speaker

David Zapolsky


Explanation

Zapolsky mentioned the need for countries to converge on basic consensus around AI regulation, referencing emerging standards like ISO 42001, while acknowledging that countries should maintain their sovereign perspectives


What specific frameworks are needed to secure AI agents and prevent them from being compromised?

Speaker

Jay Chaudhry


Explanation

Chaudhry highlighted that AI agents will become the ‘weakest link’ and emphasized the need to extend zero trust security to deal with agents, but didn’t elaborate on specific implementation details


How can we effectively classify and differentiate data protection requirements based on actual risk levels?

Speaker

Jay Chaudhry


Explanation

Chaudhry argued that ‘all data is not created equal’ and gave examples of different IP protection needs, suggesting more research is needed on risk-based data classification approaches


What are the best practices for implementing AI governance that scales from individual consumers to large enterprises?

Speaker

Aparna Bawa


Explanation

Bawa discussed the challenge of providing appropriate controls across different user types, from individual consumers to enterprise IT administrators, but this requires further exploration of scalable governance models


How can we measure and ensure the effectiveness of AI inclusivity initiatives in developing regions?

Speaker

Aparna Bawa


Explanation

Bawa expressed hope for progress on AI inclusivity and upskilling but noted it’s currently ‘all talk,’ indicating a need for concrete metrics and implementation strategies


What specific mechanisms can prevent nation-state actors from using AI for malicious purposes?

Speaker

Jay Chaudhry


Explanation

Chaudhry mentioned concerns about nation states using AI to create backdoors and gain advantages, but didn’t detail specific countermeasures or detection methods


How can regulatory frameworks adapt quickly enough to keep pace with rapid AI technological advancement?

Speaker

David Zapolsky


Explanation

Zapolsky noted that compliance frameworks often become outdated by the time they’re implemented, suggesting a need for more agile regulatory approaches


What are the optimal enterprise controls and guardrails for different AI use cases across various risk levels?

Speaker

David Zapolsky


Explanation

Zapolsky mentioned Amazon’s Bedrock platform provides guardrails and controls, but more research is needed on best practices for different enterprise scenarios and risk profiles


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.