Regulating at the Speed of Code

21 Jan 2026 16:45h - 17:30h

Session at a glance

Summary

This discussion focused on the regulation and deregulation of technology and AI, featuring government ministers and business leaders with varying approaches to policy reform. The panel included UAE Minister Maryam bint Ahmed Al Hammadi, Argentina’s Minister of Deregulation Federico Sturzenegger, Meta’s Joel Kaplan, and NTT Data CEO Yutaka Sasaki, moderated by Nicholas Thompson from The Atlantic.


Minister Al Hammadi described the UAE’s comprehensive regulatory overhaul, where 90% of laws were updated over four years to modernize outdated regulations. She outlined an ambitious AI-driven project to create an “intelligence-led regulation model” that uses AI to analyze public feedback on laws, monitor court rulings for compliance, and assess regulatory burden on stakeholders while maintaining human oversight and constitutional safeguards.


Minister Sturzenegger presented Argentina’s aggressive deregulation approach, eliminating 13,500 regulations based on the philosophy that many laws serve special interests rather than public good. He argued against preemptive AI regulation, suggesting that artificial intelligence might actually reduce the need for traditional regulations by solving information asymmetry problems that originally justified government intervention.


Joel Kaplan emphasized the importance of maintaining regulatory environments that support AI development through access to talent, data, compute power, and energy. He praised the Trump administration’s AI Action Plan for removing innovation barriers and warned against the EU’s restrictive approach, which he argued could harm competitiveness. The discussion highlighted the tension between fostering innovation and managing potential risks, with participants agreeing that regulatory approaches must be tailored to each country’s specific context and needs.


Key points

Major Discussion Points:

Regulatory Reform Approaches: Two contrasting but complementary approaches were presented – the UAE’s comprehensive law modernization (updating 90% of laws in 4 years) and Argentina’s aggressive deregulation strategy (eliminating 13,500 regulations). Both aimed to remove outdated regulations while adapting to technological change.


AI-Powered Governance and Regulation: Discussion of using AI tools to improve regulatory processes, including the UAE’s “intelligence-led regulation model” that monitors public feedback, court rulings, and service delivery to continuously improve laws, while maintaining human oversight and constitutional safeguards.


AI Innovation vs. Regulation Balance: Debate over how much regulation AI needs, with arguments for minimal regulation to foster innovation (Argentina’s approach of having no AI laws) versus the need for some guardrails to build public trust and ensure safety, while avoiding the EU’s restrictive AI Act model.


US-China AI Competition and Open Source: Examination of how regulatory frameworks affect global AI competitiveness, particularly the role of open source AI models like Meta’s Llama in maintaining Western technological leadership versus Chinese alternatives like DeepSeek.


Future of Work and Democratic Governance with AI: Philosophical questions about AI’s impact on employment, the need for new legal frameworks for AI agents/robots, and the speculative possibility of AI systems becoming sophisticated enough to participate in or influence democratic decision-making.


Overall Purpose:

The discussion aimed to explore how different countries and organizations should approach regulating AI and technology, seeking common ground between innovation and necessary oversight while examining real-world examples of regulatory reform.


Overall Tone:

The conversation maintained a collaborative and intellectually curious tone throughout. While participants represented different philosophical approaches (from aggressive deregulation to careful modernization), the discussion remained respectful and constructive. The tone became slightly more playful toward the end with hypothetical scenarios about AI governance, but maintained its serious analytical foundation. There was a sense of mutual learning and appreciation for different national contexts and approaches.


Speakers

Nicholas Thompson – CEO of The Atlantic, Moderator


Maryam bint Ahmed Al Hammadi – Minister of State, Secretary General, UAE Cabinet


Joel Kaplan – Chief Global Affairs Officer of Meta


Yutaka Sasaki – President and CEO of NTT Data


Federico Sturzenegger – Minister of Deregulation, Argentina


Additional speakers:


Alejandro – (Role/title not mentioned – appears to be an assistant or colleague who participated in a demonstration during the discussion)


Full session report

Technology Regulation and AI Governance Panel Discussion

Introduction

This panel discussion, moderated by Nicholas Thompson (CEO of The Atlantic), brought together government ministers and business leaders to examine approaches to technology regulation and AI governance. The conversation featured Maryam bint Ahmed Al Hammadi (Minister of State and Secretary General of the UAE Cabinet), Federico Sturzenegger (Minister of Deregulation, Argentina), Joel Kaplan (Chief Global Affairs Officer of Meta), and Yutaka Sasaki (President and CEO of NTT Data, operating in over 50 countries).


Thompson opened by hoping to find “common ground” among participants who represent different regulatory philosophies, from comprehensive modernization to aggressive deregulation.


Contrasting Regulatory Reform Approaches

The UAE’s Modernization Strategy

Minister Al Hammadi described the UAE’s comprehensive legal overhaul, updating 90% of laws over four years through coordinated federal and local government efforts. The UAE has developed what she calls an “intelligence-led regulation model” that uses AI to analyze public feedback, monitor court rulings, and assess regulatory burden.


The system employs AI for analysis while maintaining human oversight. As Al Hammadi explained: “AI can tell us the non-compliances, but it will not impose penalties on the community.” She emphasized that constitutional safeguards and rule of law principles remain paramount, with AI serving purely in an advisory capacity.


The UAE launched a white paper on this approach and is preparing new generations of legal professionals to work with AI-integrated systems.


Argentina’s Deregulation Philosophy

Minister Sturzenegger presented a fundamentally different approach, describing Argentina’s elimination of 13,500 regulations. Rather than simplifying existing rules, his ministry questions whether regulations should exist at all. As he explained, President Milei instructed him not to be a “simplification minister” but to ask whether each regulation should exist in the first place.


Sturzenegger’s philosophy stems from his belief that most regulation results from interest groups using the state to create competitive advantages. He illustrated this with the Baltimore bridge incident, explaining how “saliency” drives regulatory responses that may not address actual problems effectively.


Using his human assistant Alejandro as a demonstration, Sturzenegger raised questions about future robot regulation, referencing Isaac Asimov's "I, Robot" stories and asking who would be responsible for autonomous agents' actions.


Finding Common Ground

Despite their different methodologies, Al Hammadi noted that both approaches involve “deregulations at the heart or at the centre of the reform,” recognizing they address similar challenges of removing outdated barriers to progress.


AI Regulation and Global Competition

Philosophical Differences on AI Oversight

The participants revealed sharp differences about AI regulation itself. Sturzenegger advocated for Argentina to prevent any AI regulation from being created, arguing that AI might solve the asymmetric information problems that originally justified government intervention in many sectors.


Kaplan strongly opposed what he characterized as the EU AI Act’s over-regulatory approach, praising instead the Trump administration’s AI Action Plan for removing barriers across federal agencies while ensuring access to talent, data, compute power, and energy.


US-China Competition and Energy Infrastructure

Kaplan emphasized the geopolitical dimensions, noting that “China built 440 gigawatts of power in 2024, and the U.S. did about 50.” He argued that Meta’s open source Llama models (downloaded 1.2 billion times) serve strategic purposes by democratizing AI access while embedding US values globally.


The discussion touched on competition with DeepSeek and other models on platforms like Hugging Face, with Kaplan arguing that open source approaches ensure broader global participation in AI development.


Business Applications and Trust

Building Trusted AI Environments

Sasaki emphasized the importance of creating trusted environments for business AI adoption, particularly for confidential data. He advocated for hybrid approaches combining public and private AI models to balance innovation with data sovereignty concerns.


The discussion revealed consensus that building public trust is essential for realizing AI’s benefits, though participants disagreed on specific mechanisms required.


Market Competition and Entry Barriers

Sturzenegger argued that competition policy should focus on enabling market entry rather than punishing dominant firms, contending that government regulation often creates barriers that reduce competition. Kaplan supported this perspective, arguing that Meta faces fierce competition in AI despite its social media market position.


Thompson challenged this view, noting that deregulation efforts can also serve special interests rather than public good.


Future Challenges and Unresolved Questions

Autonomous Systems and Legal Frameworks

Sturzenegger raised fundamental questions about liability for autonomous AI systems and robots, asking “Who will be responsible for a robot’s actions when robots become autonomous agents in society?” He also questioned how to handle intellectual property when AI can replicate copyrighted content after reading it once.


Democratic Governance

The conversation concluded with Thompson asking whether ministers would cede decision-making to AI. While all participants maintained that humans must retain ultimate authority, they acknowledged these questions will become more pressing as AI capabilities advance.


Sturzenegger even speculated about AI political campaigns, referencing scenarios where AI systems might run for office, highlighting the need for new legal frameworks as AI becomes more sophisticated.


Areas of Convergence and Persistent Disagreements

Despite different approaches, participants agreed that organizations must proactively adapt to AI transformation and that human oversight must be maintained in AI systems. They also recognized that regulatory approaches must be tailored to national contexts rather than applying universal solutions.


However, fundamental philosophical differences remained about the appropriate extent of AI regulation, with positions ranging from Sturzenegger’s complete opposition to AI regulation to Al Hammadi’s AI-integrated regulatory frameworks.


The discussion highlighted ongoing tensions about copyright in AI training, competition policy approaches, and the balance between innovation and oversight. These disagreements reflect deeper questions about governing artificial intelligence while fostering technological progress.


The ministers’ commitment to continued dialogue, including planned follow-up meetings, suggests potential for collaborative approaches despite persistent philosophical differences about regulation’s role in technological innovation.


Session transcript

Nicholas Thompson

Hello, I’m Nicholas Thompson. I’m CEO of The Atlantic. I’ll be the moderator today.

We are going to have a fabulous conversation on regulation of technology and AI and deregulation. My hope is that at the end we have a little bit of common ground, people with very different backgrounds, what laws should be changing, what needs to change structurally to get that done, and what happens next.

So let’s introduce this absolutely marvelous panel. Starting here on my left with Maryam bint Ahmed Al Hammadi. She is the Minister of State, Secretary General, UAE Cabinet.

Joel Kaplan, Chief Global Affairs Officer of Meta. Yutaka Sasaki is at the end. I wrote it down.

How are you doing? The President and CEO of NTT Data. And Federico Sturzenegger, who says you can pronounce his name just like his uncle Arnold, the Minister of Deregulation.

Best title on this whole panel. Kind of amazing. Are you the first Minister of Deregulation in Argentina?

Federico Sturzenegger

Yes. And you know when we started thinking about this with President Milei, he said, well maybe you should be the Minister of Modernization. I said no, no, no, because the state doesn’t modernize.

The state pushes you backwards. So it would be, if we want to improve the state, that would be the backwardation minister. And I think that sounds very good, okay.

And also President Milei says, I also don’t want you to be the simplification minister because I don’t want you ever to think about simplifying something without asking yourself before if that thing that you were about to simplify should exist in the first place, okay.

So we kind of ask a deeper question and that’s how we got to the deregulation and the name.

Nicholas Thompson

That’s a cool, it’s a cool way to actually push policy and perception through titles. Yeah, I’m gonna start changing all the titles at the Atlantic. All right, so let’s start with Minister Al Hammadi.

So you have changed a remarkable number of laws in your country. Yeah. Believe it’s something like…

85%... 90%? 90%. I was out of date; I googled this three hours ago. So, 90% of the laws, and it's in the last three years?

Maryam bint Ahmed Al Hammadi

Four years. Four years.

Nicholas Thompson

Explain the most consequential and interesting changes that you've done for tech and for AI.

Maryam bint Ahmed Al Hammadi

There was a direction from the leadership in the UAE that we need to relook at our laws. When I'm speaking about regulations, we have different layers of regulations. Now I'm talking about the first layer, which is the law itself.

Yeah. We have laws that were valid since 30, 40, 50 years ago. So we needed to look, and we actually did some type of deregulation, in the sense that we looked at our regulations to see which of them are still valid and which of them we need to repeal. There were hundreds of teams working in parallel under the supervision of the cabinet in the UAE. All the teams were working because, as you know, the United Arab Emirates is a federal country. We have laws at the federal level, and whenever we do any law we have to coordinate with the federal entities and with the local governments; we have seven local governments.

So actually, the process was... we were struggling a lot, but everyone had the same vision, because it had been cascaded from the top down that we need to make these changes. And we could: within the four years, 90% of our laws have been updated.

We did a massive revolution in our regulatory framework. The articles, even, in the United Arab Emirates: thousands of articles have been changed, thousands of them have been modified. And to do that, actually, it took a lot of effort from us. And then we thought, six months back, how can we use AI to do that revolution for us?

So we don't do it again manually. And here there was a project that has been approved by the leaders in the UAE: we want to develop an intelligence-led regulation model. We need a model.

Somebody will tell me, the first time: are you going to use AI tools to draft your laws for you? We said, no, we need more than this. We don't want only, for example, ChatGPT to draft a law for us.

However, we need to have a model that listens and speaks to social media. So for any law that we issue within the UAE, we need to listen at a larger scale to all our stakeholders, to see what their comments or their feedback about the law are. And then the AI analyzes that and tells us about that article.

The model will tell us: that article, that law, has a lot of comments, positive or negative, about it. And then it will suggest for us what changes we have to make in that specific law. We also need that model to speak to the court.

So we want to ensure that the law, when it goes to the court, is being implemented right. The court ruling, is it compatible with the law or not? We also need a model that speaks to the services delivered to customers, stakeholders, and our investors, to see whether the requirements that we put in the regulations are complicated.

Is it too much? Is it less or more? So we combined all these features in the project that we are working on with one of the AI companies.

Yesterday we launched the first white paper about what we have learned so far in this project. And we hope that within the project, it's a two-year project, we will have a model that can be shared with governments worldwide. They can share it, they can learn from it.

And there will be a lot of case studies about it.

Nicholas Thompson

Fascinating. Interesting stuff. Lots in it that I want to get back to.

But let’s go to Minister Sturzenegger. You’ve knocked out, you haven’t changed 90% of your laws, but you’ve knocked out 13,500. And what was the most consequential and beneficial when it comes to AI?

Federico Sturzenegger

Okay, let me see where to start. We started working two years before Milei became president. We reviewed all the laws of Argentina, and we classified them: laws that had to go, laws that were okay, and laws that had to change.

And we had all these things prepared by the time he became president. And this is what allowed us to make such a swift reform in a year and a half.

So deregulation requires some preparation, correct? Now, we didn’t have an AI law. So I didn’t have any AI law to deregulate because there was nothing.

So my only task in this moment with AI law is that no AI law appears, okay? So we want in Argentina to make sure and give the message that we do not want to regulate AI.

Nicholas Thompson

So you added a regulation.

Federico Sturzenegger

No, no, we don’t have, we don’t have. So I want to make, but all the congressmen, they all want to have a law regulating artificial intelligence.

Nicholas Thompson

But isn’t it a law saying we can’t have a law?

Federico Sturzenegger

Oh, well, if you want to put it that way, yes. We don’t want to, we want to have, we don’t want to have anything, okay? Just call that as the way you want.

Then I think actually two other questions which I think are interesting in the relationship between regulation or deregulation and artificial intelligence is, will artificial intelligence change the need for regulation at large?

Let me give you one example. One of the reasons we regulate some sectors is something called asymmetric information. So for example, there’s asymmetric, a financial sector is regulated because you think there’s some asymmetric information, the depositor kind of doesn’t know about the bank, so you have like a regulated system which kind of provides the guarantee, et cetera.

So asymmetric information in many areas of economic activity, you have asymmetric information in many markets, is the justification for regulation. Now I say, if artificial intelligence gives us a lot of knowledge, doesn’t it solve itself, the asymmetric information problem? And if it solves the asymmetric information problem, then we don’t need to regulate that.

So I think there’s a relationship of artificial intelligence maybe changing the need, this is a question I’m just posing, I’m just proposing the debate, but I think it may be the case that it may eliminate the need for regulation in certain areas because it will help solve those issues that the regulation was supposed to solve.

So I think that’s an interesting line of thought. The third one is, can we use AI to generate the deregulation, which is a little bit what the minister was just mentioning, and we really haven’t gotten there yet, because the reason is we’re not so much in the business of rewriting regulations, but in the business of removing regulation, because sometimes we have this fantasy that regulation in an economy is built like by a well-intentioned central planner, you know, some bureaucrat thinking about how to do good to society.

Well, of course this may be different in different countries. In the case of Argentina, my finding is that most of the regulation is not the result of a benevolent central planner, but it’s the result of interest, kind of people using the state as a way of building a regulation, which at the end of the day is like a ring-fencing competition, generating privileges.

Nicholas Thompson

Deregulation also comes from interest too.

Federico Sturzenegger

Well, I mean, if the regulation is the result of, which is what I find in Argentina, which regulation is the result of some sector which has been able to generate something which blocks entry, which is really what we’re going after, then in that case deregulation is really kind of an obvious choice.

Nicholas Thompson

Cool. You’re clearly the man in favor of deregulation. I’ve talked to you about it before.

Representative of the United States, our president, was just on stage talking about, what was it, 130 laws he cuts for every one he adds. But what are the laws that you like, right? I know that, like, Section 230 of the Communications Decency Act, if that law didn't exist...

Facebook’s got a lot of problems. Meta’s got a lot of problems. What are the other laws that you think are important to keep?

Joel Kaplan

So first of all, I just want to say that what both of the ministers said was really fascinating to me because they're examples of people who lead organizations who are already thinking deeply about how AI is going to change either the way that they conduct the work of their organization or have implications for what the substance of the work is, which is what every organization over the next couple of years is going to have to do.

And the ones that succeed, whether they’re government agencies or whether they’re businesses, are the ones who start thinking now about how AI is going to change their workflows and change the nature of the work that they do.

So I really, I thought it was really gratifying to hear both ministers on that front. Is your question about regulation that we like on AI generally or on regulation generally?

Nicholas Thompson

Not like speed limits. I mean, yeah.

Joel Kaplan

Well, there are some, you know, for the social media part of our business that we could talk about, but not here. So look, to power AI, you basically need four things. You need talent, you need data, you need compute, and you need energy.

So you have to have a regulatory environment that ensures access to all four of those things. And for the most part, what we’ve seen in the U.S. under President Trump’s AI Action Plan is removing barriers to innovation across those four key areas.

And I think that largely is what’s necessary to unleash the investment and the progress that we want to see on AI and that the President correctly views is necessary to win the battle, the most important national and economic security battle that we face right now, which is the AI race, and in particular against China.

So the AI Action Plan removes barriers to innovation across federal agencies. It ensures energy and data center permitting reform. So some of those reforms are positive laws in terms of what they require agencies to do in what time frame to make sure that you can actually get a…

the electricity generation plant that you need for a data center built. You know, China built 440 gigawatts of power in 2024, and the U.S. did about 50.

That’s a huge advantage that China has. We need to be much better about that. So we need to put in place regulatory structures around energy and transmission grid permitting.

That’s the kind, those are the kind of laws that I think are very positive. We have to have laws, I know this isn’t necessarily popular at the Atlantic, but we have to have laws around copyright that ensure access to the training data.

Nicholas Thompson

I like laws around copyright. Oh, about access.

Joel Kaplan

Fair use.

Nicholas Thompson

And we believe it falls on the other side, so I’m not mistaken.

Joel Kaplan

Ensuring access to data for the training. The models depend on having access to huge pools of data. And any country that doesn’t ensure that that’s going to be the case is going to be left behind because their data is not going to be included in the training models.

And so, you know, if you want, you know, if you’re another country that’s worried about AI sovereignty, really what you should be worried about is making sure that the large language models that are developed include data from your country so that the models reflect the culture and interests and expertise from your people.

So those are a couple of examples of areas where I think, you know, you do have to have kind of guardrail, but really the guardrails that you put in place are to make sure that you have access to the things you need to power these data centers.

Nicholas Thompson

Great. I’m all in favor of regulations that respect the licensing deal that you and I are going to strike right afterwards. Yutaka, will you respond?

You’re a business leader. You operate in all these countries. You operate in 50 different countries?

Yutaka Sasaki

Yeah. Yes. Yeah.

Over 50 countries.

Nicholas Thompson

So actually, let me ask you this way. How do you feel when you hear Argentina’s deregulating everything? Because what you want is you want clarity and similarity across regulations, right?

Explain how you felt listening to all this.

Yutaka Sasaki

Yes, our position is that we accelerate the business using AI, but we need to align with the regulation. And today we have two government-side people and two business-side people. Our position is very, very much the business side.

And I think we need an environment trusted by the people, by the employees, in business use cases. And I think we are not only an IT service provider, but also a data center provider. And data centers are very easy to regulate.

There are a lot of rules, something related to the power supply, yes, and some environmental issues. The cloud layer, the LLM layer, and applications will be very difficult to manage through regulation. But in business use cases, we need to establish trusted infrastructure.

And one of the keywords is sovereignty. From the government's view, from the big companies' view, we need to manage confidential data; they are reluctant to store their confidential data in public AI. And so we need to manage their confidential data in a sovereign cloud or sovereign AI.

In the AI case, we call it private AI. Of course, we need to use public AI; very intelligent employees would like to use public AI. But in the future, we need to establish private AI.

And in the future, we will manage hybrid AI, the combination of public AI and private AI. That will be the very trusted environment.

Nicholas Thompson

But that makes very good sense. Minister Al Hammadi, will you talk a little bit more? You mentioned this public feedback, which I think when I was listening, this is one of the most important issues because one of the hardest things with AI, certainly in my country, is how little people trust it.

Like they don’t. I love AI. I think it’s amazing.

I use it all the time. It’s awesome. And like most people hate it, right?

So you need, or not most, a majority. You’re building trust in part by having feedback on everything you do and building using AI to help get comments. Explain a little more about that, because it’s fascinating.

Maryam bint Ahmed Al Hammadi

Can I zoom out a little bit?

Nicholas Thompson

You can zoom out. You can zoom in. You can pivot sideways, however you want to answer it.

Maryam bint Ahmed Al Hammadi

OK, because that is only one angle. When I spoke about the model that we are developing in the UAE: you know, with AI, you always think that we will have more agile, faster regulations. But actually, there are some principles that we put in the model that we are developing that we cannot compromise.

Which is the rule of law foundation and constitutional safeguards. Let me explain. For example, equity before law.

We are not allowing that speed to become a bias. So whenever the model or the AI tool gives us any biased or harmful outcomes, we have developed in our model a mechanism to stop it.

There is also the rule of law. This is one of the principles that we put in the AI model. We need all the AI output to be traceable to a legal basis, not only to statistical patterns.

There are also fair procedures. AI can detect gaps for you, can tell you about recommendations, can give you outcomes, can analyze, can tell you the red-flag risks, but it will never bypass the procedures. I will tell you, for example, AI can tell us the non-compliances, but it will not impose penalties on the community.

So this one is not allowed. It’s one of the principles that we use in developing the AI model. We have also, as the professor was saying, we have put the principle in the model that we have regarding privacy and data protection.

So we need to ensure that, because as I said before, we are dealing with a massive amount of information. It's related to federal, local, court, economic, and social matters. We have a lot of data.

We have to tackle it in terms of quality. We need to ensure that the data we have is of good quality and consistent, because if we don't have consistency, we will not have a good AI model that will help the government. So we need to lay that out.

And this is what I have seen: there are many people who have been using AI at a large scale, in a very short time. And here is the feedback of the stakeholders and our investors: we can listen to them at a large scale using AI tools.

And also, one of the principles that we have put in the model is human accountability. We still believe that at this stage, AI can advise, but the human is in command. And this is very important for us at this stage.

Nicholas Thompson

Great. Minister, you wanted to respond?

Federico Sturzenegger

Yeah, I want to touch on the question you just asked. First of all, a quick remark on copyright. It's kind of an interesting legal issue, because we, for example, buy a book and we pay the copyright for it, OK?

But if an AI reads the book, it becomes the book. I mean, it can replicate the book perfectly. So I think, how do you solve the problem of property rights, that once you read it once, you become the book with perfection?

Anyway, it’s a question there.

Nicholas Thompson

Who knows? We’ve sued some AI companies, and the courts will decide.

Federico Sturzenegger

We’ll see. We’ll see, exactly. But look at this.

Look at this. Alejandro, parate. Parate. [Stand up. Stand up.]

Sentate. Parate de nuevo. OK? [Sit down. Stand up again. OK?]

Sentate. [Sit down.] So now imagine that he's a robot, and he's obeying my orders, correct? And so in a few years, we're going to have people going around like that, and they're going to be robots.

So we’ll have to build-

Nicholas Thompson

We don’t know, in fact.

Federico Sturzenegger

We don’t know if we don’t have some here. He’s human. He’s human, by the way, OK?

He was just being friendly with me. So we have to build a legal framework for these guys?

Who’s responsible for the actions? Because the robot goes out and goes out stealing. Who’s going to be responsible?

Okay, so we need to build a framework. So it’s an interesting, I don’t think it’s a very complicated, I mean, there’s, someone’s going to build a robot and then it’s going to sell it to someone, and then someone will take the responsibility.

I guess that’s it. But I think something that happens is interesting, is that in regulation theory, you have something called saliency. Or I call it the Baltimore effect.

Remember this boat that crashed the bridge in Baltimore a few months ago? Well, the boat crashes the bridge, pulls down the bridge, and then you change all the regulation on navigation. And there were 1 million boats that were going, no one crashed a bridge.

So the saliency is: the politician reacts, feels that they have to react to a salient event, and then regulates, imposing a tremendous cost. And I say people make mistakes all the time. But I think that if Alejandro were a robot and he made a mistake, people would immediately jump and say, oh, we have to regulate the robot so that this doesn't happen again.

So I think also, I think…

Nicholas Thompson

So is part of your job anticipating future demands for regulation and preempting?

Federico Sturzenegger

No. Well, I think that’s a trap. Because most of the, most of the regulators, they, they are very imaginative of all the terrible things that will happen to people if they’re free.

And I tell you, the imagination is, I mean, you can be the, it’s better than the best novelist in the world. No, I think we have to be in that sense, we have to take more risk. I mean, we have to see if something left free generates a problem.

And then react to the problem, but not try to solve a problem that we didn’t even know that we have. So I think we, but I think this is important because this question will come over and over again as AI starts to move into different activities. And I want to finish with one thing.

I’m sure you’d appreciate what I’m going to say now, which is the latest with the resistance to AI. Okay. So what’s the, so the last, you know, there’s this book by Isaac Asimov.

called I, Robot. It's a series of stories on robotics and artificial intelligence written in the 1960s. I mean, you read this thing today, and it kind of blows your mind.

So in the last story, it’s about a political campaign. And there’s a guy who’s a very good politician. But people are suspicious he’s a robot.

So you have a debate in society about whether we should allow it. And then the politician says, no, we can't allow a robot to be a politician. And other people say, no, wait a minute.

Maybe it's better, you know? These guys are fair. These guys work 24/7.

They’re honest. So I think, but you see, I’m seeing artificial intelligence coming into lots of mechanisms. And I think we’re going to have a lot of resistance from the people who somehow are left to the sides because of artificial intelligence.

We may not have industrial jobs in 10, 15 years, correct? And so I think this is, we have to be aware that we need to resist this fight against the implementation of AI. Because if we don’t, we would still be using candles, OK?

Because the candle producers would have protested when Edison invented the light bulb, and we would not have implemented it. But we have to fight against it.

Nicholas Thompson

Maybe, Joel, I would imagine that you agree philosophically here. But let me ask you, so some of the regulations we just heard about, like, let’s have a right to privacy for citizens in the UAE. And let’s build that into the AI we have.

Let’s make sure there are some protections against harms. Are there not a set of regulations that can build trust so that more people use it? And actually, the benefits spread throughout society?

Joel Kaplan

It may be that there are new regulations that are necessary as the technology develops. I think the risk is that, and we've got some real-world examples of this, the risk is that imaginative policymakers are much more focused on the risks and the potential harms, and much less focused on the benefits of innovation and the benefits to the economy and to growth.

The clearest model we've seen of that is in the European AI Act, the EU AI Act. There was work underway for a long time on the regulation; the regulation was technology neutral, focused on real-world harms. And then ChatGPT came out, and immediately the EU changed the focus of the regulation and began actually directing the regulation at the technology. And as a result, you had a piece of legislation passed in the EU that was very harmful to innovation, that I think really risks the EU falling quite far behind in this new technological revolution, in much the same way that, unfortunately, they've become less competitive over the last 40 or 50 years because of regulation. And it was largely because they were very focused in those early moments on all of the possible harms that could come from the technology itself.

So I think waiting to see what kind of robots are developed and which possible harms they create, and then deciding whether you need to regulate against those harms and figure out who's going to be responsible for it, and legislating that way, I think is the right approach, rather than just regulating the technology on the basis of all of the possible imaginary harms that you might imagine. Yutaka, the same analysis of the EU AI Act?

Yutaka Sasaki

Yes, yeah. We understood your side, that they have hard law. Compared to the EU, the United States side has soft law; in Japan also, we have soft law, and we need to align region by region. And, I mean, as I mentioned, private AI or regional AI: we will have a lot of AI models in the world in the future.

I believe emerging AI models are different from conventional IT systems. Conventional, traditional IT systems are programmed very precisely. The input and the output are fixed.

But with AI models, you know, you give prompts and sometimes various answers are returned. And I think conventional IT systems were easy to manage. We can manage the traditional IT systems.

But AI is different. It’s very difficult. We need to monitor the performance and the quality.

And that is very important. And managed services will be very necessary in the future, yeah.

Nicholas Thompson

Minister Sturzenegger, so one of the things that you mentioned is that, or not that you mentioned, but that you've said in the past, is that concentrated markets are good and that you should let companies get as big as the market will let them get.

Are there any areas where you believe there should be competition policy to try to prevent, say, you know, a company that controls one side of the market from using that power to get dominance in another?

Federico Sturzenegger

No, we think there should be competition policies, but competition policies should be focused on entry. It should not be, it should be focused, I’ll give you an example. For example, in European markets, you punish someone if they charge kind of way above the market prices.

Nicholas Thompson

Yeah.

Federico Sturzenegger

And they’re a dominant firm. Imagine an airline that flies a certain route and they’re the only guys, they’re dominant because they’re the only one, they have 100% of the market. And they charge an exorbitant fee.

There are no competition problems there, because anybody can enter into that route and fly that route. In fact, you want a high price to entice other people to come into the market.

And there's a paradox. Actually, we wrote an op-ed in The Economist with President Milei on exactly this question that you're asking, which came out last week. There's a paradox, which is: most of the time, the restrictions to competition come from regulation.

Because regulation kind of erects all these barriers to entry, which make it more difficult. So, of course, if you have firms colluding to exclude other players, that certainly is like the U.S. antitrust approach, and I think that's perfectly fine and should be applied.

But I think we don't pay enough attention to the need to make sure that the government is not itself responsible for generating the barriers to entry, which are responsible, I think, for markets which work less well.

I’ll give you an example. Nokia, Blackberry, iPhone. So imagine that at some point we said, look, Blackberry has too large a share of the market and we should kind of split it out and split it into parts, et cetera.

I think it would have been a terrible mistake. We didn’t actually have a competition problem because a third player could come and compete and challenge that market. We call it, in economic theory, we call it contestable markets.

So as long as you have a contestable market, if you have an increase in returns to scale, which reduces the cost, well, as a society, we want to produce at a lower cost. So you want to profit from that, but always making sure that, for example, these guys, that he can be challenged by anybody. But to the extent that no one challenges, you have to let them run their increasing returns to scale and their efficiency.

So that’s kind of our approach to the thing.

Joel Kaplan

To be clear, we feel like we have very fierce competition.

Nicholas Thompson

Oh, yeah, me, we. As I read in the government’s complaint, me, we, and. Let’s talk a little bit about the U.S.-China competition, Joel, because it’s quite interesting.

Given the AI competition, which is one of the most interesting stories in the world right now between the United States companies and Chinese companies, how should the government take that into account when setting regulations?

China clearly has an entirely different regulatory framework, and it’s not one that’s fully deregulated. Explain what U.S. administrations should do to make sure that America is maximally successful.

Joel Kaplan

Yeah, I mean, I don’t want to sound too much like a broken record here, but I mean, I think the AI action plan was designed with exactly that.

Nicholas Thompson

And also add, I think another element that is so important is open source, which is the area where China…

Joel Kaplan

Yeah.

Nicholas Thompson

Let me rephrase the question. One of the most interesting critiques of American regulation is that it made open source AI much harder, and in fact, ceded that to China. What could have been done differently to prevent that outcome?

Joel Kaplan

So I actually don’t think that ultimately happened. I think there was a real risk that that was about to happen. We actually…

I mean, the first real powerful open source large language model that was released was ours.

Nicholas Thompson

Yeah. It was the framework for DeepSeek.

Joel Kaplan

Yeah. Yeah. And it saw wide scale adoption.

We released an open source model called Llama, and the Llama models have been downloaded 1.2 billion times. So that’s an incredibly democratizing effort, right? Because that means that developers all over the world, academics, universities, governments all over the world have access to the output of these incredible investments that Meta made, right?

It’s very capital intensive, as you know, to build one of these large language models. If we open source it, then everybody has access to it for free. That’s what it means.

So during the Biden administration, that was really… We released Llama. I think in 2023 timeframe, or early 2024.

There was a lot of debate and discussion within the Biden administration as to whether open source was too risky, good thing, bad thing. We believe for any number of reasons, including the democratization, diffusing the benefits of AI across populations and regions. And from a national security standpoint, we thought that there was a real benefit to having the global standard for building on top of AI to be set by a US company with US values embedded.

The Trump administration ultimately embraced that position, and that's reflected in the AI action plan. So the debate about whether open source is a net positive, I think has been resolved in the United States in favor of open source, and I think that's a good thing. Now, the Chinese, whatever it was a year or so ago, DeepSeek came out with their open source model.

It was quite good and quite competitive, and it's achieved a lot of adoption since then, as have a couple other Chinese models. So there's a real, I think, battle still for where the global standard is gonna be, whose technology the global standard is gonna be based on. But I think, fortunately, that was an area where there was a real risk of the government really hindering the spread of Western values in the technology, but I don't actually think at the end of the day that that was why the Chinese ended up having some good entrants in open source.

Nicholas Thompson

Good entrants and more downloads on Hugging Face than American companies. Minister, when you listen to the minister from Argentina, and you listen to him taking a chainsaw to regulations, do you wish you could do some of that too? How do you respond to him?

Do you think, great job, let’s do more of that, or do you feel like, eh, maybe that works for him and I’ve got something else that works for me?

Maryam bint Ahmed Al Hammadi

I will be with him tomorrow at 8 o'clock. I want to ask him a question, straightforward: in Argentina, is there any regulation?

Federico Sturzenegger

Is there any regulation? Yes, yes, there is a lot of regulation

Maryam bint Ahmed Al Hammadi

How many?

Federico Sturzenegger

I still have a lot to catch

Maryam bint Ahmed Al Hammadi

I think, I feel that whatever is being done, for example in the UAE or Argentina, is very close, because it's only the name; His Excellency is using the word deregulation. And our aim, when it was four years back, was that we want to eliminate any law that is not valid for us at this time. Maybe it was good 20 or 30 years back, but now it is maybe not valid for us. So actually we did deregulation: when we look at any law, we look at the clauses of that law and we try to deregulate the requirements which are unnecessary, and we try to make it compatible with the changes that are happening in different sectors. And that's why I feel that any country that is working on law or regulatory reforms is actually doing deregulation at the heart or at the centre of the reform they are doing.

One thing that also Mr. Kaplan was saying is about the people. I feel that with AI, or with the age of AI, government should not resist the change. Actually, we have to invest in our people, and that's why in the UAE we are now preparing the new generation of legal professionals. We need our legal professionals to be blended between law and technology.

We need to have regulatory data scientists who can handle the data, understand the data, and from the data know how the law is performing in the real world. We need to have regulatory knowledge engineers who can convert complex legal text into something knowledgeable, something people can understand. And this is actually what we are doing now, so we can have a system that we can work on.

And I think our legal professionals can also handle it, and this is actually what we should do.

Federico Sturzenegger

Can I get a two-hander? Can I get a two-hander?

Nicholas Thompson

Yeah. You can counter.

Federico Sturzenegger

No, I'm saying, I mean, the process of regulation is different in every country. I think in the case of Argentina, it's so extreme because we built a status quo society where interest groups have captured the state and have built a regulatory framework which was building privileges for them.

And I’m sure that it’s totally different in her setup. So I think we can be, we have to be much more aggressive in deregulation because much more of the law have this kind of, the original incentive, objective was not really the one that you want for a society which is more just and growth enhancing.

I just wanted to make that clear because if not, it seems that kind of I’m proposing the deregulation forever. I think every context is different. I just wanted to clarify.

Yutaka Sasaki

We need to understand, so we need to have various regulations aligned with each country, each region, the culture, some rules. You mentioned that we need to think about the relationship between regulation and innovation. If there is strict regulation, it sometimes slows the speed of innovation.

And so we need to have a balance. Innovation will give benefits to the people, so regulation should not be too strict.

We need to look at worldwide use cases. For example, a very simple use case: COVID-19. There were various responses in each country.

And we studied which country's use case was the best. I think AI technology is evolving very rapidly, and common understanding will be very important.

And the technology literacy. The regulator side and the technology innovator side, we need to have a common understanding of the latest AI technology. What do you think?

Nicholas Thompson

Let me ask a last question. We have one minute left and I can't resist the temptation. But Minister Sturzenegger, we talked about how we're going to regulate robots.

If there was an AI that became much more capable than we have right now. And it could really take into account public opinion. And it could listen to all sides.

And like the UAE’s AI, it understood the law. And was so much better than the president. Would you cede decision making to this AI?

Federico Sturzenegger

Well, through a democratic process.

Nicholas Thompson

I mean…

Federico Sturzenegger

What are you asking me to… No, I mean through a democratic process.

Nicholas Thompson

But it would be hard for it to be elected. I mean, I don’t know what the constitutional rules are in Argentina. But if you had this on your desk, would you say, tell me what to do, I’ll follow it?

Federico Sturzenegger

I can’t do it as an official, I can’t take opinions, but we have a democracy and that’s the way we elect our rulers. In the story of Asimov, it was a democracy and it was an election, so people at the end decided, well, let people decide if they want to choose a robot or not. I won’t tell you how the story ends.

Nicholas Thompson

All right, thank you very much for your time, that was a fabulous conversation. Thank you so much, all of you, it was wonderful to be out of Sheristan.


Maryam bint Ahmed Al Hammadi

Speech speed

134 words per minute

Speech length

1414 words

Speech time

629 seconds

UAE updated 90% of laws in four years through massive coordinated effort across federal and local governments

Explanation

The UAE undertook a comprehensive legal reform initiative involving hundreds of teams working in parallel under cabinet supervision. This massive effort required coordination between federal entities and seven local governments, resulting in thousands of articles being changed or modified across the regulatory framework.


Evidence

Hundreds of teams working parallelly under the supervision of the cabinet, coordination with federal entities and seven local governments, thousands of articles changed and modified


Major discussion point

Regulatory Reform and Deregulation Approaches


Topics

Legal and regulatory


Agreed with

– Federico Sturzenegger
– Yutaka Sasaki

Agreed on

Regulatory approaches must be tailored to national contexts


Disagreed with

– Federico Sturzenegger

Disagreed on

Extent of deregulation needed


UAE developing intelligence-led regulation model using AI to analyze stakeholder feedback and court rulings

Explanation

The UAE is creating an AI model that goes beyond simple law drafting to actively listen to social media and stakeholder feedback on laws, analyze court rulings for compatibility, and assess service delivery requirements. This two-year project aims to create a comprehensive system that can be shared globally with case studies.


Evidence

AI model that listens to social media for stakeholder feedback, analyzes court rulings for law compatibility, assesses service delivery requirements, two-year project with white paper launched, will be shared globally


Major discussion point

AI Integration in Government and Regulation


Topics

Legal and regulatory | Sociocultural


Agreed with

– Joel Kaplan

Agreed on

Organizations must proactively adapt to AI transformation


Disagreed with

– Federico Sturzenegger
– Joel Kaplan

Disagreed on

Approach to AI regulation – preemptive vs reactive


AI should advise but humans must remain in command with accountability

Explanation

While AI can detect gaps, provide recommendations, analyze data, and identify risks, it should never bypass established procedures or make final decisions. Human oversight and accountability remain essential, with AI serving in an advisory capacity rather than replacing human judgment in critical decisions.


Evidence

AI can detect non-compliances but cannot impose penalties, human accountability as core principle in the AI model


Major discussion point

AI Integration in Government and Regulation


Topics

Legal and regulatory | Human rights


Agreed with

– Yutaka Sasaki

Agreed on

Human oversight must be maintained in AI systems


Constitutional safeguards and rule of law principles must be maintained even with AI integration

Explanation

The UAE’s AI model incorporates non-negotiable principles including equity before law, traceability to legal basis rather than just statistical patterns, and mechanisms to prevent bias. These safeguards ensure that AI outputs remain grounded in legal foundations and constitutional protections.


Evidence

Mechanisms to stop biased or harmful AI outcomes, requirement for AI outputs to be traceable to legal basis not just statistical patterns, fair procedures that AI cannot bypass


Major discussion point

AI Regulation Philosophy and Approach


Topics

Legal and regulatory | Human rights


Disagreed with

– Federico Sturzenegger
– Joel Kaplan

Disagreed on

Approach to AI regulation – preemptive vs reactive


Legal professionals need training blending law and technology

Explanation

The UAE is preparing a new generation of legal professionals who combine legal expertise with technological skills. This includes training regulatory data scientists who can analyze legal performance through data and regulatory knowledge engineers who can make complex legal texts more accessible and understandable.


Evidence

Training regulatory data scientists to handle and understand data for law performance analysis, regulatory knowledge engineers to convert complex law text to understandable formats


Major discussion point

Future Workforce and Legal Profession


Topics

Legal and regulatory | Economic


Public feedback mechanisms using AI can build trust in government regulation

Explanation

By using AI to systematically collect and analyze public feedback on regulations at scale, governments can build greater trust and legitimacy. This approach allows for more responsive and accountable governance by incorporating citizen input into the regulatory process.


Evidence

AI tools to listen to stakeholders at large scale, analysis of positive and negative comments about laws, suggestions for changes based on feedback


Major discussion point

Trust and Adoption of AI Technology


Topics

Legal and regulatory | Sociocultural


Investment in people preparation essential as AI transforms work

Explanation

Rather than resisting AI-driven changes, governments should invest in preparing their workforce for the AI age. This involves developing new skill sets that combine traditional expertise with technological capabilities to work effectively alongside AI systems.


Evidence

UAE preparing new generation of legal professionals blending law and technology, training regulatory data scientists and knowledge engineers


Major discussion point

Future Workforce and Legal Profession


Topics

Economic | Development


Agreed with

– Joel Kaplan

Agreed on

Organizations must proactively adapt to AI transformation



Federico Sturzenegger

Speech speed

184 words per minute

Speech length

2202 words

Speech time

715 seconds

Argentina eliminated 13,500 regulations by preparing comprehensive review before new administration took office

Explanation

Argentina conducted a systematic review of all laws two years before the new president took office, classifying them into categories of laws to eliminate, keep, or modify. This preparation enabled swift regulatory reform once the new administration began, demonstrating the importance of advance planning for effective deregulation.


Evidence

Two years of preparation before Milei became president, classification of all laws into categories (eliminate, keep, modify), swift reform implementation in one and a half years


Major discussion point

Regulatory Reform and Deregulation Approaches


Topics

Legal and regulatory


Disagreed with

– Maryam bint Ahmed Al Hammadi

Disagreed on

Extent of deregulation needed


Deregulation should focus on removing regulations created by interest groups rather than benevolent planning

Explanation

Most regulations are not the result of well-intentioned central planning but rather the product of special interests using the state to create competitive advantages and barriers to entry. Deregulation should therefore target these privilege-creating regulations that ring-fence competition rather than serve the public interest.


Evidence

Finding that most Argentine regulation results from interest groups using the state to ring-fence competition and generate privileges, not from benevolent central planners


Major discussion point

Regulatory Reform and Deregulation Approaches


Topics

Legal and regulatory | Economic


Disagreed with

– Nicholas Thompson

Disagreed on

Role of regulation in creating market barriers


Different countries require different approaches based on their regulatory context and capture by interest groups

Explanation

The extent and approach to deregulation should vary by country depending on how extensively their regulatory framework has been captured by special interests. Argentina’s aggressive deregulation is justified by its extreme situation where interest groups built a status quo society, while other countries may have different needs.


Evidence

Argentina’s extreme case of interest groups capturing the state and building regulatory framework for privileges, acknowledgment that UAE’s setup is totally different


Major discussion point

Regulatory Reform and Deregulation Approaches


Topics

Legal and regulatory


Agreed with

– Maryam bint Ahmed Al Hammadi
– Yutaka Sasaki

Agreed on

Regulatory approaches must be tailored to national contexts


Disagreed with

– Maryam bint Ahmed Al Hammadi

Disagreed on

Extent of deregulation needed


Argentina aims to prevent any AI regulation from being created

Explanation

Since Argentina had no existing AI laws, the deregulation minister’s role regarding AI is to ensure no new AI regulations are introduced. This represents a proactive approach to maintaining regulatory freedom in emerging technologies by preventing restrictive laws from being enacted in the first place.


Evidence

No existing AI law in Argentina to deregulate, goal to ensure no AI law appears, resistance to congressmen wanting to regulate artificial intelligence


Major discussion point

AI Regulation Philosophy and Approach


Topics

Legal and regulatory


Disagreed with

– Maryam bint Ahmed Al Hammadi
– Joel Kaplan

Disagreed on

Approach to AI regulation – preemptive vs reactive


AI could solve asymmetric information problems that justify many regulations, potentially reducing need for regulation

Explanation

Many sectors are regulated because of asymmetric information problems, such as in financial services where depositors lack information about banks. If AI provides greater knowledge and transparency, it could eliminate these information asymmetries and thus reduce the justification for regulatory intervention in various markets.


Evidence

Example of financial sector regulation due to asymmetric information between depositors and banks, theoretical possibility that AI knowledge could solve these information problems


Major discussion point

AI Integration in Government and Regulation


Topics

Legal and regulatory | Economic


Regulation should wait for actual problems rather than trying to solve imaginary harms

Explanation

Regulators tend to be overly imaginative about potential harms and create preemptive regulations based on hypothetical problems. Instead, policy should allow freedom and only react with regulation when actual problems manifest, rather than trying to prevent every conceivable risk.


Evidence

Baltimore bridge collapse example showing how politicians react to salient events by over-regulating, comparison of potential over-reaction to robot mistakes versus human mistakes


Major discussion point

AI Regulation Philosophy and Approach


Topics

Legal and regulatory


Disagreed with

– Maryam bint Ahmed Al Hammadi
– Joel Kaplan

Disagreed on

Approach to AI regulation – preemptive vs reactive


Competition policy should focus on enabling market entry rather than punishing dominant firms

Explanation

Effective competition policy should concentrate on removing barriers to entry rather than penalizing companies for being successful or dominant. High prices from dominant firms can actually attract new entrants, and the real competition problem usually stems from regulatory barriers that prevent new players from entering markets.


Evidence

Airline route example where high prices from a dominant carrier should attract new entrants, Nokia-BlackBerry-iPhone example showing natural market competition, concept of contestable markets


Major discussion point

Competition Policy and Market Structure


Topics

Legal and regulatory | Economic


Government regulation often creates barriers to entry that reduce competition

Explanation

Most restrictions to competition actually come from government regulation itself, which creates barriers to entry and makes markets less competitive. This creates a paradox where the government simultaneously tries to promote competition while its own regulations are the primary obstacle to competitive markets.


Evidence

Article in The Economist with President Milei on this topic, observation that regulation erects barriers to entry making competition more difficult


Major discussion point

Competition Policy and Market Structure


Topics

Legal and regulatory | Economic


Disagreed with

– Nicholas Thompson

Disagreed on

Role of regulation in creating market barriers


Contestable markets allow for natural competition without need for intervention

Explanation

As long as markets remain contestable – meaning new players can enter and challenge incumbents – there is no need for regulatory intervention even if one firm dominates. The threat of potential competition disciplines market behavior, and society benefits from economies of scale as long as the market remains open to challengers.


Evidence

Economic theory of contestable markets, Nokia-BlackBerry-iPhone succession showing natural market evolution, increasing returns to scale reducing costs for society


Major discussion point

Competition Policy and Market Structure


Topics

Legal and regulatory | Economic


Resistance to AI adoption must be overcome to prevent technological stagnation

Explanation

Society will face significant resistance to AI implementation from people whose jobs or industries are displaced, similar to how candle producers might have resisted electric lighting. This resistance must be overcome to allow beneficial technological progress, even though it may eliminate certain types of work.


Evidence

Isaac Asimov’s I, Robot stories about political debate over robot politicians, prediction that industrial jobs may disappear in 10-15 years, analogy to candle producers resisting Edison’s light bulb


Major discussion point

Trust and Adoption of AI Technology


Topics

Economic | Sociocultural


J

Joel Kaplan

Speech speed

164 words per minute

Speech length

1370 words

Speech time

499 seconds

Organizations must think now about how AI is going to change their workflows to succeed

Explanation

Both government agencies and businesses need to proactively consider how AI will transform their operations and work processes. The organizations that begin this strategic thinking now will be the ones that succeed in adapting to AI-driven changes, while those that wait will fall behind.


Evidence

Examples of both ministers already thinking deeply about AI’s impact on their organizational workflows and substance of work


Major discussion point

AI Integration in Government and Regulation


Topics

Economic | Legal and regulatory


Agreed with

– Maryam bint Ahmed Al Hammadi

Agreed on

Organizations must proactively adapt to AI transformation


US needs regulatory environment ensuring access to talent, data, compute, and energy for AI development

Explanation

To power AI effectively, the United States must maintain regulatory frameworks that provide access to four critical resources: skilled talent, training data, computational power, and energy infrastructure. The regulatory environment should remove barriers to accessing these essential inputs rather than creating obstacles.


Evidence

Trump administration’s AI Action Plan removing barriers across federal agencies, energy and data center permitting reform, China adding 440 gigawatts of power capacity in 2024 versus 50 gigawatts in the US


Major discussion point

US-China AI Competition and Open Source


Topics

Legal and regulatory | Infrastructure


Disagreed with

– Nicholas Thompson

Disagreed on

Copyright and data access for AI training


EU AI Act represents harmful over-regulation that risks falling behind in technological revolution

Explanation

The European Union’s approach to AI regulation shifted from focusing on real-world harms to directly regulating the technology itself after ChatGPT’s release. This regulatory approach is harmful to innovation and risks making the EU less competitive in AI, similar to how regulation has made Europe less competitive over the past 40-50 years.


Evidence

EU AI Act changed from technology-neutral focus on real-world harms to directly regulating technology after ChatGPT release, EU becoming less competitive over 40-50 years due to regulation


Major discussion point

AI Regulation Philosophy and Approach


Topics

Legal and regulatory


Disagreed with

– Federico Sturzenegger
– Maryam bint Ahmed Al Hammadi

Disagreed on

Approach to AI regulation – preemptive vs reactive


Open source AI models like Meta’s Llama democratize access and embed US values globally

Explanation

Meta’s open source Llama models have been downloaded 1.2 billion times, providing free access to powerful AI capabilities for developers, academics, universities, and governments worldwide. This democratizes AI access while establishing US technology and values as the global standard for AI development.


Evidence

Llama models downloaded 1.2 billion times, first powerful open source large language model released by Meta, free access for developers and institutions globally


Major discussion point

US-China AI Competition and Open Source


Topics

Legal and regulatory | Development


Trump administration’s AI Action Plan removes barriers to innovation across federal agencies

Explanation

The AI Action Plan focuses on eliminating regulatory obstacles that hinder AI development and deployment across government agencies. This approach prioritizes enabling innovation and investment rather than creating new restrictions, particularly in the context of competing with China in the AI race.


Evidence

AI Action Plan removes barriers across federal agencies, ensures energy and data center permitting reform, addresses national and economic security battle against China


Major discussion point

US-China AI Competition and Open Source


Topics

Legal and regulatory


Battle continues for global AI standards between US and Chinese models

Explanation

While the US initially led in open source AI with Meta’s Llama models, Chinese companies like DeepSeek have released competitive open source models that have gained significant adoption. The competition for whose technology becomes the global standard for AI development remains active and consequential.


Evidence

Chinese DeepSeek model achieved significant adoption, Chinese models getting more downloads on Hugging Face than models from American companies, ongoing competition for global AI standards


Major discussion point

US-China AI Competition and Open Source


Topics

Legal and regulatory | Economic


Meta faces fierce competition in AI space

Explanation

Despite regulatory concerns about market concentration, Meta operates in a highly competitive AI environment with multiple strong competitors. This competition drives innovation and prevents any single company from dominating the AI market without challenge.


Major discussion point

Competition Policy and Market Structure


Topics

Economic | Legal and regulatory


Y

Yutaka Sasaki

Speech speed

98 words per minute

Speech length

568 words

Speech time

346 seconds

Balance needed between regulation and innovation speed, with common understanding between regulators and technologists

Explanation

Strict regulation can prevent rapid innovation, so countries need to find the right balance that allows beneficial innovation while maintaining necessary protections. This requires regulators and technology innovators to develop shared understanding of the latest AI technologies and their implications.


Evidence

COVID-19 responses varied by country with different outcomes, AI technology evolving rapidly requiring common understanding, need for technology literacy on both regulator and innovator sides


Major discussion point

Regulatory Reform and Deregulation Approaches


Topics

Legal and regulatory | Economic


Agreed with

– Federico Sturzenegger
– Maryam bint Ahmed Al Hammadi

Agreed on

Regulatory approaches must be tailored to national contexts


Trusted environment needed for business AI adoption, including sovereign/private AI for confidential data

Explanation

Businesses and governments are reluctant to store confidential data in public AI systems, creating demand for sovereign cloud and private AI solutions. Companies need trusted infrastructure that can handle sensitive information while still providing AI capabilities for their operations.


Evidence

Companies reluctant to store confidential data in public AI, need for sovereign cloud and private AI solutions, data center regulations around power supply and environmental issues


Major discussion point

Trust and Adoption of AI Technology


Topics

Cybersecurity | Legal and regulatory


Agreed with

– Maryam bint Ahmed Al Hammadi

Agreed on

Human oversight must be maintained in AI systems


Hybrid AI combining public and private models will create trusted environments

Explanation

The future of enterprise AI will involve combining public AI systems with private AI models into hybrid solutions. This approach allows organizations to benefit from powerful public AI while keeping control over sensitive data in private systems, creating a more trusted overall environment (an illustrative routing sketch follows this entry).


Evidence

Employees wanting to use public AI for its intelligence, future need to manage a combination of public and private AI systems


Major discussion point

Trust and Adoption of AI Technology


Topics

Cybersecurity | Legal and regulatory
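
As a rough illustration of the hybrid pattern described in this entry, the sketch below routes prompts between two hypothetical endpoints, call_private_model and call_public_model, based on a crude sensitivity check. It is not NTT Data's architecture; a real deployment would rely on proper data-classification policies, logging, and access controls.

import re

# Hypothetical patterns marking content that must stay in the private environment.
SENSITIVE_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # an ID-number-like pattern, for illustration
]

def contains_sensitive_data(prompt):
    """Crude check; real systems would use data-classification and policy engines."""
    return any(pattern.search(prompt) for pattern in SENSITIVE_PATTERNS)

def call_private_model(prompt):
    # Placeholder for a sovereign/private AI endpoint run inside the organisation.
    return "[private model] " + prompt

def call_public_model(prompt):
    # Placeholder for a public AI service.
    return "[public model] " + prompt

def route(prompt):
    """Send sensitive prompts to the private model and everything else to the public one."""
    if contains_sensitive_data(prompt):
        return call_private_model(prompt)
    return call_public_model(prompt)

if __name__ == "__main__":
    print(route("Summarise this confidential supplier contract"))
    print(route("Explain the new data centre permitting rules"))

The point of the split is that sensitive material never leaves the trusted environment, while routine queries can still benefit from the stronger public model.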


Soft law approaches preferable to hard regulatory frameworks

Explanation

Japan and other countries are adopting soft law approaches to AI regulation rather than the hard regulatory frameworks seen in places like the EU. This approach provides more flexibility while still offering guidance, and requires alignment across different regions with varying regulatory philosophies.


Evidence

EU has hard law while US and Japan have soft law approaches, need to align region by region with different regulatory approaches


Major discussion point

AI Regulation Philosophy and Approach


Topics

Legal and regulatory


N

Nicholas Thompson

Speech speed

191 words per minute

Speech length

1253 words

Speech time

391 seconds

AI regulation should focus on building trust to increase adoption and spread benefits throughout society

Explanation

Thompson argues that while he personally loves AI and finds it amazing, most people hate it or are distrustful of it. He suggests that certain regulations, such as privacy protections and safeguards against harms, could actually build public trust and lead to wider adoption of AI technology, ultimately spreading its benefits more broadly across society.


Evidence

Personal observation that most people hate AI despite his own positive experience with it, reference to privacy protections and harm safeguards as trust-building measures


Major discussion point

Trust and Adoption of AI Technology


Topics

Legal and regulatory | Sociocultural


Copyright law should protect content creators’ licensing rights in AI training

Explanation

Thompson indicates that The Atlantic has sued some AI companies over copyright issues and believes that copyright law should fall on the side of protecting content creators’ rights. He suggests that proper licensing deals should be struck between AI companies and content creators rather than allowing unrestricted access to copyrighted material for training purposes.


Evidence

The Atlantic has sued AI companies over copyright, reference to striking licensing deals with AI companies


Major discussion point

AI Integration in Government and Regulation


Topics

Legal and regulatory


Disagreed with

– Joel Kaplan

Disagreed on

Copyright and data access for AI training


Deregulation also comes from special interests, not just regulation

Explanation

In response to Sturzenegger’s argument that regulation is driven by interest groups, Thompson points out that deregulation efforts can also be driven by special interests rather than purely public-minded motives. This suggests that both regulatory and deregulatory efforts can be influenced by groups seeking to advance their own interests rather than the broader public good.


Major discussion point

Regulatory Reform and Deregulation Approaches


Topics

Legal and regulatory | Economic


Disagreed with

– Federico Sturzenegger

Disagreed on

Role of regulation in creating market barriers


Concentrated markets and competition policy require careful consideration of cross-market dominance

Explanation

Thompson questions whether companies should be allowed to use dominance in one market to gain control in another market. He suggests that while markets may naturally allow companies to grow large, there may still be a role for competition policy to prevent the abuse of market power across different sectors or markets.


Evidence

Question about companies using power in one market to gain dominance in another


Major discussion point

Competition Policy and Market Structure


Topics

Legal and regulatory | Economic


US regulatory framework may have inadvertently ceded open source AI leadership to China

Explanation

Thompson suggests that American regulations made open source AI development more difficult, potentially allowing China to gain an advantage in this crucial area. He notes that this represents a significant strategic concern, as open source AI development is important for maintaining technological leadership and influence in global AI standards.


Evidence

China’s success with DeepSeek and other open source models, which receive more downloads on Hugging Face than models from American companies


Major discussion point

US-China AI Competition and Open Source


Topics

Legal and regulatory | Economic


Agreements

Agreement points

Organizations must proactively adapt to AI transformation

Speakers

– Joel Kaplan
– Maryam bint Ahmed Al Hammadi

Arguments

Organizations must think now about how AI is going to change their workflows to succeed


UAE developing intelligence-led regulation model using AI to analyze stakeholder feedback and court rulings


Investment in people preparation essential as AI transforms work


Summary

Both speakers agree that organizations, whether government or business, must begin strategic planning now for AI integration rather than waiting. They emphasize the need for proactive adaptation of workflows and investment in preparing people for AI-driven changes.


Topics

Economic | Legal and regulatory


Human oversight must be maintained in AI systems

Speakers

– Maryam bint Ahmed Al Hammadi
– Yutaka Sasaki

Arguments

AI should advise but humans must remain in command with accountability


Trusted environment needed for business AI adoption, including sovereign/private AI for confidential data


Summary

Both speakers emphasize that while AI can provide valuable assistance and analysis, human accountability and oversight remain essential. AI should serve in an advisory capacity rather than making autonomous decisions.


Topics

Legal and regulatory | Human rights


Regulatory approaches must be tailored to national contexts

Speakers

– Federico Sturzenegger
– Maryam bint Ahmed Al Hammadi
– Yutaka Sasaki

Arguments

Different countries require different approaches based on their regulatory context and capture by interest groups


UAE updated 90% of laws in four years through massive coordinated effort across federal and local governments


Balance needed between regulation and innovation speed, with common understanding between regulators and technologists


Summary

All three speakers acknowledge that regulatory reform approaches must be adapted to each country’s specific context, institutional structure, and needs rather than applying a one-size-fits-all solution.


Topics

Legal and regulatory


Similar viewpoints

Both speakers favor minimal preemptive regulation of AI, preferring to address actual problems as they arise rather than creating restrictive frameworks based on hypothetical risks. They view over-regulation as harmful to innovation and competitiveness.

Speakers

– Federico Sturzenegger
– Joel Kaplan

Arguments

Regulation should wait for actual problems rather than trying to solve imaginary harms


EU AI Act represents harmful over-regulation that risks falling behind in technological revolution


Topics

Legal and regulatory


Both speakers believe that effective competition policy should focus on removing barriers to market entry rather than penalizing successful companies for being dominant, emphasizing that true competition comes from contestable markets.

Speakers

– Federico Sturzenegger
– Joel Kaplan

Arguments

Competition policy should focus on enabling market entry rather than punishing dominant firms


Meta faces fierce competition in AI space


Topics

Legal and regulatory | Economic


Both ministers have undertaken massive regulatory reform efforts, removing or updating large numbers of outdated laws and regulations. They share the view that comprehensive legal reform requires systematic preparation and coordinated implementation.

Speakers

– Maryam bint Ahmed Al Hammadi
– Federico Sturzenegger

Arguments

UAE updated 90% of laws in four years through massive coordinated effort across federal and local governments


Argentina eliminated 13,500 regulations by preparing comprehensive review before new administration took office


Topics

Legal and regulatory


Unexpected consensus

Deregulation as a form of regulation reform

Speakers

– Maryam bint Ahmed Al Hammadi
– Federico Sturzenegger

Arguments

UAE updated 90% of laws in four years through massive coordinated effort across federal and local governments


Argentina eliminated 13,500 regulations by preparing comprehensive review before new administration took office


Explanation

Despite their different titles and approaches (UAE’s comprehensive law updating vs Argentina’s aggressive deregulation), both ministers recognized they are essentially doing similar work – removing outdated regulations and making legal frameworks more compatible with current needs. Al Hammadi explicitly stated they are both doing ‘deregulations at the heart or at the center of the reform.’


Topics

Legal and regulatory


Need for balanced approach to AI regulation

Speakers

– Joel Kaplan
– Yutaka Sasaki
– Nicholas Thompson

Arguments

US needs regulatory environment ensuring access to talent, data, compute, and energy for AI development


Balance needed between regulation and innovation speed, with common understanding between regulators and technologists


AI regulation should focus on building trust to increase adoption and spread benefits throughout society


Explanation

Despite representing different perspectives (business, technology services, media), these speakers found common ground in recognizing that some regulatory framework is necessary for AI – not complete deregulation, but thoughtful regulation that enables rather than hinders beneficial AI development and adoption.


Topics

Legal and regulatory | Sociocultural


Overall assessment

Summary

The speakers showed surprising consensus on several key issues: the need for proactive organizational adaptation to AI, the importance of human oversight in AI systems, the necessity of context-specific regulatory approaches, and the recognition that both regulation and deregulation can serve similar reform purposes when removing outdated barriers.


Consensus level

Moderate to high consensus on fundamental principles, with disagreement mainly on the extent and speed of deregulation rather than core objectives. This suggests potential for collaborative approaches to AI governance that balance innovation with necessary safeguards, though implementation details would likely remain contentious.


Differences

Different viewpoints

Approach to AI regulation – preemptive vs reactive

Speakers

– Federico Sturzenegger
– Maryam bint Ahmed Al Hammadi
– Joel Kaplan

Arguments

Argentina aims to prevent any AI regulation from being created


Regulation should wait for actual problems rather than trying to solve imaginary harms


UAE developing intelligence-led regulation model using AI to analyze stakeholder feedback and court rulings


Constitutional safeguards and rule of law principles must be maintained even with AI integration


EU AI Act represents harmful over-regulation that risks falling behind in technological revolution


Summary

Sturzenegger advocates for no AI regulation and waiting for problems to emerge, while Al Hammadi supports proactive AI-integrated regulatory frameworks with built-in safeguards. Kaplan takes a middle position, opposing over-regulation like the EU but supporting some regulatory structures.


Topics

Legal and regulatory


Copyright and data access for AI training

Speakers

– Joel Kaplan
– Nicholas Thompson

Arguments

US needs regulatory environment ensuring access to talent, data, compute, and energy for AI development


Copyright law should protect content creators’ licensing rights in AI training


Summary

Kaplan argues for ensuring access to training data through fair use principles, while Thompson believes copyright law should protect content creators and require proper licensing deals with AI companies.


Topics

Legal and regulatory


Role of regulation in creating market barriers

Speakers

– Federico Sturzenegger
– Nicholas Thompson

Arguments

Government regulation often creates barriers to entry that reduce competition


Deregulation should focus on removing regulations created by interest groups rather than benevolent planning


Deregulation also comes from special interests, not just regulation


Summary

Sturzenegger argues that regulation primarily serves special interests and creates barriers, while Thompson counters that deregulation efforts can also be driven by special interests rather than public good.


Topics

Legal and regulatory | Economic


Extent of deregulation needed

Speakers

– Federico Sturzenegger
– Maryam bint Ahmed Al Hammadi

Arguments

Argentina eliminated 13,500 regulations by preparing comprehensive review before new administration took office


Different countries require different approaches based on their regulatory context and capture by interest groups


UAE updated 90% of laws in four years through massive coordinated effort across federal and local governments


Summary

While both engaged in massive regulatory reform, Sturzenegger advocates for more aggressive deregulation due to Argentina’s extreme capture by interest groups, while Al Hammadi focuses on updating and modernizing existing laws rather than wholesale elimination.


Topics

Legal and regulatory


Unexpected differences

Open source AI strategy and national competitiveness

Speakers

– Joel Kaplan
– Nicholas Thompson

Arguments

Open source AI models like Meta’s Llama democratize access and embed US values globally


Battle continues for global AI standards between US and Chinese models


US regulatory framework may have inadvertently ceded open source AI leadership to China


Explanation

Despite both speaking from a US perspective, they disagree on whether US policy has been successful in maintaining open source AI leadership, with Thompson suggesting that regulatory missteps allowed China to gain an advantage while Kaplan defends the current approach.


Topics

Legal and regulatory | Economic


Human oversight requirements in AI systems

Speakers

– Maryam bint Ahmed Al Hammadi
– Federico Sturzenegger

Arguments

AI should advise but humans must remain in command with accountability


Resistance to AI adoption must be overcome to prevent technological stagnation


Explanation

Unexpectedly, the UAE minister who embraces AI innovation insists on maintaining human control, while the deregulation-focused Argentine minister is more willing to consider AI autonomy, including hypothetically ceding decision-making to a superior AI.


Topics

Legal and regulatory | Human rights


Overall assessment

Summary

The discussion revealed significant philosophical differences about the role of regulation in AI governance, the balance between innovation and protection, and the appropriate level of government intervention in emerging technologies.


Disagreement level

Moderate to high disagreement on fundamental approaches, but with surprising areas of convergence on the need for regulatory reform and AI integration. The disagreements reflect deeper ideological differences about the role of government, market dynamics, and risk management in technological innovation. These differences have significant implications for international cooperation on AI governance and could lead to divergent regulatory frameworks that complicate global AI development and deployment.


Takeaways

Key takeaways

Different countries require tailored regulatory approaches – UAE’s comprehensive law updates (90% in 4 years) vs Argentina’s aggressive deregulation (13,500 regulations eliminated) reflect different contexts and needs


AI integration in government should maintain human accountability and constitutional safeguards while leveraging AI for analysis and recommendations


Regulatory philosophy should focus on removing barriers to innovation rather than preemptively regulating imaginary harms, particularly for emerging technologies like AI


Competition policy should prioritize enabling market entry rather than punishing market dominance, with government avoiding creation of regulatory barriers to competition


Open source AI models are crucial for democratizing access and embedding national values globally, making them a key battleground in US-China AI competition


Trust-building mechanisms, including public feedback systems and hybrid AI approaches combining public and private models, are essential for AI adoption


Legal and regulatory professionals need retraining to blend law and technology expertise for the AI era


Resolutions and action items

UAE to continue developing intelligence-led regulation model as a two-year project with plans to share globally


UAE investing in training new generation of legal professionals with technology skills including regulatory data scientists and knowledge engineers


Argentina to maintain policy of preventing new AI regulations while continuing broader deregulation efforts


Ministers Al Hammadi and Sturzenegger agreed to meet the following day to discuss regulatory approaches further


Unresolved issues

Copyright and intellectual property rights for AI training data remain legally uncertain and subject to ongoing litigation


Responsibility and liability frameworks for autonomous AI systems and robots not yet established


How to balance innovation speed with necessary safeguards without stifling technological progress


Standardization of AI regulations across different countries and regions while respecting cultural and legal differences


Long-term workforce displacement concerns as AI capabilities expand


Whether AI could eventually replace human decision-making in government roles


How to prevent regulatory capture by interest groups in AI policy development


Suggested compromises

Soft law approaches rather than hard regulatory frameworks for AI governance, allowing flexibility while providing guidance


Hybrid AI systems combining public and private models to balance innovation with data sovereignty concerns


Waiting for actual AI-related problems to emerge before creating regulations, rather than regulating based on hypothetical risks


Focus on ensuring access to AI development resources (talent, data, compute, energy) while maintaining basic constitutional and legal safeguards


Common understanding and literacy development between regulators and technology innovators to bridge knowledge gaps


Thought provoking comments

President Milei says, I also don’t want you to be the simplification minister because I don’t want you ever to think about simplifying something without asking yourself before if that thing that you were about to simplify should exist in the first place, okay. So we kind of ask a deeper question and that’s how we got to the deregulation and the name.

Speaker

Federico Sturzenegger


Reason

This comment reframes the entire approach to government reform by questioning the fundamental existence of regulations rather than just improving them. It introduces a philosophical shift from ‘how can we make this better?’ to ‘should this exist at all?’ This represents a radical departure from traditional regulatory reform thinking.


Impact

This comment set the tone for the entire discussion by establishing the most extreme deregulatory position. It forced other participants to define their own approaches in relation to this radical stance, with Minister Al Hammadi later clarifying that their approaches were actually quite similar despite different terminology.


Will artificial intelligence change the need for regulation at large? Let me give you one example… if artificial intelligence gives us a lot of knowledge, doesn’t it solve itself, the asymmetric information problem? And if it solves the asymmetric information problem, then we don’t need to regulate that.

Speaker

Federico Sturzenegger


Reason

This insight connects AI capabilities directly to fundamental economic theories of regulation. By identifying asymmetric information as a core justification for regulation and suggesting AI could eliminate this problem, he proposes that AI might make entire categories of regulation obsolete.


Impact

This comment elevated the discussion from practical regulatory reform to theoretical questions about the future necessity of regulation itself. It introduced a new framework for thinking about AI’s impact on governance that other participants hadn’t explicitly addressed.


We don’t want only, for example, ChatGPT to draft for us a law. However, we need to have a model that listen and speak to the social media… We need that model to just speak to the court… We need also a model that speaks to the service that is delivered to the customer and stakeholders

Speaker

Maryam bint Ahmed Al Hammadi


Reason

This comment reveals a sophisticated, multi-layered approach to AI-assisted governance that goes far beyond simple automation. It envisions AI as a comprehensive feedback system that monitors law implementation across multiple touchpoints in real-time, representing a genuinely innovative approach to responsive governance.


Impact

This shifted the conversation from theoretical debates about regulation to concrete, practical applications of AI in governance. It demonstrated that governments are already implementing sophisticated AI systems for regulatory management, grounding the discussion in current reality rather than future speculation.


Most of the regulation is not the result of a benevolent central planner, but it’s the result of interest, kind of people using the state as a way of building a regulation, which at the end of the day is like a ring-fencing competition, generating privileges.

Speaker

Federico Sturzenegger


Reason

This comment challenges the fundamental assumption that regulation serves the public interest, instead proposing that much regulation serves private interests seeking to limit competition. This reframes the entire regulatory debate from ‘good vs. bad regulation’ to ‘public interest vs. private capture.’


Impact

This comment prompted Nicholas Thompson to immediately challenge him with ‘Deregulation also comes from interest too,’ leading to a more nuanced discussion about the political economy of both regulation and deregulation. It forced the conversation to grapple with the reality that both regulatory and deregulatory efforts can serve special interests.


Any country that doesn’t ensure that [access to training data] is going to be the case is going to be left behind because their data is not going to be included in the training models. And so… if you’re another country that’s worried about AI sovereignty, really what you should be worried about is making sure that the large language models that are developed include data from your country

Speaker

Joel Kaplan


Reason

This comment reframes the AI sovereignty debate by arguing that restrictive copyright laws actually undermine rather than protect national interests. It suggests that countries protecting their data from AI training are essentially excluding themselves from the future of AI development.


Impact

This insight shifted the discussion toward the geopolitical implications of AI regulation, connecting domestic regulatory choices to international competitiveness. It influenced the subsequent discussion about US-China AI competition and the importance of open-source models in maintaining Western technological leadership.


AI can tell us the non-compliances, but it will not impose penalties on the community. So this one is not allowed. It’s one of the principles that we use in developing the AI model.

Speaker

Maryam bint Ahmed Al Hammadi


Reason

This comment articulates a crucial principle for AI governance: the distinction between AI as an analytical tool versus AI as a decision-maker. It establishes clear boundaries for AI’s role in government, maintaining human accountability while leveraging AI capabilities.


Impact

This comment introduced the critical concept of human-AI boundaries in governance, influencing the later discussion about whether AI could or should replace human decision-makers. It provided a practical framework that other participants could reference when discussing AI’s appropriate role in government.


Overall assessment

These key comments transformed what could have been a superficial discussion about AI regulation into a sophisticated exploration of fundamental questions about the nature and purpose of regulation itself. Sturzenegger’s radical deregulatory perspective served as a catalyst, forcing other participants to articulate more nuanced positions and defend the continued relevance of regulation. Al Hammadi’s detailed description of the UAE’s AI-assisted governance model grounded the theoretical discussion in practical reality, while Kaplan’s insights about data access and geopolitical competition added crucial international dimensions. Together, these comments created a multi-layered conversation that addressed philosophical, practical, and strategic aspects of AI governance, ultimately revealing the complexity of balancing innovation, public interest, and democratic accountability in the age of artificial intelligence.


Follow-up questions

How do you solve the problem of property rights when AI can perfectly replicate copyrighted content after reading it once?

Speaker

Federico Sturzenegger


Explanation

This addresses a fundamental legal challenge where traditional copyright frameworks may not adequately address AI’s ability to perfectly reproduce content, requiring new legal frameworks for intellectual property in the AI age


Who will be responsible for a robot’s actions when robots become autonomous agents in society?

Speaker

Federico Sturzenegger


Explanation

As AI and robotics advance toward autonomous operation, establishing clear legal liability frameworks becomes critical for public safety and legal accountability


Will artificial intelligence eliminate the need for regulation in areas where asymmetric information was the original justification for regulation?

Speaker

Federico Sturzenegger


Explanation

This explores whether AI’s ability to provide comprehensive information could fundamentally change the rationale for many existing regulations, particularly in financial and other information-sensitive sectors


How can we use AI to generate deregulation processes?

Speaker

Federico Sturzenegger


Explanation

This examines the potential for AI to systematically identify and remove unnecessary regulations, though Sturzenegger noted they haven’t fully explored this yet


How do we ensure that AI models include data from all countries to reflect diverse cultures and interests?

Speaker

Joel Kaplan


Explanation

This addresses AI sovereignty concerns and the need for global representation in AI training data to prevent cultural bias and ensure inclusive AI development


What is the optimal balance between regulation strictness and innovation speed across different countries and cultures?

Speaker

Yutaka Sasaki


Explanation

This explores how different regulatory approaches affect innovation rates and how to find the right balance for each country’s specific context and culture


How can we establish common understanding and technology literacy between regulators and technology innovators?

Speaker

Yutaka Sasaki


Explanation

This addresses the knowledge gap between policymakers and technologists that could lead to ineffective or harmful regulation of rapidly evolving AI technologies


How do we monitor and manage AI model performance and quality given their unpredictable outputs compared to traditional IT systems?

Speaker

Yutaka Sasaki


Explanation

This highlights the technical challenge of governing AI systems that produce variable outputs, unlike traditional programmed systems with fixed input-output relationships
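
One simple way to picture the variability issue raised here is to resample the same prompt and measure how often the outputs agree. The sketch below is illustrative only: generate is a random stand-in for a model call, and real monitoring would also track accuracy, drift, and safety metrics.

from collections import Counter
import random

def generate(prompt):
    # Random stand-in for a model call; real outputs vary from run to run.
    return random.choice(["approve", "approve", "reject"])

def agreement(prompt, runs=20):
    """Share of runs that match the most common output for the same prompt."""
    outputs = Counter(generate(prompt) for _ in range(runs))
    return outputs.most_common(1)[0][1] / runs

if __name__ == "__main__":
    score = agreement("Does application 123 meet the licensing criteria?")
    print("agreement across runs: {:.0%}".format(score))  # low agreement flags prompts for review

Low agreement does not prove the model is wrong, but it flags prompts where the fixed input-output behaviour of traditional IT systems cannot be assumed.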


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.