State of Play: AI Governance / DAVOS 2025
22 Jan 2025 10:30h - 11:15h
Session at a Glance
Summary
This panel discussion at the World Economic Forum focused on the governance and global impact of artificial intelligence (AI). The participants, including government ministers and industry leaders, explored the challenges and opportunities presented by AI technology.
A key theme was the potential for AI to either exacerbate or bridge global divides. Minister Abdullah AlSwaha emphasized the risk of a “dignity divide” if AI benefits are not distributed equitably. The panelists agreed on the need for inclusive AI development that doesn’t leave behind the Global South.
The discussion highlighted tensions between regulation and innovation. While some advocated for light-touch governance to avoid stifling progress, others stressed the importance of ensuring AI safety and ethical use. The European approach of risk-based regulation was presented as a potential model.
Concerns about the concentration of AI power in a few large companies were raised. Smaller players like Mistral argued for a more decentralized, open-source approach to AI development to promote wider access and innovation.
The panelists explored ways to make AI more accessible and affordable globally, including through technological advancements to reduce costs. They also discussed the importance of education and awareness to prepare societies for AI’s impact.
The social and cultural implications of AI were touched upon, with recognition that its effects on work and human interaction will be profound. The discussion concluded with calls for global collaboration to ensure AI benefits all of humanity while managing its risks.
Keypoints
Major discussion points:
– The potential of AI to either exacerbate or bridge global digital divides
– The need for balanced, risk-based AI governance and regulation approaches
– Making AI technology more accessible, affordable and decentralized globally
– The importance of focusing on AI applications/outcomes rather than just models
– Educating populations on responsible AI use and preparing for societal impacts
Overall purpose:
The goal of this discussion was to explore the current state of AI governance and how to promote responsible AI development and adoption globally, while addressing concerns about concentration of power and potential negative impacts.
Tone:
The tone was largely collaborative and optimistic about AI’s potential, with panelists from government and industry finding significant common ground. There was a shared sense of urgency about the need to make AI more inclusive and accessible. The tone became slightly more cautionary when discussing potential societal impacts towards the end, but remained generally positive about addressing challenges through cooperation.
Speakers
– Samir Saran: Moderator
– Arvind Krishna: Chairman and CEO, IBM USA
– Arthur Mensch: Co-founder and Chief Executive Officer, Mistral
– Clara Chappaz: Minister Delegate for Artificial Intelligence and Digital Technology, France
– Abdullah AlSwaha: Minister of Communications and Information Technology, Saudi Arabia
Additional speakers:
– None identified
Full session report
AI Governance and Global Impact: A World Economic Forum Panel Discussion
This panel discussion at the World Economic Forum brought together government ministers and industry leaders to explore the governance and global impact of artificial intelligence (AI). The participants delved into the challenges and opportunities presented by AI technology, focusing on its potential to either exacerbate or bridge global divides.
Key Themes and Discussion Points
1. AI Governance and Regulation
The panelists agreed on the need for a balanced approach to AI regulation, but differed slightly in their specific recommendations. Arvind Krishna, Chairman and CEO of IBM USA, advocated for a risk-based approach to regulation while emphasizing the importance of avoiding regulatory capture. Clara Chappaz, France’s Minister Delegate for Artificial Intelligence and Digital Technology, stressed the importance of light-touch regulation to avoid stifling innovation. Arthur Mensch, Co-founder and CEO of Mistral, proposed focusing on regulating applications rather than models, drawing an analogy to car safety: “When you validate a car, when you ensure that the car is safe, you ensure the entirety of the car, you don’t look only at the engine.”
The European approach of risk-based regulation was presented as a potential model for global adoption. However, the specific mechanisms for implementing such regulation across different jurisdictions remained an unresolved issue.
2. Addressing AI Divides and Promoting Inclusivity
A central theme of the discussion was the potential for AI to either exacerbate or bridge global divides. Abdullah AlSwaha, Minister of Communications and Information Technology for Saudi Arabia, emphasized the risk of a “dignity divide” if AI benefits are not distributed equitably. He highlighted the need to close compute, data, and algorithmic divides, stating, “In the analog age, we have $110 trillion today worth of GDP. Per capita, for every dollar being made in the global south, somebody makes 3.5 to 4x that in the global north. That’s not acceptable.”
The panelists proposed various strategies to make AI more accessible and affordable globally:
– Arvind Krishna focused on technological solutions to reduce costs, claiming “line of sight” to a 30-fold cost reduction on the path to making AI 100 times cheaper.
– Arthur Mensch advocated for a decentralized, open-source approach to AI development, emphasizing the importance of user interface design in making AI accessible across different cultures and languages.
– Clara Chappaz emphasized the importance of education and widespread adoption, particularly in educating teachers and students about AI use in education.
AlSwaha mentioned the Digital Cooperation Organization and the AI for All initiative as efforts to promote inclusivity in AI development.
3. AI’s Social and Economic Impact
The discussion highlighted the profound impact AI is expected to have on society and the economy. Arvind Krishna emphasized AI’s potential to unlock trillions in global GDP, while Arthur Mensch noted that AI will fundamentally change how people work and interact. Clara Chappaz focused on AI’s opportunity to bridge knowledge and skills divides, while Abdullah AlSwaha warned about the risk of concentrating power in the hands of a few.
Krishna drew an analogy comparing concerns about AI to historical concerns about newspapers, television, and the internet, suggesting that society will adapt to AI as it has to previous technological revolutions.
4. International Cooperation on AI
The importance of global collaboration in AI development was a recurring theme. Clara Chappaz announced that France would host a global AI summit in February 2025 to unite countries on a common vision for AI development. She also highlighted the Global Partnership on AI, an initiative launched with Canada in 2019 to foster responsible AI development.
Abdullah AlSwaha invited collaboration to close AI divides through the Digital Cooperation Organization. The panelists agreed on the need for building coalitions of like-minded nations and fostering global partnerships to develop ethical AI.
Challenges and Opportunities
The discussion highlighted several key challenges and opportunities in AI development and governance:
1. Balancing regulation and innovation: Finding the right approach to govern AI without stifling progress remains a challenge.
2. Ensuring global inclusivity: Making AI technology accessible and affordable worldwide is crucial to prevent widening existing divides.
3. Addressing societal impacts: Preparing for AI’s profound effects on work, social interaction, and power dynamics is essential.
4. Fostering diverse innovation: Clara Chappaz emphasized the importance of supporting small companies and new ways of thinking in AI development.
5. Technological solutions for inclusivity: Arvind Krishna’s focus on drastically reducing AI costs presents an opportunity for wider access.
6. Open-source and decentralization: Arthur Mensch’s advocacy for open-source models and decentralized AI development offers a path to more inclusive and diverse AI ecosystems.
Conclusion
The panel discussion revealed a high level of consensus among speakers from different sectors and regions on key issues such as the need for light-touch, risk-based regulation and the importance of making AI inclusive and accessible. This alignment suggests the potential for more coordinated efforts in AI development and regulation globally.
However, differences remained in the specific approaches to regulation, strategies for AI inclusivity, and the balance between open-source and proprietary technological solutions. These differences reflect the complexity of AI governance and the need for diverse approaches to address global challenges.
The discussion concluded with calls for global collaboration to ensure AI benefits all of humanity while managing its risks. Specific initiatives, such as France’s upcoming AI summit and the Global Partnership on AI, demonstrate concrete steps towards international cooperation. As AI continues to evolve rapidly, ongoing dialogue and cooperation between governments, industry leaders, and innovators will be crucial in shaping its future development and impact.
Session Transcript
Samir Saran: It’s 11.30, so a very warm welcome to this fascinating discussion over the next 45 minutes on the state of play of AI governance. Let me also welcome all our online viewers, and those who want to engage with this session, please use the hashtag WEF25. We are going to have a series of interrogations, questions, conversations on different aspects of AI, which is perhaps one of the most exciting domains today. It promises to lift economies. It also, in its current state, could lead to fragmentation. It has different regimes and operating principles defining its development in different parts of the world. Many see it as a way to respond to some of the most critical challenges, but many also see it as a domain that allows concentration of unbridled power. And in some sense, what we do today as governments, as companies, as individuals, is going to define our future, our foreseeable future over the next few decades. And to decipher where we are in relation to AI governance, we have an esteemed panel of Minister Abdullah AlSwaha from the Ministry of Communications and Information Technology of Saudi Arabia, Minister Clara Chappaz, Minister Delegate for Artificial Intelligence and Digital Technology of France, Arvind Krishna, Chairman and CEO, IBM USA, and of course, Arthur Mensch, co-founder and chief executive officer, Mistral. As I learned in the green room, everyone is betting on Mistral. So clearly, Arthur, you have a responsibility to bear. But Minister, let me start with you first. Digital divides, that’s something I spoke about in my framing. There is a fear: is the age of AI going to codify the divides that exist today? Or do you believe there’s this potential to… bridge those divides? And how do you, as minister in Saudi Arabia, a beneficiary of previous rounds of industrialization, see this new wave, which will be based on AI?
Abdullah AlSwaha: We were talking about this earlier, that in the digital age, we can talk about the digital divide. But under the intelligent age, it’s really a dignity divide of deciding who gets to progress and who gets to be left behind. And you have a book called The New World Disorder. Let’s talk about the disorders that took place in the analog age, digital age, and how could we avoid it in the intelligent age. In the analog age, we have $110 trillion today worth of GDP. Per capita, for every dollar being made in the global south, somebody makes 3.5 to 4x that in the global north. That’s not acceptable. And it’s not a surprise that it’s going to cost humanity $6 trillion to close down the gender divide, climate change, and all of the problems. In the digital age, 2.6 billion people left behind in the digital age. 100 million in the global north, 2.5-ish in the global south. Again, for every dollar being captured from that $16 trillion today digital economy, 3.5 to 5x being made. Let’s talk about the new AI order. What’s happening? You’re talking about a projection, whether you’re a realist or an optimist, 700 million people to a billion people. That’s an exclusive club of a billion people who have access to compute algorithms and data. That means if this room represents the world, you’re talking in every row, if you have 10 people, one to lead and nine to be left behind. That’s not acceptable. And that’s why we need a new governance model in partnership with the kingdom and like-minded nations that we have here, and parties, and thought leaders to make sure that we deliver a comprehensive governance model that is inclusive, innovative, and impactful.
Samir Saran: Can I just dive deeper into that particular aspect? So what should the global community focus on to make it more impactful, more inclusive, and especially agentic AI becoming more purposeful?
Abdullah AlSwaha: I cannot stress this enough. And let me draw another parallel for you. If you deprive a person of oxygen for three minutes, it will cause irreversible damage. If you dehydrate them of water for three days, you will cause irreversible damage. Same thing for food within three weeks. If you take out compute and you restrict it, algorithms, frontier, large models, small models, and you restrict them, and data flows, at one point, you will cause irreversible damage to economies and to societies. This is why this is an existential conversation in 2025 as we transition into the intelligent age. So we used to have three divides, a talent divide, a digital divide, and a governance divide. Those were carried over in the intelligent age with three new divides because of the scaling law. The more compute, the less noisy, the more powerful the model is. The more data, the more parameters, same thing. So we have a compute divide, a data divide, and an algorithmic divide. What are we doing? We’re partnering with France, with IBM, with Mistral, with global players to make sure that as the world today needs 63 gigawatts worth of compute power, by the way, that’s in the next five years as big as what India needs, the most populous country on Earth, and as much water, because energy translates into heat, into cooling and water, as much water as the US needs. And hence, the kingdom has a fiduciary duty to say, as the energy leader of the world that powers 20% of the energy mix of the world, we’re not putting quotas. We’re collaborating with everybody. We’re putting our resources to make sure that we close down that compute divide. On the algorithmic front, we’re collaborating with everybody here to make sure that large models, small models, are tackling the most pressing issues in generative AI for sickle cell disease, which I showcased last Davos, for the first heart transplant that is made with full robotics with physical AI. And in agentic AI, we have one of the real use cases with Mr. Rao and others collaborating on how we could optimize within the circular carbon economy, delivering the lowest carbon footprint and uplift cost for our energy sector, delivering a true use case for how agentic AI can deliver prosperity for the people and the planet.
Samir Saran: Thank you, Minister. I think a very vivid description of the threshold of participation, but also the opportunity to respond to some of our critical questions of our times. Minister Chappaz, France will be hosting. the big AI summit next month. And I’m delighted that Prime Minister Modi is going to co-chair that summit in Paris. So India has a voice in your conference. Thank you for that. But are you going to be disappointed into some of these basic issues that have been outlined by Minister Abdullah in his opening remarks?
Clara Chappaz: So France will host the second global summit on AI on February 9th and 10th, 2025. So just a few weeks from now. And the goal of the summit is basically to bring the vision that you’ve described to the world and to unite countries around this vision. But before doing that, the summit and more globally, governments and companies, I think need to do two things. Because this conversation, for it to happen and for it to be heard, there I think needs to be collaboration on a few other topics. Number one, we’re all very convinced that AI is great in this room. Otherwise, you would probably not listen to us at that moment and you would do something else. But where government and companies need to work together, and what the AI summit will also aim to do, is create trust among people that AI is going to bring positives to their daily jobs, to their daily life. Because every time we go out of those rooms and we meet people who are not that familiar with the technology, we still have a lot of questions that we need to answer collectively. We still have a lot of questions like, oh, how is my data used? Is AI going to replace me in my job? How is it going to impact the way I consume information? All of those very fundamental questions, if governments and companies don’t come together to create a framework where we can show the positives that AI brings to the world, with science, finding new drugs, with, you talked a lot about energy, how actually AI can bring a lot of positive outcomes in this conversation, bringing ways to consume less energy, with all the topics that are basically top of mind for the world, and where government also needs to bring regulation, not for the sake of regulation, but just to create this trust around safety, this collaboration with companies and governments is very critical. So that’s one piece the summit will also bring: showing the world what AI can bring positively so that we can really push adoption much faster, because there’s a lot of progress that can happen, but if people don’t use the tools, it’s not gonna happen. Two, economic competitiveness. It’s great if we all share, I mean, we’re all at the WEF, so we probably all have very similar views that having global conversations and bringing global talents together when it comes to such a transformative technology is something very positive and can bring a lot to the world, but we can only be heard if we are competitive. And by being competitive, I mean giving, and that’s also the role of government, giving our companies everything they need to create opportunities for real solutions to be built all around the world and not just concentrated in the hands of a few players, and I’m sure, Arthur, you will come back to that. And again, regulation can only be a tool. It doesn’t, and it should not hinder innovation, it should not slow down our companies, and that’s something that France has been pushing for very much when it comes to the AI Act, for example, in Europe. The AI Act needs to be seen as a tool to help you go faster because you don’t have to go and negotiate with 27 countries on how AI rules are gonna be implemented. We can unite as Europe to say, okay, you have only one framework to look at, and this framework needs to be implemented in a way that doesn’t hinder innovation, and that’s something we’re looking at very much. But this summit is also here to say how do we unite and give companies what they need to make it as attractive for them to build those solutions in Europe as well?
So it’s infrastructure, it’s financing, it’s access to data, I’m sure we’ll talk about data a lot. Let’s make sure that all you need is also in the hands of innovative players because concentration doesn’t get to innovation faster. It doesn’t, it’s not true. We need those innovative players, we need those small companies to bring new ways of thinking with AI. And that’s something we owe as governments to do. And then if we do that right, showing the society that it’s positive, helping smaller companies develop alternative solutions, then I think we can have this global conversation. And that’s also what we want to do, obviously, with the summit, saying, how do we make sure that this humongous progress that this technology will bring goes to all countries? And to that front, obviously, chairing the summit with India is a very positive signal we want to share with the world on how do we have very concrete steps, like we are also raising a foundation, making sure we give access to all countries to that technology.
Samir Saran: Minister, a quick follow-up question. It was perhaps harshly said that the EU understands regulation but does not understand innovation. In the AI age, is the EU going to be a different actor?
Clara Chappaz: It is a different actor. And I think in this conversation, the narrative is super important. And we all contribute to this narrative. But who is on stage talking about AI today, building models? A European person. And who is heading AI efforts at META? A French person, Yann LeCun, who is also somewhere in this forum. We have talents. We have some of the people, some of the brightest minds, who are working and building this technology. And we should not let ourselves get stuck in this idea that Europe only regulates. We regulate for the sake of getting the technology faster to the market. That’s really the message I want to have today. And the European Commission is going to announce a lot of things and their ambition in the next few weeks. But this is our time to show that we’re also putting regulation as a tool to accelerate innovation.
Samir Saran: Thank you so much. Arvind, let me turn to you. And let me ask you from an industry perspective, how do you, since IBM has been in this domain for very long, since its infancy, if you were to go through the cycles that Minister AlSwaha has outlined, in some sense, how will you design collaborative frameworks between AI companies and policy makers in France and other parts of the world that actually allow meaningful innovation, but also significant oversight that prevents harms? And in some sense, is that collaboration going to allow us to come up with a self-regulatory architecture that is, like you mentioned backstage, not heavy-handed, light touch, but prevents harms?
Arvind Krishna: Yeah, Samir, thank you for the question. Also for the audience, both here and virtual, I think you’re having here an example of government across two different geographies and industry across two different geographies who might be remarkably more in alignment than against each other. It also probably helps that we all do work with each other quite a bit behind the scenes in common goals. So I’ll look at it from the goals of a industry player. Our goal is always to try to deploy technology that improve our client’s business. Our clients include governments and include other industries, and they’re in all countries. I also take the perspective that we have been in both what you’re calling the global South and the global North now for 110 years. I think we opened our first office in Brazil in 1920, if I remember correctly, just as of this thing. Now, I believe that regulation should try to be light touch. I use the example, is it a scalpel or is it a sledgehammer? And too often regulation, even if legislation desires it to be a scalpel, when it gets into the implementation of the actual regulators, it becomes a sledgehammer. And that’s a problem because that is friction that slows down innovation. Capital and brains will then flow to places where it is easier to make progress because these are mobile. The capital as well as smart people can pick and choose where they want to work. I offer a very simple framework on how to think about this. Number one, we need to keep this technology open. We are at the very, very early stages. For some regulator to pick who’s a winner, even if accidentally, would be a problem. So focus deeply on how open you can keep it and how do you allow lots and lots of innovation to happen. Number two, take a risk-based approach. 90% of what is to be done does not have a lot of risk in it. Allow that to happen with a very, very light touch. Then focus your heavy-handed approach only on those cases where there is extreme risk, where you can say there is a nation-level risk or there is risk to life and death. Keep it reserved for only those aspects. And lastly, this will sound strange because we also develop models, but I would turn around and say, hold model developers accountable for what they are producing. A lot of subtlety in there, lots of words to be there, because then you’d say, well, what is accountable means somebody could misuse it. I got it, but then how clear are you about what those guardrails are that you have put in? So do hold developers responsible. And from a tech vendor, I know that sounds strange. Tech people normally run and want complete immunity and no liability. And so I often just offer that as a framework to think about it.
Samir Saran: You know, you remind me of a press interview I’ve written some time back where the technology minister was asked, do politicians understand technology? She says, perhaps we don’t, and we are trying to, but do technologists understand politics? Because there, technology has political consequences. And how do you create this interesting nexus is a challenge, I guess, in the AI age more acutely. Arthur, let me turn to you. I want to, you defy the threshold challenge of AI. You know, I think Minister Abdullah had kind of outlined you need big computing. capabilities, big data centers, big money and algorithmic capabilities, and therefore many suggest that the winners of the third industrial revolution are going to be the incumbents in the fourth, because they are the ones who have gathered all these capabilities. You come out of the blue and you start making a name. What does your model or your growth and your presence have, you know, what are the lessons from that emergence of Mistral?
Arthur Mensch: So thank you very much for inviting me. So I would say even though we, I mean, we obviously don’t really have a say on the governance and we don’t actually think we should, I think there’s also a good gap to keep between what the politics decide to do and what the industries tell them to do, because the incentives are not fully aligned and I think we need to acknowledge that. But in many respects, the creation of Mistral was driven by an intention to make the technology more widely available and to also show that there was a counter model to the scaling approach, to the “it takes a hundred billion to get there” approach that has been promoted by some of our U.S. friends. And so in that respect, I think what we’ve shown is that it obviously takes capital, it obviously takes innovation, it takes algorithmic innovation, but with dedicated people, and I think they exist in Europe, we have great talent there, but they also exist in many other parts of the world, including in the Middle East, including in Saudi Arabia with whom we’re working, I think we need to make sure that the AI we’re building, the industrial revolution that we’re all building together, doesn’t end up being controlled by three U.S. players, because if that’s the case, the digital divide that we’ve been observing is just going to continue to grow, the GDP per capita gap will continue to grow. But the chance we have is that this is not such a costly technology, and if you make the right choices you can make sure that this is an innovation that can be much more widely spread than the innovation that we’ve done in the previous cycles. And so the approach we’ve taken was to show that. The approach we took, since we want to promote a more decentralized approach where every actor has access to its own AI, makes its own choice regarding how it behaves and how it works, we promoted open-source models just to show that these models were actually harmless, and I think we’ve shown in two years of existence that they haven’t been used for harmful purposes and that, more importantly, they contributed to lowering the barrier to entry for making interesting technology on top of it. If you start from an open source model today and you want to put a new capacity in a certain language, in Arabic for instance, it’s much easier if you have access to very good open source models because you start from a certain level of intelligence and you can bring your own expertise. So that applies to language, that applies to culture that you will be able to put into the models themselves, that also applies to specific expertise that the industry may have. So I think AI, in the way it works, very much calls for decentralization, and the biggest risk we have today is a form of concentration of power, narratives that say that it’s actually a centralized technology and a technology that we should keep in a centralized way for risk purposes. So we’ve been fighting against this narrative, we’ve been showing through actual achievements that the benefits of being more open, of being more decentralized, of working with everybody in the world were actually super important and shouldn’t be given up for risk reasons, I would say. And now when it comes to governance, I think the way we should think about governance is in controlling applications and making sure that applications we make on AI actually behave as we want them to behave.
So that’s very much an evaluation problem, that you design a model, you connect it to data, you connect it to tools, you give it instructions, and you want it to behave in a certain way. You want it to give, for instance, healthcare advices to physicians. But before deploying it, you obviously want to make sure that this works, because if it only works 80% of the time, it means that 20% of the time, we’ll make a bad decision. And this is actually a hard problem, this is a scientific problem, you need to evaluate, you need to come up with automated way of evaluating, you need to come up with processes to get humans to evaluate the technology. I think that’s actually the crux of what we should be doing when it comes to governance. We should make sure that whatever is put into production, whatever is exposed to end users, is actually well controlled. I think we can, where we can all agree, with our US friends, with everyone in the world, we can all agree on a way to evaluate things, on a way to decide whether the software we’ve been making, because we’re really building software, this is not really different from previous cycles. We need to agree on how do we decide that the software we’ve made is working. And for me, and I think that I would differ a little bit from your approach to model, although this is probably linked, I think we really need to focus on the application side, because what matters is the systems that you put into place. So the model is really a part, it’s an engine. When you validate a car, when you ensure that the car is safe, you ensure the entirety of the car, you don’t look only at the engine. So that’s the approach I would follow.
Samir Saran: That’s a really fascinating proposition. What you are really suggesting is that perhaps before we deploy AI solutions to populations, we need a verifiable process that prevents harms, very similar to what we have done, for example, in vaccines, that you have testing, you have audits, before you go into the population scale, but you are suggesting that maybe automation can be the basis of generating some of the outcomes.
Arthur Mensch: I’m suggesting that this is the direction of travel we should have. I’m not suggesting that we should… we should dampen the spread of the technology today.
Samir Saran: Look, I don’t disagree with you, but I think some variation of that model may not be a bad principle.
Arthur Mensch: So today the technology is working well enough on certain use cases. We should deploy it to population. They should adopt it. That’s very important. But tomorrow it’s going to get more and more complex as we move into agentic behavior, et cetera. And there we do need to think more about how we evaluate. Everyone of us in the industry should think about it. I think eventually we’ll come up with ways that converge and that can be implemented at the global level.
Samir Saran: Arvind, I’ll just come to you. Arvind, I spoke to you in the green room and about the concentration of power or the concentration of the technology in a few hands. Your take on it. IBM is one of the big giants of this sector.
Arvind Krishna: Look, a simple perspective on this. If you lead to too much heavy-handed regulation, you will lead to what I would call regulatory capture. And one should wonder whether some of the players talking about heavy-handed regulation, it benefits them if it leads to regulatory capture of only a few people. And it all comes down to, are you talking about an ever-increasing amount of investment? Actually, I would point to Arthur and say that’s a counterbalance because people now say hundreds. They were saying it takes multiple billions, but you guys did it with quite a bit less than that. So human ingenuity, yes, with deep skills and deep training and all that could get over those things. I think it’s incredibly important that we have hundreds of companies innovating in this because it’s also important for sovereignty. I’ll use that word. Saudi Arabia is not going to give up on having some of their own models, whether it’s for language or whether it’s for some of their unique purposes. The minister talked about 20% of global energy comes out of there. That’s an area that’s increasingly important to them as, by the way, are urban hotspots and many other things. For France, whether it’s defense, whether it’s aerospace, there’s many areas where they will want to have their own, not just a horizontal tech company. I think you’ve got to maintain that freedom. Then you say, well, but you can do that. Well, not really, if at the bottom layer you need permission from a provider of a base model. So that is sort of my perspective. If I could jump on Arthur’s point, I do agree that the onus should lie on the deployer of the application, not the developer of the model. Where one has to get careful, though, is if one says that’s an auditable process, a regulator will immediately jump to, it’s not a self-audit, I need a third-party audit. The estimate is there’s a billion applications. Okay, put the costs in. If you’re going to go spend $100,000 on an external audit for a billion applications, that’s $100 trillion. Okay, you’re not going to do that. I mean, that is what I call heavy-handed regulation that kills it. So it has to get back to what is the risk in this presumed application.
Samir Saran: A risk-based approach on how you verify.
Arvind Krishna: Or you accept a self-audit by the person deploying it in most cases. So that is where this is extremely important, or it doesn’t work. I mean, your vaccine example, let’s just point out, what is the cost to get a vaccine through the FDA? You’re talking about a billion dollars. Yeah, huge. And that’s why I was asking that question. That analogy, I think, doesn’t quite work at the scale at which we all want to adopt it. Look, we talk about $16 trillion worth of global GDP getting unlocked with AI. If you put that heavy a burden, it’s not going to get unlocked. And I think that we all need that, given some of the other challenges we all have in front of us.
Samir Saran: Minister, you wanted to come in.
Clara Chappaz: Yeah, I wanted to jump into this conversation because that’s exactly, actually, the approach that Europe has taken to say, okay, the regulation should be based on the risks associated with the usage of the model and not the models themselves. So coming back to the analogy that Arthur was making with the motor or the engine of a car, it’s not the engine itself that we want to look after. It’s how this engine is going to be used. And probably if an engine is used for a hair dryer, you don’t need the same regulation as if it’s used for a nuclear plant. And that’s exactly the framework that Europe has been taking with the EU AI Act, to say: okay, there are things in Europe we just don’t want to see, like using AI to rate people on social behavior. It’s just not the vision of the democracies that we want to have in Europe.
Samir Saran: That’s a fair point, Minister, because most of the secondary regulations already exist. When you deploy AI for drug development, that process already exists, and a verifiable process is already in place. So you don’t need to double up on regulation just because you’re building a new tool. Minister, did you want to comment on two aspects? One is concentration, but second is, since you started with the divides argument, how are the Global South and the emerging and developing parts of the world going to participate in this in numbers so that the seven billion who are not accounted for in the previous waves of growth become?
Abdullah AlSwaha: I think it’s very critical that we take a step back and look at the fundamentals. This is a classic example of challenges of perception versus perspective. We talk about governance, but let’s go to the building blocks of what is governance. It goes back to the steam engine. There’s a component called the governor. It controls the flow of resources to make sure that there’s no cap on energy, there’s no cap on the steam engine, there’s no cap on anybody, and it creates that balance to make sure that it’s inclusive, innovative, impactful to help humanity if you look at the industrial revolutions. Although, at the heart of the Arab and the Muslim world, we would argue that the governor component came 563 years before that, from Al-Jazari’s irrigation systems with gears, back in the Islamic golden age. Going back to the basics, there are three layers, the data layer, the algorithmic layer, and the outcome layer. The data layer, we all agree, we all have data protection acts because consumer privacy is very critical. On the algorithmic layer, we should not stifle innovation. Let’s borrow brilliantly from B2B SaaS. What’s effectively happening in agentic AI is borrowing from the $300 billion of unicorns in B2B SaaS we have seen in the past 20 years. If it’s in the context of healthcare, you have to conform to HIPAA. If it’s in the context of banking, it’s PCI. See that generative AI example that we have showcased last year with sickle cell disease, right? So we cut corners on drug formulation. In three years, we were able to cut down a problem statement that normally takes 15 years. But when it comes to critical clinical trials, there are no shortcuts. Those 15 patients a day that are receiving that drug are going through stage one for toxicity and, you know, side effects. Stage two is efficacy, then full approval. So it has to be a risk-based approach when it comes to the outcomes.
Samir Saran: You know, I want to pose a question and there are different perspectives from France, from the U.S., and of course, two ministers from different geographies. We have seen different jurisdictions adopt different frameworks to promote AI in their geographies. Of course, we are in this economic nationalism moment. Everyone wants to, you know, self-interest is now the operating principle. Fair enough, we are in that world, we have to confront it. How do you, in these polarized times, and these times of exclusive policymaking, how do you, in many ways, coordinate these different patchwork of arrangements so that companies and innovation can bloom across borders?
Abdullah AlSwaha: We have been very clear and very consistent. If you go back to 2020, the toughest year of COVID, under His Royal Highness the Crown Prince’s leadership, we drove for the first time consensus and commitment at the G20 level for the OECD principles of trustworthy AI. When you’re talking about all of the subsequent regulations, pieces of governance, forums, they all reference that point in history. Fast forward to today, we have built a coalition of like-minded nations called the Digital Cooperation Organization. 16 countries, the majority in the global south, and we have made sure that we launched the AI for All initiative, leaving no one behind. See the sickle cell disease problem? We’re working right now with nations in the Middle East and Africa, tackling these genetic diseases. See the agentic AI that we have delivered for Aramco, this is achieving a billion dollars of optimization. We’re opening this as open source for other nations to benefit from within this industry. We have a showcase tonight within the Aramco reception. You’re all invited to see it firsthand. So we have been consistent that in the intelligent age, you can’t afford to leave people behind, because the cost is as catastrophic as depriving a person of oxygen, water, and food.
Samir Saran: Minister?
Clara Chappaz: A very similar approach. Since 2019, our president has been very committed to having a global agenda on AI, and with Canada, we launched the global project on AI, which now has 44 countries involved in working with the OECD on how we get commitments.
Samir Saran: This is the Global Partnership on AI?
Clara Chappaz: The Global Partnership on AI, yes. How we get commitments from all those different countries, including southern countries, on how we want to move forward together. And I’m actually quite hopeful this is happening because in just a few weeks, as I said, we’re hosting this summit, and the number of countries that will be represented, the number of companies, the number of scientists that all come into the same city for a few days discussing this technology, I don’t think they come just to have a nice picture. They come because they believe in this agenda. They believe that if we would have come together, as we are gonna do it in just a few weeks, maybe 20 years ago, when a lot of the technology came into the world, we would probably be in a very different world. And to your point, we are a huge number of people all around the world, and our president has taken strong leadership in that respect. We say that there is no way we’re gonna let this technology just develop with the value being concentrated in the hands of a few actors. We need to have a much more global agenda because it’s so fundamental, the way it’s gonna change people’s lives, that it is our one opportunity to do something much wider and much more thought through. And I’m not gonna announce everything that’s gonna be announced at the forum, but for the very first time, we’ll have a huge number of countries signing a common declaration on how they see AI development and also including their questions of ethical inclusivity, sobriety, how do we make sure that also we are responsible as countries in the usage of energy? Because it’s also a question of sustainability for this technology. I mean, the world at some point is gonna run out of it all. So we need to be much more thoughtful in the way we’re developing the technology to have an understanding of the barriers that are just physical barriers. And that’s, I think, what we wanna achieve with the summit and also inviting everyone to the summit. So I’ll be happy to continue the conversation there.
Samir Saran: Arvind, this was a government view. Let me turn to you. In a polarized world, how do you navigate it? And look, we have heard about concentration of powers in the hands of a few who shall not be named. Let’s take two of them. How do you navigate them?
Arvind Krishna: Actually, so first, I would immediately say that on a diplomatic and political side, I would always defer to government and, for example, to my esteemed colleagues here and let them go do it. Let me take a different perspective completely. Let me take a complete technologist’s opinion on how we get things to be much more inclusive and get billions to embrace it. One simple example from 30 years ago, when mobile phones first came around the globe, it cost a dollar a minute to make a phone call when you were in the U.S. or in France, circa mid-1990s. By the 2005 time period, countries like India and some countries in Africa were doing it for 1% of that number. So technologists have to go focus on how do you drive the cost down, not through subsidies, still making a profit, but drive it down, because if you can bring it there then the whole world can participate in those technologies and the benefits. So take AI: how can we make model deployment, first, literally at 1% of today’s cost? Focus on the complete stack, what is the power requirement, why does it have to be so high. It’s interesting, all of you are taking it for granted that it has to be so high. Nowhere is it written in the laws of physics it has to be this high. It can literally drop to 1% of what it is. I know because Abdullah is investing in some companies like that, right. It is not just one player, there’s many others. Two, when you look at model training, why does it have to take so long? Can one have alternate techniques? They proved one a few years ago but there’s many others that could be done. By the way, can you start with a base model and incrementally add what’s needed for a country, a domain, an industry in a very small way? So the approach we will take is let’s focus on those aspects. Can I drive it down to be a hundred times cheaper? I’ll tell you I have 30 times line of sight. I need three more. So the first big part is done. That will make it inclusive and that is what will get it done. Then my hope is that these frameworks allow us to go work in each of these countries and industries.
Samir Saran: Thank you Arvind.
Arthur Mensch: I would say I think we can split responsibilities between industry and government. The first, our responsibility, would be to make it cheap and to make it controllable and private. I’ll dive into that. I think the responsibility of government should be to make it widespread, to focus on education, focus on public services, focus on health care, and make sure that there’s no regulatory capture. That’s the thing. That’s how I view it. And so just diving a little deeper, how do we make it cheap? So you make algorithmic changes to have the smallest models possible. You work on the software. You work with hardware makers. You actually work with new hardware makers to reduce the energy cost and to make sure that you’re leveraging the hardware as much as possible. That’s software, that’s a lot of R&D to do, and that’s something we’ve been investing in. Something that we’ve released in particular are edge models, so models that can be deployed on device. If you do that, you’re actually removing, you’re alleviating the cost that is carried by data centers. And so that’s one of the approaches that you can have. And then when it comes to making it controllable, this is everything related to being open source, taking the software and putting it in the hands of our customers so that they make their own custom AI systems, and we do disappear from the loop and we ensure that they have full privacy. Actually, that’s very important. It’s important because we need to reduce the interdependencies we have, because we do live in a world where interests are not always aligned, and we think that to build resilience in every country, every country should have its own AI strategy. Every country should have sufficient ownership of the AI stack to make its own choices and to build its new soft power as well, because there’s some cultural aspect to it. The way your AI systems are going to behave also tells of the values you want to promote within your country. So that’s, I think, the way we approach it. It’s very much a technological problem. On the education part, it’s slightly a technological problem as well, because we need to come up with the right interfaces, the delightful interfaces that are going to make everyone in a country like using AI, everyone in school use this tool to learn faster. So there’s a lot of UX aspects also. It’s not only about software, deep software science, also about the interface, the design. And the design needs to be adapted to the different parts of the world in which you are because it’s different languages, sometimes you read from left to right, sometimes from right to left, and that creates a lot of differences that we need to address. And so that’s why, again, the decentralization is not only about the models, it’s also about the way you distribute it, because this varies and it should vary between different countries.
Samir Saran: As we were sitting in the green room, we were discussing some aspects and one of the comments that I heard, and I thought that would be a good last segment as we close and we have now four minutes left, a quick take from all of you on global partnerships, cooperation and governance arrangements that can prevent negative political, social and cultural consequences of what AI could do, how it could play with our minds, how it could change our histories and our futures and our role in the world and our space and our localities, and that element of social AI, if I was to use that term here, how do we govern that, which is actually far more dispersed, far more primitive, far more difficult to manage, but yet something which has real consequences to sanctity and integrity of societies, nations, for democratic countries, even their elections and democracies, right? So some thoughts on that, final thoughts, how do you manage the social AI and its widespread prevalence in shaping narratives and discourses and politics?
Abdullah AlSwaha: In a minute, Dr. Samir. You all have heard it from Mistral, IBM, France and Saudi, how we’re collaborating as a coalition of like-minded nations to make sure that this general purpose technology leaves no one behind. Because the cost that we have seen in the analog world and in the digital world, in the intelligent age, is going to be so big and so large, and this is why this is a call to action, and I do invite everybody after the session to collaborate and join hands together on how we can make sure that we close down the three new divides, leveraging, for example, the kingdom’s captive market, energy and capital, on the compute, algorithmic and data divides, to make sure that there’s a governance model that is inclusive, impactful, and innovative for all.
Samir Saran: Very skillfully avoided my question, but I’ll go to you.
Clara Chappaz: So I agree with what has just been said, maybe coming back to your question on making AI more social. I’m gonna double down on what Arthur just said on the role of government, because we talked a lot about safety, trust, and we talked a lot about regulatory capture, but education is definitely, I mean, we would need another session to discuss that, but it’s our responsibility as government to really think AI through when it comes to education, because not only we have an opportunity to bridge the digital divide, but we have an opportunity to bridge the knowledge and skill divide.
Samir Saran: And the responsible use of AI.
Clara Chappaz: And to say, some of the conversation we’re having in France with the Minister of Education nowadays is: let’s not be naive and think that kids are just not using it, like, they’re using it. Our responsibility is to make sure we train kids, but most importantly, teachers, on how to rethink the way they teach our kids using the technology. How this technology can help bring a much more personalized approach to learning. This is definitely one of the ways, I think, that will bridge the divide.
Samir Saran: Education and awareness. Yeah, Arvind.
Arvind Krishna: I’ll be brief. Look, just building on all this and agreeing with all of it, the cat’s out of the bag. People are already using it. Just, I’ll make an observation. Newspapers, same statements were made 150 years ago. Television, same statements were made 90 years ago. Internet, most of you are too young. Same statements were made 35 years ago. So this is yet again, it’s just another extreme on that. That is actually where I think legislation comes in. If a bad enough actor is caught, you go after the actor, not the technology, and that will make a big difference.
Samir Saran: Arthur, final words.
Arthur Mensch: I would say on the social impacts, we shouldn’t underplay it. It’s going to be huge. As we move toward more automation, and as we actually change profoundly the way we work, the way people work, that also changes the way they work together, and so that also changes the way people interact. And so we do need to make sure that everybody is aware of that, is trained in the new technology, and has support to integrate that, whether it’s from the law, because we probably require some new social laws, or whether it’s from education, which is very important. So we should, I want to say, we should not leave anyone behind.
Samir Saran: No one behind. Empower, make them aware, educate them, and of course, go after the actor, not the technology. The technology can do tremendous good for the world. That’s the positive message coming out of this panel. Let’s collaborate, let’s partner, and now let’s break for tea. Thank you so much for joining us.
Arvind Krishna
Speech speed
183 words per minute
Speech length
1521 words
Speech time
497 seconds
Risk-based approach to regulation
Explanation
Arvind Krishna advocates for a risk-based approach to AI regulation. He suggests focusing regulatory efforts on high-risk applications while allowing low-risk uses to proceed with minimal oversight.
Evidence
Krishna uses the example of reserving heavy-handed regulation only for cases with extreme risk, such as nation-level risks or risks to life and death.
Major Discussion Point
AI Governance and Regulation
Agreed with
– Clara Chappaz
– Arthur Mensch
Agreed on
Light-touch, risk-based approach to AI regulation
Differed with
– Clara Chappaz
– Arthur Mensch
Differed on
Approach to AI regulation
Make AI technology cheaper and more accessible
Explanation
Krishna emphasizes the need to drive down the cost of AI technology to make it more inclusive and accessible globally. He suggests focusing on technological solutions to reduce costs and increase efficiency.
Evidence
Krishna mentions having line of sight to a 30-fold cost reduction and needing roughly three times more to achieve a hundredfold decrease in costs.
Major Discussion Point
Addressing AI Divides and Inclusivity
Agreed with
– Arthur Mensch
– Clara Chappaz
– Abdullah AlSwaha
Agreed on
Addressing AI divides and promoting inclusivity
Differed with
– Arthur Mensch
– Abdullah AlSwaha
Differed on
Strategies for AI inclusivity
Potential to unlock trillions in global GDP
Explanation
Krishna highlights the significant economic potential of AI technology. He argues that heavy-handed regulation could hinder the realization of this economic benefit.
Evidence
Krishna mentions the potential to unlock $16 trillion worth of global GDP with AI.
Major Discussion Point
AI’s Social and Economic Impact
Focus on technological solutions for inclusivity
Explanation
Krishna suggests that technologists should focus on driving down costs and improving efficiency to make AI more inclusive. He believes this approach can help billions of people embrace AI technology.
Evidence
Krishna draws a parallel with the mobile phone industry, where costs were drastically reduced over time, making the technology widely accessible.
Major Discussion Point
International Cooperation on AI
Arthur Mensch
Speech speed
178 words per minute
Speech length
1668 words
Speech time
560 seconds
Focus on regulating applications, not models
Explanation
Arthur Mensch argues that AI governance should focus on controlling applications rather than the underlying models. He emphasizes the importance of evaluating how AI systems behave in specific applications.
Evidence
Mensch uses the analogy of validating a car as a whole system rather than just its engine.
Major Discussion Point
AI Governance and Regulation
Agreed with
– Arvind Krishna
– Clara Chappaz
Agreed on
Light-touch, risk-based approach to AI regulation
Differed with
– Arvind Krishna
– Clara Chappaz
Differed on
Approach to AI regulation
Decentralized approach to AI development
Explanation
Mensch advocates for a decentralized approach to AI development, where multiple actors have access to AI technology. He believes this approach can help prevent the concentration of power in a few hands.
Evidence
Mensch cites Mistral’s promotion of open-source models as an example of lowering barriers to entry in AI development.
Major Discussion Point
Addressing AI Divides and Inclusivity
Agreed with
– Arvind Krishna
– Clara Chappaz
– Abdullah AlSwaha
Agreed on
Addressing AI divides and promoting inclusivity
Differed with
– Arvind Krishna
– Abdullah AlSwaha
Differed on
Strategies for AI inclusivity
AI will profoundly change how people work and interact
Explanation
Mensch emphasizes the significant social impact of AI technology. He argues that AI will fundamentally change work processes and social interactions, requiring careful consideration and support.
Major Discussion Point
AI’s Social and Economic Impact
Open source and decentralized approaches
Explanation
Mensch promotes open-source and decentralized approaches to AI development. He argues that this strategy can help ensure wider access to AI technology and prevent the concentration of power in a few hands.
Evidence
Mensch mentions Mistral’s efforts in promoting open-source models and working with various countries, including Saudi Arabia.
Major Discussion Point
International Cooperation on AI
Clara Chappaz
Speech speed: 178 words per minute
Speech length: 1791 words
Speech time: 603 seconds
Light-touch regulation to avoid stifling innovation
Explanation
Clara Chappaz advocates for a light-touch approach to AI regulation. She emphasizes that regulation should be a tool to accelerate innovation rather than hinder it.
Evidence
Chappaz mentions France’s efforts in shaping the EU AI Act to ensure it doesn’t slow down innovation.
Major Discussion Point
AI Governance and Regulation
Agreed with
– Arvind Krishna
– Arthur Mensch
Agreed on
Light-touch, risk-based approach to AI regulation
Differed with
– Arvind Krishna
– Arthur Mensch
Differed on
Approach to AI regulation
Focus on education and widespread adoption
Explanation
Chappaz emphasizes the importance of education in AI adoption. She argues that governments have a responsibility to integrate AI into education systems and train both students and teachers in its use.
Evidence
Chappaz mentions ongoing conversations with the French Minister of Education about rethinking teaching methods using AI technology.
Major Discussion Point
Addressing AI Divides and Inclusivity
Agreed with
– Arvind Krishna
– Arthur Mensch
– Abdullah AlSwaha
Agreed on
Addressing AI divides and promoting inclusivity
Opportunity to bridge knowledge and skills divides
Explanation
Chappaz sees AI as an opportunity to bridge knowledge and skills divides. She argues that proper integration of AI in education can lead to more personalized learning approaches.
Major Discussion Point
AI’s Social and Economic Impact
Global partnerships to develop ethical AI
Explanation
Chappaz highlights the importance of global partnerships in developing ethical AI. She emphasizes France’s efforts in fostering international cooperation on AI governance.
Evidence
Chappaz mentions the upcoming AI summit in France and the Global Partnership on AI initiative involving 44 countries.
Major Discussion Point
International Cooperation on AI
Abdullah AlSwaha
Speech speed: 163 words per minute
Speech length: 1379 words
Speech time: 507 seconds
Governance should ensure inclusivity and innovation
Explanation
Abdullah AlSwaha emphasizes that AI governance should promote inclusivity and innovation. He argues for a comprehensive governance model that leaves no one behind in the AI revolution.
Evidence
AlSwaha mentions Saudi Arabia’s efforts in partnering with other nations and companies to close the compute divide and promote inclusive AI development.
Major Discussion Point
AI Governance and Regulation
Need to close compute, data and algorithmic divides
Explanation
AlSwaha highlights the importance of addressing the compute, data, and algorithmic divides in AI development. He argues that failing to close these divides could lead to irreversible damage to economies and societies.
Evidence
AlSwaha mentions Saudi Arabia’s efforts to collaborate with other countries and companies to close these divides, including partnerships with France, IBM, and Mistral.
Major Discussion Point
Addressing AI Divides and Inclusivity
Agreed with
– Arvind Krishna
– Arthur Mensch
– Clara Chappaz
Agreed on
Addressing AI divides and promoting inclusivity
Differed with
– Arvind Krishna
– Arthur Mensch
Differed on
Strategies for AI inclusivity
Risk of concentrating power in few hands
Explanation
AlSwaha warns about the risk of AI technology concentrating power in the hands of a few actors. He emphasizes the need for a more inclusive approach to AI development and deployment.
Evidence
AlSwaha uses the analogy of oxygen, water, and food deprivation to illustrate the critical importance of inclusive AI development.
Major Discussion Point
AI’s Social and Economic Impact
Building coalitions of like-minded nations
Explanation
AlSwaha advocates for building coalitions of like-minded nations to promote inclusive AI development. He emphasizes the importance of international cooperation in addressing AI challenges.
Evidence
AlSwaha mentions Saudi Arabia’s involvement in the Digital Cooperation Organization and the AI for All initiative.
Major Discussion Point
International Cooperation on AI
Agreements
Agreement Points
Light-touch, risk-based approach to AI regulation
speakers
– Arvind Krishna
– Clara Chappaz
– Arthur Mensch
arguments
Risk-based approach to regulation
Light-touch regulation to avoid stifling innovation
Focus on regulating applications, not models
summary
The speakers agree that AI regulation should be light-touch and risk-based, focusing on high-risk applications while allowing innovation in low-risk areas.
Addressing AI divides and promoting inclusivity
speakers
– Arvind Krishna
– Arthur Mensch
– Clara Chappaz
– Abdullah AlSwaha
arguments
Make AI technology cheaper and more accessible
Decentralized approach to AI development
Focus on education and widespread adoption
Need to close compute, data and algorithmic divides
summary
All speakers emphasize the importance of making AI technology more accessible and inclusive, addressing various divides in AI development and adoption.
Similar Viewpoints
Both speakers advocate for technological solutions and open-source approaches to make AI more inclusive and accessible.
speakers
– Arvind Krishna
– Arthur Mensch
arguments
Focus on technological solutions for inclusivity
Open source and decentralized approaches
Both speakers emphasize the importance of international cooperation and partnerships in developing ethical and inclusive AI.
speakers
– Clara Chappaz
– Abdullah AlSwaha
arguments
Global partnerships to develop ethical AI
Building coalitions of like-minded nations
Unexpected Consensus
Potential of AI to bridge divides
speakers
– Abdullah AlSwaha
– Clara Chappaz
– Arvind Krishna
arguments
Need to close compute, data and algorithmic divides
Opportunity to bridge knowledge and skills divides
Potential to unlock trillions in global GDP
explanation
Despite representing different sectors and regions, these speakers share an optimistic view of AI’s potential to bridge various divides and create economic opportunities, which is unexpected given the often-cited concerns about AI exacerbating inequalities.
Overall Assessment
Summary
The speakers generally agree on the need for light-touch, risk-based regulation, the importance of making AI inclusive and accessible, and the potential of AI to bridge various divides and create economic opportunities.
Consensus level
There is a high level of consensus among the speakers on key issues, which suggests a growing alignment between industry, government, and startups on AI governance and development. This consensus could facilitate more coordinated efforts in AI development and regulation, potentially leading to more inclusive and ethical AI systems globally.
Differences
Different Viewpoints
Approach to AI regulation
speakers
– Arvind Krishna
– Clara Chappaz
– Arthur Mensch
arguments
Risk-based approach to regulation
Light-touch regulation to avoid stifling innovation
Focus on regulating applications, not models
summary
While all speakers advocate for some form of regulation, they differ in their specific approaches. Krishna proposes a risk-based approach, Chappaz emphasizes light-touch regulation to promote innovation, and Mensch focuses on regulating applications rather than models.
Strategies for AI inclusivity
speakers
– Arvind Krishna
– Arthur Mensch
– Abdullah AlSwaha
arguments
Make AI technology cheaper and more accessible
Decentralized approach to AI development
Need to close compute, data and algorithmic divides
summary
The speakers propose different strategies for making AI more inclusive: Krishna focuses on reducing costs, Mensch advocates a decentralized approach, and AlSwaha emphasizes closing specific divides in compute, data, and algorithms.
Unexpected Differences
Role of open-source in AI development
speakers
– Arthur Mensch
– Arvind Krishna
arguments
Open source and decentralized approaches
Focus on technological solutions for inclusivity
explanation
While both speakers advocate for inclusivity in AI development, Mensch strongly promotes open-source approaches, whereas Krishna focuses more on technological solutions to reduce costs. This difference is unexpected given that both represent technology companies.
Overall Assessment
summary
The main areas of disagreement revolve around regulatory approaches, strategies for AI inclusivity, and the balance between open-source and proprietary technological solutions.
difference_level
The level of disagreement among the speakers is moderate. While they share the common goal of promoting inclusive and responsible AI development, they differ in their proposed methods and focus areas. These differences reflect the complexity of AI governance and the need for diverse approaches to global challenges, and they suggest that a comprehensive AI governance framework may need to incorporate multiple strategies to be effective on a global scale.
Partial Agreements
All speakers agree on the significant impact of AI on society and the economy, but they differ in their focus areas. Krishna emphasizes economic benefits, Mensch highlights social changes, Chappaz focuses on educational opportunities, and AlSwaha warns about power concentration risks.
speakers
– Arvind Krishna
– Arthur Mensch
– Clara Chappaz
– Abdullah AlSwaha
arguments
Potential to unlock trillions in global GDP
AI will profoundly change how people work and interact
Opportunity to bridge knowledge and skills divides
Risk of concentrating power in few hands
Takeaways
Key Takeaways
AI governance should take a risk-based approach, focusing on regulating applications rather than models
There is a need to address AI divides (compute, data, algorithmic) to ensure inclusivity
Making AI technology cheaper and more accessible is crucial for widespread adoption
International cooperation and partnerships are important for developing ethical AI
AI will have profound social and economic impacts, potentially unlocking trillions in global GDP
Education and awareness are critical for responsible AI use and bridging knowledge divides
Resolutions and Action Items
France to host a global AI summit in February 2025 to unite countries on a common vision for AI development
Saudi Arabia inviting collaboration to close AI divides through the Digital Cooperation Organization
Focus on developing edge models and open-source AI to increase accessibility and privacy
Unresolved Issues
Specific mechanisms for preventing concentration of AI power in the hands of a few companies
Detailed framework for evaluating AI applications before deployment at scale
Concrete steps to implement risk-based regulation across different jurisdictions
Suggested Compromises
Balance between light-touch regulation to foster innovation and necessary oversight to prevent harms
Decentralized approach to AI development while maintaining some level of global coordination
Focus on regulating AI applications rather than the underlying models or technologies
Thought Provoking Comments
In the analog age, we have $110 trillion today worth of GDP. Per capita, for every dollar being made in the global south, somebody makes 3.5 to 4x that in the global north. That’s not acceptable.
speaker
Abdullah AlSwaha
reason
This comment frames AI as a potential solution to longstanding global inequalities, setting the stage for discussing AI’s societal impact.
impact
It shifted the conversation towards considering AI’s role in addressing global economic disparities and set a tone of urgency around ensuring equitable AI development.
We need those innovative players, we need those small companies to bring new ways of thinking with AI. And that’s something we owe as governments to do.
speaker
Clara Chappaz
reason
This insight challenges the notion that AI development will be dominated solely by large tech companies, emphasizing the importance of fostering innovation broadly.
impact
It prompted discussion about the role of governments in supporting AI innovation and the importance of a diverse AI ecosystem.
I offer a very simple framework on how to think about this. Number one, we need to keep this technology open. We are at the very, very early stages. For some regulator to pick who’s a winner, even if accidentally, would be a problem.
speaker
Arvind Krishna
reason
This comment provides a clear framework for AI governance that prioritizes openness and innovation, while cautioning against premature regulation.
impact
It sparked a more nuanced discussion about the balance between regulation and innovation in AI development.
I think we really need to focus on the application side, because what matters is the systems that you put into place. So the model is really a part, it’s an engine. When you validate a car, when you ensure that the car is safe, you ensure the entirety of the car, you don’t look only at the engine.
speaker
Arthur Mensch
reason
This analogy provides a fresh perspective on AI governance, shifting focus from models to applications and systems.
impact
It led to a more holistic discussion of AI governance, considering the entire system rather than just the underlying models.
Can I drive it down to be a hundred times cheaper? I’ll tell you I have 30 times line of sight. I need three more. So the first big part is done. That will make it inclusive and that is what will get it done.
speaker
Arvind Krishna
reason
This comment provides a concrete, technology-focused approach to making AI more accessible and inclusive globally.
impact
It shifted the conversation towards practical solutions for expanding AI access, complementing earlier discussions about policy approaches.
Overall Assessment
These key comments shaped the discussion by broadening its scope from initial concerns about AI governance and regulation to a more comprehensive exploration of AI’s potential societal impacts, the importance of fostering diverse innovation, and practical approaches to ensuring global access and inclusion. The conversation evolved from high-level policy considerations to include concrete technological solutions and analogies that provided new frameworks for thinking about AI development and governance. This multifaceted approach allowed for a rich discussion that balanced regulatory, economic, and technological perspectives on the future of AI.
Follow-up Questions
How can we create a governance model for AI that is inclusive, innovative, and impactful?
speaker
Abdullah AlSwaha
explanation
This is important to ensure AI benefits are distributed globally and to prevent widening of existing divides.
How can we make AI technology more affordable and accessible to reduce the compute, data, and algorithmic divides?
speaker
Arvind Krishna
explanation
Reducing costs and increasing accessibility is crucial for widespread AI adoption and to prevent concentration of power.
How can we develop a risk-based approach to AI regulation that doesn’t stifle innovation?
speaker
Arvind Krishna
explanation
This is important to balance safety concerns with the need for continued innovation in AI.
How can we create a verifiable process for evaluating AI applications before deployment at scale?
speaker
Arthur Mensch
explanation
This is crucial for ensuring AI applications are safe and effective before widespread use.
How can education systems be adapted to prepare people for the AI age?
speaker
Clara Chappaz
explanation
Education is key to ensuring widespread understanding and responsible use of AI technologies.
How can we develop AI interfaces that are culturally appropriate and user-friendly for different regions?
speaker
Arthur Mensch
explanation
This is important for making AI accessible and useful across diverse global contexts.
How can we address the potential negative social and political consequences of AI, particularly in shaping narratives and discourses?
speaker
Samir Saran
explanation
This is crucial for maintaining the integrity of societies, democracies, and political processes in the face of AI influence.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.