Can National Security Keep Up with AI? / Davos 2025

23 Jan 2025 15:15h - 16:00h

Can National Security Keep Up with AI? / Davos 2025

Session at a Glance

Summary

This panel discussion at the World Economic Forum focused on the intersection of artificial intelligence (AI) and national security. Participants explored the dual-use nature of AI technology, highlighting both its potential benefits and risks in areas such as cybersecurity, misinformation, and critical infrastructure protection. The conversation emphasized the rapid pace of AI development and the challenges governments face in keeping up with regulation and governance.


A key theme was the tension between private sector innovation and public sector control. Panelists noted that much of AI development is driven by tech companies, raising concerns about oversight and alignment with national interests. The discussion touched on different approaches to AI governance, contrasting China’s adaptive regulatory strategy with efforts in the US and EU.


Geopolitical dynamics featured prominently, with panelists discussing the AI “arms race” between the US and China, as well as Europe’s efforts to remain competitive. The importance of international cooperation on AI safety and ethics was stressed, though challenges to achieving global alignment were acknowledged.


The panel explored the pros and cons of open-source AI models, with some arguing they democratize access and enhance security, while others cautioned about potential risks. The discussion also touched on the implications of AI development for smaller nations and emerging economies.


Overall, the panel highlighted the complex interplay between technological innovation, national security concerns, and global governance challenges in the rapidly evolving field of AI. Participants emphasized the need for thoughtful regulation, international cooperation, and a balance between innovation and security as AI continues to advance.


Keypoints

Major discussion points:


– The dual-use nature of AI technology and associated security risks


– The role of private companies vs. governments in developing and regulating AI


– Geopolitical dynamics between the US, China, and EU around AI development and regulation


– The potential benefits and risks of open-source AI models


– How smaller countries can approach AI security without relying on major powers


Overall purpose/goal:


The discussion aimed to explore the complex national security implications of AI technology, including potential threats, regulatory challenges, and geopolitical dynamics between major powers and smaller nations.


Tone:


The overall tone was serious and analytical, with panelists offering measured perspectives on complex issues. There were occasional moments of tension or disagreement, particularly around the roles of private companies vs. governments, but the discussion remained largely cordial and constructive throughout.


Speakers

– Katie Drummond: Moderator


– Nick Clegg: Outgoing President of Global Affairs at Meta


– Xue Lan: Dean, Swartzman College, Tsinghua University, People’s Republic of China


– Ian Bremmer: President and Founder of Eurasia Group


– Henna Virkkunen: Executive Vice President for Tech Sovereignty, Security, and Democracy with the European Commission


– Jeremy Fleming: Former security official from the UK


Additional speakers:


– Hannah Verkoenen: Executive Vice President for Tech Sovereignty, Security, and Democracy with the European Commission (mentioned but did not speak)


– Audience member: Devyani from India


– Audience member: Ayano Sasaki from Japan, part of the Davos 50 from the Global Shapers Committee


Full session report

AI and National Security: A Complex Landscape


This panel discussion at the World Economic Forum explored the intricate relationship between artificial intelligence (AI) and national security, delving into the multifaceted challenges and opportunities presented by this rapidly evolving technology. The conversation, moderated by Katie Drummond, brought together experts from various sectors and regions, including Nick Clegg from Meta, Xue Lan from Tsinghua University, Ian Bremmer from Eurasia Group, Henna Virkkunen from the European Commission, and Jeremy Fleming, a former UK security official.


Dual-Use Nature and Security Implications


A central theme of the discussion was the dual-use nature of AI technology, which all speakers acknowledged as having profound implications for security across various sectors. Nick Clegg emphasised AI’s role as a general-purpose technology affecting all areas of society, while Ian Bremmer highlighted its potential to make criminals more dangerous. Henna Virkkunen pointed out AI’s applications in monitoring borders and protecting critical infrastructure, demonstrating its utility for national security. However, Xue Lan cautioned about the risks of AI getting out of control, and Jeremy Fleming noted its use by criminals for fraud and its impact on information spaces.


Public-Private Dynamics in AI Development


The discussion revealed a tension between the private sector’s leading role in AI innovation and the need for government oversight and regulation. Nick Clegg argued for governance to proceed in parallel with technology development, while Xue Lan highlighted China’s agile governance approach to AI regulation. Henna Virkkunen stressed the importance of an innovation-friendly regulatory framework in Europe, mentioning the upcoming EU competitiveness compass. Jeremy Fleming expressed concern about the private sector’s over-indexing in AI development, suggesting a potential misalignment with broader societal interests and emphasizing the need for national security agencies to adapt to this new reality.


Geopolitical Implications and Competition


The geopolitical dynamics surrounding AI development featured prominently in the conversation. Ian Bremmer characterised the US approach as focused on containing China and maintaining technological dominance, while also noting a potential future where China dominates transition energy and the US dominates AI. Xue Lan described China’s strategy as emphasising smart competition and application areas in AI rather than direct rivalry with the US, detailing China’s adaptive regulatory approach and focus on specific AI applications. Henna Virkkunen highlighted the EU’s need to close the innovation gap in AI to remain competitive globally.


Global Alignment and Cooperation


Despite the competitive aspects, there was a consensus on the need for international cooperation on AI safety and governance. Nick Clegg advocated for a multilateral approach to AI governance, while Xue Lan emphasised the importance of US-China cooperation in the AI ecosystem. Jeremy Fleming drew parallels with nuclear cooperation, suggesting lessons could be applied to AI governance. He also proposed building partnerships with like-minded countries for smaller nations to address security concerns. An audience question raised the possibility of global bodies for monitoring AI, similar to the WHO.


Regulation and Policy Approaches


The discussion revealed varying approaches to AI regulation and policy across different regions. Henna Virkkunen outlined the EU’s comprehensive approach to regulating online platforms and AI, emphasising the importance of an innovation-friendly framework. Xue Lan described China’s adaptive regulatory strategy, while Nick Clegg argued for the importance of open-source AI in fostering innovation and enhancing security applications. Jeremy Fleming stressed the need for a strategic understanding of threats and priorities in AI security.


Open-Source AI: Benefits and Concerns


Nick Clegg strongly advocated for open-source approaches in AI, drawing parallels with other technological domains like cybersecurity and the internet itself. He argued that open-source AI could democratise access to technology and enhance security applications, particularly for national security purposes. However, this view was not universally shared, with some participants expressing concerns about potential risks associated with unrestricted access to powerful AI models.


Misinformation and Content Moderation


Nick Clegg provided insights into Meta’s evolving approach to misinformation and fact-checking, particularly in the US. He explained that the company had changed its strategy due to the politicization of fact-checking and concerns about being perceived as arbiters of truth. Meta now focuses on removing content that poses imminent harm while reducing the virality of other potentially problematic content.


Risks and Societal Impact


Ian Bremmer raised significant concerns about the risks of AI, particularly its potential to make humans more like computers. He warned about the possible loss of humanity and emotional intelligence as AI becomes more integrated into daily life. Bremmer suggested that artificial human intelligence (AHI) might precede artificial general intelligence (AGI), highlighting the profound impact AI could have on human behavior and society.


Unresolved Issues and Future Directions


The discussion left several critical issues unresolved, including how to balance national interests with the need for global AI governance, determining the appropriate level of government regulation versus private sector innovation, and addressing the long-term societal impacts of AI on human behaviour and civil society. The effectiveness of open-source AI in addressing security concerns also remained a point of contention.


As the conversation concluded, it was clear that the intersection of AI and national security presents a complex landscape of challenges and opportunities. The need for thoughtful regulation, international cooperation, and a balance between innovation and security emerged as key priorities. However, the differing approaches and priorities highlighted by the panellists suggest that achieving a unified global strategy for AI development and security will remain a significant challenge in the years to come.


Session Transcript

Katie Drummond: All right. I think we are ready to get started. So welcome, everybody. Thank you all for being here. I’m delighted to be leading this session. I will keep my introduction brief. There’s a lot to cover. I will say, you know, I prepped for this session last week. I regret that decision. So much has happened sort of in the last five days that I feel like we need to tackle that I could throw these index cards out a window and we would be all set. But let me sort of lay the groundwork a little bit for everyone here. I think we all know and we can acknowledge that advances in AI are ushering in sweeping changes to the national security landscape. So yes, generative AI can help you write better emails, sure. Or it can be used to mass produce misinformation on digital platforms. AI can improve a company or a country’s cybersecurity defenses. Or it can be weaponized, right, as a tool for more sophisticated cyber attacks. So what I’m getting at with those examples ultimately is this idea of the dual use nature and ultimately the dual use dilemma that’s inherent to this technology. And that’s really what we’re going to be digging into today, some of those dilemmas, some of those challenges. And, you know, hopefully spending a fair bit of time talking about potential solutions, ways that government and industry can collaborate, and that country with different geopolitical agendas can come together to make sure that AI innovation, as quickly as it is moving, is moving forward, we hope, with the necessary safeguards in place. So a quick reminder, if you’re watching on the live stream, you can share thoughts on this panel using the hashtag WF25. And I am going to set aside time for questions. I’m hoping for about 10 minutes. So for those of you in the room, start thinking now. I want to see hands raised in about 35 minutes. Now, we have a stellar lineup, so please join me in welcoming. We have Nick Clegg, outgoing President of Global Affairs at MEDA. We have Hannah Verkoenen, Executive Vice President for Tech Sovereignty, Security, and Democracy with the European Commission. It’s hard to see all of you. We have Ian Bremmer, President and Founder of Eurasia Group. We have Dr. Lan Zhu, Dean, Swartzman College, Tsinghua University, People’s Republic of China. And lastly, we have Jeremy Fleming, who has had a many decades-long career in security out of the UK. So let’s sort of dig into this. You know, I’d like to really start quickly, rapid fire, just ground this for everybody here, for everybody on the live stream. I think so often when people think about AI and they see that paired with national security, they think about military applications, right? But this subject is so much broader than that, which I alluded to. So, Nick, starting with you, very briefly, for each of you from where you sit, give us one specific example of what kind of security threats we are talking about in the context of AI, just to really sort of articulate how big a subject this is.


Nick Clegg: Well, I’d start with that. We talk about dual use. It’s multi-use, it’s general use. That’s the point about this technology. It isn’t a thing. It won’t apply to one weapon or one system. It will apply to everything. It’ll act as an accelerant to everything we do that processes data, every interaction we do online. And so I think it’s rather… I’m not sure if it sort of captures the… The the enormity of what we’re dealing with here by trying to identify and this is the one thing I’m worried about because it’s general And actually in keeping with that probably my biggest concern Given that this is a general-purpose technology. It’s like almost The reinvention of the Internet itself actually my biggest concern is that the technology which at the moment at the foundation level is only being built by a handful of Chinese and American companies because there’s the only One with the with the money with the GPUs with the data and the energy capacity to build the underlying architecture My biggest concern is that it’s not democratized enough that it because it’s a general-purpose technology It should be generally available and in my view That’s why open source which is something that we do it matter and other companies do as well both Chinese and American I think open source is going to be so so so vital otherwise all the applications good and bad of this technology will end up in the Sort of clammy hands of a very small number of private sector operators and that seems to be unsustainable


Katie Drummond: And we’ll get into open source more down the road Hannah take us through sort of what big concern sort of stands out to you


Henna Virkkunen: No, if I’m looking to areas, for example from the European Commission side in where we are investing to AI And when we are combining it with security It’s for example our external borders because in the border control we can work much more effective way if we are using AI to monitor the situation there and Also when we look for example our critical infrastructure We know that it has been very much under attack and we have also witnessed several sabotage, especially when it comes to Undersea cables so to be able also to better protect our critical infrastructure. We are using also AI for example for that So there’s several areas now where we are working in the European Commission’s Commission when it comes to security got it Ian


Katie Drummond: So stipulating that we all are enthusiastic about AI but we’re talking about risks


Ian Bremmer: It’s clearly going to make criminals more dangerous It’s going to make tinkerers make more consequential mistakes, and it’s going to make, most worrying for me, human beings more like computers. And what I mean by that, I mean, everyone’s talking about artificial general intelligence and when we’re going to get there. I think we’re going to get to artificial human intelligence faster than we get to artificial general intelligence, AHI, right? I mean, Larry Summers talks about how IQ will become less important, EQ will become more important with AI. I agree. But AI is increasingly programming EQ out of the people that use it, right? And the more powerful AI gets, and the more it’s downloaded to give us exactly what we want and engage with us as even better than people, even smarter than people, even more capable and facile at treating us exactly the way we want to, we are losing humanity. That undermines civil society, it undermines democracy. That’s to me the most existential danger because the thing I care about the most is persisting as we are, as people. Right, right.


Katie Drummond: Absolutely. Lan?


Xue Lan: Well, I think, of course, I think AI’s potential risks are in many different levels. I think the highest level would be that out of control. I think that’s sort of the critical part. But also, I think there are also, I think, cases where I think that AI can generate security concerns. I saw one example. I think that was, I think in Nigeria, I think there was a deepfake cases where I think one of the political candidates was, you know, talking about the election as a religious war. That generated huge, you know, disruptions in the election. So I think that type of thing that can happen. Yeah.


Katie Drummond: That’s a great example. And Jeremy, lastly. Yeah. Well, I’m going to, I think, just emphasize some of the points that have been made already.


Jeremy Fleming: And firstly, the thing that is affecting the security of citizens right around the world is the way in which criminals are taking up some of these basic tools to perpetrate fraud. And I think that the debate around all of that is not nearly developed enough. I don’t think the public have understood it enough, and I don’t think politicians are on it. So there is a very short-term thing, which is around that. Longer term, then, are we worrying about the loss of control? We’re worrying about the way in which this plays into information and trust spaces, and we’re worrying about the way in which it could, in some circumstances, be used to create different ways


Katie Drummond: of causing harm, particularly in the chem-bios sectors. Yeah, so I think just the sheer sort of breadth of all of those responses, I think, really illustrates exactly how big of a scope we’re talking about here. I want to jump right into what I will describe as sort of public-private dynamics, right? So that intersection between private companies in the context of AI and security and governments, because the reality is, or at least from where I sit, covering this at WIRED, so much of the leadership and the pushing around AI technology and its uses has really been led by the private sector, right? By the biggest technology companies in the world, by really innovative startups. Governments are not necessarily in the driver’s seat here. So Nick, I want to start with you from the point of view of meta. Obviously, in that sort of AI race and a leading player in that space, how do you see the company balancing its corporate interests, right, user engagement, the bottom line, with concerns about security? And I would be remiss not to sort of tack on to that a more specific question around missing disinformation on meta’s platforms. Meta recently announced it would annex its fact-checking operations, at least in the United States. Mark Zuckerberg, in making that announcement, acknowledged that we’re going to catch less bad stuff. So in a moment where AI-generated content, right, is a huge area of concern, I’m curious for your point of view, is that a step backwards in the context of AI and security?


Nick Clegg: Yeah, I don’t actually think that has much to do with AI, but I can come back to it if you like. It’s a totally legitimate question, but it’s not really about AI. Just on the first point. Look, I think the world generally and both in the private and the public sector looked back at the last decade and a half and decided and thought, I’m talking now like two or three years ago, hang on a minute, social media and other technologies erupted incredibly quickly and it took us about a decade and a half to kind of catch up with new laws, new guardrails, new institutions and so on. And I think there was a general sentiment on actually both sides of the fence and my job for the last seven years at Meta has been to try and kind of speak the language on both sides of the fence, that this time governance and issues of public and multilateral governance should proceed in parallel with the technology as much as it can and not an afterthought about a decade and a half later. So I think that was a very strong impulse and then that then led to things like the Bletchley Park Summit, which was the first summit on AI safety and that was then succeeded by the Seoul Summit and so on and then institutions were set up, the UK and the EU and others set up an AI safety institute, the EU, as the Commissioner can describe better than I had, was already working on an AI law and then before chatGBT erupted on the scene and then suddenly tried to retrofit it as soon as generative AI became a thing and sort of stuck a whole bunch of amendments in it and the sort of final bits of it. The US then created also or at least the beginnings of a US AI safety institute where companies like Meta and others as we develop our models could go through testing and so on. The Biden administration, I was there along with other tech leaders in the White House, signed this executive order laying all this out. That executive order has now been scrapped literally in the last week. I don’t know what the status is of the US AI safety institute. You might be surprised to hear, even though I’m an outgoing executive of Meta, that it sort of leaves us in a slightly peculiar position as an American company because we might end up having to jump through the hoops and tests of non-US AI safety institutes if there isn’t one in the US because until last week… our approach was, well, we only work with other safety institutes as a sort of hub and spoke extrapolation of the one that exists in the US. So it’s very much up in kind of, you know, in flux now and as like everything, we will hear from President Trump in half an hour, I think. Yeah, don’t all leave right before five o’clock. So we might hear more, but I personally think that broadly speaking, not completely, broadly speaking, the approach that I saw being promulgated in these multilateral summits, I thought was relatively thoughtful. It was trying to preserve optionality for innovation. It wasn’t trying to weigh down everything by trying to second guess or third or fourth guess, you know, downstream risks, which are almost impossible for a generalized technology. They were starting to set up relatively light touch institutions. So I hope broadly speaking, that multilateral approach will preserve. But I mean, finally, and this is Ian’s and others domain. I mean, the interesting thing, why this is so fascinating is this is a classic example of the clash of two forces, which will shape our world. It’s the globalization of technology and the de-globalization of politics. 
And it’ll be discussed ad infinitum at forums like this, but that’s what’s going on. And actually what we’ve seen in the last week is a great example of that. So you’ve got these technologies, you started having the outlines of a multilateral approach and now suddenly it’s all called, you know, it’s now thrown into doubt. It’s a classic example when these two things pull in different directions. Do you want me to answer the second one quickly? Yeah, I do. So just on misinformation and how you deal with it, the idea of crowdsourcing, in other words, getting users to be involved with identifying misinformation really is not new and it’s not exclusive to Meta. Clearly X has done it for a while on something community notes. YouTube is experimenting with it itself. And the reason why we’re starting with this in the U.S. is really for two reasons. One, in the U.S. you can agree or disagree on it, you can analyze it till the cows come home, but there was a complete and catastrophic collapse in legitimacy in the fact-checking network we had established in the United States amongst roughly 50% of the population. It is simply not sort of sustainable operating at the scale that Meta does at societal level on something as important as misinfo, having in place a system where basically half the population think that you are deliberately biasing everything against them. You can agree or disagree with it, but it’s just not sustainable. There’s a basic level of legitimacy you need. And the second thing is scale. We operate at vast, vast scale. And we have at the moment, we work with the largest network of fact-checkers around the world in over, I think, 60, 70 languages. We can employ two, three, four times as many fact-checkers that exist. You’re still only ever going to be scraping a tiny, tiny amount of misinfo content at the very top of the full volume of content. The idea of a sort of Wikipedia, crowdsource-style approach to misinfo where you get users saying, actually, this doesn’t look right, and then you label it appropriately is in theory, and I stress in theory, of course, Meta now needs to do the hard yards, is in theory something you could actually scale more effectively than just relying on a handful of sort of professional fact-checkers. So that’s what the company’s going to try and do. That’s the rationale for it. Sure. And I certainly understand that, you know, Meta uses AI technology to detect AI-generated


Katie Drummond: content on its platforms. Is that correct? Yeah. But my understanding was that the independent fact-checkers were essentially a backstop to that technology. If that’s not the case,


Nick Clegg: that’s fine, and I’m happy to move on. The fact-checkers can fact-check and so on. I mean, I just, can I just, one thing I’d like, before this, last year was the largest set of elections in the democratic era. Billions of people were able to go. It’s never happened before in the democratic era. And at this time last year, I was being told constantly events like this, that the whole year was going to be disfigured. AI was going to hijack. democracy, democracy was going to end, every election was going to be completely flooded with synthetic content. The most important conclusion of the last year, and I don’t want to be complacent, I’m not saying this is a prediction of the future, is how little of that happened. And what you see in companies like Meta is that they actually build systems which go after misinfo and disinfo, however you define that. They go after the content, not the genesis of it. It doesn’t matter whether it’s produced by a human being or a robot. If it’s disinfo or a misinfo, the systems, and they are AI, they’re machine learning systems, are actually extremely sophisticated at going after the content. And I got super frustrated last year. Everyone freaked out about the fact that you now have AI generated content. But the systems are built to go after the content itself. And so far they’ve been proven to be relatively robust. Doesn’t mean, by the way, in the future we won’t have to deal with vast amounts of synthetic slop. We will do. And that’s a challenge. Sure. And I think let’s leave it there. I mean, look, it is up to Meta to make this community notes approach and sort of the approach that they are taking to moderation.


Katie Drummond: And to prove it. Sure. Effective. Proof will be in the pudding. We’ll see in four years. I want to make sure to move on. And Lan, before we step away from this sort of public-private conversation, I do want to make sure to get your point of view right from China. I think that’s important. When you think about that dynamic between the public and private sector, what stands out to you from China’s approach? How do you think these two sectors should be working


Xue Lan: together where national security is concerned? I think in China’s case, I think the AI has already developed very fast over the last decade or so. I think China has taken an approach as I would call it an agile governance approach. I think initially in the early years, I think China had an AI development plan, an overall national plan. And then China came out with some so-called governing principles providing some general guidance of the AI development. And then also pushed out some foundational legislations such as, you know, personal information protections and so on. And then I think China was taking a more of a adaptive approach when there were problems. I mean, first of all, you encourage innovation, but when there were problems coming out, and then have specific regulations to govern those specific domains or application areas. One example is the generative AI, and China actually was the first country came out with a regulation on the generative AI. So I think that’s the kind of approach. Of course, China is willing to update of those regulations and also some principles. So I think through that, I think as you can see that AI has been developed very fast, but also I think in terms of the overall, I think on the security side, I think it’s relatively okay. It’s sort of this idea of sort of government keeping up with the pace of innovation. Exactly. Also, I think that also, I think in many cases, when the government is developing some regulation, they actually do have a talk with the companies, inviting company representatives to hear the draft and to see whether that’d be too much or too little.


Katie Drummond: Perfect, thank you so much. Now, let’s move on to sort of talking about government a little bit and specifically sort of companies spending time with government, which brings me to the United States. Ian, I need to ask you, everything that has happened sort of in the last four days with regards to the incoming new Trump administration and AI. So Donald Trump earlier this week revoked President Biden’s executive order around AI, which contained a lot of detail around security implications specifically. Also announced yesterday along with Sam Altman, Larry Ellison and others, Stargate, this massive investment in sort of AI infrastructure in the United States. I’m curious sort of your view as of now, obviously there’s a lot that is still to come, what changes in the United?


Ian Bremmer: States with regards to AI and national security, what’s worrying you? What’s sort of standing out to you right now? Well I mean the thing that’s changed the most of course is that the technology companies and the technology leaders are doing their damnedest to align with this administration in as full-throated and obvious a way as humanly possible. I mean the one thing we know about Trump as a businessman and as a president is that he’s very transactional. What that means is he’s pay to play, right? So the more money you give the more he’s going to pay attention to what you want whether it’s his position on TikTok or whether it’s you know whether you’re going to have certain subsidies and the more you can do that directly for him the more it matters. We’re watching that play out in real time. That has always been a challenge for the US government frankly but it has become a much sharper challenge and I would argue because I don’t think anyone else on the panel will that Meta and Mark have over indexed on that compared to other companies and that is a risk that they are taking. Maybe it’s a revealed presence but preference by Mark but it’s a risk they are taking for the medium term and fortunately for Nick it is not a risk that he is now taking for the medium term. So there’s that. For the next few weeks. Yeah well no no I think he’s okay on that front. I mean I think it’s very clear that’s not coincidental. Let’s put it that way. No I mean I thought it was amazing when Mark put out his explanation for why they were changing all these things it was all about well this just of course made sense and we’ve been wrestling it with for so long and I mean you know this is the better way and you know we don’t want censorship you know we didn’t talk about moderation. It’s like he didn’t mention the fact that actually there’s a new president and he’s with Elon and we kind of want to do everything that they’re doing because that’s what’s good for us. I mean you know we’re not stupid so at least mention it right. I mean I don’t know but that’s just me. No I mean well it’s it’s


Katie Drummond: you and many other people. Bring me back to sort of AI though when we’re talking about Trump sort of you know what what is concerning. He did mention Trump by the way. No I did at the end but not in that way not in the reason


Ian Bremmer: why he was making these changes. He mentioned Trump at the end by saying of course we’re gonna have a relationship with Trump. I get it but but no look I I, that is not my primary point of criticism. It’s not like others haven’t been doing the same thing. I mean, you know, there’s strength in numbers here, right? So, let me talk about what’s similar. What’s similar is that, yeah, Trump is pulling out of global architecture and we’ve already seen some of that, not in the AI space. And yeah, he’s ripping up some Biden stuff. But in my view, the most important, strong new institutional architecture that we have seen created since the Soviet Union fell is being created right now by Biden and now by Trump, which is this new diffusion architecture, it’s AI, it’s data centers, and it is tier one, tier two, tier three, and everyone that is a really close friend of the US is gonna get on board. And this is a stated preference that of all the things the US government might do in terms of AI, all the priorities they might make to address the concerns that we have and some that we haven’t aired, the one that they really wanna focus on is containing China. The one they really wanna focus on is ensuring that America has the sharp end of the spear of industrial policy and technology and aligning all the friends to do what the Americans want. And I think that’s gonna be incredibly effective. I think the Europeans are in real trouble. And by the way, the fact that a whole bunch of EU and NATO countries aren’t in that architecture creates an opportunity for those countries individually to come to Trump and say, hey, I will do what you want to make that deal. And the Europeans better not do that. They need to act collectively, right? Because the whole EU should be in tier one. All of NATO should be in tier one. And that has to be done together. Europeans are doing a very effective job on that on Russia-Ukraine. In the last three months, I have seen the Europeans help move Trump to a more collective position on Russia-Ukraine than he was in when he won. But that’s because that’s existential for the Europeans. Are they doing that on technology? I don’t see that yet. Well, here. I mean, obviously, that’s not my, that’s more your expertise. But I worry about it. Hanna, take it from here, sort of talk us through, you know, the EU position here.


Henna Virkkunen: What does the EU potentially stand to lose ground on in this new world order when it comes to security and artificial intelligence? Sort of where do you worry about potential backsliding, as it were, with the incoming Trump administration sort of having that kind of global influence? What we have to do in Europe is that we have to close the innovation gap, because we know that we are lagging very much behind now when it comes to innovations and investments in Europe. And there we have to really work in several sectors. So I think the most important thing is that we have to have an innovation-friendly regulatory framework. We are very much facing, criticizing that we have too much bureaucracy in Europe. And I think we have to take it very seriously. We have to take it, make it easier for businesses to invest in Europe. We are too fragmented still. There’s too much barriers between our member states, and we are also missing private investments and venture capital. We are quite much using public funding when it comes to AI and new technologies. But there we also need a single market, Capital Markets Union. So there are several fields where we have to work, and we are very committed to build up our own capacities now when it comes to technologies, especially when it comes to AI, and also to quantum computing, where we are very strong already. And then, of course, semiconductors, which is very much preconditioned to any other critical technology. So we are very committed to that, that we want to invest more, and we want to be more attractive also for the innovations from all over the world. But we have some strengths also when it comes to these critical technologies. We have very strong researchers here, and also we have very strong startups. And I think we have a lot of potential. But now we have to really make it happen. And as you know, next week we will… will propose and present our competitiveness compass. And there we will very much also show now the path for next five years, what we wanna achieve. But of course, competition is very hard now globally with USA and with China when it comes to technologies. Well, let me sort of move us then into that sort of geopolitical piece that I think is so important here and that you and Ian both just highlighted, right? Ultimately, what one major geopolitical player here does will impact all of them, right? And then we’ll have a trickle down effect on smaller nations, developing economies. Lan, I want to turn to you, right? You just heard what Ian said, which I think is spot on from the point of view of the United States. I mean, it’s all about sort of global dominance in this quote unquote AI arms race with China. Obviously, China is incredibly competitive when it comes to AI development, but that dynamic between China and the US, right? How can these big geopolitical players, China and the US and the EU sort of being top of mind here, develop some sort of global standard, some sort of global norm around AI while acknowledging that their priorities may ultimately not be aligned? Like what needs to happen for that to actually become the reality, the world that we’re living in?


Katie Drummond: I think, first of all, I think we have to recognize actually both US and China have contributed greatly to this ecosystem in AI space. I think you look at the academic papers and patents, China published more papers than any other country over the last 10 years, patents the same.


Xue Lan: So I think that both countries are contributing greatly to this ecosystem, and they work together, many scholars work together and so on. So I think in terms of the sort of national competition, I think it’s more of the US viewing China as a rivalry and sort of having this chip embargo that indeed hurt China in some way. But I think that we heard, I think two days ago, the Vice Premier Ding who made the comments here talking about. I think technology and so on. He said that China would not blindly follow the trend to mindlessly compete. So I think that actually indeed shows that actually Chinese AI development has indeed, I think, not chasing the big models as such. But rather, it’s more focusing a lot on smart competition and also on the application areas. So we see some evidence of that, like DeepSeek, that really having a very smart way of really algorithm that actually allows you to having the same kind of function, but at the same time, much less energy consumption. That’s the kind of thing that I don’t think that Chinese companies see US as a big competitor. But rather, they’re really trying to find application domestically. And also, of course, I think it won’t possible compete globally. And I think that the outcome of that is you see the great benefits to the consumers, like TikTok. I mean, we see this. And when TikTok was banned, I see people find red notes. It’s another wonderful example. So I think actually there are many ways actually the two countries can work together. But also, I think on the security side, I think both US and China recognize the security concerns. So actually, I think we know that actually there was a track one dialogue. And also, there were track two dialogue, mostly on the security side. And I think that, of course, I think our colleagues mentioned about the AI safety research institutes. So there is indeed a global network of that. And China is also working, having its own network of AI safety research institutes in China.


Katie Drummond: And they’re trying to be part of that. Unfortunately, I think there are still in some forces in the US that didn’t want to China to be part of that. So we hope that this was a new administration that will change. No, oh Jeremy, I’m just gonna turn to you. Okay, well I’m gonna come in on on this and I want to bring it back to national security more from a practitioner’s perspective


Jeremy Fleming: So so on there are lessons from history on how states work together on the highest Potential threat areas and and of course the nuclear example is the one that we most readily Move to in this space and I think there are lessons from that and so looking at the areas of most potential harm trying to open make sure we have lines of communication and Dialogue looking at international law and and checking on its applicability given new new forms of of technology So there is a there’s a roadmap for there I think it’s it’s imperative that China is in alongside those sorts of conversations And and that’s why the UK took the approach it did actually last year inviting China along so we we mustn’t lose that but I Telling I want to turn back to two things that have been said previously in this conversation. The first is this point about over indexing, you know in your point and and and the problem here is that the over indexing is coming from the private sector whereas in the past these sorts of technologies and capabilities with these sorts of ramifications have largely been developed through military public sectors under government control and so the fact that a company is over indexing whether it’s metal or not or when at one of the other in plenty other Companies you could point to out there the fact that a company is over indexing that has is causing us as practitioners more concern because It’s the private sector doing it and the public sector the national security community governments are having to find out and work on new ways of Getting their arms around that of new ways of influencing and this comes back down to a different way of partnering So sat in I’m no longer there. But if you’re set at that top of a national security agency at the moment, you are looking at capabilities, you are looking at data, you’re looking at skills, and in all three of those areas you’re thinking where do they come from, and they are a lot of those that their dominance in those is coming from the United States, so as a as national security communities whether you’re in the UK or elsewhere in the world then the the the Rubik’s Cube you are trying to solve is how do you get some pieces that fit in with that trajectory of American national security that makes you as relevant as you’ve been in the past 75 years. Now it’s from the UK’s perspective we’ve always been a much smaller player than the United States, but there isn’t a day that goes past when what happens in the UK isn’t on the desk of the President, and so I think what we’ve got to do in Europe, what we do in the UK, what we do in other allies of America is work out what is the version of that for an AI and a tech dominated world that gives collective security not only to those people who might be the main recipients but to America as well. Ian do you have a view on sort of what that could look like,


Katie Drummond: and and I’m curious you know what I wanted to make sure to to ask you and and a few of you, this idea of global alignment right between the US and China, EU, you know these big players,


Ian Bremmer: how realistic does that feel at this moment in time? Well first of all I mean when I talk about like building institutions that are stronger, the difference between this and say the other one that’s come up recently which we all talked a lot about, Belt and Road, is that Belt and Road, like I mean the Chinese build a port but anyone can use the port right, so I mean like you’d like it to be built by the US or by the UK or by Japan as opposed to by China given you know sort of levels of openness and contracting and the rest, but I mean you’d still want them to have ports. On the other hand I mean if AI and what is coming in the box in a tier one country is only determined by you know sort of the US and a small number of companies, then that is necessarily really problematic to China. And if the only countries that aren’t part of that alignment are weaker, poorer countries, then I think you are setting up for big challenges. Now, to be fair, there are places where the Chinese are dominant. And you look at the other big ecosystem of new technologies, which is transition energy, and the Chinese are becoming dominant. In some ways, they already are. And so I could imagine a world where electrons are moved primarily by China in 20 years, and where AI is moved primarily by the United States. That’s a very interesting kind of world. But let’s get back to the point that was just made, which is really important, which is the over-indexing is being done by the private sector. The sovereignty is in the hands of the private sector. And so to go to another company, as opposed to Meta, I look at OpenAI, which I don’t know if they’re ahead, because I’m not a technologist, but they certainly sell themselves as being ahead. They talk about being ahead. They’re very good at that. According to them, they’re ahead. According to them, they’re ahead. And I look at what their mission is. And the mission is to create AGI. And their definition of that is where a computer or AI can do tasks better than human beings on the entire range of economically productive activity. Now, there is no problem with that, as long as the government is the one that’s doing the appropriate regulation. But if the company exerts sovereignty, that’s great for shareholders. It might even be great for consumers if they’re not products. It might be great for consumers. Depends on what they want to do with it. But it’s certainly not great for citizens. And governments are the ones that need to be looking out for citizens and determining the range of things that AI does as it becomes so powerful. I can’t think of governments out there that would want AI and AGI to be solely or maybe even principally developed. for maximizing economically productive work as defined by open AI. That strikes me as enormously problematic because the last time we did that, we got negative externalities. That’s how we got climate change. What do you think the negative externalities are gonna be for AI? So again, I have no problem with the private sector defining this however it wants as long as the ultimate sovereignty lies in the hands of the government. And right now, we are not remotely close to that being true.


Katie Drummond: Now, Nick, let me sort of take one more question to you and then we’re gonna turn to questions from the group. I did wanna touch a little bit on sort of the idea of trickle-down effects, right? That the decisions around AI and security that are made in the context of the US, China, the EU, those will have trickle-down impacts on emerging economies, on smaller countries, perhaps more fragile ones. I know Meta has been a very vocal proponent of open-source AI for many different reasons. I would love to hear just a little bit from you to articulate why that is, right? Because I think one could also make a lot of arguments around open-source AI or open-source models and national security risk, right? So talk us through sort of the benefits of this. So open-source is not new. It’s as old as the hills.


Nick Clegg: In fact, everything we do, I mean, all cybersecurity is based on open-source technology. The internet itself is based on open-source technology. Android is based on open-source technology. Encryption protocols are based on open-source. You remember the old battles between Linux and Windows. Open-source, so that you share the technology, or in the case of AI, you share the weights, it’s open weights, is something which very much goes with the grain of technology. And generally, if you look over the last 30, 40 years, open-source technology tends to win out over closed or proprietary technology, just because you get the wisdom of crowds, you get people innovating, you get people who can check on the weaknesses in an open way, rather than just relying on whichever company runs the technology. doing a sort of patch and mend job in themselves. And it’s something which Meta’s been doing for ages. We have open sourced over 1,000 AI databases and models over the last decade. It’s also, of course, in the companies. Of course, it’s a company, it’s not a charity. It’s in the company’s own interest. Our business model doesn’t depend, unlike some other business models, on us charging people a fee for access to our foundation model, because Meta has an ads-based business model. And we basically use that as a sort of cross-subsidy to spend whatever it is, $40, $50 billion a year, we’re spending, which is, by the way, which is the answer why this is being run by the private sector, because the public sector can’t afford it. Which European country could afford $50 billion a year


Katie Drummond: on data infrastructure? Open source models as sort of beneficial in the context of national security, I’m curious about that. Well, look, at the end of the day, you’re not gonna, open source is happening. It’s happening everywhere.


Nick Clegg: There are wonderful Chinese open source models. The Kuen family of open source models are highly performant models. As I said earlier, I wouldn’t think about it as like something you can keep under lock and key, and then if it escapes the cage, somehow it’s gonna be spooky and dangerous. It’s not like that. You can hack closed source models. Open source models allow for a very high degree of innovation, yes, by bad people, but crucially also by good players. And crucially, it’s the way that you democratize access to this technology. I just think, I cannot stress enough, we’ve talked a little bit about AI here in a slightly loose way. There’s lots of different levels to the stack. You’ve got the foundation, you’ve got the training model, then you’ve got the inference, then you’ve got the fine tuning, then you’ve got everything that’s built on top of it, which is, by the way, where all the value’s gonna be. That’s where all the value’s gonna be. In my own view, I’m not a European commissioner. If I was Europe, I wouldn’t even try and compete on the foundation model level. There’s no way that Europe’s gonna catch up, but Europe could leapfrog America and China in terms of deployment, what I call the sort of app level of AI. But when it comes to security, And if you look at the really innovative ways in which we changed our license requirements recently, I announced this on behalf of the company, so that the U.S. and the Five Eyes nations could use LLAMA for their own sort of security purposes. It’s just a very versatile way of using the technology. And if I may, just one final thing, which people sometimes don’t focus on. When you run an open source model, you have complete sovereignty over it on your servers. There’s no API-based link back to the servers in the West Coast. And that’s disproportionately important when you’re running sensitive, when you’re running these models in sensitive, for sensitive applications, government security and so on and so forth. So when, whether it’s Jeremy’s old agency, let’s say, was to use LLAMA, they have complete control and sovereignty of it. No data flows back to Mark Zuckerberg and Meta. That’s one of the reasons why open source technology, open source AI models are so well disposed towards innovative security applications. We’ve covered a lot of ground. I do want to make sure we have time for a couple of questions.


Katie Drummond: We’ve got about four minutes left, and then we can all go see what Donald Trump has to say. Let me open it up to the room. Go for it. Do we have microphones that we’re handing out? Let’s start here.


Audience: Hello, I’m Devyani from India. And I have a question that according to you, what are the policy and regulations the government must place to safeguard from misinformation, disinformation, and ethical use of AI? And do we need to form a body, for example, like WHO to analyze and monitor it globally as corporates? Do you think it will be beneficial for, to better operate? Wants to take that. Go for it.


Henna Virkkunen: In the European Union, we want to have a digital environment that is fair and safe and democratic. And that’s why we have put in place several regulations. And also, for example, when it comes to disinformation, we have Digital Service Act. and their wireless online platforms, they need to themselves assess and mitigate the systematic risks they are posing, for example, to civic discourse. And they have to also have practices in place how they are making sure that they are not spreading misinformation, disinformation, but they can choose themselves what kind of practices they are using. For example, in Europe, Meta is using fact checkers, not in USA anymore. Some of the platforms, they are using these community notes so that the users are flagging the content. But I think it’s important that we are putting rules also for the online platforms and for the digital service providers. Because in Europe, we see that it’s very important that the same rules which are applying in our physical world, the same rules also apply in digital world. So what is illegal offline is also illegal online. I think we probably have time for one more question.


Katie Drummond: I think I saw over here. Great.


Audience: Thank you, I’m Ayano Sasaki from Japan and as a part of the Davos 50 from the Global Shapers Committee. So without depending on any other big countries like China and the United States, relatively small country cannot secure country. That’s what I felt by listening to this session. But the security seems very fragile. Like achieving security will contribute to the emerging of the new threat, like a war and something like that. So my question is, how can we secure our country, I mean, relatively small countries in the era of this AI? That’s an almost impossible question to ask, isn’t it?


Jeremy Fleming: But I think I can offer a clue, which is at the very core of this is a strategic understanding of threats and an understanding of the things that are most important in the country to defend. And too often, I think there’s. this conversation goes deep down into technology, it goes deep down into regulation, without having a proper thought about risk. So a company like Japan, what is it that secures Japan’s prosperity, its economic success and its security in the region? And then I think you build a platform


Katie Drummond: and a strategy out from that. Ria, say something.


Henna Virkkunen: Yes, I think it’s also very important to build partnerships with the like-minded countries. So especially when it comes to security, because nearly all of the countries are too small alone. And especially, for example, in Europe, where we have European Union, which is a union of 27 different member states. For example, cybersecurity is a field where nobody couldn’t be successful alone. So there we have to work together. So I think it’s important to find like-minded, trusted partners also. Well, we are at time. I think we have a hot commodity session starting in about 45 seconds.


Katie Drummond: So thank you all to our panelists for being here. Thank you all for being here. Thank you on the live stream. Thank you. Thank you. Thank you.


K

Katie Drummond

Speech speed

191 words per minute

Speech length

1591 words

Speech time

499 seconds

Dual-use nature of AI technology

Explanation

AI technology has both beneficial and potentially harmful applications. This dual-use nature creates dilemmas and challenges in its development and regulation.


Evidence

Examples of AI helping write better emails vs. mass-producing misinformation, and improving cybersecurity defenses vs. being weaponized for cyber attacks.


Major Discussion Point

AI Security Threats and Concerns


Agreed with

– Nick Clegg
– Ian Bremmer
– Henna Virkkunen
– Xue Lan
– Jeremy Fleming

Agreed on

AI as a dual-use technology with security implications


Nick Clegg

Speech speed: 191 words per minute
Speech length: 2341 words
Speech time: 731 seconds

AI as a general-purpose technology affecting all sectors

Explanation

AI is not limited to specific applications but will impact all areas that process data and online interactions. Its general-purpose nature makes it difficult to identify specific security concerns.


Evidence

Comparison to the reinvention of the Internet itself.


Major Discussion Point

AI Security Threats and Concerns


Agreed with

– Katie Drummond
– Ian Bremmer
– Henna Virkkunen
– Xue Lan
– Jeremy Fleming

Agreed on

AI as a dual-use technology with security implications


Need for governance to proceed in parallel with technology development

Explanation

Unlike previous technological advancements, there is a push for AI governance to develop alongside the technology. This approach aims to avoid the delay in establishing laws and guardrails seen with social media.


Evidence

Mentions of Bletchley Park Summit, Seoul Summit, and the creation of AI safety institutes in various countries.


Major Discussion Point

Public-Private Dynamics in AI Development


Agreed with

– Henna Virkkunen
– Xue Lan

Agreed on

Need for governance and regulation of AI


Differed with

– Xue Lan
– Henna Virkkunen

Differed on

Approach to AI regulation and governance


Importance of open-source AI for innovation and security applications

Explanation

Open-source AI models allow for greater innovation, democratization of access, and improved security applications. They provide complete sovereignty and control when used for sensitive applications.


Evidence

Examples of open-source technology in cybersecurity, the internet, Android, and encryption protocols. Mention of Meta’s open-sourcing of over 1,000 AI datasets and models.


Major Discussion Point

AI Regulation and Policy


Ian Bremmer

Speech speed: 196 words per minute
Speech length: 1612 words
Speech time: 491 seconds

AI’s potential to make criminals more dangerous

Explanation

AI technology could enhance the capabilities of criminals, making them more dangerous. It also poses risks of unintended consequences from well-intentioned users.


Major Discussion Point

AI Security Threats and Concerns


Agreed with

– Katie Drummond
– Nick Clegg
– Henna Virkkunen
– Xue Lan
– Jeremy Fleming

Agreed on

AI as a dual-use technology with security implications


US focus on containing China and maintaining technological dominance

Explanation

The US government’s primary focus in AI development is on containing China and ensuring American dominance in industrial policy and technology. This approach aims to align allies with American interests.


Evidence

Mention of the new AI diffusion framework, data centers, and the tiered system for US allies.


Major Discussion Point

Geopolitical Implications of AI Development


Differed with

– Xue Lan

Differed on

Focus of AI development and competition


Importance of government sovereignty over AI development

Explanation

There is a concern that private companies, rather than governments, are exerting sovereignty over AI development. This could lead to AI being developed primarily for economic productivity rather than broader societal benefits.


Evidence

Example of OpenAI’s mission to create AGI for economically productive activity.


Major Discussion Point

Global Alignment and Cooperation on AI


Henna Virkkunen

Speech speed: 180 words per minute
Speech length: 996 words
Speech time: 331 seconds

AI’s use in monitoring borders and protecting critical infrastructure

Explanation

AI is being utilized to enhance border control efficiency and protect critical infrastructure. This application of AI technology is seen as a key area of investment for security purposes.


Evidence

Examples of using AI for border monitoring and protecting undersea cables.


Major Discussion Point

AI Security Threats and Concerns


Agreed with

– Katie Drummond
– Nick Clegg
– Ian Bremmer
– Xue Lan
– Jeremy Fleming

Agreed on

AI as a dual-use technology with security implications


Importance of innovation-friendly regulatory framework in Europe

Explanation

Europe needs to create a more innovation-friendly environment to close the gap in AI development. This involves reducing bureaucracy, removing barriers between member states, and attracting more private investments.


Evidence

Mention of the upcoming Competitiveness Compass presentation.


Major Discussion Point

Public-Private Dynamics in AI Development


Agreed with

– Nick Clegg
– Xue Lan

Agreed on

Need for governance and regulation of AI


Need for EU to close the innovation gap in AI

Explanation

The EU recognizes the need to catch up in AI innovation and investment. This involves working across several sectors to create a more conducive environment for AI development.


Evidence

Mention of efforts to build capacities in AI, quantum computing, and semiconductors.


Major Discussion Point

Geopolitical Implications of AI Development


EU’s approach to regulating online platforms and AI

Explanation

The EU has implemented regulations like the Digital Services Act to ensure a fair, safe, and democratic digital environment. These regulations require online platforms to assess and mitigate the systemic risks they pose.


Evidence

Example of platforms choosing different practices for addressing misinformation, such as fact-checkers or community notes.


Major Discussion Point

AI Regulation and Policy


Differed with

– Nick Clegg
– Xue Lan

Differed on

Approach to AI regulation and governance


Importance of partnerships among like-minded countries for AI security

Explanation

Collaboration between like-minded countries is crucial for AI security, especially for smaller nations. This approach recognizes that most countries are too small to tackle AI security challenges alone.


Evidence

Example of cybersecurity cooperation within the European Union.


Major Discussion Point

AI Regulation and Policy


Xue Lan

Speech speed: 155 words per minute
Speech length: 753 words
Speech time: 290 seconds

Risks of AI getting out of control

Explanation

One of the highest-level risks associated with AI is the potential for it to become uncontrollable. This concern highlights the critical importance of maintaining human oversight and control over AI systems.


Evidence

Example of a deepfake case in Nigeria where AI-generated content disrupted an election.


Major Discussion Point

AI Security Threats and Concerns


Agreed with

– Katie Drummond
– Nick Clegg
– Ian Bremmer
– Henna Virkkunen
– Jeremy Fleming

Agreed on

AI as a dual-use technology with security implications


China’s agile governance approach to AI regulation

Explanation

China has adopted an adaptive approach to AI governance, encouraging innovation while addressing problems as they arise. This approach involves implementing specific regulations for emerging issues in AI development.


Evidence

Example of China being the first country to regulate generative AI.


Major Discussion Point

Public-Private Dynamics in AI Development


Agreed with

– Nick Clegg
– Henna Virkkunen

Agreed on

Need for governance and regulation of AI


Differed with

– Nick Clegg
– Henna Virkkunen

Differed on

Approach to AI regulation and governance


China’s focus on smart competition and application areas in AI

Explanation

China’s AI development strategy emphasizes smart competition and practical applications rather than competing directly with the US on large AI models. This approach aims to find innovative solutions and focus on domestic applications.


Evidence

Examples of DeepSeek’s energy-efficient algorithms and consumer benefits like TikTok.


Major Discussion Point

Geopolitical Implications of AI Development


Differed with

– Ian Bremmer

Differed on

Focus of AI development and competition


Need for US-China cooperation in AI ecosystem

Explanation

Both the US and China have significantly contributed to the AI ecosystem, with collaboration between scholars from both countries. There is a recognition of the need for cooperation, particularly on security concerns.


Evidence

Mention of track one and track two dialogues on AI security between the US and China.


Major Discussion Point

Global Alignment and Cooperation on AI


Jeremy Fleming

Speech speed: 181 words per minute
Speech length: 754 words
Speech time: 249 seconds

AI’s use by criminals for fraud and its impact on information spaces

Explanation

Criminals are adopting basic AI tools to perpetrate fraud, affecting citizens worldwide. This trend has not been sufficiently addressed in public debate or by politicians.


Major Discussion Point

AI Security Threats and Concerns


Agreed with

– Katie Drummond
– Nick Clegg
– Ian Bremmer
– Henna Virkkunen
– Xue Lan

Agreed on

AI as a dual-use technology with security implications


Over-indexing by private sector in AI development

Explanation

The private sector’s dominant role in AI development is causing concern for national security practitioners. This shift from traditional government-led development of critical technologies requires new ways of influencing and partnering.


Evidence

Comparison to past technologies developed under government control.


Major Discussion Point

Public-Private Dynamics in AI Development


Lessons from nuclear cooperation for AI governance

Explanation

There are lessons to be learned from how states have collaborated on high-potential threat areas in the past, such as nuclear technology. This includes establishing communication lines, dialogue, and checking the applicability of international law to new technologies.


Major Discussion Point

Global Alignment and Cooperation on AI


Need for strategic understanding of threats and priorities in AI security

Explanation

Countries need to develop a strategic understanding of threats and identify their most important assets to defend in the AI era. This approach should precede deep dives into technology and regulation.


Major Discussion Point

AI Regulation and Policy


Agreements

Agreement Points

AI as a dual-use technology with security implications

speakers

– Katie Drummond
– Nick Clegg
– Ian Bremmer
– Henna Virkkunen
– Xue Lan
– Jeremy Fleming

arguments

Dual-use nature of AI technology


AI as a general-purpose technology affecting all sectors


AI’s potential to make criminals more dangerous


AI’s use in monitoring borders and protecting critical infrastructure


Risks of AI getting out of control


AI’s use by criminals for fraud and its impact on information spaces


summary

All speakers acknowledged the dual-use nature of AI technology, recognizing its potential benefits and risks in various sectors, including security and criminal activities.


Need for governance and regulation of AI

speakers

– Nick Clegg
– Henna Virkkunen
– Xue Lan

arguments

Need for governance to proceed in parallel with technology development


Importance of innovation-friendly regulatory framework in Europe


China’s agile governance approach to AI regulation


summary

Speakers agreed on the importance of developing governance and regulatory frameworks for AI, although their approaches differed slightly based on their regional perspectives.


Similar Viewpoints

Both speakers expressed concern about the dominant role of the private sector in AI development and its implications for national security and global power dynamics.

speakers

– Ian Bremmer
– Jeremy Fleming

arguments

US focus on containing China and maintaining technological dominance


Over-indexing by private sector in AI development


Both speakers emphasized the importance of collaboration and open access in AI development, albeit from different perspectives (open-source models and international cooperation).

speakers

– Nick Clegg
– Xue Lan

arguments

Importance of open-source AI for innovation and security applications


Need for US-China cooperation in AI ecosystem


Unexpected Consensus

Recognition of China’s contributions to AI development

speakers

– Nick Clegg
– Xue Lan
– Ian Bremmer

arguments

Importance of open-source AI for innovation and security applications


Need for US-China cooperation in AI ecosystem


US focus on containing China and maintaining technological dominance


explanation

Despite geopolitical tensions, there was an unexpected acknowledgment of China’s significant contributions to AI development and the potential benefits of cooperation between the US and China in this field.


Overall Assessment

Summary

The main areas of agreement included the dual-use nature of AI, the need for governance and regulation, and the recognition of AI’s impact on global security and economic dynamics.


Consensus level

Moderate consensus on broad issues, with diverging views on specific approaches and priorities. This implies a shared understanding of AI’s importance and challenges, but potential difficulties in implementing unified global strategies for AI development and security.


Differences

Different Viewpoints

Approach to AI regulation and governance

speakers

– Nick Clegg
– Xue Lan
– Henna Virkkunen

arguments

Need for governance to proceed in parallel with technology development


China’s agile governance approach to AI regulation


EU’s approach to regulating online platforms and AI


summary

The speakers presented different approaches to AI regulation. Nick Clegg emphasized parallel development of governance and technology, Xue Lan highlighted China’s adaptive approach, while Henna Virkkunen focused on the EU’s comprehensive regulatory framework.


Focus of AI development and competition

speakers

– Ian Bremmer
– Xue Lan

arguments

US focus on containing China and maintaining technological dominance


China’s focus on smart competition and application areas in AI


summary

Ian Bremmer emphasized the US focus on containing China and maintaining technological dominance, while Xue Lan highlighted China’s emphasis on smart competition and practical applications rather than direct competition with the US.


Unexpected Differences

Role of private sector in AI development

speakers

– Nick Clegg
– Jeremy Fleming

arguments

Need for governance to proceed in parallel with technology development


Over-indexing by private sector in AI development


explanation

While both speakers are from Western countries, they have notably different views on the private sector’s role in AI development. Nick Clegg, representing a tech company, advocates for parallel development of governance and technology, while Jeremy Fleming, with a background in security, expresses concern about the private sector’s dominant role in AI development.


Overall Assessment

summary

The main areas of disagreement revolve around approaches to AI regulation, the focus of AI development and competition between major powers, the role of the private sector, and the balance between innovation and security.


Difference level

The level of disagreement among the speakers is moderate to high. While there is a general consensus on the importance of AI and its potential impacts on security, there are significant differences in how different countries and sectors approach AI development and regulation. These disagreements reflect the complex geopolitical landscape and the challenges in achieving global alignment on AI governance. The implications of these disagreements suggest that achieving a unified global approach to AI development and security will be challenging, potentially leading to fragmented regulatory environments and continued technological competition between major powers.


Partial Agreements

Both speakers recognize the importance of broader access to AI technology, but disagree on the extent of government control. Nick Clegg advocates for open-source AI to promote innovation and security, while Ian Bremmer emphasizes the need for government sovereignty in AI development to ensure broader societal benefits.

speakers

– Nick Clegg
– Ian Bremmer

arguments

Importance of open-source AI for innovation and security applications


Importance of government sovereignty over AI development




Takeaways

Key Takeaways

AI is a dual-use technology with wide-ranging security implications across sectors


Private sector is leading AI innovation, outpacing government regulation


Geopolitical tensions, especially between US and China, are shaping AI development and governance


There’s a need for global cooperation on AI safety and governance, despite competing national interests


Open-source AI has potential benefits for innovation and democratizing access, but also raises security concerns


Smaller countries and developing economies face challenges in AI security and development


Resolutions and Action Items

EU to present ‘Competitiveness Compass’ outlining 5-year plan for technology development


Meta to implement community-based approach to misinformation moderation in the US


Unresolved Issues

How to balance national interests with need for global AI governance


Appropriate level of government regulation vs. private sector innovation in AI


How smaller countries can ensure AI security without relying on major powers


Long-term societal impacts of AI on human behavior and civil society


Effectiveness of open-source AI in addressing security concerns


Suggested Compromises

Including China in global AI safety discussions despite geopolitical tensions


Balancing open-source AI development with national security considerations


EU focusing on AI applications rather than competing on foundation models


Creating multilateral institutions for AI governance while preserving innovation


Thought Provoking Comments

I think we’re going to get to artificial human intelligence, AHI, faster than we get to artificial general intelligence, right? I mean, Larry Summers talks about how IQ will become less important, EQ will become more important with AI. I agree. But AI is increasingly programming EQ out of the people that use it, right?

speaker

Ian Bremmer


reason

This comment introduces a novel and concerning perspective on AI’s impact on human intelligence and emotional intelligence. It challenges the common focus on AGI and highlights a potentially overlooked consequence of AI adoption.


impact

This comment shifted the discussion towards the societal and psychological impacts of AI, beyond just technological capabilities. It prompted others to consider the broader implications of AI on human behavior and cognition.


Look, I think the world generally and both in the private and the public sector looked back at the last decade and a half and decided and thought, I’m talking now like two or three years ago, hang on a minute, social media and other technologies erupted incredibly quickly and it took us about a decade and a half to kind of catch up with new laws, new guardrails, new institutions and so on.

speaker

Nick Clegg


reason

This comment provides important context on the current approach to AI governance, drawing parallels with past technological disruptions. It highlights the proactive stance being taken with AI compared to previous technologies.


impact

This comment set the stage for a more detailed discussion on global governance approaches to AI, leading to insights on multilateral efforts and the challenges of aligning different national interests.


The sovereignty is in the hands of the private sector. And so to go to another company, as opposed to Meta, I look at OpenAI, which I don’t know if they’re ahead, because I’m not a technologist, but they certainly sell themselves as being ahead. They talk about being ahead. They’re very good at that. According to them, they’re ahead. And I look at what their mission is. And the mission is to create AGI. And their definition of that is where a computer or AI can do tasks better than human beings on the entire range of economically productive activity.

speaker

Ian Bremmer


reason

This comment raises critical questions about the balance of power between private companies and governments in shaping AI development and its societal impacts. It highlights potential misalignments between corporate goals and public interest.


impact

This comment deepened the discussion on the role of private sector in AI development and the need for government oversight. It prompted further conversation on how to ensure AI development aligns with broader societal goals and values.


Open source is not new. It’s as old as the hills. In fact, everything we do, I mean, all cybersecurity is based on open-source technology. The internet itself is based on open-source technology. Android is based on open-source technology. Encryption protocols are based on open-source.

speaker

Nick Clegg


reason

This comment provides important historical context for open-source technology and its role in technological development. It challenges the notion that open-source AI is a new or inherently risky approach.


impact

This comment shifted the discussion towards a more nuanced understanding of open-source AI, leading to a conversation about its potential benefits for innovation and security applications.


Overall Assessment

These key comments shaped the discussion by broadening its scope beyond purely technological considerations. They introduced important perspectives on the societal impacts of AI, the role of governance and regulation, the balance of power between private and public sectors, and the potential of open-source approaches. The discussion evolved from a focus on national security concerns to a more comprehensive examination of AI’s implications for human intelligence, global governance, and technological innovation. This multifaceted approach allowed for a richer, more nuanced exploration of the challenges and opportunities presented by AI in the context of national and global security.


Follow-up Questions

How will the new Trump administration’s approach to AI and national security differ from the Biden administration’s?

speaker

Katie Drummond


explanation

The recent revocation of Biden’s AI executive order and announcement of new AI initiatives by Trump creates uncertainty about the future direction of US AI policy and its security implications.


What will be the status and role of the US AI Safety Institute under the new administration?

speaker

Nick Clegg


explanation

The uncertainty around this institution impacts how companies like Meta can engage with government bodies on AI safety testing and regulation.


How can the EU close the innovation gap in AI and become more competitive globally?

speaker

Henna Virkkunen


explanation

The EU needs to address issues like bureaucracy, fragmentation, and lack of private investment to keep pace with the US and China in AI development.


How can major geopolitical players like the US, China, and EU develop global standards for AI while acknowledging their differing priorities?

speaker

Katie Drummond


explanation

Establishing common ground on AI governance is crucial for addressing global security concerns, but competing national interests make this challenging.


What could be the potential negative externalities of AI development primarily driven by private sector goals?

speaker

Ian Bremmer


explanation

There are concerns about the societal impacts of AI being developed mainly for economic productivity without sufficient government oversight.


How can smaller countries secure themselves in the era of AI without depending on larger powers like the US or China?

speaker

Audience member (Ayano Sasaki)


explanation

The security implications of AI for smaller nations that lack the resources to compete in AI development need to be addressed.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.