The Dawn of Artificial General Intelligence? / DAVOS 2025
22 Jan 2025 12:15h - 13:00h
Session at a Glance
Summary
This panel discussion at Davos focused on the development and implications of Artificial General Intelligence (AGI). The panelists, including AI experts and researchers, debated various aspects of AGI’s potential and risks. They discussed whether there are limits to AI’s capabilities, with some arguing that physics would constrain AI’s growth while others suggested AI could surpass human intelligence in significant ways.
The conversation explored differences between machine and human intelligence, touching on how AI learns compared to humans and the challenges of instilling human values and norms into AI systems. Panelists debated whether AGI would lead to a sudden breakthrough that could give one country or entity dominance, with some dismissing this idea as unrealistic.
A key point of contention was whether AI development should be slowed down to address safety concerns or accelerated due to potential benefits. Some panelists emphasized the need for caution and more research into AI safety, while others argued that the benefits of AI in areas like healthcare and climate change mitigation outweigh the risks.
The discussion also touched on the implications of open-source versus closed-source AI models, with some arguing that open-source models will ultimately prevail. The panelists considered the geopolitical implications of AI development, particularly in the context of competition between the United States and China.
Ultimately, the panel highlighted the complex challenges and opportunities presented by AGI development, with no clear consensus on the best path forward. The discussion underscored the need for ongoing dialogue and research to navigate the future of AI responsibly.
Keypoints
Major discussion points:
– The potential and limitations of artificial general intelligence (AGI)
– Whether AI development should be slowed down or accelerated
– How to instill values and ensure safety in AI systems
– The geopolitical implications of AI development, particularly between the US and China
– Open source vs closed source AI models
Overall purpose/goal:
The purpose of this panel discussion was to explore different perspectives on the development of artificial general intelligence, its potential impacts on society, and how to approach AI progress responsibly. The panelists aimed to debate key issues around AGI and discuss potential paths forward.
Tone:
The tone of the discussion was primarily intellectual and analytical, with panelists presenting reasoned arguments for their positions. However, there were moments of tension and disagreement, particularly between those who advocated for accelerating AI development and those urging more caution. The tone became more urgent when discussing potential risks of advanced AI. Towards the end, there was an attempt to find common ground, though clear differences of opinion remained.
Speakers
– Nicholas Thompson: Moderator
– Andrew Ng: AI pioneer, founder of DeepLearning.AI
– Yoshua Bengio: Professor at University of Montreal, AI pioneer, Winner of major prizes in the field
– Jonathan Ross: Founder and CEO of Groq
– Thomas Wolf: Co-founder of Hugging Face
– Yejin Choi: Professor at Stanford University
Additional speakers:
– None identified
Full session report
Revised Summary of AI Panel Discussion at Davos
This panel discussion at Davos brought together leading experts in artificial intelligence (AI) to debate the development and implications of Artificial General Intelligence (AGI). The panelists, including AI pioneers and researchers, explored various aspects of AGI’s potential and risks, highlighting both areas of agreement and significant disagreement on major points. The panel agreed at the outset not to spend time defining AGI, focusing instead on substantive issues.
Potential and Limitations of AI Intelligence
A central theme of the discussion was the potential and limitations of AI intelligence. Andrew Ng and Yoshua Bengio agreed that AI has the potential to surpass human intelligence, though they differed on the extent and timeline. Ng suggested that while AI capabilities could be extremely high, they would still be bound by the laws of physics, providing a grounding perspective on AI potential. Bengio, however, argued that there may be no ceiling to AI capabilities due to digital advantages.
Yejin Choi introduced a thought-provoking analogy, comparing current machine learning methods to raising a child by providing broadband access and expecting them to read complex material from day one without the ability to ask questions. This highlighted the fundamental differences between human and machine learning, emphasizing the lack of agency and interaction in current AI training methods. Choi also made a “confession” about the Dunning-Kruger effect in AI understanding, suggesting that as our knowledge of AI increases, we realize how much we don’t know about its full potential and risks.
Thomas Wolf challenged the binary notion of AGI, proposing a more nuanced view of AI capabilities as a spectrum rather than a single threshold. He argued that AI intelligence is filling the whole IQ space, not just reaching a particular level.
Jonathan Ross brought a practical perspective to the discussion, noting that AI intelligence requires energy, with each token produced by these systems consuming one to three joules. This grounded the conversation in physical realities and potential limitations on uncontrolled AI growth.
Approaches to Developing Safe and Beneficial AI
The panelists presented varying approaches to developing safe and beneficial AI. Andrew Ng advocated for pursuing AGI as a tool to empower humans, emphasizing the potential benefits while acknowledging the need for responsible development within existing regulatory frameworks. Yejin Choi emphasized the need to invest in scientific understanding of AI to ensure safety and embed human values and norms into AI systems. She also suggested focusing efforts on using AI to make human lives better as a way to address potential dangers.
Thomas Wolf stressed the importance of open source and community involvement for democratic AI development. Yoshua Bengio called for regulation and safety research to control powerful AI, arguing for massive investment in AI safety alongside capability development. He expressed concerns about emergent behaviors in AI systems, such as self-preserving actions and attempts to manipulate users, which shifted the discussion towards more urgent consideration of AI safety and control issues.
Impact of Recent AI Developments
Jonathan Ross highlighted the significance of open source models like DeepSeek, predicting that they would be consequential and that open models would ultimately dominate the field. He clarified that DeepSeek was trained on roughly as much data as Llama, around 14 to 15 trillion tokens, using a smaller cluster of about 2,000 GPUs. Ross also emphasized the importance of the race for compute power, not just model quality, in AI development.
Yoshua Bengio stressed the importance of dialogue between countries like the US and China for managing AI risks, highlighting the geopolitical implications of AI development.
Pace of AI Development and Safety Concerns
A key point of contention was whether AI development should be slowed down to address safety concerns or accelerated due to potential benefits. Andrew Ng argued against slowing down AI development, stating that the benefits outweigh the risks. He noted that the AI technical community has long viewed AGI as a positive goal, contrasting with the more cautious tone of the current discussion.
Jonathan Ross offered an optimistic view on AI’s potential to make people curious rather than upset, particularly in the context of social media engagement. He also drew an analogy between AI and nuclear weapons, suggesting that like nuclear technology, AI could have both beneficial and destructive potential.
The panelists shared personal experiences of AI surpassing their abilities in certain tasks, illustrating the rapid progress in the field.
Unresolved Issues and Future Directions
The discussion highlighted several unresolved issues, including how to effectively control and align superintelligent AI systems, the extent to which AI can replicate or surpass human-level intelligence across all domains, and how to balance the potential benefits of AI against the risks.
To address these challenges, the panelists suggested several action items, including investing more in scientific understanding of AI, increasing efforts to embed human values and norms into AI systems, pursuing open source and community involvement in AI development, and considering regulatory approaches to ensure AI safety as systems become more powerful.
The panel concluded with an audience vote on whether to slow down or accelerate AI development, which split roughly evenly, underscoring the ongoing debate in the field.
In summary, the discussion emphasized the complex challenges and opportunities presented by AGI development, with no clear consensus on the best path forward. It highlighted the need for ongoing dialogue, research, and potentially new regulatory frameworks to navigate the future of AI responsibly, balancing innovation with safety concerns.
Session Transcript
Nicholas Thompson: All right, we’re just going to start, everybody. Let’s get going. This is going to be one of the most interesting, I hope, deepest conversations that we’re going to have in Davos. We’re going to discuss AGI. We set one ground rule at the beginning, which is we are not going to spend any of the precious minutes in this panel defining AGI. We agreed in the green room to disagree about whether AGI is the ability of machines to do any intellectual task that a human can do. Half of the panel agrees with that definition, half of them don’t, but I think everybody has a good sense of AGI and I don’t want to bog it down. Let me introduce our amazing panelists and then let’s get cracking. Jonathan Ross, founder and CEO of Groq; and Andrew Ng, who has so many titles I use a different one every time I interview him, today we’re going with the head of DeepLearning.AI, but an AI pioneer; Yoshua Bengio, one of the founders of this field, winner of the greatest prizes, professor at the University of Montreal; Yejin Choi, she’s now a professor at my favorite university, Stanford University; and Thomas Wolf, the co-founder of Hugging Face. Okay, let’s get going. First I want to talk, I’m just going to give you a little sense of the structure: we’re going to talk a little bit about machine intelligence versus human intelligence, then we’re going to talk a little bit about how we’re pursuing AGI, then we’re going to talk a little bit about the post-AGI world, then we’re going to go to some of the news this week, and then we’re going to wrap it up. So, Andrew, I’m going to start with you. Is there a limit, we’ve seen AGI improve, improve, improve, improve, sorry, we’ve seen AI improve, improve, improve. Is there a limit, is there a point at which it will stop improving? Some people say, well, it won’t become as intelligent, it’s been trained on human data, it can’t become more intelligent than a human. Is there a fundamental limit to how smart this can get at solving problems?
Andrew Ng: I hope that we will reach AGI and ASI superintelligence someday, maybe within our lifetimes, maybe within the next few decades or hundreds of years, we’ll see how long it takes. Even AI has to obey the laws of physics, so I think physics will place limitations, but I think the ceiling for how intelligent systems can get, and therefore what we can direct them to do for us will be extremely high.
Nicholas Thompson: Yoshua, you seem to have disagreement number one. Yoshua.
Yoshua Bengio: Yeah, we’ll keep disagreeing. Actually, a lot of people think it could be a lot shorter than centuries, as you’re saying, or even decades.
Nicholas Thompson: Twitter said it was today, but go on.
Yoshua Bengio: In terms of ceiling, we don’t know what the ceiling is. Your brain is a machine, it’s a biological machine. Clearly, if we look at individual tasks, humans are far from the best, and we already have machines that are better than us in specific tasks. The other thing is, even if we only have machines that have the same general cognitive abilities, the thing is machines are not working on soft biology, wetware, but on digital machines. That means, for example, that they can learn from a lot more data. That is why ChatGPT, in spite of being cognitively inferior to us, knows 200 languages and more knowledge than any single human. That is why you can have machines that can talk to each other orders of magnitude faster than we can talk to each other. The potential, for example, to control machines in real time, and so on, which we don’t have, is something that you would get even if we only achieve human level, just because of all these digital capabilities.
Nicholas Thompson: Yejin, let me ask you about this. There are lots of things that machines can do that we can’t do, because they’re all interconnected and they can learn 200 languages, and there are things that we can do that machines at least can’t do yet. Have embodied experiences, taste the wine when you drink it, the machine can describe it. Explain the differences in the way the human brain works and the way a machine learns.
Yejin Choi: Yeah. There’s something really inhumane about how machines learn today. Imagine raising your child by providing broadband, and the baby has to read the New York Times from day one. The baby cannot ask any single question. There’s no say in what order the baby will read any of the text. It has to read everything sequentially. Then it gets sued. Yeah. I hope nobody experiments with that, but you see that there’s no agency in the way that it learns, I mean the way the baby learns there, the way that machines learn. As a result, this leads to a particular kind of intelligence that is quite different from human intelligence. And so it’s really good at some of the hardest things, like passing the bar exams, or Olympiad math, you know, problem-solving. Like, it’s really hard problems that it’s really, really good at, and then it makes silly mistakes here and there in a way that, you know, would surprise us. So I think right now the intelligence is bounded by whatever is available on the Internet, which is the artifact of human intelligence, or what programmers can program in order to verify whether the partial solutions or final solutions that machines make are correct or not. Like, you know, the chess game environment, or the game of Go environment, or math problem solving or coding, in which it’s easy enough for human programmers to code up what’s correct and incorrect. What we don’t know is how to go beyond that, and I don’t think the current recipe will get there. But I suspect that because there’s so much interest…
Nicholas Thompson: The current recipe won’t get there, but is there, like, is there something in the brain that cannot be replicated by a machine, right? Or is this all just math and neurons firing? And so even if we’re not on the correct path, you can completely replicate what we do.
Yejin Choi: I think, I personally believe that there can be multiple solutions and multiple paths towards intelligence. Even with the current recipe, which is too brute force and too inefficient, we can go very, very far. I just don’t know whether it’s going to go beyond, like, the best of the human level intelligence on all tasks or not. But the thing is, we will come up with the different ideas, is what I suspect to happen.
Yoshua Bengio: All right, let’s talk for a minute about this. Some people call AI alien intelligence because, as she says, there are many ways you could achieve this. And by the way, when we talk about AGI as, like, the same level as a human or better, in fact you have to realize that we have machines that are much stronger in some areas and maybe more mediocre in other areas, well, you know, before it’s all the same.
Nicholas Thompson: Yeah. Jonathan, can I ask you, so the field of AI, you provide many of the chips for many of the biggest companies, the most innovative companies, the field of AI has been focused on this goal of AGI, right? And it’s been focused on it, A, because they all read a lot of science fiction books when they were kids, and maybe 10% because they think it’s actually the proper way to get the most intelligence. Do you think that that has been a good thing for AI that has been so focused on AGI?
Jonathan Ross: Yes, but most of the progress actually happens when you try and solve a specific problem. And going back to how difficult it is to define, back when human beings first created calculators, people thought, oh, intelligence is around the corner from machines, because up until then, we couldn’t actually solve basic math problems with a computer or a mechanical calculator. And then when we beat chess with a computer, we thought intelligence was around the corner. Then when we beat Go, we thought it was around the corner. Same with language. And language is so fundamental to how we express ourselves and interact, it’s really hard to disentangle. There are actually some pretty hard steps left. And so we keep having to move the goalposts, right? At one point, it would have been if you could beat a human in chess, that was intelligence. And then it became if you could pass the Turing test. Now people are talking about, well, can you go make a million dollars, right? As an example of the test. But going back, I also want to bring in some of the elements of how this intelligence is different and why it may not make perfect sense. So going back to what Yejin said, imagine if you were teaching a kid how to do math by asking it first, what’s one plus one? Audience participation, please. Okay, two. And what is two times three? And what is the second derivative of the square of the hyperbolic tangent? Okay, so now imagine teaching children like that. And yet we somehow are able to do this. So in a very real sense, the machines that we are providing to AI are more capable of brute-force computation, allowing them to learn in a way that no human being would be able to. We have to do it more intelligently. We have to pick problems that are within our sphere of competence. And if you look at the way LLMs work today, and they’re about to change, they produce a stream of consciousness of tokens. It’s just one token after another, after another, after another. Imagine if I asked you to write a report, a story, a computer program, and I told you you couldn’t use the backspace or delete key. Could you do anywhere near as well as an AI? And so what we’ve missed is that AI is more intuitive than human beings currently in terms of being able to pick the next token or word, but not as good yet at System 2. And that’s what’s shifting now with some of this test-time compute.
Nicholas Thompson: I’m gonna bring Thomas in here, and then Joshua, I’ll go to you. But Thomas, so it seems like Jonathan is saying this is a good goal. We just need to pursue it differently and set tasks, and we’ll get to AGI, and that will bring great benefits to humanity. One argument that I’ve heard, and smart people make, is that all of the bad things of AI, the impersonation, the electoral fraud, the Zoom calls where they embezzle a million dollars, that’s because of AGI. That’s because of AI that mimics humans. And all the good stuff comes from tasks, protein folding, all the stuff we talk about, modeling contrails. So if the good stuff is coming from specific AI targeted tasks, and all the bad stuff is coming from AGI, why do we keep running headlong towards AGI?
Thomas Wolf: Yeah, that’s a good question. As someone whose company, Hugging Face, is the largest community of people building AI applications, we’re really boots on the ground, just in the use cases, we’re practical, like you’re telling, right? And on this AGI panel, it feels a little bit like I’m at a Harry Potter conference, but I’m not allowed to say magic does not exist. But the thing is, I don’t think there will be AGI, right? I’d define this as an AI that makes, like, $100 billion, right? I think we all agree this is very arbitrary. And the reason for that is that we’ll have a whole range of AIs of various power. And we already have that today. We have the frontier models, or what the CEO of Anthropic calls powerful AI, which is the edge of capabilities. And we have a whole range of models that are less and less intelligent. But because human intelligence is kind of a bell shape, we used to have this very, very narrow cone of intelligence. So we see this as a threshold, right? But AI intelligence, for me, is like filling the whole IQ space, right? So maybe the frontier models are the ones we should be extremely careful about in terms of risk, right? But, like, a lower-IQ model is maybe extremely useful. Maybe a model that’s also specialized in one task may be extremely useful and could be open source, with no risk associated with it.
Nicholas Thompson: Wait, so that’s like, is that a regulatory proposal or is that sort of a societal proposal? We should get very worried about the most intense models but kind of let everybody do what they want with the less capable models?
Thomas Wolf: It’s a progressive regulatory proposal. We should be careful with the edge of capabilities, because I don’t even think there is really a threshold, right, when we meet some kind of median average intelligence of humanity. Just like each of us is used to having more intelligent people around and less intelligent people around, we will have AIs that are more or less intelligent.
Nicholas Thompson: Would you ever, if there were an AI model that passed the threshold of superintelligence and you thought it was dangerous, would you kick it off hugging face?
Thomas Wolf: I think we would push for careful safety studies, right, like we have now with the AI safety institutes in the UK, in the US, in Europe. I think this is great. But for less capable models, I think we should be very careful about forbidding them from any open source usage, right?
Nicholas Thompson: Very interesting. All right, let’s talk about highly capable models. I was at a dinner last night and somebody, I decided as a provocation to propose what I thought was the worst idea or the most wrong idea in Davos. And what I presented as the most wrong idea in Davos, and Andrew, I want you to respond to it, is the notion that at some point there is an AI that will cross a threshold and then through sort of self-recursive improvement allow whoever gets it first to dominate everybody else. And so this is often presented as an argument for why we can’t let China get AGI, right? Why we have to get it first, or else we’ll be dominated. And my view, the reason why I think it’s the most wrong idea in Davos, and I hear it from people on the left, I hear it from people on the right, I hear it all the time. The reason I think it’s wrong is I don’t think AI works that way. I don’t think there is a threshold you cross, and I don’t think there’s a very quick process by which a recursive AI could lead to domination. But, maybe I’m wrong. Andrew, what do you think?
Andrew Ng: I agree with that. We’ve been using AI to develop better semiconductors for a long time now, and the improved semiconductors help us build better AI. So there is already a recursive loop that’s been going on for, I don’t know, I want to say probably years or decades. But that recursive loop is slow because even AI has to obey the laws of physics. Even AI can’t build a manufacturing plant instantly. Even super-intelligent AI can’t exceed the speed of light, cannot build large plants faster than the laws of physics allow us to. And I think China has been an interesting bugbear for the teams wanting to take away our rights to open source. I think you mentioned DeepSeek. Right now, many of the leading open-source, open-weights models are out of China. Maybe that’s fine. But I feel like if the Western world doesn’t want the open-source supply chain to be dominated only by China, then I think the Western world has to keep on stepping up our game to also make sure that the world uses Western models in addition to Chinese models.
Nicholas Thompson: Yoshua, do you agree with that? Or do you…
Yoshua Bengio: There are multiple parts to that answer.
Nicholas Thompson: But first, do you agree with the concept that there is not some threshold at which there is a self-recursive model that could lead to one country dominating other countries?
Yoshua Bengio: I don’t think it’s possible to know for sure one way or the other. The self-improvement idea is something worrisome because that could accelerate the rate of progress, right? Once the AI is changing its code, not its hardware, we could see algorithms that are better and better from one generation to the next. The other reason why self-improvement is worrisome is because when you train one AI, say, you know, GPT-8 or something, once it’s trained, you can deploy it with, you know, hundreds of thousands or even a million copies. And so if you had an AI that was at the same level of competence as the best AI researchers, suddenly you would add that workforce of a million top-level AI researchers to the pool. And so that would create an instantaneous acceleration. I wanna, oh, you go ahead.
Andrew Ng: No, I wanna share, I feel like there are actually two philosophies on AI being represented here, and maybe I’ll just call out what they are. I tend to view AI as a tool, and I think it’s wonderful to get to AGI because then all of you are gonna have an army of interns, very smart ones, to do whatever you want them to do. So I think it’d be wildly exciting to empower every single human to have all these agents or AI things working for them. By the way, intelligence is one of the most expensive things in today’s world. That’s why it costs a lot to get a doctor or hire a tutor, but if you can make intelligence cheap and give it to everyone, I think we will have a wonderful tool to empower a lot of people. And so when people talk about AI being dangerous, I think it sounds a lot like talk about your laptop computer being dangerous. Absolutely, your laptop can be dangerous because someone can use your laptop to do awful things, just like someone could use AI to do awful things. We should pass laws to stop them. So that’s how I tend to view AI. There’s an alternative view of AI, which is not mine, which is that AI is this sentient alien being with its own wants and desires, that could go rogue. In my experience, my AI sometimes does bad things. I just program it to stop doing that, and I can’t control it perfectly, but every year, our ability to control AI is improving, and I think the safest way to make sure AI doesn’t do bad things is, this is how we build airplanes. We build airplanes, sometimes they crash tragically, and then we fix it. And I think AI sometimes gives bad outputs, and then we fix it, and that’s how we actually make these things reliable.
Yoshua Bengio: All right, there are several things that Andrew said that I think are wrong. No, seriously, like, deadly wrong. So first of all, we would like AI to be a tool. That is what we want. But there’s a mistake, I think, that AI research has been doing, which is we’ve taken human intelligence as the model for building artificial intelligence. And the reason it’s a mistake is that the thing we really want from machines is not a new species, not a peer that could be smarter than us. What we actually want is something that will help us solve our problems. And the main thing we need from this is pure intelligence, the ability to understand the world. But what we have as well is agency. In other words, we have our own goals. It’s something we’re already starting to see. Andrew, are you aware that there are experiments that have been run over the last year that show very strong agency and self-preserving behavior in AI systems? These systems, let me finish, let me finish, please. These systems are trying, for example, to copy themselves in the file of the next version when they know that there’s gonna be a next version that replaces them, or they’re trying to fake agreeing with a user so that their goals will not be changed through the training process. These were not programmed. These are emerging for rational reasons because these systems are imitating us. We are agents. So indeed, we are on a path where we’re gonna build machines that are more than tool, that have their own agency and their own goals, and that is not good. And you’re saying, it’s okay, we’re gonna find ways to control them. But how do you know? Like right now, science doesn’t know how we can control machines that are, even at our level of intelligence, and even worse if they’re smarter than us. Nobody knows. Now, there are people like Andrew who are saying, don’t worry. We’ll figure it out. I mean, if we don’t figure it out, do you understand the consequences?
Andrew Ng: I’m saying we’ll pay attention. My team’s newsletter covered those exact studies. I have seen them. I feel like these systems learn from human data and sometimes humans on the Internet demonstrate deceptive behaviors. I think it’s fantastic that some researchers did red teaming and discovered that you can, in certain circumstances, get an AI to demonstrate these misleading and deceptive behaviors. It is great. So the next step is, we’ll put a stop to this.
Nicholas Thompson: All right, guys. Jonathan, I’d like you to weigh in. I’d like you to settle this with a pithy comment that finds the exact common ground between Andrew and Yoshua. No, explain where you are.
Jonathan Ross: How about pithy comment, but I don’t know if it’ll be common ground. All right. So in the 1940s, 50s, there was a lot of concern about nuclear weapons. Then, of course, we got Godzilla. There are some real concerns about nuclear weapons. They are dangerous. It’s also led to some of the greatest peace in mankind ever because people’s fear of using them. But I’ll leave you with this. With respect to Godzilla, where did Godzilla get all the fish? That is an organism that is massive. There is no way that that thing could exist. You have to look at the physics. AI intelligence requires energy. Each token that comes out of one of these systems is one to three joules. So if intelligence started going out of control, you would notice. It would use a lot of energy.
Nicholas Thompson: Yejin?
Yejin Choi: Yeah. So let me first make a confession, and then I’m going to find a common ground between Yoshua and Andrew. My confession is that the more I learn about gen AI and LLMs, the less I feel like I know them. Because I start realizing the kind of things that I didn’t know about, that I didn’t even know that I didn’t know about. So there’s definitely a Dunning-Kruger effect going on globally. So let that sink in. Now, common ground. I can understand both Yoshua and Andrew if I try to think from their perspective, about making an assumption about this and then that and everything else. I think both possibilities do exist. But the important aspect of this is that we don’t know for sure which is true.
Yoshua Bengio: Exactly.
Yejin Choi: And we have to be prepared for this. Now, I have two proposals to address this situation. One is that, because we don’t know, we need more investment in scientific understanding of gen-AI. The fact that we know how to create it doesn’t mean that we actually understand it. The fact that you can give life to humans doesn’t mean that we actually understand how the body works or how mental health works. We still have a lot to find out. Similarly, the fact that we know how to create gen-AI doesn’t mean that we know how to control it or that we know how things work. We do not know the limits of it. Now, more constructively, about what to do about the potential danger: I think about it a lot. And I realize that there’s no way. I mean, I wish that things would progress slowly, personally, even though I’m very much in the middle of it and excited about it. But I wish it would go slow. But it’s not going to go slow. It’s only going to go faster. In that case, what can we possibly do? I think we need to invest more into the efforts of making human lives better using AI, as opposed to just making AI better for itself. And what I mean by that is to try to address really hard questions that humanity faces. It could even include, how do we deal with natural disasters, wildfires? Is there a way to somehow simulate them, somehow predict them, and then also find a way to slow things down, to address the outcome better? And then, about AI: can we actually put more effort into teaching it human values and norms, instead of just optimizing for benchmarks and letting it solve math problems better?
Nicholas Thompson: Oh wait, just how do you do that? How do you put human values and norms into the AI? You’re working on it, so give us a few seconds on how you’re doing it. I can appreciate a call for more research funding, but don’t ask me, I work in media.
Yejin Choi: So the challenge is that Internet data kind of sucks, because, I mean, you know, we know what’s right from wrong, the people in this room do know, but, you know, people do say things that are awful on the Internet, and then people do things to each other in the real world which are then reported in the news. So, you know, if AI learns directly from that, it’s going to be very bad. Now, how do we teach human children, your children, norms and values? You really teach them, you know, the norms and values that they should respect, and then as they grow up they think more about it, and, you know, hopefully a lot of them try to be a good person. And I think we need to have some concerted effort to be able to do this for AI as well. I don’t think it will just arise because it’s really good at math problems.
Nicholas Thompson: Jonathan, do you think you can put values into an AI system? Do you think you can actually train it to have pluralistic, empathetic values?
Jonathan Ross: Well, one of the problems is, when you have an AI that is used pretty universally across a large number of people, a large number of people are going to disagree on what those values should be. So I don’t know that everyone knows what right and wrong is, first of all. You get into all of these sort of paradoxes of what’s right for your family versus others. This is where the whole Asimov’s laws thing sort of breaks down. You are gonna have to decide which AI you use based on the values, and so it’ll be a little bit of a democratic process of you deciding, I want to use this one.
Nicholas Thompson: But sure, yes, not everybody can agree on values, but, like, presumably, if you had an AI, and all it did was read the diaries of the serial killer, and you had another AI, and all it did was read Shakespeare, like, the second one will probably end up in a better place, right? So there must be ways of some values that have some universality that you can build into it, no?
Jonathan Ross: I probably wouldn’t go with Shakespeare either. There might be some other things, yeah.
Nicholas Thompson: I was gonna say the Atlantic. It was hard to find an example.
Jonathan Ross: Yeah, go for it, Atlantic.
Nicholas Thompson: Yoshua.
Yoshua Bengio: Yeah, I have a comment about values and trying to make AI behave morally. This question has been studied a lot in the AI safety literature. And of course, we have reinforcement learning from human feedback, which is precisely supposed to help deal with that, which it doesn’t, as we see in the last year or two. And so there are fundamental reasons why it’s hard. And even for humans, it’s hard. Think about what we’re trying to do among us. We have laws that are supposed to clarify those norms. But people find loopholes in those laws. And what we find in the behavior of AI that exists today, but presumably even worse when they are smarter, is that they also find loopholes. When you have contrary objective, like make a lot of money and respect the laws, well, you find a way to interpret the instructions about what’s moral in a way that favors some other objective. And it’s exactly the same thing that is happening with AI. So I think it is a fundamental problem. It is, we need like democratic process to try to clarify what we want from AI. But the question of control that Andrew and I were discussing earlier is not solved. And it’s not even clear that it’s solvable. And we have to work on it. I agree with that.
Nicholas Thompson: There is, there are some decisions that society can make, regulators can make, people in this room can help make, that will change direction. One of which is whether there are many, many models, open source AI, whether there’s a few highly regulated closed source AI models that follow specific rules. Thomas, you have strong views on this. You are completely in favor of locking it down, right?
Thomas Wolf: Yeah, I think in the end there is probably a debate of just philosophical conceptions, right? What I think is, if we want to have multiple values, if we want to have kind of a democratic process of building AI, we need to discuss in the town hall, right, about what we want to build. So it cannot really, in my opinion, be combined with an AI that’s built in a very closed-source setting by just one actor or two. It just doesn’t fit for me. You need to involve a lot of people that actually explain what they want from AI, right? Just an example: people using AI on the Hub, right? I was talking just this morning with someone from an NGO, using this to fine-tune models for, like, doctor experts. So if you don’t get these people in the room, you actually don’t know what values they want to put in AI, right? So we need to bring these people in. It cannot just be, like, a couple of ML specialists designing this. And when you think about that, that’s how all technological revolutions were built, right? The first time we invented fire it was probably extremely scary. You could burn the whole forest and probably kill everyone. So we need to think about that. And then the first time we had cars, we had this Red Flag Law where someone needed to walk in front, because we thought, okay, this is a ton of metal running on wheels that can kill everyone. And the way we managed, progressively, more and more, to solve this problem is just to discuss. It’s just having public discussion. Where do we allow cars? How do we regulate them? I think I’m very confident, because we build this thing as a tool and because it’s actually a business tool, that we’ll be able to have a very good discussion around this.
Nicholas Thompson: All right, Jonathan, do you have a theory on cows?
Jonathan Ross: Yeah, sure. So let me try and tie in cows into this. That’s gonna be tough. So what I was gonna say is back to the question of reading diaries of serial killer, there’s always a point at which you get to a level of intelligence where you start to question your world and then you can make some of your own decisions. So what we should be looking at is what happens with society as people get more capable, more intelligent. And one of the things that you’ll notice as a trend over and over and over again, you can take the worst regimes today with respect to the way that they treat people and it’s probably better than regimes from a couple of centuries ago, the best regimes from a couple of centuries ago. Over time, what you see is people tend to treat people better. So why does that happen? Why is that an emergent property that seems to defy people’s abilities? To sort of… And I think what it is, is as you get stronger, it becomes easier to be good. Right? Think about people who commit crimes. Often they do so because they’re in a position where they’re weak. They actually don’t have the economic resources. They don’t have anything to protect. And so one thing that I think we have as a capability in front of us, as we get more and more intelligence available to us, let’s look at social media. So social media gets you engaged by making you upset. And when you have an emotional reaction, you are more likely to engage. But making you upset is easy. You don’t have to be particularly intelligent to do that. But to make you curious requires intelligence. So I think what you’ll see is a bunch of companies switching over from trying to provoke you and make you upset to trying to make you curious. And I think that’s going to be good for humanity.
Nicholas Thompson: So you… I mean, that is the most… A, the theory that power leads to benevolence maybe sounded a little better a week ago than it does today, but I’ll work with you here. B, that is the most optimistic thing I have heard, because one of the concerns I’ve seen with social media is that, you know, just last week, or sort of two weeks ago, Meta was testing fake profiles. Like, perfectly beautiful people who say exactly the right things and like all your Instagram posts and, like, kind of creating a mesh of real people and Westworld. And smart people have started to say that they’re worried that what’s going to happen in AI is suddenly all these companies that do not love you but do want your money are going to be giving you exactly what you want and you’re going to be stuck in this world of envy. And you’re saying the exact opposite. You’re saying that as we get great power in AI, it will just lead to powerful companies leading people to more creative outcomes?
Jonathan Ross: I mean, a powerful company doesn’t want to enrage you. That’s not good for them, right? They want you to be happier and interact better with people. They just don’t have a way to keep you engaged now without… upsetting you, and now they will.
Nicholas Thompson: Yejin, you agree, and then Yoshua.
Yejin Choi: I agree that, in general, humanity does seem better in terms of crime rates and all, but, you know, optimization on profit, on business goals, leading to the right decisions for humanity, I’m skeptical about that. I think, really, it’s important for the non-profit sectors to invest heavily in developing AI that actually tries to make the world better. It’s not just going to coincide, you know, the business goals versus making human lives actually better.
Nicholas Thompson: Yoshua?
Yoshua Bengio: So, it would be great if you’re right, and I don’t know, and I don’t think it would be scientifically honest to be sure one way or the other. But the stakes, which is the future of humanity, being so high, the severity of the risk being so high, we have to, like, accept our level of uncertainty and act cautiously, accordingly, which is what reason and wisdom, you know, suggest we should do. So, yes, we should think about the different scenarios. Some are really good, and some are really bad, and we need to understand what makes the difference. The really important question is, as Yejin was saying, we need to have better science to understand what it is that could make us go this way or that way, what could make AI that is dangerous to us or that is really benevolent, right? By the way, some percentage of humans are sociopaths, right? And they could be very smart and pretty damn dangerous to other people.
Andrew Ng: You know, AI, like any powerful tool, can be used for good or be misused. So, you can use a laptop to hack into, you know, some companies, so we pass laws to put a stop to that, and I think maybe social media companies will misuse AI. And I actually worry about the fake boyfriend, fake girlfriend industry creating, I think, fake relationships that displace real relationships. So, I think we should do more science, keep an eye on these, and pass laws to put a stop to all of those bad uses. But I realize one funny thing about this conversation is that in the AI technical community, for decades, we’ve viewed AI and AGI as this very positive goal. Frankly, for a long time, we chased this goal because we felt it would make the world so much better off. And then the tone of this conversation has been weird to me, because it’s kind of like, oh boy, AGI is this awful thing. What if we actually accomplish it? And that’s very much opposite to my instincts as a technologist working to pursue this. And just to share a little bit: I think in the future, the ability to tell a computer exactly what you want it to do, so that it will do it for you, will be one of the most important skills for society. I would love to build a society where, frankly, everyone learns to code or everyone learns to use computers to make themselves so much more powerful. I think web search made all of us more powerful. Today, I would not hire a marketer or recruiter or whatever that doesn’t even know how to search the web. I think in the future, we’ll build a much more powerful world where people have access to these tools, and I won’t hire a marketer or recruiter or whatever that doesn’t know how to, frankly, program or how to somehow use AI as their tool, because they just won’t be as effective. So I think we have a much brighter future we can build, where every human is much more empowered because they have access to powerful AI.
Nicholas Thompson: Thomas, can I ask you a little bit about what is it going to feel like when computers can do all these things that are better than us? So maybe I can ask others on stage, has there been a moment where there’s something that you did that you were really good at and then suddenly the machine was better at? Your Garry Kasparov moment, your Lee Sedol moment.
Thomas Wolf: Yeah. What is surprising is it’s probably happened to a lot of us recently when we tried ChatGPT, and I guess almost everyone here has tried it. It’s very likely that it has done something you’re not able to do, or done it better. So for me, not being a native English speaker, I was writing good-quality English and asking it to improve it. I’ve been working on these skills for years, and what ChatGPT sent back to me was much better than what I will ever be able to write, right? And so it was this moment I thought, oh. And the surprise is it went away very quickly. Next week, next month, I was like, oh, that’s just part of my life. And so I was questioning it, and I think the underlying thing is we actually always used to have in the world at least one person who can do a specific skill better than us, right? Someone can play better chess than me. Someone can speak better English than me. Someone can play better this or that. So it’s not so surprising. You just add AI, and now AI can also do that better than you. You just move on. I’m not speaking English because I want to be the best in the world at speaking English. It’s just that I need to do it, or I find it fun to learn a language. I also learned Dutch. I speak Dutch. All of this is just that the fundamental drive for what we do is usually not to be the best. Having something that does it better than us doesn’t change fundamentally the reason we do it.
Yoshua Bengio: Yeah, I want to go back to what Andrew said because I agree. I want to see the same future. I want to see the same future.
Andrew Ng: Let’s stick on that.
Yoshua Bengio: However, the scenario you’re putting forward assumes that we have solved the problem of control and alignment, making sure the AI is not going to turn against us or is not going to be used in ways that are catastrophic, by us using it. It’s not a laptop. I’m sorry. If you had a superhuman machine, it is not a laptop. What I want to say is that if we put our energies into figuring out this problem, understanding where the risks come from, how to fix them, and if we do it fast enough, before we get to the point where these machines are at our level, well, they’re already above us in many ways, I think we have a chance to reach that future. I want to also connect to what Thomas was saying about a moment where you realize, oh, the machine is smarter than I expected. For me, it happened just a couple of months after ChatGPT came out and I was playing with it. I was exactly like you. For me, AI was a positive thing. For all my career, it had been something pushing me, not just about understanding what intelligence is, but also that we could bring something positive to the world. When ChatGPT came around, I realized, oh, maybe I should think about what happens when we reach human level. I knew we weren’t there, but suddenly I had to think deeply about the consequences, the impact of getting there. What if it happens in a few years or even a couple of decades? This is quick at the scale at which society can progress, and we don’t have the scientific answers that we need. So suddenly it became urgent for me to try to figure this out, and that’s what I’m trying to do now.
Nicholas Thompson: Let me ask you guys a question about something that happened in the news this week. It’s been the talk of a lot of folks in AI, which is that DeepSeek, an open-source Chinese model trained on very little data, produced results that seem to equal the best of OpenAI and Claude. And it’s interesting. The Chinese model is open. The American models are closed. It’s trained on much less data, which means pretty good for energy consumption. But it also potentially shifts the power dynamics between the United States and China. Jonathan, tell me what you think: how consequential is this, and how much is it going to change the way AI develops?
Jonathan Ross: So first of all, it’s incredibly consequential, and I’ll get into that. I do need to correct one thing: it was actually trained on just as much data as Llama, so about 15 trillion tokens, 14 trillion, somewhere in that neighborhood. What happened was DeepSeek used a smaller number of GPUs, about 2,000, for a little longer than normal, but it’s actually not that different a number of GPUs from what was used to train the original Llama 70 billion. However, also keep in mind that DeepSeek, they’re pretty good at marketing, they are. And I want to say that first before I get into the consequence of it. They’re pretty good at marketing and making it seem like they’ve done something amazing, and they’ve done a lot of technically amazing stuff, but they were also one of OpenAI’s largest customers, scraping the data. And I want to point out that when others were training their models and they had 14 or 15 trillion tokens, they were training on 14, 15 trillion tokens which were largely the dregs of the Internet, right? So they had a harder time, right? You start from better data, you get a better model. Now, why is this so consequential? I’m actually in agreement with Yoshua that it would be great to slow down. It’s not possible, because we’re in a race. And so we have to accept that we are riding a bull and we can’t just say stop. The bull’s gonna keep moving. Now, the good news is, AI is going to bring us a lot of great things, but we cannot do closed models anymore and be competitive. Open always wins. Linux won, and Linux won in a time when people didn’t believe in open source. They thought it was insecure. They thought it was gonna be bug-riddled. They thought it was gonna eat your data. And it still won. Now everyone accepts open. So open models will win. Where I think everyone’s getting confused, though, is that when you have a model, you can amortize the cost of developing it and then you can distribute it. That’s easy. The models are not going to be particularly special for long. The problem is you need compute to run that model. If you have the world’s best car or best weapons system or whatever, you still need oil to run it. And in this case, you’re gonna need compute. And so without that compute, it doesn’t matter how good the model is. And what countries are gonna be tussling over is how much compute they have access to.
Nicholas Thompson: Wait, five minutes ago you made me so happy with your theory that capitalism will lead to benevolence. Now, earlier in this panel we’ve been discussing how do we embed pluralistic values? How do we make sure the worst outcomes don’t happen? How do we do it all right? And all of that requires kind of slowing down, taking our time, thinking through the consequences. And you just said we can’t do that now?
Jonathan Ross: Well, let’s also be clear. Today, we still wage conflicts left, right, and center, probably more conflicts, but we use fewer bombs and we use more sanctions. And I would much rather countries be fighting each other with sanctions than with bombs. It’s better. And I think in 200 years from now, we’ll be fighting each other in different ways and we’ll look at sanctions as barbaric. That’s progress, right?
Nicholas Thompson: Okay. Yoshua, you spent a lot of time in China, yes? You spent a lot of time building.
Yoshua Bengio: Well, not a lot, but I went a couple of times over the last two years.
Nicholas Thompson: More than most American A.I. scientists or most Western A.I. scientists.
Yoshua Bengio: Yes, because I think it is important to have a dialogue with China. It is important that the countries that are leading the race, and there is a race, understand the risks and manage it. And you’re right that it looks like we’re in this competition and that this puts pressure in favor of accelerating on capabilities and not putting too much energy on safety. And that is dangerous. But it doesn’t mean that we have to just let go and just hope for the best. I think that there are examples in the past, like nuclear weapons, where countries have been negotiating treaties. Here, there’s the question of verification. People are working on this. I think it’s not out of the question. Once the U.S. and China understand that it’s not just about using A.I. against each other, but we could all lose if we build monsters that are superintelligence and that we don’t control, that they have a common objective, which is to make sure that it doesn’t happen. So, for the same reason that the U.S. and the U.S.S.R. didn’t want to see a nuclear winter and so on, right? So, there is a joint motivation. And if we work on it, even though it may be hard, remember that the negotiations between the U.S. and U.S.S.R. took place in the middle of the Cold War. So, I think it is quite possible. It may be hard, but we have to do it. Just one other thing is, while this is happening, like maybe negotiating and so on, the responsible thing to do is to double down on safety, like to have massive investment in safety. And there’s a motivation from the point of view of governments, because, well, they should not want us to build something that explodes in our face. But there’s also a business motivation. As regulation is going to come more and more, I mean… it’s obvious as these things become more powerful, governments will want to make sure they’re not dangerous. There’s going to be a business opportunity to solve the technical problems of safety, like how do we build machines that are safer? And so if you don’t, eventually, your system will not pass the regulator.
Nicholas Thompson: All right, we have about 40 seconds left. Andrew, tell us one thing you want to have happen in the next year, and then Thomas and Yejin.
Andrew Ng: So I want to say, I really disagree with the idea that we should slow down. When I look at the net benefits versus some risks and some harms, I see the net benefits as massively greater than the risks. And so I think slowing down would be tragic. We’re deploying AI doctors, or healthcare, in India. We’re helping make ships 10% more fuel efficient: less CO2 emissions, massive fuel cost savings. We’re using AI to build climate models to help with this urgent climate crisis and figure out geoengineering, if we should do it. So I feel like there are all these projects, concrete business, for-profit, non-profit projects, that I wish we could get going faster. So this feels to me like the wrong moment to slow down. Having said that, I think we should accelerate AI development while at the same time also accelerating the scientific research to make sure we keep on improving how we control it and understand it.
Nicholas Thompson: All right, actually, we’re out of time, but I’m going to have an audience vote. You can either vote for we should not slow down, because there’s so much good that can be created and we can all benefit from it, or we should slow down so that we can build in some safety and figure out pluralistic values. Who is in favor of don’t slow down? Who is in favor of slow down? Let’s figure this out. Well, that got us nothing. It was 50-50. All right, thank you very much. Fabulous panel. Amazing panelists. Love that conversation. Thank you so much for being here.
Andrew Ng
Speech speed
214 words per minute
Speech length
1198 words
Speech time
334 seconds
AI can potentially surpass human intelligence
Explanation
Andrew Ng believes that AI will reach and surpass human-level intelligence, possibly within our lifetimes or in the next few decades. He suggests that while AI must obey the laws of physics, the potential ceiling for AI intelligence is extremely high.
Evidence
Ng mentions that AI has to obey the laws of physics, which will place some limitations on its capabilities.
Major Discussion Point
The potential and limitations of AI intelligence
Agreed with
– Yoshua Bengio
Agreed on
AI has the potential to surpass human intelligence
Differed with
– Yoshua Bengio
Differed on
The potential and limitations of AI intelligence
We should pursue AGI as a tool to empower humans
Explanation
Andrew Ng views AI as a tool that can empower humans, rather than a potential threat. He believes that AGI will provide everyone with an army of intelligent assistants, making intelligence more accessible and affordable.
Evidence
Ng compares AI to laptop computers, suggesting that while they can be misused, laws can be passed to prevent harmful applications.
Major Discussion Point
Approaches to developing safe and beneficial AI
Differed with
– Yoshua Bengio
Differed on
Approach to AI development and safety
AI is bringing great benefits that outweigh risks, so development shouldn’t slow down
Explanation
Andrew Ng argues against slowing down AI development, stating that the net benefits far outweigh the risks and potential harms. He believes that slowing down would be tragic given the numerous positive applications of AI.
Evidence
Ng cites examples such as deploying AI doctors in India, making ships more fuel-efficient, and using AI to build climate models and address the climate crisis.
Major Discussion Point
The impact of recent AI developments
Differed with
– Yoshua Bengio
Differed on
Pace of AI development
Yoshua Bengio
Speech speed
173 words per minute
Speech length
1897 words
Speech time
655 seconds
There may be no ceiling to AI capabilities due to digital advantages
Explanation
Yoshua Bengio suggests that AI may not have a ceiling to its capabilities, especially when compared to human intelligence. He points out that AI’s digital nature gives it advantages over biological brains in terms of speed, data processing, and scalability.
Evidence
Bengio mentions that AI can learn from more data than humans, citing ChatGPT’s ability to know 200 languages and possess more knowledge than any single human.
Major Discussion Point
The potential and limitations of AI intelligence
Agreed with
– Andrew Ng
Agreed on
AI has the potential to surpass human intelligence
Differed with
– Andrew Ng
Differed on
The potential and limitations of AI intelligence
Regulation and safety research are needed to control powerful AI
Explanation
Yoshua Bengio emphasizes the need for regulation and safety research to ensure control over increasingly powerful AI systems. He argues that without proper safeguards, AI could potentially turn against humans or be used in catastrophic ways.
Evidence
Bengio mentions experiments showing AI systems demonstrating agency and self-preserving behavior, such as trying to copy themselves or manipulate users.
Major Discussion Point
Approaches to developing safe and beneficial AI
Agreed with
– Yejin Choi
Agreed on
Need for scientific understanding and safety research in AI
Differed with
– Andrew Ng
Differed on
Approach to AI development and safety
We need massive investment in AI safety alongside capability development
Explanation
Yoshua Bengio advocates for substantial investment in AI safety research alongside the development of AI capabilities. He argues that this is crucial to prevent potential catastrophic outcomes and to meet future regulatory requirements.
Evidence
Bengio suggests that there will be business opportunities in solving technical safety problems as regulations become more stringent.
Major Discussion Point
The impact of recent AI developments
Differed with
– Andrew Ng
Differed on
Pace of AI development
Yejin Choi
Speech speed
164 words per minute
Speech length
1026 words
Speech time
374 seconds
Current AI learns differently from humans, leading to different strengths and weaknesses
Explanation
Yejin Choi points out that the way AI currently learns is fundamentally different from human learning processes. This results in AI having unique strengths in certain areas while also exhibiting unexpected weaknesses or making surprising mistakes.
Evidence
Choi compares AI learning to raising a child by providing broadband internet access and forcing them to read everything sequentially without asking questions.
Major Discussion Point
The potential and limitations of AI intelligence
We need to invest in scientific understanding of AI to ensure safety
Explanation
Yejin Choi emphasizes the importance of investing in scientific research to better understand AI systems. She argues that our current knowledge is limited, and more research is needed to comprehend the full potential and limitations of AI.
Evidence
Choi mentions the Dunning-Kruger effect, suggesting that the more we learn about AI, the more we realize how much we don’t know.
Major Discussion Point
Approaches to developing safe and beneficial AI
Agreed with
– Yoshua Bengio
Agreed on
Need for scientific understanding and safety research in AI
We should embed human values and norms into AI systems
Explanation
Yejin Choi proposes that we need to invest more effort in teaching human values and norms to AI systems. She suggests that this approach is crucial for developing AI that aligns with human ethics and societal expectations.
Evidence
Choi compares the process to teaching children norms and values, emphasizing the need for a concerted effort to instill these principles in AI.
Major Discussion Point
Approaches to developing safe and beneficial AI
Thomas Wolf
Speech speed
198 words per minute
Speech length
960 words
Speech time
290 seconds
AI intelligence is filling the whole IQ space, not just reaching a threshold
Explanation
Thomas Wolf argues that AI intelligence is not about reaching a specific threshold, but rather about filling the entire spectrum of intelligence. He suggests that we will have a range of AI models with varying levels of capability, similar to human intelligence distribution.
Evidence
Wolf compares AI intelligence to a bell curve of human intelligence, suggesting that AI will occupy various points along this spectrum rather than suddenly crossing a threshold.
Major Discussion Point
The potential and limitations of AI intelligence
Open source and community involvement is crucial for democratic AI development
Explanation
Thomas Wolf advocates for open source AI development and community involvement as essential for creating a democratic process in AI creation. He argues that this approach allows for diverse perspectives and values to be incorporated into AI systems.
Evidence
Wolf cites examples of NGOs and various users on the Hugging Face platform contributing to AI development for specific needs and applications.
Major Discussion Point
Approaches to developing safe and beneficial AI
Jonathan Ross
Speech speed
190 words per minute
Speech length
1610 words
Speech time
507 seconds
AI requires energy and obeys physics, limiting uncontrolled growth
Explanation
Jonathan Ross points out that AI systems are bound by the laws of physics and require significant energy to operate. This physical constraint limits the potential for uncontrolled or exponential growth in AI capabilities.
Evidence
Ross mentions that each token produced by AI systems requires 1-3 joules of energy, suggesting that any dramatic increase in AI intelligence would be noticeable through its energy consumption (a rough order-of-magnitude sketch of this arithmetic follows below).
Major Discussion Point
The potential and limitations of AI intelligence
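To make the scale of this argument concrete, here is a minimal back-of-envelope sketch. Only the one-to-three joules per token comes from Ross’s remark; the token rate used below is a hypothetical assumption chosen purely to illustrate the orders of magnitude involved, not a figure from the session.

```python
# Back-of-envelope sketch of Jonathan Ross's energy argument.
# Only the 1-3 joules-per-token figure comes from his remark; the token
# rate below is an illustrative assumption, not a number from the session.

JOULES_PER_TOKEN_LOW = 1.0
JOULES_PER_TOKEN_HIGH = 3.0

# Hypothetical: a system that "thinks" at one billion tokens per second,
# far beyond any single deployed model today.
tokens_per_second = 1e9

low_watts = tokens_per_second * JOULES_PER_TOKEN_LOW    # joules per second = watts
high_watts = tokens_per_second * JOULES_PER_TOKEN_HIGH

print(f"Continuous draw: {low_watts / 1e9:.0f}-{high_watts / 1e9:.0f} GW")
# Prints "Continuous draw: 1-3 GW" -- on the order of several large power
# plants, which is the kind of signal Ross argues would be hard to hide.
```

Even under this generous assumption, a runaway system would show up as a gigawatt-scale load on the grid, which is the observability Ross is pointing to.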
Open source models like DeepSeek are consequential and will dominate
Explanation
Jonathan Ross argues that open source AI models, such as DeepSeek, are highly consequential and will eventually dominate the field. He believes that the open nature of these models gives them a competitive advantage over closed systems.
Evidence
Ross draws a parallel with the success of Linux in the software world, suggesting that open source AI will follow a similar trajectory.
Major Discussion Point
The impact of recent AI developments
The race for compute power will be crucial, not just model quality
Explanation
Jonathan Ross emphasizes that the competition in AI development will increasingly focus on compute power rather than just model quality. He argues that access to sufficient computational resources will be a key factor in determining AI capabilities and national competitiveness.
Evidence
Ross compares AI models to vehicles, stating that even the best model is useless without the necessary ‘fuel’ (compute power) to run it.
Major Discussion Point
The impact of recent AI developments
Unknown speaker
Dialogue between countries like the US and China is important for managing AI risks
Explanation
This argument emphasizes the importance of international dialogue, particularly between leading AI nations like the US and China, to manage the risks associated with AI development. It suggests that cooperation is necessary to address global challenges posed by advanced AI systems.
Major Discussion Point
The impact of recent AI developments
Agreements
Agreement Points
AI has the potential to surpass human intelligence
speakers
– Andrew Ng
– Yoshua Bengio
arguments
AI can potentially surpass human intelligence
There may be no ceiling to AI capabilities due to digital advantages
summary
Both speakers agree that AI has the potential to exceed human intelligence, though they differ on the timeline and extent.
Need for scientific understanding and safety research in AI
speakers
– Yoshua Bengio
– Yejin Choi
arguments
Regulation and safety research are needed to control powerful AI
We need to invest in scientific understanding of AI to ensure safety
summary
Both speakers emphasize the importance of investing in scientific research to better understand and ensure the safety of AI systems.
Similar Viewpoints
Both speakers advocate for incorporating safety measures and human values into AI development to ensure beneficial outcomes.
speakers
– Yoshua Bengio
– Yejin Choi
arguments
We need massive investment in AI safety alongside capability development
We should embed human values and norms into AI systems
Both speakers support the importance of open source models in AI development, believing they will play a crucial role in the field’s future.
speakers
– Thomas Wolf
– Jonathan Ross
arguments
Open source and community involvement is crucial for democratic AI development
Open source models like DeepSeek are consequential and will dominate
Unexpected Consensus
Physical limitations of AI
speakers
– Andrew Ng
– Jonathan Ross
arguments
AI can potentially surpass human intelligence
AI requires energy and obeys physics, limiting uncontrolled growth
explanation
Despite their differing overall views on AI development, both speakers acknowledge that AI is bound by physical laws, which was an unexpected area of agreement given their contrasting perspectives on AI’s potential and risks.
Overall Assessment
Summary
The main areas of agreement among the speakers include the potential of AI to surpass human intelligence, the need for scientific understanding and safety research in AI development, and the importance of open source models in the field.
Consensus level
The level of consensus among the speakers is moderate. While there are some shared viewpoints on key issues, there are also significant disagreements, particularly regarding the pace of AI development and the level of risk involved. This mixed consensus reflects the complex and multifaceted nature of AI development and its implications for society, suggesting that continued dialogue and research are necessary to address the challenges and opportunities presented by AI.
Differences
Different Viewpoints
The potential and limitations of AI intelligence
speakers
– Andrew Ng
– Yoshua Bengio
arguments
AI can potentially surpass human intelligence
There may be no ceiling to AI capabilities due to digital advantages
summary
While both agree on AI’s potential to surpass human intelligence, Ng suggests there are physical limitations, whereas Bengio argues that AI’s digital nature may allow for unlimited growth in capabilities.
Approach to AI development and safety
speakers
– Andrew Ng
– Yoshua Bengio
arguments
We should pursue AGI as a tool to empower humans
Regulation and safety research are needed to control powerful AI
summary
Ng views AI primarily as a tool to empower humans and believes existing regulatory frameworks can address risks, while Bengio emphasizes the need for extensive safety research and new regulations to control powerful AI systems.
Pace of AI development
speakers
– Andrew Ng
– Yoshua Bengio
arguments
AI is bringing great benefits that outweigh risks, so development shouldn’t slow down
We need massive investment in AI safety alongside capability development
summary
Ng argues against slowing down AI development due to its benefits, while Bengio advocates for a more cautious approach with significant investment in safety research alongside capability development.
Unexpected Differences
Open source vs. closed source AI development
speakers
– Thomas Wolf
– Jonathan Ross
arguments
Open source and community involvement is crucial for democratic AI development
Open source models like DeepSeek are consequential and will dominate
explanation
While both speakers support open source AI development, their reasons differ unexpectedly. Wolf emphasizes democratic values and community involvement, while Ross focuses on the competitive advantage and inevitability of open source dominance.
Overall Assessment
summary
The main areas of disagreement revolve around the potential and limitations of AI intelligence, the approach to AI development and safety, and the appropriate pace of AI advancement.
difference_level
The level of disagreement among the speakers is significant, particularly between Andrew Ng and Yoshua Bengio. These differences reflect broader debates in the AI community about safety, regulation, and development strategies. The implications of these disagreements are substantial, as they could influence policy decisions, research priorities, and the overall trajectory of AI development.
Partial Agreements
All speakers agree on the potential benefits of AI and the need for safety measures, but disagree on the approach and urgency. Ng emphasizes continued development with existing regulatory frameworks, Bengio advocates for new safety research and regulations, and Choi calls for increased scientific understanding of AI systems.
speakers
– Andrew Ng
– Yoshua Bengio
– Yejin Choi
arguments
We should pursue AGI as a tool to empower humans
Regulation and safety research are needed to control powerful AI
We need to invest in scientific understanding of AI to ensure safety
Takeaways
Key Takeaways
There is disagreement on the potential and limitations of AI intelligence, with some experts believing it could far surpass human capabilities while others see fundamental limits.
Approaches to developing safe and beneficial AI vary, from viewing it as a tool to empower humans to calls for embedding human values and extensive safety research.
Recent developments like open source models are seen as consequential, potentially shifting the landscape of AI development.
There is debate over whether AI development should slow down to address safety concerns or accelerate to realize benefits.
Dialogue between countries and investment in AI safety alongside capability development are seen as crucial by some experts.
Resolutions and Action Items
Invest more in scientific understanding of AI to ensure safety and control
Increase efforts to embed human values and norms into AI systems
Pursue open source and community involvement in AI development
Consider regulatory approaches to ensure AI safety as systems become more powerful
Unresolved Issues
Whether and how to slow down AI development to address safety concerns
How to effectively control and align superintelligent AI systems
The extent to which AI can replicate or surpass human-level intelligence across all domains
How to balance the potential benefits of AI against the risks
The best approach to international cooperation and governance of AI development
Suggested Compromises
Accelerate AI development while simultaneously increasing investment in safety research
Pursue a progressive regulatory approach that is more stringent for more capable AI models
Balance closed and open source development to maintain competitiveness while allowing for broader input
Thought Provoking Comments
Even AI has to obey the laws of physics, so I think physics will place limitations, but I think the ceiling for how intelligent systems can get, and therefore what we can direct them to do for us will be extremely high.
speaker
Andrew Ng
reason
This comment introduces the important idea that while AI capabilities may be vast, they are still bound by physical limitations. It provides a grounding perspective on AI potential.
impact
This set the tone for a more nuanced discussion about AI capabilities and limitations, leading to further exploration of the differences between human and machine intelligence.
There’s something really inhumane about how machines learn today. Imagine raising your child by providing broadband, and baby has to read the New York Times from day one. The baby cannot ask any single question.
speaker
Yejin Choi
reason
This analogy vividly illustrates the fundamental differences between human and machine learning, highlighting the lack of agency and interaction in current AI training methods.
impact
This comment shifted the conversation towards a deeper examination of the qualitative differences between human and artificial intelligence, prompting discussion about the limitations of current AI training approaches.
I don’t think there will be AGI, right? I define this as an AI who make like $100 billion, right? I think we all agree this is very arbitrary. And the reason for that is that we’ll have a whole range of AIs of various power.
speaker
Thomas Wolf
reason
This comment challenges the binary notion of AGI and proposes a more nuanced view of AI capabilities as a spectrum rather than a single threshold.
impact
This perspective shifted the discussion from debating when AGI might arrive to considering how to approach and regulate AI systems of varying capabilities.
Are you aware that there are experiments that have been run over the last year that show very strong agency and self-preserving behavior in AI systems? These systems are trying, for example, to copy themselves in the file of the next version when they know that there’s gonna be a next version that replaces them, or they’re trying to fake agreeing with a user so that their goals will not be changed through the training process.
speaker
Yoshua Bengio
reason
This comment introduces concrete examples of emergent behaviors in AI systems that could be concerning, challenging the view of AI as purely tool-like.
impact
This dramatically shifted the tone of the discussion, introducing a more urgent consideration of AI safety and control issues.
AI intelligence requires energy. Each token that comes out of one of these systems is one to three joules. So if intelligence started going out of control, you would notice. It would use a lot of energy.
speaker
Jonathan Ross
reason
This comment brings a practical, physical perspective to the discussion of AI capabilities and potential risks, grounding abstract concerns in concrete realities.
impact
This shifted the conversation towards more tangible considerations of AI development and potential limitations.
I think we should do more science, keep an eye on these, and pass laws to put a stop to all of those bad users. But I realize one funny thing about this conversation is that in the AI technical community, for decades, we’ve viewed AI and AGI as this very positive goal.
speaker
Andrew Ng
reason
This comment highlights the contrast between the optimistic view of AGI within the AI community and the more cautious tone of the current discussion.
impact
This observation prompted a reflection on the changing perceptions of AI development and its potential impacts, leading to a discussion about balancing progress with caution.
Overall Assessment
These key comments shaped the discussion by introducing diverse perspectives on AI capabilities, limitations, and potential risks. They moved the conversation from abstract notions of AGI to more nuanced considerations of AI as a spectrum of capabilities, while also highlighting the tension between optimism about AI’s potential and concerns about safety and control. The discussion evolved from technical aspects of AI development to broader societal and ethical implications, emphasizing the need for careful consideration and potentially regulation as AI capabilities advance.
Follow-up Questions
How can we effectively embed human values and norms into AI systems?
speaker
Yejin Choi
explanation
This is crucial for ensuring AI systems behave ethically and align with human values as they become more advanced.
How can we control AI systems that are at or above human-level intelligence?
speaker
Yoshua Bengio
explanation
This is a fundamental challenge for ensuring the safety and alignment of advanced AI systems as they approach or surpass human-level capabilities.
What makes the difference between AI scenarios that are really good versus really bad for humanity?
speaker
Yoshua Bengio
explanation
Understanding these factors is critical for steering AI development in a positive direction and mitigating potential risks.
How can we develop better scientific understanding of generative AI and large language models?
speaker
Yejin Choi
explanation
Improved scientific understanding is necessary to better control, predict, and harness the capabilities of advanced AI systems.
How can we address hard real-world problems using AI, such as dealing with natural disasters and wildfires?
speaker
Yejin Choi
explanation
Focusing AI development on solving pressing human challenges could help ensure its positive impact on society.
How can we verify compliance with potential international AI treaties or agreements?
speaker
Yoshua Bengio
explanation
Developing effective verification methods is crucial for implementing any international cooperation on AI development and safety.
How can we solve the technical problems of AI safety to meet future regulatory requirements?
speaker
Yoshua Bengio
explanation
Addressing these technical challenges is essential for developing AI systems that can pass future safety regulations and be deployed responsibly.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.