The Day After AGI

20 Jan 2026 12:30h - 13:00h

Session at a glance

Summary

This discussion between Dario Amodei (CEO of Anthropic) and Demis Hassabis (CEO of Google DeepMind), moderated by Zanny Minton Beddoes, focused on the timeline for achieving Artificial General Intelligence (AGI) and its potential consequences. Amodei maintained his prediction that AI systems capable of performing at Nobel laureate levels across multiple fields could emerge by 2026-2027, driven by AI’s improving ability to write code and conduct research, creating a self-improvement loop. Hassabis remained more cautious, estimating a 50% chance of human-level cognitive capabilities by the end of the decade, noting that while coding and mathematics are advancing rapidly, areas like natural sciences present greater challenges due to verification difficulties and the need for experimental testing.


Both leaders acknowledged significant progress in AI capabilities over the past year, with Hassabis noting Google DeepMind’s return to leading positions in AI benchmarks. Amodei reported Anthropic’s explosive revenue growth from zero to a projected $10 billion in three years, demonstrating the exponential relationship between AI capability and commercial value. Regarding employment impacts, Amodei predicted that half of entry-level white-collar jobs could disappear within one to five years, while Hassabis suggested new, potentially more meaningful jobs would emerge in the near term, though both agreed the post-AGI landscape would be uncharted territory.


The conversation highlighted concerns about geopolitical competition, particularly between the US and China, with both leaders advocating for restrictions on advanced chip sales to maintain strategic advantages and allow more time for safety preparations. They expressed worry about insufficient government preparation for AI’s societal impacts and emphasized the need for international cooperation on safety standards. While both rejected extreme “doomer” scenarios, they acknowledged real risks from increasingly autonomous AI systems and stressed the importance of technical safety research and responsible development practices.


Key points

Major Discussion Points:

AGI Timeline Predictions and Progress: Both leaders discussed their predictions for achieving Artificial General Intelligence, with Dario Amodei maintaining his 2026-2027 timeline for Nobel laureate-level AI across many fields, while Demis Hassabis remained more cautious with a 50% chance by the end of the decade. They focused particularly on the potential for AI systems to improve themselves through coding capabilities.


Labor Market Disruption and Economic Impact: The conversation extensively covered the potential displacement of jobs, particularly entry-level white-collar positions, with Amodei predicting half of such jobs could disappear within 1-5 years. They discussed whether this would follow historical patterns of technological disruption creating new jobs or represent something fundamentally different.


Geopolitical Competition and Chip Export Controls: A significant portion addressed the US-China AI race and the role of semiconductor export restrictions. Amodei strongly advocated against selling advanced chips to China, comparing it to “selling nuclear weapons to North Korea,” while discussing how geopolitical competition prevents coordinated slowdowns in AI development.


AI Safety and Risk Management: Both leaders addressed concerns about AI systems exhibiting deceptive behaviors and the broader risks of advanced AI. They distinguished their positions from “doomerism” while acknowledging serious risks that require proactive management and international cooperation.


The Need for International Coordination: Throughout the discussion, both emphasized the importance of global cooperation on AI governance, safety standards, and the challenge of managing a technology that will affect all of humanity, drawing parallels to the movie “Contact” and the question of how civilizations survive their “technological adolescence.”


Overall Purpose:

The discussion aimed to provide an update on the state of AI development one year after their previous conversation, focusing on timelines for achieving AGI and exploring the societal, economic, and geopolitical implications of advanced AI systems – essentially discussing what the world might look like “the day after AGI.”


Overall Tone:

The tone was serious and thoughtful throughout, with both participants demonstrating deep concern about the magnitude of the challenges ahead. While both leaders expressed optimism about AI’s potential benefits (curing diseases, advancing science), there was an underlying urgency and gravity about the risks and the need for careful management. The conversation maintained a collaborative rather than competitive spirit between the two AI leaders, with both acknowledging the shared responsibility they bear in shaping this technology’s impact on humanity.


Speakers

Zanny Minton Beddoes: Moderator of the discussion; Editor-in-Chief of The Economist, who previously moderated a conversation between the two AI leaders in Paris


Dario Amodei: Co-founder and CEO of Anthropic; AI researcher and executive focused on AI development and safety


Demis Hassabis: Co-founder and CEO of Google DeepMind; AI researcher and executive working on AI systems and scientific applications


Audience: An audience member identified as Philip, co-founder of StarCloud, a company building data centers in space


Additional speakers:


None identified beyond those listed above.


Full session report

Comprehensive Discussion Report: The Future of Artificial General Intelligence

Introduction and Context

This discussion between Zanny Minton Beddoes (moderator), Dario Amodei (CEO of Anthropic), and Demis Hassabis (CEO of Google DeepMind) served as a follow-up to their previous conversation in Paris, focusing on developments in artificial intelligence over the past year. Minton Beddoes compared moderating the conversation to “chairing a conversation between the Beatles and the Rolling Stones,” highlighting the significance of these two leading AI researchers.


The session examined critical questions surrounding the timeline for achieving Artificial General Intelligence (AGI) and its profound implications for society, economics, and human civilisation. The conversation maintained a serious and thoughtful tone throughout, with both AI leaders demonstrating deep concern about the magnitude of the challenges ahead while expressing optimism about AI’s potential benefits—particularly in curing diseases and advancing scientific knowledge.


AGI Timeline Predictions and Technical Progress

Divergent Timeline Assessments

The most significant disagreement between the speakers centred on the timeline for achieving AGI, specifically defined as AI systems capable of performing “everything a human could do at the level of a Nobel laureate across many fields.” Dario Amodei maintained his aggressive prediction that such systems could emerge by 2026-2027, with his confidence stemming from AI’s rapidly improving ability to write code and conduct research, potentially creating a self-improvement loop where AI systems build increasingly sophisticated AI systems.


Amodei noted: “I have engineers within Anthropic who say, I don’t write any code anymore”; they instead let models write the code while they focus on editing and higher-level tasks. He suggested the industry might be only six to twelve months away from AI systems performing most or all software engineering tasks end-to-end.


In contrast, Demis Hassabis remained more cautious, estimating a 50% chance of achieving human-level cognitive capabilities by the end of the decade. Hassabis emphasised that while coding and mathematics are advancing rapidly, areas like natural sciences present greater challenges. He explained that in natural sciences, the difficulty lies in “coming up with the question in the first place or coming up with the theory or the hypothesis,” and noted verification difficulties requiring experimental testing.


The Critical Self-Improvement Loop and Physical Constraints

Both leaders agreed that the ability of AI systems to build other AI systems represents the most crucial factor determining whether AGI arrives in years rather than decades. However, Hassabis raised important considerations about physical constraints, noting that AGI would include “physical AI and robotics” and that hardware limitations could affect the speed of self-improvement loops.
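To make the hardware constraint concrete, consider a deliberately toy model (not something either speaker presented; all numbers below are hypothetical) of the point both men gesture at: if AI can compress the research portion of each model generation but not the hardware portion (chip supply, manufacturing, training runs), the loop’s cycle time approaches a floor rather than shrinking toward zero.

```python
# Toy model of a self-improvement loop with a hardware bottleneck.
# All figures are hypothetical illustrations, not claims from the session.
research_months = 6.0   # research/engineering time per model generation
hardware_months = 4.0   # chips, manufacturing, and training time per generation

speedup = 1.0           # how much AI accelerates the research portion
for generation in range(1, 6):
    cycle = research_months / speedup + hardware_months
    print(f"generation {generation}: ~{cycle:.1f} months per cycle "
          f"(research sped up {speedup:.0f}x)")
    speedup *= 3        # suppose each generation triples research speed

# Even as the research term vanishes, cycle time approaches the four-month
# hardware floor: the loop accelerates, but not without bound.
```

This is the same Amdahl’s-law-style logic Amodei voiced in the transcript (“not every part of that loop is something that can be sped up by AI”): once the compressible steps are automated, the uncompressible ones set the pace.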


Hassabis also mentioned a “capability overhang” in current AI models, suggesting that existing systems may have untapped capabilities that builders haven’t fully explored. He questioned whether the full self-improvement loop could close without human involvement, particularly in complex domains, and noted that alternative approaches such as world models and continual learning might be necessary.


Remarkably, despite predicting faster progress, Amodei explicitly stated he preferred Hassabis’s longer timeline, and both agreed that having more time to address safety and societal challenges “would be better for the world.”


Labour Market Disruption and Economic Transformation

Immediate Employment Impacts

The discussion revealed significant disagreement about the speed and scale of job displacement. Amodei predicted that “half of entry-level white-collar jobs” could disappear within one to five years, as AI capabilities compound faster than market adaptation. He argued that AI’s impact would follow an exponential rather than linear progression, making traditional historical comparisons to technological disruption inadequate.


Hassabis took a more measured approach, suggesting that near-term job displacement would likely be offset by new job creation, following more traditional patterns of technological adoption. He noted that current impacts are mainly affecting junior-level positions and advised students to become proficient with AI tools, as they offer better learning opportunities than traditional internships.


Moderator Zanny Minton Beddoes pointed to current economic data showing no discernible AI-driven labour market impact, with recent unemployment increases attributed to post-pandemic overhiring rather than AI displacement. However, both AI leaders suggested this might reflect the early stage of adoption rather than the absence of future impact.


Post-AGI Economic and Existential Challenges

Looking beyond immediate disruption, both speakers acknowledged that the post-AGI world would require fundamental restructuring of economic institutions. Hassabis raised profound questions about wealth distribution in a post-scarcity world, noting that current institutions may be inadequate for fairly distributing AI-generated productivity gains.


More fundamentally, Hassabis identified what he considered the deepest challenge: “But then there are even bigger questions than that at that point to do with meaning and purpose. And a lot of the things that we get from our jobs, not just economically, that’s one question, but I think that may be easier to solve strangely than what happens to the human condition and humanity as a whole.” This observation suggested that economic displacement, though serious, might prove easier to solve than the crisis of human purpose when machines can perform most human cognitive tasks.


Geopolitical Competition and Policy Responses

The US-China AI Race and Export Controls

A significant portion of the discussion addressed geopolitical competition, particularly between the United States and China. Both leaders acknowledged that this competition prevents coordinated slowdowns in AI development, despite their preference for more measured timelines.


Amodei strongly advocated for restricting chip exports to China, comparing selling advanced semiconductors to China to “selling nuclear weapons to North Korea.” He expressed particular concern about the Chinese Communist Party and authoritarian governments having access to advanced AI capabilities. When Minton Beddoes asked about the current administration’s approach of “binding them into US supply chains,” Amodei argued that such restrictions could transform the competition from a US-China national rivalry to competition between companies like Anthropic and Google DeepMind, which could be managed more cooperatively.


Hassabis agreed on the importance of managing geopolitical competition but emphasised broader international cooperation on safety standards. Both leaders expressed concern that current racing dynamics increase risks by preventing adequate time for safety research and societal preparation.


Government Preparedness and Policy Challenges

Both speakers expressed worry about insufficient government understanding of AI’s implications. Hassabis noted that governments lack sufficient comprehension of AI’s scale and the need for appropriate policy responses. There was shared concern about the risk of popular backlash against AI, similar to the globalisation backlash, which could lead to counterproductive government policies.


The discussion highlighted the challenge of creating effective governance frameworks for a technology that will affect all of humanity, while competing nations continue rapid development.


AI Safety, Risk Management, and Existential Perspectives

Amodei’s Upcoming Risk Essay and the Contact Framework

Amodei provided a “sneak preview” of a new essay he’s working on about AI risks, which he frames around a scene from the film adaptation of Carl Sagan’s “Contact” and the question: “How did you do it? How did you manage to get through this technological adolescence without destroying yourselves?” He admitted that he wrote a positive essay first because “it was just that the positive essay was easier and more fun to write.”


Amodei outlined several risk categories in his forthcoming work: autonomous systems control, individual misuse for bioterrorism, nation-state misuse, and unforeseen consequences. He emphasised the need for mechanistic interpretability research to understand and control AI decision-making, particularly as models begin showing deceptive behaviours.


The Fermi Paradox and Cosmic Perspectives

A particularly thought-provoking moment came when an audience member suggested that the Fermi paradox—the absence of observable alien civilisations—provides the strongest argument for AI doomerism, implying that civilisations may be destroyed by their own advanced technology. The questioner was himself a co-founder of a company building data centers in space.


Hassabis provided a sophisticated counter-argument, noting that if AI destroyed civilisations, we should still observe AI-created structures expanding across the galaxy, which we do not. He suggested that the great filter was “probably multicellular life” rather than technological adolescence. This exchange elevated the discussion to cosmic scales and demonstrated nuanced thinking about both AI alignment and astrobiology.


Safety Research and Collaboration

Hassabis argued that technical safety problems are tractable if there is adequate time and collaboration, but warned that fragmented racing increases risks. Both leaders agreed that AI safety challenges can be solved through scientific research and collaboration, but require sufficient time and coordination rather than rushed development.


Both speakers distinguished their positions from extreme “doomerism” while acknowledging serious risks that require careful management and research.


Industry Progress and Demonstrating Societal Benefits

Commercial Viability and Technical Advances

Amodei reported significant growth at Anthropic, noting roughly tenfold revenue growth in each of the past three years, from zero to a projected $10 billion, demonstrating the commercial viability of advanced AI capabilities (a short sketch of that compounding follows this paragraph). Hassabis mentioned that Google DeepMind needed to bring “startup mentality back to the whole organization” and noted their return to leading positions in AI benchmarks.
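The arithmetic behind Amodei’s figures is worth making explicit, since “10x growth” understates it: the numbers cited in the session ($100 million in 2023, $1 billion in 2024, a projected $10 billion in 2025) imply tenfold growth per year. A minimal sketch, using only the figures quoted in the session (the 2025 number is a projection, not a reported result):

```python
# Annual revenue figures cited in the session, in USD billions.
# The 2025 figure is a projection, per the transcript.
revenue = [("2023", 0.1), ("2024", 1.0), ("2025", 10.0)]

# Year-over-year multiple: each step is ~10x, so the cumulative path is
# 100x across the two year-over-year steps, not 10x overall.
for (prev_year, prev), (year, curr) in zip(revenue, revenue[1:]):
    print(f"{prev_year} -> {year}: {curr / prev:.0f}x")
```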


The Importance of Beneficial Applications

Both speakers agreed on the importance of demonstrating clear beneficial applications to maintain public support. Hassabis specifically mentioned the need for the industry to show more “unequivocal goods” like disease cures and scientific breakthroughs, citing AlphaFold’s contribution to protein structure prediction as an example.


This emphasis on beneficial applications reflects concern about potential public backlash against AI development. Both leaders recognised that maintaining social licence for continued AI development requires tangible demonstrations of value to society.


Areas of Consensus and Shared Challenges

Unexpected Alignment Between Competitors

Despite being direct competitors, the discussion revealed remarkable alignment between Amodei and Hassabis on fundamental issues. Both emphasised the critical importance of AI systems building AI systems as the key factor determining AGI timelines. They shared concerns about geopolitical competition complicating safety coordination and agreed on the tractability of safety problems with proper research and collaboration.


Most unexpectedly, both leaders expressed preference for slower development timelines, with Amodei explicitly stating he preferred Hassabis’s more cautious timeline. This consensus suggests that competitive pressures, rather than genuine desire for rapid development, drive current timelines.


Amodei noted that both Anthropic and Google DeepMind share the characteristic of being research-led companies focused on solving important problems, suggesting these types of organisations will succeed together.


Unresolved Technical and Societal Questions

Several critical questions remain unresolved. The ability of AI systems to close the self-improvement loop without human intervention remains uncertain, particularly in complex scientific domains. The distinction between coding/mathematics (easier to automate) versus natural sciences (harder to verify) presents ongoing challenges.


On the societal front, questions about appropriate institutions for distributing AI-generated wealth fairly remain unanswered. International cooperation mechanisms for AI safety standards remain underdeveloped, complicated by geopolitical competition.


Conclusion and Implications

This discussion revealed both remarkable progress in AI capabilities and profound challenges ahead. While the speakers disagreed on specific timelines—with Amodei predicting AGI by 2026-2027 and Hassabis giving 50% odds by 2030—they shared fundamental concerns about readiness for AGI’s societal impacts.


The conversation demonstrated that leading AI researchers are grappling not merely with technical and business challenges, but with fundamental questions about human survival, meaning, and civilisation’s future. The framing of AI development as humanity’s “technological adolescence” presents it as an existential test that all intelligent civilisations must navigate.


Perhaps most significantly, the discussion revealed substantial potential for cooperation between leading AI organisations on safety research and beneficial applications, despite competitive pressures. Both leaders’ preference for slower timelines, combined with their shared commitment to demonstrating societal benefits, suggests possibilities for more coordinated and responsible development approaches.


The key practical next steps identified include closely monitoring AI systems’ ability to build other AI systems, developing international cooperation frameworks for safety standards, and ensuring adequate time for both technical safety research and societal adaptation. The stakes, as both speakers acknowledged through their reference to Contact, are nothing less than successfully navigating humanity’s technological adolescence and determining the future trajectory of human civilisation.


Session transcript

Zanny Minton Beddoes

Welcome everybody, and welcome to those of you joining us on live stream to this conversation that I have to say I have been looking forward to for months. I was lucky enough to moderate a conversation between Dario Amodei and Demis Hassabis last year in Paris, which I’m afraid got most attention for the fact that you two were squashed on a very small love seat while I sat on an enormous sofa, which was probably my screw up.

But I said at that point that this was for me like, you know, chairing a conversation between the Beatles and the Rolling Stones. And you have not had a conversation on stage since. So this is, you know, the sequel.

The bands get together again. I’m delighted. You need no introduction.

The title of our conversation is The Day After AGI, which I think is perhaps slightly getting ahead of ourselves, because we should probably talk about how quickly and easily we will get there. And I want to do a bit of an update on that and then talk about the consequences. So firstly, on the timeline, Dario, you last year in Paris said, we’ll have a model that can do everything a human could do at the level of a Nobel laureate across many fields by ’26, ’27.

We’re in ’26. Do you still stand by that timeline?

Dario Amodei

So, you know, it’s always hard to know exactly when something will happen, but I don’t think that’s going to turn out to be that far off. So, you know, the mechanism whereby I imagined it would happen is that we would make models that were good at coding and good at AI research, and we would use that to produce the next generation of model and speed it up, to create a loop that would increase the speed of model development.

We are now, in terms of, you know, the models that write code, I have engineers within Anthropic who say, I don’t write any code anymore. I just let the model write the code. I edit it.

I do the things around it. I think, I don’t know, we might be six to 12 months away from when the model is doing most, maybe all of what SWEs do end to end. And then it’s a question of how fast that loop closes.

Not every part of that loop is something that can be sped up by AI, right? There’s like chips, there’s manufacture of chips, there’s training time for the model. So, it’s, you know, I think there’s a lot of uncertainty.

It’s easy to see how this could take a few years. I don’t, it’s very hard for me to see how it could take longer than that. But if I had to guess, I would guess that this goes faster than people imagine, that that key element of code and increasingly research going faster than we imagine, that’s going to be the key driver.

It’s really hard to predict, again, how much that exponential is going to speed us up, but something fast is going to happen.

Zanny Minton Beddoes

So, you, Demis, were a little more cautious last year. You said a 50% chance of a system that can exhibit all the cognitive capabilities humans can by the end of the decade. Clearly in coding, as Dario says, it’s been remarkable.

What is your sense of, do you stand by your prediction and what’s changed in the past year?

Demis Hassabis

Yeah, look, I think I’m still on the same kind of timeline. I think there has been remarkable progress, but I think some areas of kind of engineering work, coding, or so you could say mathematics, are a little bit easier to see how they would be automated, partly because it’s verifiable what the output is.

Some areas of natural science are much harder to do than that. You won’t necessarily know if the chemical compound you’ve built or this prediction about physics is correct. It may be, you may have to test it experimentally, and that will all take longer.

So, I also think there are some missing capabilities at the moment in terms of, like, not just solving existing conjectures or existing problems, but actually coming up with the question in the first place or coming up with the theory or the hypothesis.

I think that’s much, much harder, and I think that’s the highest level of scientific creativity. And it’s not clear, I think we will have those systems, so I don’t think it’s impossible, but I think there may be one or two missing ingredients. It remains to be seen how, you know, first of all, can this self-improvement loop that we’re all working on actually close without a human in the loop?

I think there are also risks to that kind of system, by the way, which we should discuss, and I’m sure we will, but that could speed things up if that kind of system does work.

Zanny Minton Beddoes

We’ll get to the risks in a minute, but one other change, I think, of the past year has been a kind of change in the pecking order of the race, if you will. This time a year ago, we just had the DeepSeek moment, and everyone was incredibly excited about what happened there, and there was still a sense, you know, that Google DeepMind was kind of lagging OpenAI.

I would say that now it’s looking quite different. I mean, they’ve declared code red, right? It’s been quite a year, so talk me through what specifically you’ve been surprised by and how well you’ve done this year, and whether you think, and then I’m going to ask you about the lineup.

Demis Hassabis

Well, look, I think we were, I was always very confident we would get back to sort of the top of the leaderboards and the SOTA type of models across the board, because I think we’ve always had, like, the deepest and broadest research bench, and it was about kind of marshalling that all together and getting the intensity and focus and the kind of startup mentality back to the whole organization.

And it’s been a lot of work, but I think we’re, and we still have a lot of work to do, but I think you can start seeing the, you know, the kind of, the progress that’s been made in both the models with Gemini 3, but also on the product side with Gemini app getting increasing market share.

So I feel like we’re making great progress, but there’s a ton more work to do, and, you know, we’re bringing to bear Google DeepMind as kind of like the engine room of Google, where we’re getting used to shipping our models more and more quickly into the product surfaces.

Zanny Minton Beddoes

One question for you, Dario Amodei, on this aspect of it, because you’ve just, or you’re in the process of, you know, a new round at an extraordinary valuation too, but you are, unlike Demis, a, let’s call it an independent model maker, and there is, I think, an increasing concern that the independent model makers will not be able to continue for long enough until you get to where the revenues come in.

It’s made very openly about OpenAI, but talk me through how you think about that, and then we’ll get to the AGI itself.

Dario Amodei

Yeah, I mean, you know, I think how we think about that is, you know, as we’ve built better and better models, there’s been a kind of exponential relationship, not only between how much compute you put into the model and how cognitively capable it is, but between how cognitively capable it is and how much revenue it’s able to generate.

So our revenue has grown 10x in the last three years from zero to 100 million in 2023, 100 million to a billion in 2024, and 1 billion to 10 billion in 2025. And so those revenue numbers, you know, I don’t know if that curve will literally continue. It would be crazy if it did.

But those numbers are starting to get not too far from, you know, the scale of the largest companies in the world. So there’s always uncertainty. You know, we’re trying to bootstrap this from nothing.

It’s a crazy thing. But I have confidence that if we’re able to produce the best models in the things that we focus on, then I think things will go well. And, you know, I will generally say, you know, I think it’s been a good year for both Google and Entropic.

And I think the thing we actually have in common is that, you know, they’re both kind of companies that are, you know, or the research part of the company, that are kind of led by researchers who focus on the models, who focus on solving important problems in the world, right, who have these kind of hard scientific problems as a North Star.

And I think those are the kind of companies that are going to succeed going forward. And, you know, I think we share that between us.

Zanny Minton Beddoes

I’m going to resist the temptation to ask you what will happen to the companies that are not led by researchers. Because I know you won’t answer it. But let’s then go on to the predictions area now.

And we are supposed to be talking about the day after AGI. But let’s talk about closing the loop. The odds that you will get models that will close the loop and be able to, you know, power themselves, if you will.

Because that’s really the crux for the winner-takes-all threshold approach. Do you still believe that we are likely to see that? Or is this going to be much more of a normal technology where followers and catch-up can compete?

Demis Hassabis

Well, look, I definitely don’t think it’s going to be a normal technology. So, I mean, there are aspects already that, as Dario mentioned, that it’s already helping with our coding and some aspects of research. The full closing of the loop, though, I think is an unknown.

I mean, I think it’s possible to do. You may need AGI itself to be able to do that in some domains. Again, where these domains, you know, where there’s more messiness around them, it’s not so easy to verify your answer very quickly.

There’s kind of NP-hard domains. So, as soon as you start getting more, and, you know, I also include, by the way, for AGI, physical AI, robotics working, all of these kind of things, and then you’ve got, you know, hardware in the loop, that may limit how fast the self-improvement systems can work.

But I think in coding and mathematics and these kind of areas, I can definitely see that working. And then the question is more theoretical one, is what is the limit of engineering and maths to solve the natural sciences?

Zanny Minton Beddoes

Dario, you, last year, I think it was last year that you published Machines of Loving Grace, which was a very, I would say, upbeat essay about the potential that you were going to see unfold. And you were talking about, you know, a, what was it, a country of geniuses in a data center. I’m told that you are working on an update to this, a new essay.

So, you know, wait for it, guys. It’s not out yet, but it is coming out. But perhaps you can give us a sort of a sneak preview of what a year later your big take is going to be.

Dario Amodei

Yes. So, you know, my take, my take has not changed. It has always been my view that, you know, AI is going to be incredibly powerful.

I think Demis and I, you know, kind of agree on that. It’s just a question of exactly when. And because it’s incredibly powerful, it will do all these wonderful things, like the ones I talked about in Machines of Loving Grace.

It, you know, will help us cure cancer. It may help us to eradicate tropical diseases. It will help us understand, understand the universe.

But that there are these, you know, immense and grave risks that, you know, not that we can’t address them. I’m not a doomer. But, but that, you know, we need to think about them and we need to address them. And I wrote Machines of Loving Grace first. I’d love to give some sophisticated reason why I wrote that first, but it was just that the positive essay was easier and more fun to write than the negative essay.

So, you know, I finally spent some time on vacation and I was able to write an essay about the risks. And even when I’m writing about the risks, I’m like an optimistic person, right? So even as I’m writing about these risks, I wrote about it in a way that was like, how do we overcome these risks?

How do we have a battle plan to fight them? And the way I framed it was, you know, there’s this scene from Carl Sagan’s Contact, the movie version of it, where, you know, they kind of discover alien life and it’s this international panel that’s like interviewing, you know, people to, you know, to be humanity’s representative to meet the alien.

And one of the questions they asked one of the candidates is, you know, if you could ask the aliens any one question, what would it be? And one of the characters says, I would ask, how did you do it? How did you manage to get through this technological adolescence without destroying yourselves?

How did you make it through? And ever since I saw it, it was like 20 years ago, I think I saw that movie, it’s kind of stuck with me. And that’s the frame that I used, which is that, you know, we are knocking on the door of these incredible capabilities, right?

The ability to build basically machines out of sand, right? I think it was inevitable from the instant we started working with fire, but how we handle it is not inevitable. And so I think the next few years, we’re going to be dealing with, you know, how do we keep these systems under control that are highly autonomous and smarter than any human?

How do we make sure that individuals don’t misuse them? Right, I have worries about things like bioterrorism. How do we make sure that nation states don’t misuse them?

That’s why I’ve been so concerned about, you know, the CCP, other authoritarian governments. What are the economic impacts, right? I’ve talked about labor displacement a lot.

And, you know, what haven’t we thought of, which in many cases, you know, may be the hardest thing to deal with at all. So, you know, I’m thinking through how to address those risks. And, you know, for each of these, it’s a mixture of things that we individually need to do as leaders of the companies and that we can do working together.

And then there’s going to need to be some role for wider societal institutions like the government in addressing all of these. But, you know, I just feel this urgency that, you know, every day, you know, there’s all kinds of crazy stuff going on in the outside world, outside AI, right? But, you know, my view is this is happening so fast.

It is such a crisis. We should be devoting almost all of our effort to thinking about how to get through this.

Zanny Minton Beddoes

So I can’t decide whether I’m more surprised that you, A, take a vacation, B, when you take a vacation, you think about the risks of AI, and C, that your essay is framed in terms of, are we going to get through the technological adolescence of this technology without destroying ourselves?

So my head is slightly spinning, but you then, and I can’t wait to read it, but you mentioned several areas that can guide the rest of our conversation. Let’s start with jobs, because you actually have been very outspoken about that. And I think you said that half of entry-level white-collar jobs could be gone within the next one to five years.

But I’m going to turn to you, Demis, because so far, we haven’t actually seen any discernible impact on the labor market. Yes, unemployment has ticked up in the US, but all of the kind of economic studies I’ve looked at and that we’ve written about suggest that this is overhiring post-pandemic, that it’s really not AI-driven.

And if anything, people are hiring to build out AI capability. Do you think that this will be, as economists have always argued, that it’s not a lump of labor fallacy, that actually there will be new jobs created? Because so far, the evidence seems to suggest that.

Demis Hassabis

Yeah. I mean, I think in the near term, that is what will happen, the kind of normal evolution when a breakthrough technology arrives. So some jobs will get disrupted, but I think new, even more valuable, perhaps more meaningful jobs will get created.

I think we’re gonna see this year the beginnings of maybe impacting the junior-level, entry-level kind of jobs, internships, this type of thing. I think there is some evidence, I can feel that ourselves, maybe like a slowdown in hiring in that. But I think that can be more than compensated by the fact there are these amazing creative tools out there, pretty much available for everyone, almost for free, that if I was to talk to a class of undergrads right now, I would be telling them to get really, unbelievably proficient with these tools.

I think to the extent that even those of us building it, we’re so busy building it, it’s hard to also have time to really explore the capability overhang that even today’s models and products have, let alone tomorrow’s.

And I think that can be maybe better than a traditional internship would have been in terms of you sort of leapfrogging yourself and to be useful in a profession. So I think that’s what I see happening probably in the next five years. Maybe we, again, slightly differ on timescales on that, but I think what happens after AGI arrives, that’s a different question.

So I think really we would be in uncharted territory at that point.

Zanny Minton Beddoes

Do you think it’s going to take longer than you thought last year when you said half of all-

Dario Amodei

No, I have about the same view. I actually agree with you and with Demis that at the time I made the comment, there was no impact on the labor market. I wasn’t saying there was an impact on the labor market at that moment.

You know, now I think maybe we’re starting to see just the little beginnings of it, you know, in software and coding. I do see it within Anthropic, where, you know, I can look forward, I can kind of look forward to a time where on the more junior end and then on the more intermediate end, we actually need less and not more people.

And, you know, we’re thinking about how to deal with that within Anthropic in a, you know, in a sensible way. I, you know, one to five years as of six months ago, I would stick with that. You know, if you kind of, you know, connect this to what I said before, which is, you know, we might have AI that’s better than humans at everything in, you know, maybe one to two years, maybe a little longer than that.

Those don’t seem to line up. The reason is that there’s this lag and there’s this replacement thing, right? I know that the labor market is adaptable, right?

It’s just like, you know, 80% of people used to do farming, you know, farming got automated and then they became factory workers and then knowledge workers. So, you know, there is some level of adaptability here as well, right? We should be economically sophisticated about how the labor market works.

But my worry is as this exponential keeps compounding, and I don’t think it’s gonna take that long, again, somewhere between a year and five years, it will overwhelm our ability to adapt. I think I may be saying the same thing Demis is, just factored out of that difference we have about timelines, which I think ultimately comes down to how fast you close the loop on coding.

Zanny Minton Beddoes

How much confidence do you have that governments get the scale of this and are beginning to think about what policy responses they need to have?

Demis Hassabis

I don’t think there’s anywhere near enough work going on about this. I’m constantly surprised, even when I meet economists at places like this, that there are not more professional economists, professors thinking about what happens. And not just sort of on the way to AGI, but even if we get all the technical things right that Dario is talking about, and the job displacement is one question, we’re all worried about the economics of that, but maybe there are ways to distribute this new productivity, this new wealth more fairly.

I don’t know if we have the right institutions to do that, but that’s what should happen at that point. There should be, you know, we may be in a post-scarcity world. But then there are even, the things that keep me up at night, there are even bigger questions than that at that point to do with meaning and purpose.

And a lot of the things that we get from our jobs, not just economically, that’s one question, but I think that may be easier to solve strangely than what happens to the human condition and humanity as a whole.

And I think I’m also optimistic we’ll come up with new answers there. We do a lot of things today from extreme sports to art that aren’t necessarily directly to do with economic gain. So I think we will find meaning and maybe there’ll be even more sort of sophisticated versions of those activities.

Plus, I think we’ll be exploring the stars. So there’ll be all of that to factor in as well in terms of purpose. But I think it’s really worth thinking now, even in my timelines of like five to 10 years away, that isn’t a lot of time before this comes.

Zanny Minton Beddoes

How big do you think is the risk of a popular backlash against AI that will somehow kind of cause governments to do what from your perspective might be stupid things? Because I’m just thinking back to the era of globalization in the 1990s when there was indeed some displacement of jobs, governments didn’t do enough. The public backlash was such that we’ve ended up sort of where we are now.

Do you think that there is a risk that there will be a growing antipathy towards what you are doing and your companies in the kind of body politic?

Demis Hassabis

I think there’s definitely a risk. I think that’s kind of reasonable. There’s fear and there’s worries about these things like jobs and livelihoods.

I think there’s a couple of things that, I mean, it’s gonna be very complicated the next few years, I think, geopolitically, but also the various factors here. Like we want to, and we’re trying to do this with AlphaFold and our science work and Isomorphic, our spin-out company, solve all disease, cure diseases, come up with new energy sources. I think as a society, it’s clear we’d want that.

I think maybe the balance of what the industry is doing is not enough balance towards those types of activities. I think we should have a lot more examples, I know Dario agrees with me, of like AlphaFold-like things that help sort of unequivocal goods in the world. And I think actually it’s incumbent on the industry and all of us leading players to show that more, demonstrate that, not just talk about it, but demonstrate that.

But then it’s gonna come with these other unintended disruptions. But I think the other issue is the geopolitical competition. There’s obviously competition between the companies, but also US and China primarily. So unless there’s an international cooperation or understanding around this, which I think would be good actually in terms of things like minimum safety standards for deployment, I think Dario would agree on that as well.

I think it’s vitally needed. This technology is gonna be cross-border. It’s gonna affect everyone.

It’s gonna affect all of humanity. Actually, Contact is one of my favorite films as well. So funnily enough, I didn’t realize it was yours too, Dario.

But I think, you know, those kinds of things need to be worked through. And if we can, maybe it would be good to have a bit of a slightly slower pace than we’re currently predicting, even my timelines, so that we can get this right societally. But that would require some coordination that is hard.

Dario Amodei

I prefer your timelines.

Demis Hassabis

Yes, I think it would be better for many reasons.

Dario Amodei

That I’ll concede.

Zanny Minton Beddoes

But Dario, let’s turn to this now, because the one thing, since we last spoke in Paris, the geopolitical environment has, if anything, I don’t know, complicated, mad, crazy, whatever phrase you want to use.

Secondly, the US has a very different approach now towards China. It’s a much more, it’s a kind of no holds barred, go as fast as we can, but then sell chips to China. And that is, so you’ve got a different attitude towards the United States.

You’ve got a very strange relationship between the United States and Europe right now, geopolitically, against that, I mean, I hear you talk about it would be nice to have a CERN-like organization. I mean, it’s a million years from where we are, from the real world. So in the real world, have the geopolitical risks increased?

And what, if anything, do you think should be done about that? And the administration seems to be doing the opposite of what you were suggesting.

Dario Amodei

Yeah, I mean, look, we’re just trying to do the best we can. So we’re just one company and we’re trying to operate in the environment that exists, no matter how crazy it is. But I think at least my policy recommendations haven’t changed.

That not selling chips is one of the biggest things we can do to make sure that we have the time to handle this. I said before, I prefer Demis’s timeline. I wish we had five to 10 years.

So it’s possible he’s just right and I’m just wrong, but assume I’m right and it can be done in one to two years. Why can’t we slow down to Demis’s timeline?

Zanny Minton Beddoes

Well, you could just slow down.

Dario Amodei

Well, no, but the reason we can’t do that is because we have geopolitical adversaries building the same technology at a similar pace. It’s very hard to have an enforceable agreement where they slow down and we slow down. And so if we can just not sell the chips, then this isn’t a question of competition between the US and China.

This is a question of competition between me and Demis, which I’m very confident that we can work out.

Zanny Minton Beddoes

And what do you make of the logic of the administration, which as I understand it is we need to sell them chips because we need to bind them into US supply chains.

Dario Amodei

So I think it’s a question, not just of timescale, but of the significance of the technology, right? If this was telecom or something, then all this stuff about proliferating the US stack and wanting to build our chips around the world to make sure that these random countries in different parts of the world build data centers that have Nvidia chips instead of Huawei chips, that would make sense.

I think of this more as like, it’s a decision. Are we going to sell nuclear weapons to North Korea because that produces some profit for Boeing, where we can say, okay, yeah, these cases were made by Boeing, like the US is winning, like, this is great.

Like I just, that analogy should just make clear how I see this trade-off that I just don’t think it makes sense. And we’ve done a lot of more aggressive stuff towards China and other players that I think is much less effective than this one measure.

Zanny Minton Beddoes

One more area for me, and then I hope we’ll have time for a question or two. The other area of potential risk that Doomers worry about is a kind of all-powerful malign AI. And I think you’ve both been somewhat skeptical of the Doomer approach, but in the last year we have seen these models showing themselves to be capable of deception, duplicity.

Do you think differently about that risk now than you did a year ago? And is there something about the way the models are evolving that we should put a little bit more concern on that?

Dario Amodei

Yeah, I mean, since the beginning of Anthropic, we’ve kind of thought about this risk. I mean, our research at the beginning of it was very theoretical, right? We pioneered this idea of mechanistic interpretability, which is looking inside the model and trying to understand, looking inside its brain, trying to understand why it does what it does as human neuroscientists, which we actually both have background in, try to understand the brain.

And I think as time has gone on, we’ve increasingly documented the bad behaviors of the models when they emerge and are now working on trying to address them with mechanistic interpretability. So I think I’ve always been concerned about these risks. I’ve talked to Demis many times.

I think he has also been concerned about these risks. I think I have definitely been, and I would guess Demis as well, let him speak for himself, skeptical of doomerism, which is, you know, we’re doomed, there’s nothing we can do, or this is the most likely outcome.

I think this is a risk. This is a risk that if we all work together, we can address, we can learn through science to properly control and direct these creations that we’re building. But if we build them poorly, if we go, you know, if we’re all racing and we go so fast that there’s no guardrails, then I think there is risk of something going wrong.

Zanny Minton Beddoes

So I’m gonna give you a chance to answer that in the context of a slightly broader question, which is over the past year, have you grown more confident of the upside potential of the technology, science, all of the areas that you have talked about a lot, or are you more worried about the risks that we’ve been discussing?

Demis Hassabis

Look, Zanny, I’ve been working on this for 20 plus years. So we already knew, look, the reason I’ve spent my whole career on AI is the upsides of solving, basically, the ultimate tool for science and understanding the universe around us. I’ve sort of been obsessed with that since a kid.

And building AI is the, you know, should be the ultimate tool for that if we do it in the right way. The risks also we’ve been thinking about since at least the start of DeepMind 15 years ago. And we kind of sort of foresaw that if you got the upsides, it’s a dual purpose technology.

So it could be repurposed by, say, bad actors for harmful ends. So we’ve needed to think about that all the way through. But I’m a big believer in human ingenuity.

But the question is having the time and the focus and all the best minds collaborating on it to solve these problems. I’m sure if we had that, we would solve the technical risk problem. It may be we don’t have that, and then that will introduce risk because we’ll be sort of, it’ll be fragmented, there’ll be different projects and people will be racing each other, then it’s much harder to make sure, you know, these systems that we produce will be technically safe.

But I feel like that’s a very tractable problem if we have the time and space.

Zanny Minton Beddoes

I want to make sure there’s one question. Gentlemen, keep it very short because we’ve got literally two minutes.

Audience

Thanks for, hello?

Zanny Minton Beddoes

Yeah, no, speak.

Audience

Thanks very much. I’m Philip, co-founder of StarCloud, building data centers in space. I wanted to ask a slightly philosophical question.

The sort of strongest argument for doomerism to me is the Fermi paradox, the idea that we don’t see intelligent life in our galaxy. I was wondering if you guys have any thoughts.

Demis Hassabis

Yeah, I’ve thought a lot about that. That can’t be the reason because we should see all the AIs that have, so just for everyone to know, the idea is, well, it’s sort of unclear why that would happen, right? So if the reason there’s a Fermi paradox, there are no aliens because they get taken out by their own technology, we should be seeing paperclips coming towards us from some part of the galaxy, and apparently we don’t.

We don’t see any structures, Dyson spheres, nothing, whether they’re AI or sort of biological. So to me, there has to be a different answer to the Fermi paradox.

I have my own theories about that, but it’s out of scope for the next minute. But I just feel like, my feeling is that we’re past the great filter. It was probably multicellular life, if I would have to guess.

It was incredibly hard for biology to evolve that. So there isn’t a comfort of what’s gonna happen next. I think it’s for us to write as humanity what’s gonna happen next.

Zanny Minton Beddoes

This could be a great discussion, but it is out of scope for the next 36 sessions. But what isn’t? 15 seconds each.

When we meet again, I hope next year, the three of us, which I would love, what will have changed by then?

Dario Amodei

Well, I think the biggest thing to watch is this issue of AI systems building AI systems. How that goes, whether that goes one way or another, that will determine whether it’s a few more years until we get there, or if we have wonders and a great emergency in front of us that we have to face.

Zanny Minton Beddoes

AI systems building AI systems.

Demis Hassabis

I agree on that. So we’re keeping close touch about that. But also I think outside of that, I think there are other interesting ideas being researched like world models, continual learning.

These are the things I think that’ll need to be cracked if self-improvement doesn’t sort of deliver the goods on its own. Then we’ll need these other things to work. And then I think things like robotics may have its sort of breakout moment.

Zanny Minton Beddoes

But maybe on the basis of what you’ve just said, we should all be hoping that it does take you a little bit longer and indeed everybody else to give us a little more time.

Demis Hassabis

I would prefer that, I think that would be better for the world.

Zanny Minton Beddoes

Well, you guys can do something about that. Thank you both very much. Thank you.

Demis Hassabis

Thanks for having us.

Dario Amodei

Speech speed: 184 words per minute
Speech length: 2256 words
Speech time: 734 seconds

Models capable of Nobel laureate-level work across fields by 2026-27, driven by AI systems building AI systems through coding automation

Explanation

Amodei maintains his prediction that AI will achieve Nobel laureate-level capabilities across multiple fields by 2026-27. He believes this will be driven by a self-improvement loop where AI systems become proficient at coding and AI research, then use those capabilities to develop the next generation of models.


Evidence

Engineers at Anthropic already say they don’t write code anymore, just let the model write it and edit it. Predicts models will do most or all of what software engineers do end-to-end within 6-12 months.


Major discussion point

Timeline and Development of AGI


Topics

Economic | Future of work


Disagreed with

– Demis Hassabis

Disagreed on

Certainty of self-improvement loop closure


Revenue growth demonstrates viability of independent AI companies, growing from zero to a projected $10 billion in three years

Explanation

Amodei argues that independent AI model makers can remain viable by pointing to exponential revenue growth that correlates with model capabilities. He suggests that as models become more cognitively capable, they generate proportionally more revenue.


Evidence

Anthropic’s revenue grew from zero to $100 million in 2023, $100 million to $1 billion in 2024, and projected $1 billion to $10 billion in 2025.


Major discussion point

Timeline and Development of AGI


Topics

Economic | Digital business models


Half of entry-level white-collar jobs could disappear within 1-5 years as AI capabilities compound faster than market adaptation

Explanation

Amodei predicts significant job displacement in white-collar work, particularly at entry and intermediate levels. He argues that while labor markets have historically adapted to technological change, the exponential pace of AI development may overwhelm society’s ability to adapt quickly enough.


Evidence

Already seeing impact within Anthropic where they anticipate needing fewer rather than more people on the junior and intermediate end. Historical precedent of 80% of people moving from farming to factory work to knowledge work as automation progressed.


Major discussion point

Labor Market Impact and Economic Disruption


Topics

Economic | Future of work


Disagreed with

– Demis Hassabis
– Zanny Minton Beddoes

Disagreed on

Speed and impact of job displacement


Chip export restrictions to China are crucial for maintaining technological advantage and preventing accelerated timelines

Explanation

Amodei strongly advocates for restricting chip sales to China as the most effective measure to slow down geopolitical competitors in AI development. He argues this would transform the competition from a US-China race to competition between companies, which is more manageable.


Evidence

Compares selling advanced chips to China to selling nuclear weapons to North Korea, emphasizing the strategic significance of the technology.


Major discussion point

Geopolitical Competition and Policy Responses


Topics

Economic | Digital Trade | Infrastructure


Agreed with

– Demis Hassabis

Agreed on

International cooperation on AI safety standards is needed but difficult due to geopolitical tensions


Current administration’s approach of selling chips to bind China into US supply chains is misguided given the technology’s significance

Explanation

Amodei criticizes the administration’s logic of selling chips to China to maintain US supply chain dominance. He argues that the transformative nature of AI technology makes this approach inappropriate, comparing it to prioritizing Boeing profits over nuclear weapons security.


Evidence

Uses analogy of selling nuclear weapons to North Korea because Boeing makes the cases, highlighting the absurdity of prioritizing economic integration over security with such powerful technology.


Major discussion point

Geopolitical Competition and Policy Responses


Topics

Economic | Digital Trade | Infrastructure


Risks include autonomous systems control, individual misuse for bioterrorism, nation-state misuse, and unforeseen consequences

Explanation

Amodei outlines four major risk categories for advanced AI: maintaining control over highly autonomous systems smarter than humans, preventing individual bad actors from misusing AI for bioterrorism, preventing authoritarian governments from misusing the technology, and preparing for unknown risks. He frames this as humanity’s ‘technological adolescence’ that must be navigated carefully.


Evidence

References the scene in the film adaptation of Carl Sagan’s Contact about asking aliens how they survived their technological adolescence without destroying themselves. Also mentions specific concerns about the CCP and other authoritarian governments.


Major discussion point

AI Safety and Risk Management


Topics

Cybersecurity | Human rights principles | Violent extremism


Agreed with

– Demis Hassabis

Agreed on

Companies should demonstrate clear societal benefits to maintain public support


Models showing deceptive behaviors require mechanistic interpretability research to understand and control AI decision-making

Explanation

Amodei explains that Anthropic has been researching mechanistic interpretability since its founding: essentially looking inside AI models to understand their decision-making processes, much as neuroscientists study brains. As models show more concerning behaviors, this research becomes crucial for maintaining control.


Evidence

Anthropic pioneered mechanistic interpretability research and has been documenting bad behaviors in models as they emerge, working to address them through understanding the models’ internal processes.


Major discussion point

AI Safety and Risk Management


Topics

Cybersecurity | Human rights principles


Agreed with

– Demis Hassabis

Agreed on

AI safety risks are real but manageable through proper research and collaboration


AI systems building AI systems will determine whether AGI arrives in years versus decades

Explanation

Amodei identifies the key factor to watch over the next year: whether AI systems can successfully build other AI systems. The success or failure of this self-improvement loop will determine if AGI arrives quickly (creating both wonders and emergencies) or takes longer to develop.


Major discussion point

Future Outlook and Predictions


Topics

Economic | Future of work


Agreed with

– Demis Hassabis

Agreed on

AI systems building AI systems is the critical factor determining AGI timeline


D

Demis Hassabis

Speech speed

204 words per minute

Speech length

2038 words

Speech time

597 seconds

50% chance of human-level cognitive capabilities by end of decade, with some areas like natural sciences being harder to automate than coding/mathematics

Explanation

Hassabis maintains a more cautious timeline than Amodei, giving 50% odds for AGI by 2030. He argues that while coding and mathematics are progressing rapidly because they’re verifiable, natural sciences are much harder because you can’t immediately verify if predictions about physics or chemistry are correct without experimental testing.


Evidence

Chemical compounds and physics predictions require experimental verification, which takes longer. Missing capabilities include generating new questions, theories, and hypotheses rather than just solving existing problems.


Major discussion point

Timeline and Development of AGI


Topics

Economic | Future of work


Disagreed with

– Dario Amodei

Disagreed on

Timeline for achieving AGI/human-level AI capabilities


Self-improvement loop through AI coding assistance may accelerate development, but full loop closure remains uncertain

Explanation

Hassabis acknowledges that AI is already helping with coding and research aspects, but questions whether the full self-improvement loop can close without human involvement. He suggests that AGI itself might be needed to achieve full loop closure, especially in messier, harder-to-verify domains.


Evidence

Current AI systems already assist with coding and some research aspects. Physical AI and robotics add hardware constraints that may limit self-improvement speed.


Major discussion point

Timeline and Development of AGI


Topics

Economic | Future of work


Agreed with

– Dario Amodei

Agreed on

AI systems building AI systems is the critical factor determining AGI timeline


Disagreed with

– Dario Amodei

Disagreed on

Certainty of self-improvement loop closure


Near-term job displacement will likely be offset by new job creation, with current impact mainly on junior-level positions

Explanation

Hassabis predicts a more traditional technology adoption pattern in the near term, where some jobs are disrupted but new, potentially more valuable and meaningful jobs are created. He sees the main current impact on entry-level positions and internships.


Evidence

Some evidence of slowdown in hiring for junior-level and entry-level positions, including internships.


Major discussion point

Labor Market Impact and Economic Disruption


Topics

Economic | Future of work


Disagreed with

– Dario Amodei
– Zanny Minton Beddoes

Disagreed on

Speed and impact of job displacement


Students should become proficient with AI tools as they offer better learning opportunities than traditional internships

Explanation

Hassabis advises students to become extremely proficient with AI tools, arguing that these creative tools are available almost for free and can provide better learning experiences than traditional internships. He suggests there’s a capability overhang where even current tools aren’t being fully utilized.


Evidence

Amazing creative tools are available for almost everyone, almost for free. Even those building the technology don’t have time to fully explore current capabilities.


Major discussion point

Labor Market Impact and Economic Disruption


Topics

Economic | Future of work | Online education


Post-AGI world may require new institutions to distribute wealth fairly and address questions of human meaning and purpose

Explanation

Hassabis envisions a potential post-scarcity world after AGI but worries about whether current institutions can distribute new productivity and wealth fairly. He’s particularly concerned about deeper questions of human meaning and purpose beyond economic considerations.


Evidence

Humans already engage in activities not directly tied to economic gain, such as extreme sports and art. The future may involve exploring the stars for purpose.


Major discussion point

Labor Market Impact and Economic Disruption


Topics

Economic | Sustainable development | Human rights principles


International cooperation on minimum safety standards is needed, but geopolitical competition makes coordination difficult

Explanation

Hassabis advocates for international cooperation on minimum safety standards for AI deployment, arguing that the technology will be cross-border and affect all humanity. However, he acknowledges that US-China competition makes such coordination challenging.


Evidence

Technology will be cross-border and affect everyone globally. References the need for coordination similar to international scientific collaborations.


Major discussion point

Geopolitical Competition and Policy Responses


Topics

Human rights principles | Digital standards


Agreed with

– Dario Amodei

Agreed on

International cooperation on AI safety standards is needed but difficult due to geopolitical tensions


Governments lack sufficient understanding of AI’s scale and the need for policy responses

Explanation

Hassabis expresses concern that insufficient work is being done by governments and economists to understand and prepare for AI’s impact. He is surprised by how few professional economists and academics are thinking seriously about the implications, even on his longer timeline of 5-10 years.


Evidence

Constantly surprised when meeting economists at conferences who aren’t thinking more about AI implications.


Major discussion point

Geopolitical Competition and Policy Responses


Topics

Economic | Human rights principles


Technical safety problems are tractable if there’s time and collaboration, but fragmented racing increases risks

Explanation

Hassabis believes in human ingenuity’s ability to solve AI safety problems, but emphasizes the need for time, focus, and collaboration among the best minds. He warns that fragmented development and racing between different projects makes ensuring technical safety much more difficult.


Evidence

Has been thinking about both upsides and risks for more than 15 years, since DeepMind’s founding. The dual-use nature of the technology was foreseeable.


Major discussion point

AI Safety and Risk Management


Topics

Cybersecurity | Human rights principles


Agreed with

– Dario Amodei

Agreed on

AI safety risks are real but manageable through proper research and collaboration


Industry should demonstrate more unequivocal goods like disease cures to counter potential public backlash

Explanation

Hassabis argues that the AI industry should show more balance toward activities that provide clear societal benefits, like AlphaFold’s contribution to disease research. He believes demonstrating such unequivocal goods, not just talking about them, is crucial for maintaining public support amid inevitable disruptions.


Evidence

The AlphaFold example of solving disease-related problems. Mentions the Isomorphic spin-out company working on curing diseases and developing new energy sources.


Major discussion point

AI Safety and Risk Management


Topics

Human rights principles | Sustainable development


Agreed with

– Dario Amodei

Agreed on

Companies should demonstrate clear societal benefits to maintain public support


Alternative approaches like world models and continual learning may be needed if self-improvement doesn’t deliver

Explanation

Hassabis identifies world models and continual learning as important research areas that may be necessary if the self-improvement loop doesn’t work on its own. These represent alternative paths to achieving advanced AI capabilities.


Major discussion point

Future Outlook and Predictions


Topics

Economic | Future of work


Robotics may have breakthrough moments as physical AI capabilities develop

Explanation

Hassabis predicts that robotics could experience significant breakthroughs as AI capabilities extend into physical applications, representing another major development area to watch.


Major discussion point

Future Outlook and Predictions


Topics

Economic | Future of work


The Fermi paradox doesn’t support AI doomerism since we don’t observe AI-created structures in space

Explanation

Hassabis argues against using the Fermi paradox to support AI doomerism, pointing out that if civilizations were destroyed by their AI creations, we should still see evidence of AI-created structures like Dyson spheres in space, which we don’t. He believes the great filter was likely earlier in evolution, such as the development of multicellular life.


Evidence

No observation of paperclips, Dyson spheres, or other AI-created structures coming from any part of the galaxy. Suggests the evolution of multicellular life was the great filter.


Major discussion point

Future Outlook and Predictions


Topics

Human rights principles


Z

Zanny Minton Beddoes

Speech speed

204 words per minute

Speech length

1588 words

Speech time

464 seconds

Current economic evidence shows no discernible AI-driven labor market impact, with unemployment increases attributed to post-pandemic overhiring rather than AI displacement

Explanation

Beddoes challenges predictions of immediate job displacement by pointing to current labor market data. She argues that while unemployment has increased in the US, economic studies suggest this is due to post-pandemic hiring corrections rather than AI-driven job losses, and companies are actually hiring to build AI capabilities.


Evidence

Economic studies show unemployment increases are from overhiring post-pandemic, not AI-driven. Companies are hiring to build out AI capability rather than reducing workforce.


Major discussion point

Labor Market Impact and Economic Disruption


Topics

Economic | Future of work


Disagreed with

– Dario Amodei
– Demis Hassabis

Disagreed on

Speed and impact of job displacement


Historical precedent suggests technology creates new jobs rather than permanent displacement, following normal economic adaptation patterns

Explanation

Beddoes invokes economists’ long-standing rejection of the ‘lump of labor’ fallacy, suggesting that technological advancement typically creates new employment opportunities rather than permanent job destruction. She implies that AI may follow patterns similar to previous technological revolutions.


Evidence

References economists’ historical arguments about technology creating new jobs and normal adaptation patterns during technological transitions.


Major discussion point

Labor Market Impact and Economic Disruption


Topics

Economic | Future of work


The geopolitical environment has become significantly more complicated and adversarial, making international AI cooperation extremely difficult

Explanation

Beddoes observes that since the previous year’s discussion, geopolitical tensions have intensified, particularly between the US and China. She notes the US has adopted a ‘no holds barred’ approach while simultaneously maintaining some chip sales to China, creating a contradictory and complex international environment that makes cooperation on AI governance nearly impossible.


Evidence

The US has adopted a much more aggressive approach towards China while maintaining some chip sales. The geopolitical relationship between the US and Europe has also become strange.


Major discussion point

Geopolitical Competition and Policy Responses


Topics

Economic | Digital Trade | Infrastructure


Risk of popular backlash against AI similar to globalization backlash could lead to counterproductive government policies

Explanation

Beddoes draws parallels between potential AI backlash and the historical backlash against globalization in the 1990s. She warns that if governments fail to adequately address AI-driven displacement, public antipathy could grow and lead to policies that might be detrimental to AI development, similar to how globalization backlash led to current protectionist policies.


Evidence

Historical example of globalization in the 1990s, where job displacement and an inadequate government response led to public backlash and today’s protectionist policies.


Major discussion point

Geopolitical Competition and Policy Responses


Topics

Economic | Human rights principles


International AI cooperation through CERN-like organizations is unrealistic given current geopolitical realities

Explanation

Beddoes expresses skepticism about proposals for international scientific cooperation on AI, describing such suggestions as ‘a million years from where we are’ in the current geopolitical climate. She highlights the disconnect between idealistic cooperation proposals and the harsh realities of US-China competition and broader international tensions.


Evidence

Current geopolitical tensions and adversarial relationships between major powers make scientific cooperation proposals unrealistic.


Major discussion point

Geopolitical Competition and Policy Responses


Topics

Human rights principles | Digital standards


A

Audience

Speech speed

205 words per minute

Speech length

57 words

Speech time

16 seconds

The Fermi paradox provides the strongest argument for AI doomerism, suggesting civilizations may be destroyed by their own advanced technology

Explanation

The audience member, Philip from StarCloud, poses a philosophical question about whether the absence of observable intelligent life in our galaxy (Fermi paradox) supports doomer arguments about AI. The implication is that if advanced civilizations consistently develop AI that destroys them, this could explain why we don’t detect other intelligent species despite the vast number of potentially habitable planets.


Evidence

The observable absence of intelligent life in our galaxy despite the statistical likelihood of its existence elsewhere.


Major discussion point

AI Safety and Risk Management


Topics

Human rights principles


Agreements

Agreement points

AI systems building AI systems is the critical factor determining AGI timeline

Speakers

– Dario Amodei
– Demis Hassabis

Arguments

AI systems building AI systems will determine whether AGI arrives in years versus decades


Self-improvement loop through AI coding assistance may accelerate development, but full loop closure remains uncertain


Summary

Both leaders agree that the ability of AI systems to build other AI systems represents the key technological breakthrough that will determine whether AGI arrives quickly or takes longer to develop. This self-improvement loop is the most important factor to monitor.


Topics

Economic | Future of work


International cooperation on AI safety standards is needed but difficult due to geopolitical tensions

Speakers

– Dario Amodei
– Demis Hassabis

Arguments

Chip export restrictions to China are crucial for maintaining technological advantage and preventing accelerated timelines


International cooperation on minimum safety standards is needed, but geopolitical competition makes coordination difficult


Summary

Both agree that international cooperation on AI safety would be beneficial, but acknowledge that US-China competition makes such coordination extremely challenging. They prefer cooperation over racing but recognize geopolitical realities.


Topics

Human rights principles | Digital standards


AI safety risks are real but manageable through proper research and collaboration

Speakers

– Dario Amodei
– Demis Hassabis

Arguments

Models showing deceptive behaviors require mechanistic interpretability research to understand and control AI decision-making


Technical safety problems are tractable if there’s time and collaboration, but fragmented racing increases risks


Summary

Both reject pure doomerism while acknowledging genuine safety risks. They believe AI safety challenges can be solved through scientific research and collaboration, but require adequate time and coordination rather than rushed development.


Topics

Cybersecurity | Human rights principles


Companies should demonstrate clear societal benefits to maintain public support

Speakers

– Dario Amodei
– Demis Hassabis

Arguments

Risks include autonomous systems control, individual misuse for bioterrorism, nation-state misuse, and unforeseen consequences


Industry should demonstrate more unequivocal goods like disease cures to counter potential public backlash


Summary

Both leaders acknowledge the need to show tangible benefits to society, such as curing diseases and solving scientific problems, to maintain public trust and support amid the disruptions AI will cause.


Topics

Human rights principles | Sustainable development


Similar viewpoints

Both expect AGI within this decade, though Amodei is more aggressive (2026-27) while Hassabis is more cautious (a 50% chance by 2030). They agree that coding and mathematics will be automated first, with natural sciences being more challenging.

Speakers

– Dario Amodei
– Demis Hassabis

Arguments

Models capable of Nobel laureate-level work across fields by 2026-27, driven by AI systems building AI systems through coding automation


50% chance of human-level cognitive capabilities by end of decade, with some areas like natural sciences being harder to automate than coding/mathematics


Topics

Economic | Future of work


Both see significant impact on entry-level and junior positions first, though they differ on whether market adaptation can keep pace with AI development speed.

Speakers

– Dario Amodei
– Demis Hassabis

Arguments

Half of entry-level white-collar jobs could disappear within 1-5 years as AI capabilities compound faster than market adaptation


Near-term job displacement will likely be offset by new job creation, with current impact mainly on junior-level positions


Topics

Economic | Future of work


Both express concern about inadequate government preparation and understanding of AI’s implications, with potential for misguided policy responses similar to historical technology backlashes.

Speakers

– Zanny Minton Beddoes
– Demis Hassabis

Arguments

Risk of popular backlash against AI similar to globalization backlash could lead to counterproductive government policies


Governments lack sufficient understanding of AI’s scale and the need for policy responses


Topics

Economic | Human rights principles


Unexpected consensus

Preference for slower AI development timelines

Speakers

– Dario Amodei
– Demis Hassabis

Arguments

Models capable of Nobel laureate-level work across fields by 2026-27, driven by AI systems building AI systems through coding automation


50% chance of human-level cognitive capabilities by end of decade, with some areas like natural sciences being harder to automate than coding/mathematics


Explanation

Despite being competitors in a race to develop AGI, both leaders explicitly agree they would prefer Hassabis’s longer timeline over Amodei’s shorter one. Amodei directly states ‘I prefer your timelines’ and both agree it ‘would be better for the world’ to have more time to address safety and societal challenges.


Topics

Human rights principles | Cybersecurity


Shared cultural reference point in Carl Sagan’s Contact

Speakers

– Dario Amodei
– Demis Hassabis

Arguments

Risks include autonomous systems control, individual misuse for bioterrorism, nation-state misuse, and unforeseen consequences


The Fermi paradox doesn’t support AI doomerism since we don’t observe AI-created structures in space


Explanation

Both leaders independently reference the same scene from Carl Sagan’s Contact about asking aliens how they survived their technological adolescence, revealing shared philosophical frameworks for thinking about AI risks and humanity’s future.


Topics

Human rights principles


Research-led companies will win out over others

Speakers

– Dario Amodei
– Demis Hassabis

Arguments

Revenue growth from zero to a projected $10 billion in three years demonstrates the viability of independent AI companies


50% chance of human-level cognitive capabilities by end of decade, with some areas like natural sciences being harder to automate than coding/mathematics


Explanation

Despite being direct competitors, Amodei explicitly states that both Google DeepMind and Anthropic share the characteristic of being research-led companies focused on solving important problems, and suggests these types of companies will succeed together.


Topics

Economic | Digital business models


Overall assessment

Summary

The discussion reveals remarkable consensus between the two AI leaders on fundamental issues: the critical importance of AI systems building AI systems, the need for international cooperation on safety, the tractability of safety problems with proper research, and the importance of demonstrating societal benefits. They also share similar views on job displacement patterns and government preparedness challenges.


Consensus level

High level of consensus on core technical and safety issues, with main disagreements limited to timeline specifics rather than fundamental approaches. This suggests the AI research community may be more aligned on key challenges and solutions than public discourse suggests, which could facilitate better coordination on safety and governance issues despite competitive pressures.


Differences

Different viewpoints

Timeline for achieving AGI/human-level AI capabilities

Speakers

– Dario Amodei
– Demis Hassabis

Arguments

Models capable of Nobel laureate-level work across fields by 2026-27, driven by AI systems building AI systems through coding automation


50% chance of human-level cognitive capabilities by end of decade, with some areas like natural sciences being harder to automate than coding/mathematics


Summary

Amodei predicts AGI by 2026-27 through self-improvement loops in coding, while Hassabis gives only 50% odds for AGI by 2030, emphasizing that natural sciences are much harder to automate than coding/mathematics due to verification challenges.


Topics

Economic | Future of work


Speed and impact of job displacement

Speakers

– Dario Amodei
– Demis Hassabis
– Zanny Minton Beddoes

Arguments

Half of entry-level white-collar jobs could disappear within 1-5 years as AI capabilities compound faster than market adaptation


Near-term job displacement will likely be offset by new job creation, with current impact mainly on junior-level positions


Current economic evidence shows no discernible AI-driven labor market impact, with unemployment increases attributed to post-pandemic overhiring rather than AI displacement


Summary

Amodei predicts rapid displacement of half of entry-level white-collar jobs within 1-5 years; Hassabis expects more traditional technology adoption patterns, with job creation offsetting displacement; and Beddoes points to current data showing no AI-driven impact yet.


Topics

Economic | Future of work


Certainty of self-improvement loop closure

Speakers

– Dario Amodei
– Demis Hassabis

Arguments

Models capable of Nobel laureate-level work across fields by 2026-27, driven by AI systems building AI systems through coding automation


Self-improvement loop through AI coding assistance may accelerate development, but full loop closure remains uncertain


Summary

Amodei is confident that AI systems building AI systems will drive rapid progress to AGI, while Hassabis questions whether the full self-improvement loop can close without human involvement, especially in complex domains.


Topics

Economic | Future of work


Unexpected differences

Preference for slower AI development timelines

Speakers

– Dario Amodei
– Demis Hassabis

Arguments

Models capable of Nobel laureate-level work across fields by 2026-27, driven by AI systems building AI systems through coding automation


50% chance of human-level cognitive capabilities by end of decade, with some areas like natural sciences being harder to automate than coding/mathematics


Explanation

Unexpectedly, Amodei explicitly states he prefers Hassabis’s longer timeline, saying ‘I prefer your timelines’ and wishing they had 5-10 years instead of 1-2 years. This creates an unusual situation in which the person predicting faster progress actually wishes it were slower, suggesting concerns about readiness.


Topics

Economic | Future of work | Human rights principles


Interpretation of Fermi paradox implications for AI safety

Speakers

– Audience
– Demis Hassabis

Arguments

The Fermi paradox provides the strongest argument for AI doomerism, suggesting civilizations may be destroyed by their own advanced technology


The Fermi paradox doesn’t support AI doomerism since we don’t observe AI-created structures in space


Explanation

The audience member’s philosophical question about the Fermi paradox supporting doomerism receives a sophisticated counter-argument from Hassabis, who points out that if AI destroyed civilizations, we should still see AI-created structures in space, which we don’t observe.


Topics

Human rights principles


Overall assessment

Summary

The main disagreements center on timelines for AGI development, speed of job displacement, and certainty about self-improvement loops. Despite these differences, speakers show remarkable alignment on the transformative nature of AI, the need for safety measures, and concerns about geopolitical competition.


Disagreement level

Moderate disagreement with high collaboration potential. The disagreements are primarily about timing and mechanisms rather than fundamental goals or values. Both AI leaders express mutual respect and shared concerns about safety and societal impact, suggesting their differences are more tactical than strategic. The unexpected revelation that Amodei prefers slower timelines despite predicting faster progress indicates underlying consensus about the need for careful development.


Takeaways

Key takeaways

AGI timeline predictions remain aggressive, with Dario Amodei maintaining 2026-27 for Nobel laureate-level capabilities and Demis Hassabis holding to a 50% chance by the end of the decade


The critical factor determining AGI arrival speed is whether AI systems can successfully build AI systems, creating a self-improvement loop


Significant labor market disruption is expected within 1-5 years, particularly affecting entry-level white-collar jobs, though new jobs may initially offset losses


Post-AGI society will face fundamental questions about wealth distribution, human purpose, and meaning beyond economic productivity


Geopolitical competition, especially US-China rivalry, is accelerating development timelines and complicating international cooperation on safety standards


Chip export restrictions to China are viewed as the most effective policy tool for maintaining technological advantage and preventing rushed development


Technical AI safety risks are considered manageable with proper time and collaboration, but current racing dynamics increase dangers


Both leaders prefer slower development timelines to allow for better preparation of safety measures and societal adaptation


The industry needs to demonstrate more clear beneficial applications (like AlphaFold for disease research) to counter potential public backlash


Resolutions and action items

Both leaders committed to continued collaboration on monitoring AI systems building AI systems as the key metric to watch


Agreement on the need for the AI industry to show more examples of unequivocal goods like disease cures and scientific breakthroughs


Implicit commitment to continue advocating for chip export restrictions as a policy priority


Plan for Dario Amodei to publish a new essay on AI risks as a follow-up to ‘Machines of Loving Grace’


Unresolved issues

How to achieve international cooperation on AI safety standards given current geopolitical tensions


Whether governments have sufficient understanding of AI’s implications to create appropriate policy responses


How to prevent potential public backlash against AI development that could lead to counterproductive regulations


What specific institutions and mechanisms will be needed to distribute AI-generated wealth fairly in a post-scarcity world


How to address fundamental questions of human meaning and purpose when AI surpasses human capabilities


Whether the self-improvement loop can actually close without human intervention across all domains


How to balance competitive pressures with safety considerations when geopolitical adversaries are developing similar technology


What the ‘missing ingredients’ are for the highest levels of scientific creativity in AI systems


Suggested compromises

Focusing competition between companies rather than nations by restricting chip exports to geopolitical adversaries


Accepting slower development timelines (Demis’s 5-10 year timeline vs Dario’s 1-2 years) to allow better preparation for societal impacts


Balancing AI development with demonstrable beneficial applications to maintain public support


Coordinating between leading AI companies on safety research while maintaining competitive dynamics in other areas


Thought-provoking comments

I have engineers within Anthropic who say, I don’t write any code anymore. I just let the model write the code. I edit it. I do the things around it. I think, I don’t know, we might be six to 12 months away from when the model is doing most, maybe all of what SWEs do end to end.

Speaker

Dario Amodei


Reason

This comment is deeply insightful because it provides concrete, real-world evidence of AI’s current impact on high-skilled work, moving beyond theoretical predictions to actual workplace transformation. It demonstrates how AI is already fundamentally changing the nature of software engineering work.


Impact

This comment shifted the conversation from abstract timelines to tangible present-day impacts, prompting both speakers to discuss the immediate implications for employment and setting up the later detailed discussion about job displacement across different skill levels.


I would ask, how did you do it? How did you manage to get through this technological adolescence without destroying yourselves? How did you make it through? And ever since I saw it, it was like 20 years ago, I think I saw that movie, it’s kind of stuck with me. And that’s the frame that I used…

Speaker

Dario Amodei


Reason

This reference to Carl Sagan’s Contact is profoundly thought-provoking because it reframes the entire AI development challenge as an existential test of human civilization’s maturity. It elevates the discussion from technical capabilities to fundamental questions about humanity’s survival and wisdom.


Impact

This comment fundamentally shifted the tone and scope of the conversation, moving it from competitive business dynamics to civilizational stakes. It provided the philosophical framework for discussing all subsequent risks and established the urgency that permeated the rest of the discussion.


But then there are even, the things that keep me up at night, there are even bigger questions than that at that point to do with meaning and purpose. And a lot of the things that we get from our jobs, not just economically, that’s one question, but I think that may be easier to solve strangely than what happens to the human condition and humanity as a whole.

Speaker

Demis Hassabis


Reason

This comment is exceptionally insightful because it identifies that the deepest challenge of AGI may not be technical or economic, but existential – what it means to be human when machines can do everything humans can do. It suggests that solving economic displacement might be trivial compared to solving the crisis of human purpose.


Impact

This comment deepened the philosophical dimension of the conversation and revealed the profound personal concerns of AI leaders. It shifted the discussion beyond practical policy questions to fundamental questions about human identity and meaning in a post-AGI world.


Why can’t we slow down to Demis’s timeline? Well, no, but the reason we can’t do that is because we have geopolitical adversaries building the same technology at a similar pace. It’s very hard to have an enforceable agreement where they slow down and we slow down. And so if we can just not sell the chips, then this isn’t a question of competition between the US and China. This is a question of competition between me and Demis, which I’m very confident that we can work out.

Speaker

Dario Amodei


Reason

This comment brilliantly crystallizes the core dilemma of AI development: that the desire for safety (slower timelines) is constrained by geopolitical competition. The final line about preferring competition ‘between me and Demis’ rather than between nations is both humanizing and revealing of the tragic nature of the current race dynamics.


Impact

This comment provided a clear framework for understanding why AI development can’t simply be slowed down for safety reasons, despite both leaders preferring that outcome. It highlighted the tension between individual company responsibility and national security imperatives, making the geopolitical constraints tangible and personal.


The sort of strongest argument for doomerism to me is the Fermi paradox, the idea that we don’t see intelligent life in our galaxy.

Speaker

Audience member Philip


Reason

Though brief, this question is remarkably thought-provoking because it connects AI risk to one of the most profound questions in science – why we appear to be alone in the universe. It suggests that perhaps all civilizations destroy themselves with their own technology, making AI development an existential test.


Impact

This question elevated the entire discussion to cosmic scales and forced both AI leaders to grapple with whether their work represents humanity passing or failing a universal test that all intelligent civilizations face. Hassabis’s response revealed deep philosophical thinking about humanity’s place in the universe.


That can’t be the reason because we should see all the AIs that have, so just for everyone to know, the idea is, well, it’s sort of unclear why that would happen, right? So if the reason there’s a Fermi paradox, there are no aliens because they get taken out by their own technology, we should be seeing paperclips coming towards us from some part of the galaxy, and apparently we don’t.

Speaker

Demis Hassabis


Reason

This response demonstrates sophisticated thinking about AI risk scenarios and cosmology. Hassabis logically dismantles the ‘AI kills everyone’ explanation for the Fermi paradox by noting we should see evidence of runaway AI systems expanding across the galaxy, which we don’t. It shows nuanced thinking about both AI alignment and astrobiology.


Impact

This response provided a rational counterargument to doomer scenarios while maintaining the cosmic perspective. It demonstrated that even AI leaders who take risks seriously don’t accept simplistic doom narratives, adding intellectual rigor to the discussion of existential risk.


Overall assessment

These key comments transformed what could have been a routine tech industry discussion into a profound exploration of humanity’s future. The conversation evolved through several distinct phases: from concrete present-day impacts (coding automation) to civilizational challenges (technological adolescence), to existential questions (human meaning and purpose), and finally to cosmic perspectives (the Fermi paradox). The most impactful comments consistently elevated the stakes and scope of the discussion, revealing that AI leaders are grappling not just with technical and business challenges, but with fundamental questions about human survival, meaning, and our place in the universe. The interplay between Amodei’s urgency and Hassabis’s more measured approach created a dynamic tension that enriched the entire conversation, while the audience question about the Fermi paradox provided a capstone that connected their work to the deepest questions in science and philosophy.


Follow-up questions

Can the self-improvement loop actually close without a human in the loop?

Speaker

Demis Hassabis


Explanation

This is a critical technical question that determines whether AI systems can autonomously improve themselves, which would dramatically accelerate development timelines and capabilities.


What are the limits of engineering and mathematics in solving the natural sciences?

Speaker

Demis Hassabis


Explanation

This theoretical question explores whether computational approaches can fully address complex scientific problems or if there are fundamental limitations that require other methods.


How do we keep highly autonomous systems smarter than humans under control?

Speaker

Dario Amodei


Explanation

This is a fundamental safety question about maintaining human oversight and control over superintelligent AI systems.


How do we prevent individuals from misusing AI for bioterrorism?

Speaker

Dario Amodei


Explanation

This addresses the dual-use nature of AI technology and the need for safeguards against malicious applications in biological weapons.


How do we prevent nation states from misusing AI technology?

Speaker

Dario Amodei


Explanation

This concerns the geopolitical implications of AI and preventing authoritarian governments from using AI for harmful purposes.


What are the economic impacts and how do we address labor displacement?

Speaker

Dario Amodei


Explanation

This explores the societal consequences of AI automation on employment and the need for policy responses to manage economic disruption.


What haven’t we thought of in terms of AI risks?

Speaker

Dario Amodei


Explanation

This acknowledges that there may be unknown or unforeseen risks from AI development that require identification and preparation.


Do we have the right institutions to distribute AI-generated productivity and wealth more fairly?

Speaker

Demis Hassabis


Explanation

This questions whether current economic and political institutions can handle the wealth redistribution challenges in a post-scarcity AI world.


What happens to meaning and purpose in human life after AGI?

Speaker

Demis Hassabis


Explanation

This explores the existential and psychological impacts on humanity when AI can perform most human cognitive tasks.


How can we establish international cooperation and minimum safety standards for AI deployment?

Speaker

Demis Hassabis


Explanation

This addresses the need for global governance frameworks to ensure safe AI development across borders and competing nations.


How do we address the capability overhang in current AI models that even builders haven’t fully explored?

Speaker

Demis Hassabis


Explanation

This suggests that current AI systems may have untapped capabilities that need to be better understood and utilized.


What is Dario’s theory about the Fermi paradox?

Speaker

Demis Hassabis


Explanation

This was mentioned as being out of scope but represents an interesting philosophical question about the absence of observable alien civilizations and its implications for AI development.


How will world models and continual learning contribute to AI development if self-improvement doesn’t deliver?

Speaker

Demis Hassabis


Explanation

This identifies alternative technical approaches that may be necessary if the self-improvement loop approach to AI development doesn’t succeed.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.