Keynote interview with Sam Altman (remote) and Nick Thompson (in-person)

30 May 2024 17:00h - 17:45h

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Full session report

Sam Altman Explores the Promises and Challenges of AI’s Future in Conversation with Nicholas Thompson

In a riveting conversation with Nicholas Thompson, Sam Altman, the CEO of OpenAI, delved into the current state and future trajectory of artificial intelligence (AI). Altman discussed the immediate positive impacts of AI, particularly in enhancing productivity across various industries. He highlighted the transformative effects AI tools are having on tasks ranging from software development to healthcare, where increased efficiency is already evident.

However, Altman also acknowledged the potential negative consequences of AI, with cybersecurity being a prominent concern. He stressed the importance of vigilance against the misuse of AI tools, especially as they become more integrated into our daily workflows.

The discussion then shifted to the development of new AI models, such as GPT-4o, and the challenges of language equity. Altman expressed pride in the progress made with GPT-4o’s ability to understand a broader array of languages, covering the primary languages of 97% of the world’s people. He emphasized OpenAI’s commitment to further improving language coverage in future iterations.

When asked about the anticipated improvements in the next AI model, Altman was cautiously optimistic. He predicted significant advancements in some areas while noting that progress might be less dramatic in others. He also addressed the complexities of training AI on synthetic data generated by other large language models, underscoring the need for high-quality data to avoid corruption of the system.

The conversation touched on the governance of AI, with Altman revealing that OpenAI is exploring ways to democratize AI governance, potentially allowing global representation on the company’s governance board. He also responded to concerns about OpenAI’s governance following the departure of key safety-focused team members, reassuring the audience of the company’s dedication to safety and robustness.

Altman and Thompson discussed the potential for AI to reshape the social contract and the economy. Altman expressed his belief that technology, including AI, has historically contributed to lifting global prosperity. He introduced OpenAI for Nonprofits, a new initiative aimed at making AI tools more accessible to organizations working in crisis zones, exemplifying AI’s potential to benefit those in need.

On the topic of AI’s impact on income inequality, Altman was hopeful that AI could help reduce disparities by automating tasks and making intelligence more widely available. However, he also acknowledged the likelihood of societal changes and the need for a reevaluation of the social contract as AI becomes a more dominant economic force.

Altman envisioned a future where AI could facilitate a more collaborative and inclusive form of governance, with individuals expressing their preferences directly to AI systems. He suggested that the UN could play a role in developing frameworks for such a model of governance.

In closing, Altman urged the audience to consider the long-term implications of AI and to balance the pursuit of its tremendous benefits with the mitigation of risks. He called for a holistic approach to AI regulation and governance, emphasizing the importance of empirical evidence and iterative deployment in shaping the future of AI and society.

Session transcript

Introduction:
We’ve got, in true AI for Good style, a modern visionary who is likely shaping the world’s attitude to AI, and it’s likely you have used some of the tools he has helped create. He’ll be in remote conversation with none other than The Atlantic’s CEO, Nicholas Thompson. We are so excited to hear from the incredibly patient CEO of OpenAI, who has been on the line. Thank you so much to Sam Altman. And I’ll welcome Nicholas to the stage and Sam to the screen. Thank you very much. Well done. All right.

Nicholas Thompson:
Hello, Sam. How are you? Thanks for hanging with us. Just so you know, you are speaking to a fully packed house and you are following a princess.

Sam Altman:
I saw that.

Nicholas Thompson:
Welcome.

Sam Altman:
Very honored.

Nicholas Thompson:
I’m very grateful for you being here today. I want to cover a lot of different ground in this interview. I want to talk a little bit about where we are with AI, where we’re going, some of the big questions, and I want to talk about governance. So let’s get going with a little bit of table setting. We’re in this interesting moment in AI where everybody is aware of the power and the potential, but it hasn’t really changed the world yet. It hasn’t really changed any of the SDGs and the things we’re talking about. I’m not going to ask you when that’s going to happen, but instead let me ask you this. What is the first big good thing we’ll see happen, and what is the first big bad thing we’ll see happen, when it starts to really have an impact?

Sam Altman:
So I think the area where we’re seeing impact even now is productivity. Software developers are the most commonly cited example, and I think probably still the best one to use, where people can just do their work much faster, more effectively, work more on the parts that they want. And like other sort of technological tools, they become part of a workflow, and then it pretty quickly becomes difficult to imagine working without them. And so I expect that pattern to happen in more areas, where we’ll see different industries become much more productive than they used to be because they can use these tools, and that’ll sort of have a positive impact on everything from writing code to how we teach, how we learn, how healthcare works, and we’ll see this increase in efficiency. And I think that’ll be the first really detectable positive thing, and I’d say we’re already in that. First negative thing, I mean, obviously, there are already some negative things happening with the tools. I would say cybersecurity, I don’t know if it’ll be the first, but it’s one that I want to call particular attention to as something that I think could be quite a problem.

Nicholas Thompson:
Yeah, that’s an extremely interesting one. I want to get to some of the underlying questions of that, but first let me ask you a little bit about the new model you’re training. You’ve just announced that you have begun training the next iteration, whether it’s GPT-5 or whatever you’re going to call it. One of the big concerns in this room in Geneva is that GPT-4 and the other large language models are much better at English, Spanish, French than they are at, say, Swahili. How important to you is that? How important is language equity as you train the next big iteration of your product?

Sam Altman:
I don’t know if that was a TF or not, but I’ll take it. One of the things that we’re really pleased with with GPT-4o, which we released a couple of weeks ago, is that it is very good at a much, much wider variety of languages. We’ll make future ones even better, but I think the stat that we announced was good coverage for 97% of people for their primary language that they speak. We were able to make a really big step forward there. People have been loving that, and we’ll continue to push on that further.

Nicholas Thompson:
As you train this next iteration, let’s stick with the next iteration of the model. As you train it, what level of improvement do you think we’re likely to see? Are we likely to see kind of a linear improvement or are we likely to see asymptotic improvement or are we likely to see any kind of exponential, very surprising improvement?

Sam Altman:
Great question. We don’t expect that we’re near an asymptote. But, you know, this is like a debate in the world. And I think the best thing for us to do is just show, not tell. You know, there’s a lot of people making a lot of predictions. And I think what we’ll try to do is just do the best research we can and then figure out how to responsibly release whatever we’re able to create. I expect that it’ll be hugely better in some areas and surprisingly not as much better in others, which has been the case with every previous model. But this feels like the conversation we’ve had every other model release. You know, when we were going from 3 to 3.5 and 3.5 to 4, there’s a lot of debate about, well, is it really going to be that much better? If so, in what ways? And the answer is there still seems to be a lot of headroom. And I expect that we will make progress on some things that people didn’t expect to be possible on the whole.

Nicholas Thompson:
Though this is also the first time you’re going to have a model that will be trained in large part on synthetic data. I mean, I presume, because the web now contains lots of synthetic data, meaning data that was created by other large language models. How worried are you that training a large language model on data created by large language models will lead to corruption of the system?

Sam Altman:
I think what you need is high-quality data. There is low-quality synthetic data. There’s low-quality human data. And as long as we can find enough high-quality data to train our models, or find ways to train, you know, get better at data efficiency and learn more from smaller amounts of data, or any number of other techniques, I think that’s okay. And I’d say we feel like we have what we need for this next model.

Nicholas Thompson:
Have you created massive amounts of synthetic data to train your model on? Have you self-generated data for training?

Sam Altman:
We of course have done all sorts of experiments, including generating lots of synthetic data. But I think there would be something very strange if the best way to train a model was to just generate a quadrillion tokens of synthetic data and feed that back in. You’d say that somehow that seems inefficient, and there ought to be something where you can just learn more from the data as you’re training. And I think we still have a lot to figure out. Of course we’ve generated lots of synthetic data to experiment with training on that, but again, I think the core of what you’re asking is, how can you learn more from less data?

Nicholas Thompson:
That’s interesting. I didn’t know that. Let’s talk about one of, I think, the key questions that will affect how these things go out into the world. Last year, you did this fascinating interview with Patrick Collison, the co-founder of Stripe. And he asked this great question. He said, is there anything that could change in AI that would make you much less concerned that AI will have dramatic bad effects in the world? And you said, well, if we could understand what exactly is happening behind the scenes, if we could understand what is happening with one neuron. And if I understand that, it means, like, you want the AI model to be able to teach someone chemistry, but you don’t want it to be able to teach them to make a chemical weapon. And you wish that you could do that in the guts, not just at the interface level. Is that the right way to think about it? And have you solved this problem?

Sam Altman:
I think that safety is going to require like a whole-package approach, but this question of interpretability does seem like a useful thing to understand. And there’s many levels at which that could work. We certainly have not solved interpretability. There’s a number of things going on I’m very excited about, but nothing close to where I would say, yeah, you know, everybody can go home, we’ve got this figured out. It does seem to me that the more we can understand what’s happening in these models, the better. And I think that can be part of this cohesive package to how we can make and verify safety claims.

Nicholas Thompson:
But if you don’t understand what’s happening, isn’t that an argument to not keep releasing new, more powerful models?

Sam Altman:
Well, we don’t understand what’s happening in your brain at a neuron-by-neuron level, and yet we know you can, like, follow some rules, and we can ask you to explain why you think something. There are other ways to understand the system besides understanding it at the sort of neuron-by-neuron level. The characteristics, the behavior of these systems is extremely well characterized. And, you know, one of the things I think that has surprised a lot of people in the field, including me, is the degree to which we have been able, very quickly, and sort of when viewed on the timescale of the history of a new technology, to get these systems to be generally considered safe and robust.

Nicholas Thompson:
My wife also says she sometimes doesn’t understand exactly what’s happening at a deep level of my brain. So we’ve got that in common there. What is the most progress we’ve made, or have there been any real breakthroughs in understanding this question of interpretability?

Sam Altman:
There was a great release recently from Anthropic that did a Golden Gate Bridge model that sort of showed off, you know, one interesting feature of this. That would be a recent thing I want to point to.

Nicholas Thompson:
Okay. Well, let me go to a proposal that Tristan Harris made this morning. As we’re talking about safety, Tristan was on this stage and he said that for every million dollars that a large language model company puts into making their models more powerful, they should also put a million dollars into safety, one for one. Do you think that’s a good idea?

Sam Altman:
I don’t know what that means. I think there’s this temptation to break the world into capabilities here and safety here. And you can make all of these nice-sounding, you know, we should have this policy and that policy. But if you look at the work we’ve done to actually make a model like GPT-4, used by hundreds of millions of people for increasingly frequent, important, valuable tasks, if you look at making that thing safe, you’d have a hard time characterizing where a lot of the work would fit. Because if you’re using a model in production, you want it to accomplish your task and not go do something that’s going to have a negative effect of some sort. But you as a user don’t know, like, oh, did this work because it did what I want? Did that work because of capabilities work or safety work or something in between? Getting the model to behave the way the user wants, according to these boundaries, is an integrated thing. It’s like, you know, you get on an airplane. I guess this is a bad recent example, but I started it, so I’ll stick with it. You get on an airplane and you want to know it’ll get you where you want to go, and you also want to know that it’s not going to crash on the way. There are some cases in that airplane design where you can say, okay, this was clearly capabilities work, and this was clearly safety work. But on the whole, you’re trying to design this integrated system that is going to safely get you where you want to go, hopefully quickly, and hopefully a panel doesn’t fall off during flight. And the boundary there is sort of not as clear as it often seems.

Nicholas Thompson:
Fair enough. So in some ways, all of your engineers, to a degree, are working on safety, but let me pose it a different way.

Sam Altman:
I almost said safety is everybody’s responsibility, but that seemed like the worst of those corporate slogan things, so I didn’t.

Nicholas Thompson:
Right. So one of the reasons why this is on my mind, of course, is that, you know, the co-founder who’s most associated with safety, Ilya, just left. Jan, one of the leads on safety, left and went to go work at Anthropic and tweeted that the company’s not prioritizing safety. So convince everybody here that that’s not the case, and you’re flying this plane, we’re all on your airplane right now, Sam. Convince us that the wing’s not going to fall off after these folks have left.

Sam Altman:
I think you have to look at our actions and the models that we’ve released and the work we have done, the science we have done. Again, I made this point earlier, but if you go back and look at what people were saying at sort of the GPT-3 time about the plausibility of us ever being able to make systems like this safe, aligned, whatever, the fact that just a small number of years later we’ve put something out at this standard is a huge amount of work across many teams. There’s alignment research, there’s safety systems, there’s monitoring. We just recently talked about some of our work to take down a bunch of influence operations. The fact that we are able to deliver this, and of course we’re not perfect, of course we learn from contact with reality as we go, but the fact that we are able to deliver on this at the level we do, I think, is something we’re very proud of. I also think that taking the superalignment teams and putting them closer, for the reason I was talking about earlier, putting them closer to the teams that are doing some of the research will be a very positive development.

Nicholas Thompson:
I want to talk a little bit about AGI, since this has been so much the focus of the company, something you’ve talked about a lot. Let me phrase the question this way. If I worked at OpenAI, I would probably knock on your door and say, Sam, I want a short meeting. I would come and I would say, I understand why AGI has been such a focus. It has been the thing that everybody in AI wants to build. It has been part of science fiction. Building a machine that thinks like a human means we’re building a machine like the most capable creation that we have on Earth. But I would be very concerned because a lot of the problems with AI, a lot of the bad things with AI, seem to come from its ability to impersonate a human. You talked earlier about cyber security. A lot of the problems we’re seeing are that it’s so easy for somebody to impersonate a human. I feel like there have been a lot of decisions at OpenAI to make the machine more like a human. The way the typing kind of feels like a human. Sometimes the machine uses the first person singular. The voice question, which we’ll get to, it sounds very human. Why do you keep making machines that seem more like humans instead of saying, you know what? I understand the risks. We’re going to kind of change directions here.

Sam Altman:
I think it’s important to design human-compatible systems, but I think it is a mistake to assume that they are human-like in their thinking, or capabilities, or limitations. Even though we sort of train them off of, you know, we do this behavioral cloning off of all of this human text data, they clearly can do extremely superhuman things already, and then very, very not-human things later. So I always try to think of it as an alien intelligence, and not try to project my anthropomorphic biases onto it. The reason, though, that we make some of the interface choices we do, and there’s some we don’t make also, is that we believe in designing for a very human-compatible world. The fact that these systems operate in natural language, that seems really important to us for a bunch of reasons. It seems like the right… There’s also a lot of nice safety properties around that, you can imagine, down the road. But it seems like a very important goal to make the AI be maximally human-compatible, and designed for humans, and work in a way where they’re communicating with us in language, and communicate with each other in language. There’s other versions of this, like I sort of think the world should prefer humanoid robots to other, you know, shapes or structures of robots, to encourage the world to stay maximally human-oriented. So I think like, easy to use for humans, which includes language as sort of maybe a primary interface method, but not trying to project too much human likeness onto them beyond that. So we, you know, we didn’t give our AI a human name. I think ChatGPT, although extremely cumbersome as a name, and we might have done it differently, has the nice property of a word that kind of explains what it is and then three letters that sound like a robot, but it’s kind of very clear.

Nicholas Thompson:
What about doing more in that direction? What about, for example, saying that ChatGPT can never use “I”?

Sam Altman:
This gets to the point of human compatibility. We have played around with versions like that. Tends to frustrate users more than it helps. We’re used to some idioms when we’re interacting in language.

Nicholas Thompson:
What about with your voice? When you have a voice model, having a beep or something before that signifies this is not a person. We’re about to enter this period of elections. Everybody here is concerned about deep fakes, misinformation. How do you verify what is real? What can you do to make it at the core design level so that’s less of a problem?

Sam Altman:
Yeah, I think there are interesting audio cues like a beep. What people definitely don’t want is robot-sounding language. This gets to the kind of human-compatibility point and the way we’re wired. I will say, having used voice mode, which I love much more than I expected to, it really crossed some threshold for me all at once. I was never the guy using voice mode on my phone for anything else before this one. There is incredible value, much more than I personally realized, to what feels like a very natural voice interface. I don’t think it would work the same way for me or be the same level of naturalness and fluidity if it didn’t sound like something that I was already wired for. But, you know, a beep, some other indication, that could all make sense. I think we’ve just got to study how users respond to this. We’ll launch it reasonably soon. And I’m heartened on the whole by the reaction that users had to ChatGPT, where very quickly people understood, A, that it was an AI, B, what the limitations were and when to use it and not use it, and C, kind of just how to integrate it, but that it was a new thing. I’m hopeful that things like voice mode will follow a very similar trajectory, but we will have a very tight feedback loop and watch it very carefully.

Nicholas Thompson:
Well, I’m hopeful that the real-time translation works, because I’m here in Switzerland and I was running in the mountains, and this guy yelled something at me in French, and because I’m pretending to be improving my French, I pretended to understand him, but I clearly didn’t, because what I was doing was heading into a dangerous zone where I almost fell off a cliff. So, once we can fix this, I’ll be much better. It’s super good for that. Let’s talk about the Scarlett Johansson episode, because there’s something about it I don’t understand. So, you demonstrate these voices. She then puts out a statement, which gets a lot of attention. Everybody here probably saw it, saying: they asked me if they could use my voice. I said no. They came back two days before the product was released. I said no again. They released it anyway. OpenAI then put out a statement saying, not quite right. We had a whole bunch of actors come in and audition. We selected five voices. After that, we asked her whether she would be part of it. She would have been the sixth voice. What I don’t get about that is that one of the five voices sounds just like Scarlett Johansson. So, it sounds almost like you were asking there to be six voices, two of which sound just like her, and I’m curious if you can explain that to me.

Sam Altman:
Yeah, it’s not her voice. It’s not supposed to be. I’m sorry for the confusion. Clearly, you think it is. I mean, people are going to have different opinions about how much voices sound alike, but we don’t. It’s not her voice, and, yeah, we don’t think it is. I’m not sure what else to say.

Nicholas Thompson:
All right. Let’s talk about authenticity, because that’s pretty related. So you’re on video. I’m in person. I’m real. I asked GPT-4o how, when you’re interviewing someone on a video screen, you can prove that they’re real. And it suggested asking them about something that has happened in the last couple of hours and seeing if they can answer it. So what just happened to Magnus Carlsen?

Sam Altman:
I don’t know.

Nicholas Thompson:
All right. I got a couple. I got a couple. We’ll get one. What tech company’s earnings are the subject of Twitter right now?

Sam Altman:
Or the cloud? Salesforce.

Nicholas Thompson:
Boom. All right. It also said you could ask the person to do a complicated physical movement, like raising… can you raise your right hand while touching your nose with your left hand? All right. The man is real. Let’s talk a little bit about the globalization of AI, since that’s something that’s come up a lot during this conference. Clearly it’s in your interest for there to be one or few large language models, but where do you see the world going? Do you think that three years from now there will be many base large language models or very few? And importantly, will there be a separate large language model that is used in China, one that’s used differently in Nigeria, one that’s used differently in India? Where are we going?

Sam Altman:
Honestly, neither we nor anyone else knows the answer to that question. There are clearly tons of models being trained out there, and that will continue. And although we don’t know, I would expect that China will have their own large language model that’s different from the rest of the world’s. I would guess that there will be, like, hundreds or thousands of large language models trained. I would guess that there would be a small number, you know, 10, 20, something like that, that get a lot of the usage and that are trained with the most resources, but I think we’re still so early in this world and there’s so much, like, left to discover and so many, like, scientific breakthroughs still to happen that any confident predictions here are really hard.

Nicholas Thompson:
All right. Let me ask you about another big thing that I worry about. So one of the things that I’m most concerned about as we head to the next iteration of AI is that the web becomes almost incomprehensible, where there’s so much content being put up because it’s so easy to create web pages, it’s so easy to create stories, it’s so easy to do everything, that the web almost becomes impossible to navigate and get through. Do you worry about this? And if you think it’s a real possibility, what can be done to make it less likely?

Sam Altman:
I think the way we use the Internet is likely to somewhat change, although that’s going to take a long time. But I don’t worry about it becoming incomprehensible. I think you already see a little bit with, like, the way someone uses ChatGPT, where you can sometimes get information more effectively than going around to, like, you know, search for something and click around. And this idea that the Internet can sort of be brought to you, I think, is a cool thing about where AI is going. And so I think there could be changes to how we all use the Internet like that. But I don’t worry about it becoming incomprehensible, covered with spam-generated articles or anything.

Nicholas Thompson:
I mean, in a way, listening to you say that, I see a world where the internet almost collapses, where it is just these 10 or 20 large language models that are your interface. Is that more of what you see?

Sam Altman:
No, I think, I mean, I can imagine versions where, like, the whole web gets made into components and you have this AI that is putting together, this is way in the future, you know, putting together like the perfect webpage for you every time you need something, and everything is live-rendered for you instantly. But I can’t imagine that everything just gets to, like, one website. That feels like against all instincts I’d have.

Nicholas Thompson:
Okay. Let’s talk about, to me, you know, since November of last year, I’ve kept this list of questions where very smart people in AI disagree. And to me, one of the most interesting is whether it will make income inequality worse or whether it will make income inequality better. And I feel like I’ve listened to lots and lots of your podcasts, and it’s come up a couple of times, and you talk in some ways about AI actually potentially making income inequality worse and the need for universal basic income to counter that. But this morning on this stage, Azeem Azhar was here, and he was citing economic studies, and other people have cited them as well, that suggest that actually AI tools, when implemented in, say, a call center, help the lowest-paid workers more than the highest-paid workers. As AI has been rolled out, has this changed your view of what will happen with income inequality in the world, both within countries and across countries?

Sam Altman:
Let me give an example first. Today we launched OpenAI for Nonprofits, which is a new initiative to make our tools cheaper and more widely available for nonprofits. So there’s, like, discounts, there’s ways to share best practices. And people have been doing amazing work with this. One example is the International Rescue Committee using our tools and having great, great results from that integration, supporting overstressed teachers and learners in real crisis zones. And I think that is an example of where these tools, because you can automate something that has been difficult and make intelligence, however you want to call that, much more widely available, can really help people that need it more than it would help people in an already rich context. So we’re very excited to launch that program in general, and it’s an example of what you’re talking about, that you can see ways, lots of ways, it doesn’t take much imagination, in which AI does more to help the poorest people than the richest people. And we really believe that. We’re enthusiastic about it. It is a huge part of why we want to build these tools. And I think it’s a huge part of the history of technology and the arc of what’s been happening. So that will happen for sure. I think technology does a great deal to lift the world to more abundance, to greater heights, to better prosperity, whatever you want to call it. And I’m optimistic for that. I don’t think that’ll require any special intervention. I still expect, although I don’t know what, and this is over a long period of time, this is not a next-year-or-the-year-after kind of thing, but over a long period of time, I still expect that there will be some change required to the social contract, given how powerful we expect this technology to be. I’m not a believer that there won’t be any jobs. I think we always find new things to do. But I do think the whole structure of society itself will be up for some degree of debate and reconfiguration.

Nicholas Thompson:
And that reconfiguration will be led by the large language model companies?

Sam Altman:
No, no, no. Just the way the whole economy works and what we, like, what society decides we want to do. And this has been happening for a long time as the world gets richer. Social safety nets are a great example of this. I expect we will decide we want to do more there.

Nicholas Thompson:
Let me ask you this variation of that question. You have a lot of government leaders, people from, I don’t know if that’s every country on earth, but many of the countries on earth, in this room. What are examples of regulations that have been discussed in the last year and a half that you think will help with reconfiguring the social contract for a future of mass adoption of AI? And what are some examples of regulations that you’ve heard about that you think will harm that process?

Sam Altman:
I don’t think, and I think this is mostly appropriate, I don’t think the current discussion about regulation is centered on these sorts of topics. It’s about, we’ve got elections, what are we going to do there? And, you know, these other sort of really important short-term issues with AI. I don’t think the regulations have been about, like, okay, we’re going to make AGI and it’s going to be like the sci-fi books, you know, what do we do there? And I think that would be premature, because we don’t know yet how society and this technology are going to co-evolve. I think it is worth some debate. It just, I don’t think it should be the current focus, and it certainly hasn’t been.

Nicholas Thompson:
So are there regulatory frameworks, you know, people often talk about the framework for regulating nuclear weapons. Obviously nothing is perfectly parallel, but are there regulatory frameworks that exist that you think are useful to think about as we move into this new world?

Sam Altman:
You’re talking like the long-term, what does it mean when AI can be this hugely big economic force, or the shorter-term things we were…

Nicholas Thompson:
Long-term.

Sam Altman:
There’s no… If we had a strong recommendation, we would of course realize that it’s not our decision, but we would make the strong recommendation. And we would say, given what we think is most likely to happen, given where we expect this to go, here is what we think should get more consideration. I think these things are very difficult to do in theory, in advance. You have to watch as things evolve. One of the reasons that we believe in our strategy of iterative deployment, putting these systems out into the world, making them safe, putting them out, realizing that they’re going to improve and that you’re going to learn a lot as you go, one of the reasons we think that’s important is that society does not work on a whiteboard, in theory. And also, society is not a static thing. As you put the technology out, society changes, the technology changes. There is this real co-evolution. And so I think the right way for us to figure out together what this is going to be is empirically. And already, since releasing ChatGPT, you see lots of examples of ways that it is starting, in small ways, to transform parts of the economy and how people do their work. And I think this approach, it’s not like we sit down one day and say, okay, we are going to design the new social contract all at once, and here is how the world works now. That would be really bad, and I think tremendously difficult to get right. But iterating our way there seems much more likely to work.

Nicholas Thompson:
Okay. All right. Let’s talk about governance of OpenAI. So one of my… I can’t read the whole thing because there’s UN prohibitions, but this is from an interview you gave to The New Yorker eight years ago when I worked there, and you were talking about governance of OpenAI, and you said, we’re planning a way to allow wide swaths of the world to elect representatives to a new governance board of the company, because if I weren’t in on this, I’d be like, why do these effers get to decide what happens to me? Tell me about that quote and your thoughts on it now.

Sam Altman:
Something like that is still what I believe would be good to do. We continue to talk about how to implement governance. I probably shouldn’t say too much more right now, but I remain excited about something in that direction.

Nicholas Thompson:
Say a little bit more.

Sam Altman:
I will pass. I’m sorry.

Nicholas Thompson:
All right. Let me ask you about the critique of governance now. So two of your former board members, Tasha McCauley and Helen Toner, just put out an op-ed in The Economist, and they said, after our disappointing experiences with OpenAI, you can’t trust self-governance at an AI company. These are the board members who voted to fire you before you came back and were reinstated as CEO. Earlier this week, Toner gave an interview on The TED AI Show podcast, which was quite tough. She said that the oversight had been entirely dysfunctional, and in fact, that she and the board had learned about the release of ChatGPT from Twitter. Is that accurate?

Sam Altman:
Look, I respectfully, but very significantly, disagree with her recollection of events. But I will say that I think Ms. Toner genuinely cares about a good AGI outcome, and I appreciate that about her. I wish her well. I probably don’t want to get into a line-by-line refutation here. When we released ChatGPT, it was at the time called a low-key research preview. We did not expect what happened to happen, but we had, of course, talked a lot with our board about a release plan that we were moving towards. At this point we had had GPT-3.5, which ChatGPT was based on, available for, I think, about eight months or something like that. We had long since finished training GPT-4, and we were figuring out a gradual release plan for that. I disagree with her recollection of events.

Nicholas Thompson:
Okay. That’s where you’re going to leave it?

Sam Altman:
I think so.

Nicholas Thompson:
Okay. All right. We have just a few minutes left. I want to ask you some kind of bigger questions about AI. I was just at an event with a lot of people talking about AI and humanism. One of the participants made a really interesting argument. He said, it’s possible that the human creation of something that is more powerful than humans won’t actually make us more egotistic. It’ll make us more humble. We’ll be looking in a mirror and seeing ourselves naked. We will have a sense of awe at the machine and a sense of humility about our lives, and that will teach us a new way of living. Do you think that’s what’s going to happen? Does that ever happen to you? Do you ever look at the machine and then have a greater sense of humility about our place in the world?

Sam Altman:
Personally, yes. I really do, and I think that’s going to happen more broadly. I would bet, you know, it’s not going to go this way for everybody, there will be people who have egotistical freakouts about it, but I think in general there will be a widespread increase in awe for the world and our place in the universe and sort of the humbleness of the human perspective, and I think that will be very positive. I was reflecting recently about how, in some sense, the history of science has been that humans have become less and less at the center. And you can look at all these examples where, you know, we used to believe that the sun rotated around the earth, which was sort of a very human-centered way to think about things. And then we realized that, okay, actually it’s the earth rotating around the sun. And actually those little white dots in the sky are many stars, and there’s many galaxies beyond that. And then, depending on how far you want to go with the analogy, you can say, all right, like, there’s the multiverse thing, and this is really quite strange, and we’re really almost nothing. And AI may be another example of where we get some additional perspective that gives us more sort of humbleness and awe for the much bigger thing that we’re all part of. And I think that’s been an ongoing and really positive thing.

Nicholas Thompson:
You know, you’re going to make my 10-year-old happy. He and I, you know, I was driving him back from a soccer game, and I’ve been listening to a lot of Sam Altman podcasts. And he was listening to one, and he said, that guy doesn’t talk about nature and animals enough. So I’m glad to have a little bit of that in that question. So thank you from James Thompson. I want to ask you about one of the most radical positions I’ve heard you talk about. It was on the Joe Rogan podcast, and it was in passing. You mentioned that you can imagine a future where governance is actually every individual having a say. So there are 8 billion, or let’s say it’s at a time when there are 12 billion people, and you can almost input your preferences about what decisions you want to have made, and some AI that understands your preferences can then lead to better decision-making. Do you think that is a real possibility, and should the UN, we’re here at the UN, should they do that?

Sam Altman:
Well, first of all, I hope it is at a time where there’s 12 billion people and not 4 billion people. I’m definitely a little bit worried about the trends there. But yes, I think it would be a great project for the UN to start talking about how we’re gonna collect the alignment set of humanity, and also within that, like how much, like where the defaults are, where the bounds are, how people can move things within there. I think that’d be a great thing for the UN to do.

Nicholas Thompson:
So you actually, you think we could conceivably get there. What would be some of the intermediate steps in order to make a governance system where an AI can help make it more collaborative, more Athenian, and not the opposite?

Sam Altman:
We released something pretty recently, maybe like a month or so ago, called the Spec, which was a first step towards this, where we said, okay, given what we have learned, here’s how we’re gonna try to set the rules. Here’s what the model’s gonna try to do. And one reason that’s important is it at least explains when something is a bug versus behavior that is designed. And so we can either fix the bug or debate the principle. And we’re now trying to have a lot of debate about that Spec and why we do certain things. And these get down to a pretty high level of detail, like maybe there’s something the model shouldn’t say in response if you ask it to do something that is kind of questionable, but if you ask it to translate that from English to French, it should comply with the translation request. And trying to get to that level of specificity is important, but, yeah, I think other things to do are to now push that to a broader thing. And you can imagine a world where eventually people can chat with ChatGPT about their individual preferences and have that be taken into account for the larger system, and certainly for how it behaves just for them. Another thing in this world is our system message, which people have done very creative work with to get the model to behave the way they want.

Nicholas Thompson:
Do you think there’s anything in the human brain that can’t be replicated by machines?

Sam Altman:
Maybe subjective experiences, I don’t know. I don’t want to get into the AI and consciousness debate because I think there’s nothing I have to add to it.

Nicholas Thompson:
Well, I figured I’d ask it at the end because you can basically only give a yes or no answer on AI and consciousness. Let me ask you another question from one of my children that I thought was pretty interesting. They sometimes have this wisdom. I have three boys. The oldest one, I was talking to him about what he fears about AI, and his answer was kind of interesting. He uses AI for debate prep, and he uses it as a tutor in school, and I’ve been trying to show him different models, and he said that he’s worried that he’s heading into a future where relearning things is going to be incredibly important. I think we all agree on that. The world is going to change in dramatic ways; to prosper, we’ll have to relearn. He’s worried that we will become so reliant, we will no longer learn how to learn, and he’s worried about that. Do you worry about that?

Sam Altman:
There are many other things I worry about in a world that changes very quickly and where we have to figure out how we’re going to adapt to this new thing that’s moving so quickly, but learning how to learn seems… like such a deep human skill, you know, obviously it appears to me to be stronger earlier in life than later in life, but it never goes away. And I think there will be a premium on cultivating the skill. I think we’ll have even more incentive in the future than we do now to be really great at it. But it seems like deep and innate and our brains are well set up for it. So I agree with him that this is going to be important, increasingly important, but I’m optimistic that we’ll be really good at it.

Nicholas Thompson:
All right. I’ll tell him it’s fine to do his next paper with ChatGPT. Last question, since we’re at the end of time: you’ve got a lot of really smart people in this room, a lot of people who are going to help regulate AI, help shape its future. What is your concluding message about the most important thing that they should think about, or the one thing that you think they should take away from this conversation as they go about helping to try to use AI, regulate AI, to bring about AI for good?

Sam Altman:
Your challenge is that there is simultaneously incredible upside, and a moral duty to bring that to the world, and serious safety and sort of society-level concerns to mitigate. We launched a new safety and security committee of our board and some of our team this week to help us get ready for this next model. There have been a bunch of good forums around the world debating AI safety issues. And there’s a lot of talk now about preparedness frameworks. If we are right that the trajectory of improvement is going to remain steep, then figuring out what structures and policies companies, countries, and sort of the international community as a whole should put in place to make sure that the world gets the tremendous benefits of this, which the world will demand and which I think are very much a moral good, while avoiding the risks on different timescales, so not getting too distracted by only the short term or only the long term, and what all this will take, that’s very important. And I would say, consider the problem holistically. It is extremely challenging. I think there’s a lot to learn from what’s already happening and a lot of new stuff to figure out. But I guess the thing I would leave it on is: don’t neglect the long term, and don’t assume that we’re going to asymptote here.

Nicholas Thompson:
All right, well, when I asked GPT-4o how to tell whether somebody was a human, it actually had three things, right? The arm test and the news test. And the third was: ask an open-ended question and see if they can give a good, complex answer without buffering. So thank you very much. You’ve proved once again that you’re human. Thank you very much, Sam Altman.

Sam Altman:
Thank you.

Speakers

Introduction (I)
Speech speed: 145 words per minute
Speech length: 99 words
Speech time: 41 secs

Nicholas Thompson (NT)
Speech speed: 195 words per minute
Speech length: 3325 words
Speech time: 1024 secs

Sam Altman (SA)
Speech speed: 172 words per minute
Speech length: 4949 words
Speech time: 1730 secs