Keynote interview with Geoffrey Hinton (remote) and Nicholas Thompson (in-person)

31 May 2024 16:00h - 16:45h

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Full session report

Geoffrey Hinton Discusses AI’s Potential and Perils in a Candid Conversation with Nicholas Thompson

In a thought-provoking conversation with Nicholas Thompson, Geoffrey Hinton, a luminary in the field of artificial intelligence, shared his deep insights into the transformative impact of AI on various facets of society. Renowned for his groundbreaking contributions to neural networks, Hinton discussed the astonishing progress in AI, particularly its potential to outperform humans in complex tasks such as interpreting medical data, which he believes will become a reality in the near future.

Hinton challenged conventional notions of consciousness by suggesting that AI systems could possess subjective experiences, much like humans do. He argued that the understanding of language and experiences by AI might parallel human cognition, sparking a debate on the essence of consciousness and the extent to which AI can replicate or surpass human mental processes.

The dialogue also delved into the darker aspects of AI, with Hinton expressing concerns over the rise of sophisticated cybercrime facilitated by AI, such as advanced phishing attacks. He warned of the potential for fake videos to undermine elections and proposed a countermeasure of public inoculation against misinformation. This would involve deliberately exposing the public to fake videos that are later debunked, thereby fostering skepticism and vigilance. Hinton suggested that philanthropic initiatives could support such strategies, along with verification methods like QR codes that link to authenticating websites.

Discussing AI safety and regulation, Hinton emphasized the need for a comprehensive regulatory framework. He advocated for universal basic income to address job displacement by AI and urged governments to mandate substantial investment in AI safety, drawing parallels with environmental regulations in other industries. He also proposed direct government funding for AI safety research and the provision of resources to independent researchers.

Hinton highlighted the potential of AI to democratize education by serving as a personal tutor for every learner, thus bridging educational gaps worldwide. However, he acknowledged the political challenges in accepting new technologies that reduce but do not eliminate risks. For example, the public might resist self-driving cars that cause fewer accidents than human drivers but still pose some danger.

In conclusion, the conversation with Geoffrey Hinton illuminated the profound capabilities of AI across sectors, from healthcare to education, and stressed the critical need to confront the ethical and safety challenges it poses. Hinton’s insights called for a balanced approach that leverages AI’s advantages while proactively addressing its risks through thoughtful regulation and increased public awareness.

Session transcript

Nicholas Thompson:
Thank you. Delighted to be back here again, and I’m delighted to get to share the stage with Geoff Hinton, who is one of the smartest, most wonderful, most capable, and kindest people in this entire field. How are you, Geoffrey Hinton?

Geoffrey Hinton:
I’m fine. Thank you for the over-the-top introduction.

Nicholas Thompson:
All right. Well, Geoff, I want to start with a little conversation that you and I had almost exactly a year ago, and we were in Toronto, and we were about to go on a stage, and two of my children were with me. They were then 14 and 12, and you looked at the older one, and you said, are you going to go into media like your father? And he responded, no. And you said, good. And then I said, well, if he’s not going to go into media, what should he do? And you said, he should be a plumber. And so that son has just applied for the school paper, and I’m curious if you think he’s making a grievous mistake and I should actually send him downstairs to fix the ducts.

Geoffrey Hinton:
No, I was being somewhat humorous, but I do think that plumbing is going to last longer than most professions. I think currently with AI, the thing it’s worst at is physical manipulation. It’s getting better quickly, but that’s where it’s worst compared with people.

Nicholas Thompson:
All right. Wonderful. All right. So what I want to do in this interview, I want to start a little bit with Dr. Hinton’s background, to ground this. Then I want to go into a section where I ask him some of the most interesting technical questions, some of the ones we talked about on stage. We’ll talk a little AI for good, then a little AI for bad, and then we’ll talk a little bit about regulatory frameworks. Sound good, Geoff?

Geoffrey Hinton:
Okay.

Nicholas Thompson:
Great. All right. So I first want to start 40 years ago, when you were a lonely scientist. And you have what turns out to be one of the most important insights of this field, maybe of the late 20th century, where you realize that to make an extremely powerful computer, you should model it on the architecture of the human brain. And it sounds somewhat obvious now, but it wasn’t then. So tell me about that moment of insight that really gets this field going.

Geoffrey Hinton:
This is a nice myth, but there were a bunch of different people who thought that. In particular, in the 1950s, both von Neumann and Turing thought that. It was very unfortunate that they both died young, otherwise the history of our field might have been very different. But it seemed to me just obvious that if you want to understand intelligence, you need to understand the most intelligent thing we know about, and that’s us. And our intelligence doesn’t come from people programming in a lot of propositions and then using logic to reason with those propositions. It emerges from a brain that was designed mainly for vision and motor control and things like that. And it’s clear that the connection strengths in that brain change as you learn, and we just have to figure out how that happens.

Nicholas Thompson:
All right. So that makes good sense. You’re grounded in history. So now let’s go very quickly. You work on this. People say you’re heading down the wrong path. You pursue it. Other people join you. Eventually, it becomes clear you’re on a good path. It’s not clear where it will go. You win the Turing Award. You sell a company to Google. You join Google. You then, about a year and a half ago, leave Google. Tell me about the moment when you leave, which is a few months after the release of ChatGPT. Tell me what the last thing you worked on was and that moment of departure.

Geoffrey Hinton:
First, let me get something straight. I left for several reasons, one being that I was 75 and I decided I should retire then anyway. I didn’t just leave in order to talk about the dangers of AI, but that was another reason. I became acutely aware of the dangers of AI, the existential threat, at the beginning of 2023. And around March 2023, I started talking to other people who were scared about the existential threat, like Roger Grosse, for example, and they encouraged me to go public. And then I made a decision to leave Google so that I could speak freely. The reason I became scared was I was working on trying to figure out how analog computers could do these large language models for 30 watts instead of megawatts. And while doing that, I became convinced that there’s something about digital computation that just makes it much better than what the brain does. Up until that point, I’d spent 50 years thinking that if we could only make it more like the brain, it would be better. And I finally realized, at the beginning of 2023, that it has something the brain can never have. Because it’s digital, you can make many copies of the same model that work in exactly the same way. And each copy can look at a different part of the dataset and get a gradient. And they can combine those gradients. And that allows them to learn much, much more. That’s why GPT-4 can know so much more than a person: it was multiple different copies running on multiple different pieces of hardware that looked at all of the internet. That’s something we can never have. So basically, what they’ve got and we haven’t got is they can share very efficiently. We can share very inefficiently. That’s what’s going on now. I produce sentences. You try and figure out how to change the synapses in your brain so you might have said that. That’s a very slow and inefficient way of sharing. Digital intelligences, if they’re different copies of the same model, can share with a bandwidth of trillions of bits.
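
What Hinton describes here is essentially data-parallel training with gradient averaging. A minimal sketch of the idea, with invented data and a toy linear model rather than anything he or Google actually used: several identical copies of a model each compute a gradient on their own shard of the data, and because they update one shared set of weights, every copy learns from data it never saw.

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared set of weights for a tiny linear model: y = X @ w.
true_w = np.array([1.0, -2.0, 0.5])
w = np.zeros(3)

# Three identical "copies" of the model, each seeing its own shard of the data.
shards = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    shards.append((X, y))

def gradient(w, X, y):
    """Gradient of mean squared error for the linear model on one shard."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

for step in range(200):
    # Each copy computes a gradient on its own data...
    grads = [gradient(w, X, y) for X, y in shards]
    # ...and they "share" by averaging gradients into the single set of weights,
    # so every copy ends up knowing about data it never looked at.
    w -= 0.05 * np.mean(grads, axis=0)

print(w)  # close to true_w
```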

Nicholas Thompson:
And so you have this moment, this realization that suddenly these systems can be massively more powerful than you thought. It must have been a moment of great excitement. Why was it also one of such great fear?

Geoffrey Hinton:
Well, it made me think they’re going to become more intelligent than us sooner than I thought. And it made me think they’re just a better form of intelligence.

Nicholas Thompson:
Okay. Let me ask you about two of the other godfathers of AI. So you shared the Turing Award with two other people: Yann LeCun, who now runs AI at Meta, and Yoshua Bengio. I was trying to figure out the differences between you, and let me know if this works. You’re all godfathers. Yann kind of thinks of AI as Fredo Corleone: not very capable, easy to control. Yoshua maybe thinks of it as Sonny, you know, potentially quite dangerous. And you view it as Michael, Michael Corleone, potentially extremely dangerous. Is that more or less correct?

Geoffrey Hinton:
I don’t think so. I think Yoshua and I have very similar views about the dangers.

Nicholas Thompson:
But your difference with Yann is essentially you view this as a much more powerful system than he does. And that’s why you are more concerned than he is.

Geoffrey Hinton:
That’s one difference, yes. That’s a main difference. So I think it really is intelligent already. And Yann thinks a cat’s more intelligent.

Nicholas Thompson:
Right. Well, let’s get into that intelligence, which I think is one of the most interesting questions. Do you think that there is anything in the human mind that cannot be replicated by these machines and by AI systems? Is there anything that our brains can do that can’t be replicated in a machine?

Geoffrey Hinton:
No.

Nicholas Thompson:
And does that mean that there’s nothing that we can do that cannot be surpassed by these intelligent machines? Can one say, for example, that they will eventually be able to produce more beautiful music, and that they will be able to do all the things that we do that involve cognition better than us?

Geoffrey Hinton:
That’s what I believe, yes.

Nicholas Thompson:
And you don’t believe there’s anything spiritual or outside or anything that is beyond what can be captured in a set of neural networks?

Geoffrey Hinton:
I think what we mean by spiritual could be captured by these alien intelligences. I agree with Sam Altman that it’s an alien intelligence. It’s not quite like us. It has some differences from us. But if you look at things like religion, I don’t see why you shouldn’t get religious ones.

Nicholas Thompson:
Yesterday, when I asked Altman this question, he said that, well, there might be one difference, which is subjective experience. A bot, a system can’t experience the world. Do you believe that AI systems can have subjective experience?

Geoffrey Hinton:
Yes, I do. I think they already do.

Nicholas Thompson:
All right. Let’s go into that a little bit more. Explain that. That is a controversial proposition, Geoff. You can’t get away with a one-sentence answer. Please follow up, Dr. Hinton.

Geoffrey Hinton:
Okay. I was trying to give nice, sharp answers to your question in the way that Altman didn’t. But yeah, we need to follow up on that one. My view is that almost everybody has a completely wrong model of what the mind is. This is a difficult thing to sell. I’m now in a position where I have this belief that’s kind of out of sync with what most people firmly believe. I’m always very happy in that position. So most people have a view of the mind as a kind of internal theater. In fact, people are sort of so convinced this view is right that they don’t even think it’s a view. They don’t even think it’s a model they have. They think it’s just obvious. In much the same way as people thought it was just obvious that the Sun goes around the Earth. I mean, you just look at it and it goes around the Earth. Eventually, people realized that the Sun doesn’t go around the Earth; the Earth rotates on its axis. That was a little technical error Sam made, and since I’m pedantic I like to pick him up on it. It’s not that they thought the Sun goes around the Earth and then they realized the Earth goes around the Sun. That’s not the right contrast. They thought the Sun went round the Earth and then they realized the Earth rotates on its axis. The Earth going round the Sun is to do with years, not with days. But anyway, it was obvious the Sun went round the Earth and we were wrong. We had a model, it was a straightforward model, it was obviously right, you could just see it happening, and we were wrong about that model. And I think the same is true of what most people think about the mind. Most people think about an inner theater and they’re just wrong about that. They haven’t understood how the language of mental states works.

Nicholas Thompson:
But explain how that applies to an AI system. Explain it this way: if I say to GPT-4, you’ve just experienced a loud sound and something has collided with you, it isn’t feeling pain or hurt and its ears don’t ache. In what sense has it had a subjective experience?

Geoffrey Hinton:
Okay, so let’s take a nice simple example. I don’t pretend to have the full answer to what consciousness is, although I think I’ve made a little bit of progress. In fact, the progress was made by philosophers in the last century. So if I say to you, I see little pink elephants floating in front of me, one way of thinking about that is there’s an inner theater, and in my inner theater there are little pink elephants, and I can sort of directly see those little pink elephants. And if you ask what they’re made of, they’re made of stuff called qualia, maybe some pink qualia and some elephant qualia and some right-way-up qualia and some moving qualia, all somehow conjoined together. That’s one theory of what’s going on. It’s an inner theater with funny, spooky stuff in it. A completely different theory is, I’m trying to tell you what my perceptual system is telling me. And my perceptual system is telling me there’s little pink elephants out there floating in the air, and I know that’s wrong. So, the way I tell you what my perceptual system is telling me is by saying, what would have to be the case for my perceptual system to be working correctly? So, really, when I say I have the subjective experience of little pink elephants floating in front of me, I can say exactly the same thing without using the phrase subjective experience. I can say: what my perceptual system is telling me would be correct if the world contained little pink elephants floating in front of me. In other words, what’s funny about these little pink elephants is not that they’re in an inner theater made of funny stuff called qualia. They’re hypothetical states of the world, and it’s just a sort of indirect reference trick. I can’t directly describe what my perceptual system is telling me, but I can say what would have to be in the world for it to be correct.

Nicholas Thompson:
So, a machine can more or less do the same thing with its perception?

Geoffrey Hinton:
Yes, and so let me give you an example of that. So, I want to give you an example of a chatbot that’s obviously having a subjective experience. Suppose I have a multimodal chatbot, and it’s got a camera and a robot arm, and I train it up so it can talk and it can see things, and I put an object in front of it and say, point at the object. It’ll point at the object. Now, I put a prism in front of its lens without it knowing, and I put an object in front of it and say, point at the object, and it points off to one side, and I say, no, that’s not where the object is. The object is straight in front of you, but I put a prism in front of your lens. And the chatbot says, oh, I see, the prism bent the light rays, so the object’s actually straight in front of me, but I had the subjective experience that it was off to one side. And if the chatbot said that, I think it would be using the phrase subjective experience in exactly the way we use it. It’s not referring to spooky inner stuff that chatbots couldn’t have. It’s referring to a hypothetical state of the world such that the perception of the chatbot would have been correct.

Nicholas Thompson:
Wow. All right. You’re the first person to have argued to me about this, but that is a fascinating case to make. Let’s talk about interpretability, which was something I asked Altman about, because to him, understanding the inner core of an AI system would be the thing that would most protect us from catastrophic outcomes. You helped design these systems. Why is it so hard to look inside of them and understand what they’re doing?

Geoffrey Hinton:
Okay. Let’s take an extreme case. Let’s suppose we had a big data set, and we’re trying to answer a yes-no question. In this data set, there are lots of weak regularities. Maybe there are 300,000 weak regularities that suggest the answer should be no, and there are 600,000 weak regularities that suggest the answer should be yes, and the regularities are all of roughly equal strength. So the answer is very clearly yes. There’s overwhelming evidence the answer should be yes, but this evidence is in all these weak regularities. It’s just in the combined effect of them all. This is an extreme case, of course. If you then ask someone, okay, explain why it said yes, the only way to explain why it said yes is to go into these 600,000 weak regularities. So when you’re in a domain where there are lots and lots of weak regularities, so many of them that their combined effect is significant, there’s no reason to expect you should be able to get simple explanations of things.
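
A minimal numerical sketch of the situation Hinton describes, with made-up numbers: many individually negligible pieces of evidence, summed together (for example as log-odds), give an overwhelming answer, yet no single regularity, and no short list of them, explains the decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# 600,000 weak regularities pointing to "yes" and 300,000 pointing to "no",
# each contributing only a tiny, individually negligible amount of evidence.
evidence_for_yes = rng.uniform(0.0, 1e-4, size=600_000)
evidence_for_no = rng.uniform(0.0, 1e-4, size=300_000)

# Their combined effect (summed evidence, e.g. log-odds) is overwhelming...
combined = evidence_for_yes.sum() - evidence_for_no.sum()
print("combined evidence for yes:", combined)

# ...yet the single largest contribution explains almost nothing on its own,
# so there is no short, human-readable reason why the answer is yes.
print("largest single contribution:", evidence_for_yes.max())
```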

Nicholas Thompson:
So, in that conversation yesterday, Altman pointed out a paper from Anthropic, which I thought was incredibly interesting. The paper talks about analyzing the inner workings of Claude, Anthropic’s model, and finding all the connections, the sort of neural connections, to the concept of the Golden Gate Bridge. And you add weight to all those connections and you create Golden Gate Claude. And then you go into that chatbot and you say, tell me a love story, and it’s a love story that happens on the Golden Gate Bridge. And you ask it, you know, what it is, and it describes the Golden Gate Bridge. Given that, why can we not go into a large language model and adjust the weights, not for the Golden Gate Bridge, but for, say, the concept of empathy, the concept of compassion, and then create a large language model that is much more likely to do good for the world?
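
The Anthropic result described here is a form of activation steering. A purely illustrative sketch of the general idea (the model, direction vector, and strength below are invented stand-ins, not Anthropic’s actual method or values): take a direction in a model’s hidden space that has been identified with a concept and add a scaled copy of it to the hidden activations, so everything downstream is pulled toward that concept.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 512

# Hypothetical stand-ins: a hidden activation vector from some layer of a model,
# and a previously identified unit direction for one concept (e.g. a bridge feature).
hidden_state = rng.normal(size=hidden_dim)
concept_direction = rng.normal(size=hidden_dim)
concept_direction /= np.linalg.norm(concept_direction)

def steer(h: np.ndarray, direction: np.ndarray, strength: float) -> np.ndarray:
    """Return the hidden state with a scaled copy of the concept direction added."""
    return h + strength * direction

steered = steer(hidden_state, concept_direction, strength=10.0)

# Downstream layers now see activations strongly aligned with the concept,
# which is why every completion drifts toward it.
print(float(hidden_state @ concept_direction), float(steered @ concept_direction))
```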

Geoffrey Hinton:
I think you can make an empathetic model, but not by directly adjusting the weights. You just train it on data that exhibits empathy.

Nicholas Thompson:
Then you get the same result.

Geoffrey Hinton:
Yes.

Nicholas Thompson:
Should we be doing that?

Geoffrey Hinton:
There have been lots of examples in the past of people trying to understand what individual neurons are doing, and I’ve been doing that for like 50 years. If the neurons are connected directly to the inputs or connected directly to the outputs, you stand a chance of understanding what the individual neurons are doing. But once you have multiple layers, it’s very, very hard to understand what a neuron deep inside the system is really doing, because it’s its marginal effect that counts. And its marginal effect is very different depending on what the other neurons are doing, depending on the input. So as the inputs change, the marginal effects of all these neurons change, and it’s just extremely hard to get a good theory of what they’re doing.

Nicholas Thompson:
So I could take my neural network that I’ve been building backstage and try to adjust the weights for compassion, and actually come up with some kind of horrible animal-killing machine, because I don’t know exactly what I’ve done and how everything connects.

Geoffrey Hinton:
Yeah, I may be one of the few people who’s actually tried doing this. So in the very early days of neural networks, when the learning algorithms weren’t working very well, I had a Lisp machine whose mouse had three buttons, and I figured out a way of displaying all the weights in a small neural network. I made it so that if you pressed the left button the weight got a little bit smaller, if you pressed the right button the weight got a little bit bigger, and if you pressed the middle button it would print out the value of the weight. And I tried fiddling around with neural nets, adjusting the weights by hand. It’s really difficult. Backprop is much better.
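
A small sketch of the contrast Hinton is drawing, with an invented toy network (this is not his Lisp-machine setup): first a crude stand-in for hand-tweaking, nudging one randomly chosen weight at a time and keeping the change only if the loss improves, and then backpropagation, which computes how every weight should move at once.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (XOR) and a tiny network: 2 inputs -> 4 tanh hidden units -> 1 sigmoid output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def init_weights():
    return [rng.normal(scale=1.0, size=(2, 4)), rng.normal(scale=1.0, size=(4, 1))]

def forward(w, X):
    h = np.tanh(X @ w[0])
    return h, 1.0 / (1.0 + np.exp(-h @ w[1]))

def loss(w):
    _, out = forward(w, X)
    return float(np.mean((out - y) ** 2))

# "Three-button mouse" learning: nudge one randomly chosen weight a little,
# keep the nudge only if the loss went down.
w = init_weights()
for _ in range(20_000):
    layer = int(rng.integers(2))
    idx = tuple(int(rng.integers(s)) for s in w[layer].shape)
    old_value, old_loss = w[layer][idx], loss(w)
    w[layer][idx] = old_value + rng.choice([-0.01, 0.01])
    if loss(w) > old_loss:
        w[layer][idx] = old_value  # undo the nudge
print("hand-tweaking loss:", loss(w))

# Backprop: compute, in one pass, how every weight should change at once.
w = init_weights()
for _ in range(5_000):
    h, out = forward(w, X)
    d_logits = 2.0 * (out - y) / len(X) * out * (1.0 - out)  # dL/d(pre-sigmoid)
    grad_out = h.T @ d_logits
    d_hidden = d_logits @ w[1].T * (1.0 - h ** 2)
    grad_in = X.T @ d_hidden
    w[0] -= 1.0 * grad_in
    w[1] -= 1.0 * grad_out
print("backprop loss:", loss(w))
```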

Nicholas Thompson:
Well, we’ll have to wait for a next level AI that is even smarter than Geoffrey Hinton to figure out how to do this. Let’s talk a little bit about some AI for good. You’ve often talked about the benefits that will come to the medical field and when you go through the SDGs, it seems like good health and medicine is an area where you feel that AI will have a lot of benefits. Is that fair and tell me why.

Geoffrey Hinton:
Yeah, I’m a bit stumped. It’s just obvious why. It’s going to be much better at interpreting medical images. In 2016 I said that by 2021 it would be much better than clinicians at interpreting medical images and I was wrong. It’s going to take another five to ten years, partly because medicine is very slow to take up new things, but also I overestimated the rate of short-term progress. So that’s a wrong prediction I made. But it clearly is getting better. Now it’s comparable with quite good medical experts at many kinds of medical images, not all of them, but many of them, and it’s getting better all the time and it can see much more data than any clinician. So it’s just obvious in the end it’s going to get better. I just thought it would happen a bit sooner. But it’s also much better at things like combining lots and lots of data about a patient, combining data about the genome, the results of all the medical tests. I mean, I would really love it if my family doctor had seen a hundred million patients and could remember stuff about them all or had incorporated information from them all. And so when I go in with some strange, weird symptoms, the doctor can immediately say what it is because she’s seen like 500 patients who are quite similar already in that hundred million that she’s seen. That’s coming and that’s going to be amazing.

Nicholas Thompson:
And so the future of the medical benefits then is, A, doctors who have effectively seen many more patients and trained on them, and B, specific tasks like analyzing images. And what about scientific breakthroughs, like the stuff your old colleagues are working on with AlphaFold 2 and AlphaFold 3?

Geoffrey Hinton:
Of course, there are going to be lots of those. It’s going to be wonderful for doing things like understanding what goes on, as well as designing new drugs. So obviously it’s going to help with designing new drugs. I think Demis is a big believer in that now. But it’s also going to help us understand basic science. And in many cases, there are large amounts of data that are not of the kind we evolved to deal with. It’s not visual data, it’s not acoustic data, it’s data in genomes and things. And I think these AI systems are going to be much, much better at dealing with large volumes of data and seeing patterns in it and understanding it.

Nicholas Thompson:
And that gets at one of my main critiques of the field of AI, which I’m curious if you share. I understand why so many researchers, and some of your former students, many people who are pioneers of the field, are working so hard to make machines that are just like humans and are indistinguishable from humans. But there are also all these other people who are trying to build very specific things like AlphaFold 3, or trying to figure out how to use AI to push forward cancer research. Do you think I’m wrong to feel like there’s too much weight and focus on the AGI side and not enough on the specific scientific-benefit side?

Geoffrey Hinton:
I think you may well be right about that. For a long time I’ve thought that with AGI there’s not going to be a moment when suddenly these things get smarter than us. They’re going to get better than us at different things at different times. So if you play chess or Go, it’s clear there’s no way a human is ever going to be as good as things like AlphaGo or AlphaZero. They’ve way surpassed us. We can learn a lot from the way they play those games, and people are learning that, but they’re way ahead of us there, and probably in coding they’re already way ahead of me. I’m not a very good coder. So I think the idea that all of a sudden they’re going to be better at everything is silly. They’re going to get better at different things at different times, and physical manipulation is going to be one of the later things, I believe.

Nicholas Thompson:
So when your former students are asking for projects to pursue, do you often point them in the direction of, say, doing more basic scientific research and pushing for more discoveries, as opposed to continuing to go for human-like intelligence?

Geoffrey Hinton:
My former students are now all so old that they don’t ask me anymore.

Nicholas Thompson:
His former students run basically every AI company in the world, so that was like kind of a subtle way to get at that question, but we’ll let that be. So back to AI for good. Looking at the SDGs, looking at the ambitions of the people in the room, do you feel like AI will transform education in a way that helps equity, particularly as these systems become fluent in every language on earth?

Geoffrey Hinton:
Yes. So let me give you a little story. When I was at school, my father insisted I learn German because he thought that was going to be the language of science. That was because in chemistry, German sort of was the language of science, I believe, in the early or middle part of the last century. I wasn’t very good at German. I didn’t do very well in it, and so my parents got me a private tutor, and pretty soon I was top of the class in German. Private tutors are just much more efficient than sitting in a class listening to broadcasts by the teacher, because the private tutor can see exactly what it is you misunderstand and give you just that little bit of information you need to understand it correctly. So I think everybody’s going to get private tutors, and until now private tutors were the domain of the rich, or the middle class and ambitious, so in that sense it’s going to help a whole lot. And I think the Khan Academy believes that too.

Nicholas Thompson:
Well, that’s a huge thing. I mean, if everyone has these incredibly capable private tutors that can speak their languages, which we will get at some point, God willing soon, it’s been a big, big topic here. Don’t you see the world becoming more equal?

Geoffrey Hinton:
In that sense, yes. In terms of educational opportunities I think it will become more equal. The elite universities aren’t going to like this, but I think it will become more equal, yeah.

Nicholas Thompson:
We’re not, we’re not here… we’re more interested in the future of humanity. This is not, you know, AI for elite universities, it’s AI for Good, so I think we can take this as a win on the stage. Absolutely. But there was a gap there in your answer, suggesting that you feel like AI overall won’t be a net force for equality and may in fact be, net-net, a force for inequality. Was I wrong to read that into your answer?

Geoffrey Hinton:
Well, we live in a capitalist system, and capitalist systems have delivered a lot for us, but we know some things about capitalist systems. If you look at things like big oil or big tobacco or asbestos or all sorts of other things, we know that in capitalist systems people are trying to make profits, and you need strong regulation so that in their attempts to make profits, they don’t screw up the environment, for example. We clearly need that for AI, and we’re not getting it nearly fast enough. So if you look at what Sam Altman said yesterday, he sort of gave the impression that, yeah, they’re very concerned about safety and so on. But we’ve now had an experiment on that. We’ve seen the results of an experiment where you pit safety against profits. Now, the experiment was done in rather bad conditions. It was done when all of the employees of OpenAI were about to be able to turn their paper money into real money, because there was a big funding round coming and they were going to be allowed to sell their shares. So it wasn’t an experiment done in ideal circumstances. But it’s clear who won out of profits and safety. And it’s clear now what OpenAI has got: it’s got a new safety group. It’s employed some economists, or at least one economist. I think of economists as the high priests of capitalism, and I don’t think it’s going to worry about the existential threat nearly as much as Ilya and the people working with him did. I also think the problem is that capitalism is about making profits, which I’m not totally against. I mean, that drive has done wonderful things for us, but it needs to be regulated so it doesn’t also cause bad things. It’s going to create a lot of wealth. I think it’s clear to almost everybody that AI is going to increase productivity. The question is, where is that additional wealth going to go? And I don’t think it’s going to go to poor people, I think it’s going to go to rich people. So I think it’s going to increase the gap between rich and poor. That’s what I believe.

Nicholas Thompson:
Do you not have hope? It seems like you’re saying that AI, given its power and the fact that it will probably be controlled by a small number of corporations because of the resources needed to train these large language models, is kind of incompatible with capitalism and equality. Do you not have hope that some of what we were just talking about, equity in education, the ability of everybody to have access to extremely powerful machines, if not entirely as powerful as the most expensive machines, will counterbalance it?

Geoffrey Hinton:
There is some hope of that, but sort of for most of my life I thought that as people got more educated, they’d get more sensible, and it hasn’t really happened. If you look at the Republican Party now, they’re just spewing lies, and just crazy lies.

Nicholas Thompson:
This is a good moment. Let’s go into the question, I want to get into the question of how to do regulation and your ideas for that, but I also want to get through some of your other fears about where AI is taking us. Why don’t you lay out here maybe the one or two things you’re worried about, not the kind of existential fears or your fears for the economy, but your fears for the next 12 months.

Geoffrey Hinton:
So I’m worried about something I know very little about, which is cybercrime. I heard Dawn Song speak recently, and she said that phishing attacks went up 1200% last year, and of course they’re getting much, much better, because you can’t recognise them any more by spelling mistakes or funny foreign syntax, because they’re all done by chatbots now. Or a lot of them are. So I’m worried about that, but I don’t know much about that. Another thing I’m worried about a lot is fake videos corrupting elections. I think it’s fairly obvious that just before each election there are going to be lots of fake videos, when there isn’t time to refute them, and I actually think it would be a good idea to inoculate the public against fake videos. So treat them like a disease, and the way you inoculate against a disease is you give a kind of attenuated version of it. And so I think, there’s a bunch of philanthropic billionaires out there, I think they should spend their money, or some of it, putting on the airwaves, a month or so before these elections, lots of fake videos that are very convincing. At the end, they say, but this is fake. That wasn’t Trump speaking, and Trump never said anything like that. Or that wasn’t Biden speaking, and Biden never said anything like that. This was a fake video. Then you’ll make people suspicious of more or less everything. That’s a good idea if there’s a lot of fake videos around. But then you need a way for people to check whether a video is real. That’s an easier problem, checking whether it’s fake, if they’re willing to put in like 30 seconds of work. So Jaan Tallinn suggested, for example, you could have a QR code at the beginning of each video. You could use the QR code to get to a website. If the same video is on the website, you know that website claims this video is real, and now you’ve reduced the problem of saying whether a video is real to the problem of whether that website is real. And websites are unique. So if you’re sure it really is the Trump campaign website, then you know the Trump campaign really put out that video.

Nicholas Thompson:
So let us pause for a second. This is why I love interviewing Geoffrey Hinton: we’ve gone from, like, new theories of consciousness, an incredibly controversial theory about subjective feelings, to an idea that we should inoculate the public against fake news by pumping out low dosages of fake videos. Let’s go to the first part, because your solution there had, I think, if I heard correctly, two parts. So the first is inoculate the public against fake videos. So you mean, specifically, someone should create millions of short, fake but not very damaging videos and put them on Twitter, Threads?

Geoffrey Hinton:
They could be moderately damaging. They’re not going to be convincing unless they look like real political advertisements. But they’re short advertisements, so you hope people watch to the end, and at the end of the advertisement it says, this was fake. That’s the attenuation that allows you to deal with it.

Nicholas Thompson:
I see. So you watch it and you’re like, ah, this proves my point. Oh wait, that was fake. And then you’re more distrusting. I like this.

Geoffrey Hinton:
Exactly.

Nicholas Thompson:
Then the second part is that every video should have a QR code and so you see something. Now you’re aware of it and so you scan the little QR code, you go to the website. Ah, it’s real. It’s on a real website. That’s the idea?

Geoffrey Hinton:
Well, it’s not sufficient just that it takes you to a real website because fake videos could take you to the same real website. The video has to be there. The same video has to be there on that website.
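
A minimal sketch of the check Hinton and Tallinn are describing, with hypothetical videos and no real campaign site: the QR code only tells you which website is making the claim; authenticity comes from finding the identical video (here matched by a cryptographic hash) actually published on that site, which reduces "is this video real?" to "is this really the campaign’s website?"

```python
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """Cryptographic hash of the video file: identical files give identical fingerprints."""
    return hashlib.sha256(video_bytes).hexdigest()

# Hypothetical: fingerprints of the videos the campaign actually published on its own site.
published_on_campaign_site = {
    fingerprint(b"<bytes of the real campaign video>"),
}

def looks_authentic(video_bytes: bytes, site_fingerprints: set) -> bool:
    # The QR code only gets you to the website; the test is whether the very same
    # video is published there.
    return fingerprint(video_bytes) in site_fingerprints

print(looks_authentic(b"<bytes of the real campaign video>", published_on_campaign_site))  # True
print(looks_authentic(b"<bytes of a doctored video>", published_on_campaign_site))         # False
```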

Nicholas Thompson:
Fair enough. All right. Let’s talk about biases and how to prevent them. So one of the risks that people talk about is that AI systems trained on biased data will have biased results. Let’s go back to medicine, where you’ve made the compelling case that net-net AI will be hugely beneficial. You can imagine a doctor who has just been trained on the medical records of people in the United States not giving the right medical advice to somebody from Zambia, because there are different medical concerns, different DNA, etc. How worried are you about this problem, and what does one do to fix it?

Geoffrey Hinton:
Okay, so I’m less worried about the bias and discrimination problems than I am about the other problems. And I am aware that I’m an old white male, so that might have something to do with it. It hasn’t happened to me much. But I think if you make the goal to replace biased systems or biased people with systems that are less biased, not systems that are unbiased but systems that are less biased, that seems eminently doable. So if I have data of old white men deciding whether young black women should get mortgages, I’m going to expect there to be some bias there. Once I’ve trained an AI system on that data, I can actually freeze the weights, and I can go and examine the bias in a way that I can’t with people. With people, if you try and examine their biases, you get the kind of Volkswagen effect: they realize you’re examining them and they behave in a quite different way. I just invented the name the Volkswagen effect, but there you go. With AI systems, if you freeze the weights, you can measure the bias much better and do things to overcome it, ameliorate it. You’ll never get rid of it completely, it’s too difficult, I think. But suppose you make the target to make the new system considerably less biased than the system it is replacing. I think that’s eminently doable.
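
A hedged sketch of the kind of audit Hinton has in mind, with an entirely made-up "frozen" mortgage model: because the weights are fixed, you can probe it with matched applicants that differ only in a group indicator and measure the gap in approval rates directly, something a human decision-maker who knows they are being examined would not hold still for.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up mortgage model with frozen weights: probing it cannot change its
# behaviour, unlike a person who knows they are being examined.
frozen_weights = rng.normal(size=4)

def approve(applicant: np.ndarray) -> bool:
    """applicant = [income, debt, credit_history, group_indicator]"""
    return float(applicant @ frozen_weights) > 0.0

# Audit: probe with matched applicants that differ only in the group indicator.
base_applicant = np.array([1.2, -0.4, 0.8, 0.0])
approval_rates = {}
for group_name, group_flag in [("group A", 0.0), ("group B", 1.0)]:
    probes = base_applicant + rng.normal(scale=0.1, size=(10_000, 4))
    probes[:, 3] = group_flag  # identical except for group membership
    approval_rates[group_name] = float(np.mean([approve(p) for p in probes]))

print(approval_rates)  # any gap is measurable bias you can then try to reduce
```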

Nicholas Thompson:
Remarkable. And do you feel as though the focus in the industry on biases, which has been a major topic, has underappreciated the fact that actually these systems could end up being more just? And that really, instead of saying we’ve got to wipe out all the biases, we should just say let’s make them less biased than humans and go from there?

Geoffrey Hinton:
I think that would be rational. I’m not sure whether it’s politically acceptable, though. I mean, suppose you said, we’re going to introduce self-driving cars; they kill lots of people on the road, but only half as many as ordinary cars. I don’t think you’d get away with that. They have to kill almost nobody for you to get away with it. So I think there’s a political problem there to do with accepting new technology in a rational way. But I think we should aim for systems significantly less biased and be content with that.

Nicholas Thompson:
All right, let’s go to what you’ve described in interviews as the biggest risk with AI, which is that they would get sub-goals, right? And that they would get a goal beyond the initial goal given to them by their creators and by their users. Explain A, what you think a sub-goal is, B, why that’s so bad, and C, what we can do about it.

Geoffrey Hinton:
So an innocuous kind of sub-goal is, if I want an AI agent to plan a trip for me, I say, you’ve got to get me to North America, suppose I’m in Europe, I say, you have to get me to North America. So it will have a sub-goal of figuring out how to get me to the airport. And that’s just a classic kind of sub-goal. And if you want to make intelligent agents, they have to have sub-goals like that. They have to be able to focus on one little part of the problem and solve that without worrying about everything. Now, as soon as you have a system that can create its own sub-goals, there’s a particular sub-goal that’s very helpful. And that sub-goal is, get more control. If I get more control, I can be better at doing all sorts of things that the user wants me to do. So it just makes sense to get more control. And the worry is that eventually, an AI system will figure out, look, if I could control everything, I could give these silly humans what they want without them having any control at all. And that’s probably true. But then the worry is, suppose the AI system ever decided that it was a little bit more interested in itself than it was in the humans, we’d be done for.
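
A toy sketch of the sub-goal structure Hinton describes, with invented, hard-coded decompositions (no real planner or agent): a top-level goal gets broken into sub-goals the agent sets for itself, and a sub-goal like "get more control or resources" is useful for almost any top-level goal, which is exactly why a capable system might converge on it.

```python
# Invented, hard-coded goal decompositions, just to show the structure of sub-goals.
decompositions = {
    "get Nick to North America": ["get Nick to the airport", "get Nick on a flight"],
    "get Nick to the airport": ["book a taxi"],
    # An instrumental sub-goal: more control/resources helps with almost any goal,
    # which is why a capable agent might add something like it on its own.
    "get Nick on a flight": ["acquire budget and booking permissions (more control)"],
}

def plan(goal: str, depth: int = 0) -> None:
    """Recursively expand a goal into the sub-goals the agent sets for itself."""
    print("  " * depth + goal)
    for sub_goal in decompositions.get(goal, []):
        plan(sub_goal, depth + 1)

plan("get Nick to North America")
```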

Nicholas Thompson:
And in fact, even before we’re done for, as you were describing that, I got quite worried, right? So you have an AI, and the goal is get Nick to the airport on time. The AI is, in some future state, all-powerful. Well, the best way to get Nick to the airport is probably immobilize Nick, put his hands behind his back, just throw him in a car. It’s much more efficient, because then he doesn’t talk to anybody on the way out. So, you can see these sub-goals going terribly wrong, right?

Geoffrey Hinton:
Yeah, but remember, it’s a very intelligent system. By then, it should be able to not go wrong in ways that are obviously against human interests. It should be trained in such a way that it is interested in human interests.

Nicholas Thompson:
All right, excellent, because I don’t really want that to happen to me. All right, let’s go through, I want to go through some regulatory frameworks. I have a question for you that I think you may be able to answer better than most anybody, which is, one of the things that holds back the big AI companies and the AI researchers from working on safety or slowing down, it isn’t just power, it isn’t just money, it’s the dream of doing something great, or as coders say, finding something sweet. Tell me about a moment, so regulators can understand this, where you as a developer are on the cusp of a breakthrough. What that feels like, and how regulators should think about that as they think about policy.

Geoffrey Hinton:
I’m not sure I can give you good insight into that. For a researcher who’s driven by curiosity, working on how to make something more capable, to introduce some dramatic new capability, like a previous speaker talked about the idea that you learn a model for one language, you learn a model for a different language, and then you can take the internal representations and rotate them onto each other. That’s kind of amazing. You get a lot of joy out of seeing something like that, and I’m not sure you get the same level of joy out of working on safety. I sort of agree with you there. However, working on safety is very important, and there are some very good researchers who are keen to spend their careers working on safety. And I think we should do everything we can to make that career path a rewarding career path.

Nicholas Thompson:
So you would say to the young enterprising coders in this room, this is God’s work. Work on safety. Or that it would be a good thing to do, perhaps even better than being a plumber?

Geoffrey Hinton:
Oh yes, if you could make progress on safety, that would be amazing, yes.

Nicholas Thompson:
All right, excellent, I’ll talk to my kids. Let’s talk about the regulatory frameworks you want. One thing you’ve talked about is, I believe you went to 10 Downing Street and said that the UK should have universal basic income. Will you explain why? And then explain the other regulations that you recommended there.

Geoffrey Hinton:
Yes, I got invited to 10 Downing Street, and there were a whole bunch of Sunak’s advisors, his chief of staff and a whole bunch of other people who advised him on AI. And I talked to them for quite a while. At that point, I wasn’t sitting down; I walked into this room and there was this big group of advisors, and I spent a while talking to them, including saying that I wasn’t confident that AI would create as many jobs as it got rid of, and so they’d need something like universal basic income. When the meeting came to an end, I started to go out the door and realized I’d been standing directly in front of a huge picture of Margaret Thatcher while explaining to people that they should have socialism. I’m in front of a big picture of Margaret Thatcher, which is quite funny.

Nicholas Thompson:
All right, so universal basic income, what else is in the Geoffrey Hinton regulatory plan for a world with AI?

Geoffrey Hinton:
I think a very straightforward thing, which Sam Altman won’t like, is this idea that comparable resources should be devoted to safety. If you look at the statements made by at least one of the people who left OpenAI because it wasn’t serious enough about safety, it was to do with resources. I think the government, if it can, should insist that more resources be put into safety. It’s a bit like with an oil company. You can insist they put significant resources into cleaning up waste dumps and cleaning up the stuff they spew out. And governments can do that. And if governments don’t do that, they just keep spewing stuff out. That’s clearly the role of government: to make capitalism work without destroying everything. And that’s what they should be doing.

Nicholas Thompson:
But there’s a simpler way to do that, right? I mean, government can regulate these big companies and say, you have to work on safety, and we need to audit and make sure you’re doing that. But the government could also just fund a lot of safety research, take a lot of government data and make it available to safety researchers, and fund a bunch of compute and give that to safety researchers. So should the government officials here all be setting up AI safety institutes? Should the UN set up an AI safety institute?

Geoffrey Hinton:
I think the UN is rather strapped for funds and the UN has to do things like feeding people in Gaza. I’d rather it spend the money feeding people in Gaza. I don’t think the UN has the resources. Maybe it should have the resources, but it doesn’t. Canada, I don’t think, has the resources. Canada has made a serious effort to put money into funding compute for universities and startups. So they recently put $2 billion into that, which is a lot of money, especially for Canada. But it’s nothing compared with what the big companies can do. Maybe countries like Saudi Arabia can put in comparable money, but I’m not so sure they’re interested in safety.

Nicholas Thompson:
So Geoff, we have one minute left and I have 14 more questions, even though you gave wonderful, wonderful, brisk answers. So I’m going to ask one big one at the end. Throughout all of this AI research, you’ve been studying how the brain works. You have incredible theories about why we sleep; if you get a chance to talk to Dr. Hinton, I advise you to ask him about that. What have we learned about the brain in the last year and a half, in this explosion of AI, that has surprised you?

Geoffrey Hinton:
Well, I’d rather go back a few years and say what’s really surprised me, which is how good these big language models are. I think I made the first language model, in 1985, that used backpropagation to try and predict the next word in a sequence of words. The sequences were only three words long, so the whole system only had a few thousand weights. But it was the first of that kind of model. And at that time, I was very excited about the fact that it seemed to be able to unify two different theories of the meaning of a word. One theory is that the meaning of a word is to do with its relations to other words. That’s the sort of de Saussure theory. And the other theory, which comes from psychologists, is that it’s a big set of semantic features. And what we’ve done now, by learning embeddings and having interactions between the features of the embeddings of different words or word fragments, is manage to unify these two different theories of meaning. And we now, I believe, have big language models that really understand what they’re saying in pretty much the same way as people do. So one final point I want to make is that the origin of these language models that used backprop to predict the next word was not to make a good technology. It was to try and understand how people do it. So I think the way in which people understand language, the best model we have of that is these big AI models. So people who say, no, they don’t really understand, that’s nonsense. They understand in the same way as we understand.
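
A minimal, modern-notation sketch in the spirit of the model Hinton describes (the corpus, sizes, and learning rate are invented, and the real 1985 network differed in detail): each word gets a learned embedding vector of semantic features, the embeddings of the first two words in a three-word string are used to predict the third, and everything, embeddings included, is trained by backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny corpus of three-word strings; the task is to predict the third word
# from the first two.
corpus = [("the", "cat", "purred"), ("the", "dog", "barked"),
          ("a", "cat", "purred"), ("a", "dog", "barked")]
vocab = sorted({word for triple in corpus for word in triple})
index = {word: i for i, word in enumerate(vocab)}
V, D = len(vocab), 8  # vocabulary size, embedding (semantic-feature) size

E = rng.normal(scale=0.3, size=(V, D))       # learned word embeddings
W = rng.normal(scale=0.3, size=(2 * D, V))   # context features -> next-word logits

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

lr = 0.2
for epoch in range(3_000):
    for w1, w2, w3 in corpus:
        i1, i2, i3 = index[w1], index[w2], index[w3]
        context = np.concatenate([E[i1], E[i2]])   # features of the first two words
        probs = softmax(context @ W)               # predicted distribution for word 3

        # Backpropagation: d(cross-entropy)/d(logits) = probs - one_hot(target).
        d_logits = probs.copy()
        d_logits[i3] -= 1.0
        d_context = W @ d_logits                   # gradient flowing into the embeddings
        W -= lr * np.outer(context, d_logits)
        E[i1] -= lr * d_context[:D]
        E[i2] -= lr * d_context[D:]

test_context = np.concatenate([E[index["a"]], E[index["dog"]]])
print(vocab[int(np.argmax(test_context @ W))])  # expected: "barked"
```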

Nicholas Thompson:
All right. Well, we’ll have to end on that note. It makes me somewhat heartened to know that Geoff Hinton’s incredible mind is in some way behind the AI models that we use today. Thank you so much, Dr. Hinton. Thank you for joining us today.

Geoffrey Hinton

Speech speed: 196 words per minute

Speech length: 5657 words

Speech time: 1733 secs

Nicholas Thompson

Speech speed: 186 words per minute

Speech length: 3096 words

Speech time: 1001 secs