Geoffrey Hinton is one of the world’s biggest minds in artificial intelligence. He won the 2024 Nobel Prize in Physics. Where does he think AI is headed?
Guest
Geoffrey Hinton, Winner of the 2024 Nobel Prize in Physics with John Hopfield for “foundational discoveries and inventions that enable machine learning with artificial neural networks.” Winner, alongside two collaborators, of the 2018 Turing Award, often called “the Nobel Prize of computing.” Worked for Google’s deep-learning AI team from 2013 to 2023. Professor Emeritus at the University of Toronto.
Transcript
Part I
MEGHNA CHAKRABARTI: In 2024, Geoffrey Hinton won the Nobel Prize in Physics, a category that somewhat amused him, as we’ll hear about in just a moment. The Nobel Committee gave him the award for, quote, foundational discoveries and inventions that enable machine learning with artificial neural networks.
He shared the honor with John Hopfield. Earlier, in 2018, Hinton and two other longtime collaborators, Yoshua Bengio and Yann LeCun, also received the Turing Award, which is often called the Nobel Prize of Computing, for their work on neural networks. Hinton’s work is so foundational that he’s considered to be the godfather of a civilization-changing technology that emerged from artificial neural networks.
In other words, he’s called the Godfather of AI. From 2013 to 2023, Professor Hinton worked for Google’s Deep Learning Artificial Intelligence team. He’s currently professor emeritus at the University of Toronto. And given his illustrious position in the development of artificial intelligence, your ears perk up when you hear that Geoffrey Hinton also says there is a chance that the very thing he contributed to creating could destroy humanity itself. And he joins us now. Professor Hinton, welcome to On Point.
GEOFFREY HINTON: Hello.
CHAKRABARTI: I’ve done something a little wicked in that I’ve teased listeners about the end of humanity, which I will actually ask you about later in the show. So stick with us, folks. Because before we get to the doomsday scenario, I’d love to spend some time understanding your work better, so that it helps us take your potential predictions here with much greater seriousness. I understand that at the beginning of your career, someone once called neural networks an unglamorous subfield. Do you think that’s a fair description?

HINTON: Maybe it was back then.
CHAKRABARTI: Back then. Why?
HINTON: Most people doing artificial intelligence and most people doing computer science thought it was nonsense. They thought you’d never be able to learn complicated things if you started with a neural network with random connection strengths in it.
They thought you had to have a lot of innate structure to learn complicated things. They also thought that logic was the right paradigm for intelligence, not biology. And they were wrong.

CHAKRABARTI: So when we say neural networks, what do we mean in terms of computing? Because obviously in the brain, a rudimentary description of that is just the ways in which the billions of neurons in our brains are connected. So how does that translate into the world of computing?

HINTON: We can simulate a brain on a computer. We can let it have a lot of pretend neurons with pretend connections between them. And when the brain learns, it changes the strengths of those connections. So the basic problem in getting neural networks to work is, how do you decide whether to increase the strength of a connection or decrease the strength?
If you could figure that out, then you could make neural networks learn complicated things, and that’s what’s happened.
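Hinton’s framing of the learning problem, deciding whether each connection strength should go up or down, can be made concrete with a small sketch. The Python below is purely illustrative and is not the procedure Hinton or modern systems use; the weights, inputs, target, and step size are made-up numbers. It simply nudges each of two connection strengths in whichever direction reduces the error of a single simulated neuron.

```python
# A toy illustration (not Hinton's actual method): decide whether to nudge
# one connection strength up or down by checking which direction reduces error.
# The "network" here is a single simulated neuron with two inputs.

def neuron_output(weights, inputs):
    # Weighted sum of inputs -- the "charge" flowing into the neuron.
    return sum(w * x for w, x in zip(weights, inputs))

def error(weights, inputs, target):
    # How far the neuron's output is from what we wanted.
    return (neuron_output(weights, inputs) - target) ** 2

weights = [0.1, -0.3]           # connection strengths, start small and arbitrary
inputs, target = [1.0, 0.5], 1.0
step = 0.01

for _ in range(200):
    for i in range(len(weights)):
        base = error(weights, inputs, target)
        weights[i] += step                   # try increasing this connection
        if error(weights, inputs, target) > base:
            weights[i] -= 2 * step           # increasing made things worse: decrease instead

print(weights, neuron_output(weights, inputs))  # the output drifts toward the target
```

Real systems answer the same up-or-down question far more efficiently, using backpropagation to adjust billions of connection strengths at once rather than testing them one at a time.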
CHAKRABARTI: But in terms of increasing the strength in the brain of a connection, strength meaning what? There are more neurons devoted to that series of connections? I’m actually just trying to understand this at a fundamental level.
Go ahead.
HINTON: Okay, I’ll give you a sort of one-minute description of how the brain works.
CHAKRABARTI: Yes, please.
HINTON: You’ve got a whole bunch of neurons. A few of them get input from the senses, but most of them get their input from other neurons. And when a neuron decides to get active, it sends a ping to other neurons.
And when that ping arrives at another neuron, it causes some charge to go into the neuron. And the amount of charge it causes to go in depends on the strength of the connection. And what each neuron does is look to see how much input it’s getting. And if it’s getting enough input, it becomes active and sends pings to other neurons.
That’s it. That’s how the brain works. It’s just these neurons sending pings to each other, and each time a neuron receives a ping from another neuron, it injects a certain amount of charge that depends on the strength of the connection. So by changing those strengths, you can decide which neurons get active, and that’s how you learn to do things.
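A minimal sketch of the kind of neuron Hinton is describing, for readers who want to see it in code: incoming pings inject charge in proportion to each connection’s strength, and the neuron becomes active only if the total crosses a threshold. The numbers and the threshold value here are illustrative assumptions, not from the interview.

```python
# A minimal threshold neuron: each ping from an active upstream neuron injects
# charge equal to that connection's strength; the neuron fires if the total
# charge crosses a threshold.

def neuron_fires(incoming_pings, connection_strengths, threshold=1.0):
    # Sum the charge injected by the connections whose upstream neuron pinged.
    total_charge = sum(strength
                       for ping, strength in zip(incoming_pings, connection_strengths)
                       if ping)
    return total_charge >= threshold

# Three upstream neurons; the first two are active, the third is silent.
pings = [True, True, False]
strengths = [0.4, 0.7, 0.9]    # learning changes these numbers
print(neuron_fires(pings, strengths))  # True: 0.4 + 0.7 = 1.1 crosses the threshold
```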
CHAKRABARTI: And forgive me, maybe I’m just having a really dense day, but again, the strength of the connection means what? The frequency of the pings? Or the actual level of the charge going across the synapse? So what does that mean?
HINTON: Okay, so when a ping arrives, the synapse is going to inject a certain amount of charge into the neuron that the ping arrives at.
And it’s the amount of charge that gets injected that’s changed with learning.
CHAKRABARTI: I see. Okay. Thank you for explaining that. So then again, in the world of computing, what’s the analogy that increases a computational neural network’s capacity for this?
HINTON: On a digital computer, we simulate that network of neurons.
A digital computer can simulate anything. And we simulate a network of neurons, and then we need to make up a rule for how the connection strengths change, as a function of the activity of the neurons. And that’s what learning is in neural nets. It’s a simulated neural net on a computer with a rule for changing the connection strengths.
CHAKRABARTI: I see. And the simulation itself, we’re just talking about lines of code, which is up in the many trillions now, I understand. But that’s what we’re talking about?
HINTON: No, it’s not trillions of lines of code. We’re talking about not that many lines of code, which are specifying what the learning procedure is.
That is, what the lines of code have to say is: as a function of how the neurons are getting activated, how often they’re activated together, for example, how do we change the connection strengths? That doesn’t require many lines of code. The thing that we have trillions of is connections and connection strengths.
CHAKRABARTI: I see.
HINTON: And so unlike most computer programs, where it’s just lines of software that do stuff, here we have a few lines of software that tell the simulated neural net how to learn. But then what you end up with is all these learned connection strengths, and you don’t have to specify those.
That’s the whole point. It gets those from the data.
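The point that the program is tiny while the learned connection strengths are enormous can be shown with a sketch. The rule below is a simple Hebbian-style rule, strengthening connections between neurons that are active at the same time, echoing Hinton’s “how often they’re activated together” example; it is an illustration only, not the rule used in today’s systems, and the sizes and rate are made up.

```python
import random

# The learning rule is a few lines; the knowledge ends up in the (potentially
# huge) table of connection strengths, which is filled in from data.

num_neurons = 5
strengths = [[0.0] * num_neurons for _ in range(num_neurons)]  # the part that grows huge

def hebbian_update(strengths, activities, rate=0.1):
    # Strengthen connections between neurons that are active at the same time.
    for i in range(len(activities)):
        for j in range(len(activities)):
            if i != j:
                strengths[i][j] += rate * activities[i] * activities[j]

# Feed in patterns of activity; the connection strengths, not the code, absorb the data.
for _ in range(100):
    pattern = [random.choice([0.0, 1.0]) for _ in range(num_neurons)]
    hebbian_update(strengths, pattern)

print(strengths[0][1])  # reflects how often neurons 0 and 1 were active together
```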
CHAKRABARTI: Huh. Okay. Thank you for bearing with me on my rudimentary questions on this, because we’ve done a lot of shows about AI, and I still can’t yet say with fairness that I fully understand how this works, even as it’s changing, in ways both obvious and not so obvious, many aspects of how we live.

But going back to the beginning of your career in computational neural networks, as you said, it was an underappreciated or undervalued area of computer science. What made you want to persist in this area at the time? What fascinated you about it?
HINTON: Obviously the brain has to learn somehow, and the theories around at the time that the brain is full of symbolic expressions and rules for manipulating symbolic expressions just didn’t seem at all plausible. I guess my father was a biologist, so I took a biological approach to the brain rather than a logical approach.
And it’s just obvious that you have to figure out how the brain changes the connection strengths. This was obvious to a number of people early on in computer science like von Neumann and Turing, who both believed in learning in neural nets, but unfortunately, they both died young.
CHAKRABARTI: Turing rather tragically, of course.
Now, since you mentioned your father, can we talk about him for a little bit? Because he was not just a biologist, he was a very celebrated entomologist with a kind of unique view of the world and even a unique view of his family’s place in the world. Can you talk about him a little bit more?
HINTON: I guess if I have to.
He was a not very well-adjusted man. He grew up in Mexico without a mother during all the revolutions. And so he was used to a lot of violence. He was very bright. He went to Berkeley. That was the first time he had formal education. I think he had tutors at home, because his father ran a silver mine in Mexico.
And he was good at biology, but he had very strong and incorrect views about various things.
CHAKRABARTI: Can you tell us more?
HINTON: Okay. He was a young man in the 1930s in Britain. He moved to Britain. He was a Stalinist, which sounds appalling now. It wasn’t that unusual in Britain in the 1930s, and people didn’t at that point know all the awful things Stalin had done.
He had strong political views that were not very acceptable.
CHAKRABARTI: And I won’t press on the political part here for now, Professor Hinton, but how about his passion for the insect world? Can you tell me more about that?
HINTON: Yes, that was the best aspect of him.
He loved insects, particularly beetles. His children always used to say that if we had six legs, he’d have liked us more.
CHAKRABARTI: And? You keep dangling these things in front of me professor, you’ll have to forgive me.
HINTON: And so when I was growing up, at weekends we’d go out to the countryside and collect insects, and I got to learn a lot about insects. He was also interested in lots of other kinds of animals. When I was a kid, I had a pit in the garage where we kept all sorts of things.
At one point, I was looking after 43 different species of animal, but they were all cold-blooded. So there were snakes, and turtles, and frogs and toads and newts, all sorts of … fish, all sorts of cold-blooded animals.
CHAKRABARTI: I have to say, in my frequent conversations with some of the brightest minds, not just in science, but across a number of fields, this is a common theme, professor, that they had parents who either had great passions of their own, which led to somewhat unusual home lives, or those parents also never said no to their children, in terms of experiments that their kids wanted to run.
What was it like growing up in a home where in the garage you were looking after vipers and lizards?
HINTON: You know, when you’re a child, you don’t know what it’s like for other families. So it seemed perfectly normal to me.
CHAKRABARTI: Did you enjoy it?
HINTON: I did enjoy looking after all the animals. I used to get up before school and go into the garden and dig up worms to feed them.
And it was nice observing them.
CHAKRABARTI: I wonder if those observations you think had any impact on how you viewed thinking or processing of information in general. These are non-human creatures that you were spending a lot of time with.
HINTON: I’m not sure it had much impact on how I thought about cognition.
Something that probably had more impact was that I used to make little relays at home, little switches where you can get a current to close a connection, which will cause another current to run through that circuit. I used to make those out of six-inch nails and copper wire and pieces of old-fashioned razor blades.
That probably had more impact on me.
Part II
CHAKRABARTI: Professor Hinton, just one more question about your persistence in your early research regarding neural networks, what kept you going, right? Because as you said, there was a lot of doubt regarding the legitimacy of spending time on this. But what was sufficiently interesting about it or challenging that you kept doing this research and convincing grad students to come join you to do that, as well.
HINTON: I guess there were two main things. One was the brain has to work somehow. And so you obviously have to figure out how it changes the connection strengths, because that’s how it learns. The other was my experience at school. So I came from an atheist family. And they sent me to a private Christian school.
So when I arrived there at the age of seven, everybody else believed in God. And it seemed like nonsense to me. And as time went by, more and more people agreed with me that it was nonsense. So that experience of being the only person to believe in something, and then discovering that actually lots of other people came to believe it too, was probably helpful in keeping going with neural nets.
CHAKRABARTI: More and more of your fellow students change their views on God?
HINTON: Yes, of course, because when they were seven, they all believed what they were told by the scripture teacher and possibly by their parents. By the time they grew up, they realized that a lot of it was nonsense.
CHAKRABARTI: And how many years were you in this school?
HINTON: From the age of seven to the age of seventeen or eighteen.
CHAKRABARTI: Okay. So that is a sustained period of going against the grain. So then as you were doing the research over an additional many years, was there ever a point at which, or multiple points at which you said, perhaps this isn’t the right thing to pursue?
Because either the advancements weren’t coming, the insights weren’t coming, or, I don’t know, even the funding. What were the challenges?
HINTON: Let’s see, there was never a point at which I believed this was the wrong approach. It was just obvious to me this was the right approach. There were points at which it was hard going because not much was working, particularly in the early days when computers were so much slower.
We didn’t realize in the early days that you needed enormous amounts of computing power to make neural networks work. And it wasn’t really until things like the graphics processing units for playing video games came along that we had enough processing power to show that these things really worked well.
So before that, they often had disappointing results.
CHAKRABARTI: So would you say that one of the biggest, perhaps uncelebrated aspects of why, in the past 10 to 15 years, we’ve seen such a leap forward in the capacity of artificial intelligence is, to put it simply, the hardware advancements?
HINTON: It’s not exactly uncelebrated. If you look at what Nvidia’s worth now, it’s worth about $3.5 trillion.
CHAKRABARTI: A point well taken, actually. Sorry. You know what? I stand corrected on that one. Perhaps I was just —
HINTON: You’re not entirely wrong. So in terms of academic awards, it was only recently that one of the big awards went to Jensen Huang, who founded Nvidia.
And Nvidia was responsible for a lot of the progress.
CHAKRABARTI: I think he also has much the same attitude as you, because I’ve seen interviews he’s done where he says if it’s not hard, or if the task doesn’t seem impossible, it’s not worth doing. Okay. So coming to today then, and we’ll talk about your time at Google. Because you did what, develop a company that was then acquired by Google, and you ended up spending many years there before you left.
But how would you describe how artificial intelligence, as we in the general public understand it, how does it learn? Does it learn like the biological neural networks that inspired your initial research?
HINTON: Okay. So at a sort of very fine level of description, there’s obviously a lot of differences.
We don’t exactly know everything about how neurons work. But at a more general level of description, yes, it learns the same way as biological neurons learn. That is, we simulate a neural net, and it learns by changing connection strengths. And that’s what happens in the brain, too.
CHAKRABARTI: But I would say, my understanding is that some people see the machine neural networks as quite different. That a human being learns organically, we simply wander our way through the world, we have experiences, our brain somehow maps out the relationships between those experiences. It’s somewhat abstracted, rather than deliberate as a machine’s learning would be. Is that not correct?
HINTON: No, that’s not.
That’s not correct. That is, the way the machine learns is just as organic as the way we learn. It’s just done in a simulation.
CHAKRABARTI: Explain that, because I was actually just reading last night about people who disagree with that. And say, the fact that machine learning is purely deliberative is one of the reasons why they don’t agree with some of your doomsday scenarios about what would happen if AI continues to develop in the way that it is.
HINTON: Yes, there are people, particularly people who believe in old fashioned symbolic AI, who think this stuff is all nonsense. It’s slightly irritating for them that it works much better than anything they ever produced. And in that camp are people like Chomsky, who think that, for example, language isn’t learned, it’s all innate, and they really don’t believe in learning things in neural networks.
For a long time, they thought that was nonsense, and they still think it’s nonsense, despite the fact that it works really well.
CHAKRABARTI: So I was reading a long interview with you in the New Yorker. And to the New Yorker reporter you said that a way to understand artificial intelligence is this: if, as a human being, I eat a sandwich, my body obviously breaks the sandwich down into various nutrients, thousands of different nutrients. So is my body then made up of those bits of sandwich? And you say no. And that’s important to understand in terms of how something like a modern-day neural network works. Why?
HINTON: Yes. Let me elaborate on that.
When, for example, you’re doing machine translation or understanding some natural language, words come in. And when you make an answer, words come out. And the question is, what’s in between? Is it words in between? And basically, old fashioned symbolic AI thought it was something like words in between, symbolic expressions that were manipulated by rules.
Now what actually happens is, words come in, and they cause activation of neurons. In the computer, we convert words, or word fragments, into big sets of activations of simulated neurons. So the words have disappeared. We now have the activations of the simulated neurons. And these neurons interact, so think of the activation of a neuron as a feature, a detected feature.
We have interactions between features, and that’s where all the knowledge is. The knowledge is in how to convert a word into features, and how these features should interact. The knowledge doesn’t sit around in words, and so when you get a chatbot, for example, and when it learns, it doesn’t remember any strings of words.
Not literally. What it’s learning is how to convert words into features, and how to make features interact with each other, so they can predict the features of the next word. That’s where all the knowledge is. And in that sense, that’s the same sense in which the sandwich is broken down into primitive things like pyruvic acid, and then you build everything out of that.
In the same way, strings of symbols that come in, strings of words, get converted into features and interactions between features.
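For readers who want to see the shape of what Hinton is describing, here is a toy sketch: words are converted into feature vectors, the features of the context interact, and the result is a score for each possible next word. The vocabulary, the feature size, and the random weights are placeholder assumptions; in a real system, learning would set every one of these numbers from data, and the interactions would be far richer.

```python
import random
random.seed(0)

# Words come in, get converted into features, the features interact,
# and the result is a score for every candidate next word.

vocab = ["the", "cat", "sat", "on", "mat"]
num_features = 4

# "How to convert a word into features": one feature vector per word.
embeddings = {w: [random.uniform(-1, 1) for _ in range(num_features)] for w in vocab}

# "How the features should interact": one weight per (feature, feature) pair.
interaction = [[random.uniform(-1, 1) for _ in range(num_features)]
               for _ in range(num_features)]

def next_word_scores(context):
    # Average the feature vectors of the context words -- the words themselves disappear here.
    features = [sum(embeddings[w][k] for w in context) / len(context)
                for k in range(num_features)]
    # Let the features interact with each other.
    mixed = [sum(interaction[k][j] * features[j] for j in range(num_features))
             for k in range(num_features)]
    # Score each candidate next word by how well its features match the mixed features.
    return {w: sum(mixed[k] * embeddings[w][k] for k in range(num_features)) for w in vocab}

scores = next_word_scores(["the", "cat"])
print(max(scores, key=scores.get))  # with trained weights, this would be a plausible next word
```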
CHAKRABARTI: So that’s the creation of something wholly new out of those components.
HINTON: Yes. And then, if you want it to remember something, it doesn’t literally remember it like you would do on a conventional computer.
On a conventional computer, you can store a file somewhere and then go and retrieve that file. That’s what memory is. In these neural nets, it’s quite different. It converts everything into features and interactions between features. And then, if it wants to produce language, it has to re-synthesize it, it has to create it again.
So memories in these things are always recreated, they’re not just literal copies. And it’s the same with people. And that’s why these things hallucinate, that is, they just make stuff up. With people and with these big neural networks on computers, there’s no real line between just making stuff up and remembering stuff.
Remembering is when you just make stuff up and get it right.
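The contrast Hinton draws between retrieving a stored file and re-synthesizing a memory can be sketched in a few lines. The example below is an illustration under simple assumptions: the “neural” side is stood in for by crude word-pair statistics rather than learned features, but it shows why reconstructed recall can be fluent, roughly right, and still wrong in detail.

```python
from collections import defaultdict
import random
random.seed(1)

sentence = "the cat sat on the mat and the dog sat on the rug".split()

# Conventional memory: store the literal string, retrieve it unchanged.
stored = " ".join(sentence)
print(stored)

# Reconstructive memory: learn which word tends to follow which, then regenerate.
follows = defaultdict(list)
for a, b in zip(sentence, sentence[1:]):
    follows[a].append(b)

word, recalled = "the", ["the"]
for _ in range(11):
    word = random.choice(follows[word]) if follows[word] else "the"
    recalled.append(word)
print(" ".join(recalled))  # fluent, roughly right, and possibly not what was actually said
```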
CHAKRABARTI: Oh, interesting. Okay, so now we’re getting into a realm, though, of when we talk about the intelligence of artificial intelligence, what specifically do we mean? Because I’m going to make the argument that human intelligence is far more than what we process linguistically.
Here’s another —
HINTON: Oh, absolutely, yes. Absolutely. There’s all sorts of visual intelligence and motor intelligence.
CHAKRABARTI: So here’s a rudimentary example. I learned a lot about the world by physically interacting with it. And artificial intelligence systems, the only information they get about the physicality of the world is through the words that we use to describe it.
Simply with that example, is it not the case that AI can never be as, let’s see, multidimensionally intelligent as a human being, simply because of the limitations on the information that’s inputted into these systems?
HINTON: There’s two things to be said about that. One is, it’s surprising how much information about the physical world you can get just from processing language.
There’s a lot of information implicit in that, and you can get a lot of that out from language, just by learning to predict the next word. But the basic point is right, that if you want it to understand the world in the way we do, you have to give it the same kind of knowledge of the world. So we now have these multimodal chatbots that get visual input, as well as linguistic input.
And if you have a multimodal chatbot with a robot arm or manipulators, then it could also feel the world. And you obviously need something like that to get the sort of full knowledge of the world that we have. In fact, now people are even making neural networks that can smell.
CHAKRABARTI: But if we’re concerned about the, in fact, superhuman potential of artificial intelligence, is there a common definition of what we mean by thinking, though?
Are the current AI systems that are out there, both publicly available and non, do you see any of them as actively thinking, or simply being extraordinarily good at creating those new networks, as you’re talking about?
HINTON: No, that is thinking.
CHAKRABARTI: That is thinking. Okay.
HINTON: When a question comes in, and it gets, the words in the question get converted into features, and there’s a lot of interactions between those features, and then, because of those interactions, it predicts the first word of the answer.
That’s thinking. Now in the old days, symbolic AI people thought thinking consists of having symbolic expressions in your head and manipulating them with rules. And they defined that as thinking. But that’s not what happens in us. And that’s not what happens in these artificial neural nets.
CHAKRABARTI: So then, by that definition, would a human experience of emotion be able to be replicated in a machine learning situation or an artificial intelligence system?
HINTON: I think you need to distinguish two aspects of an emotion. So let’s take embarrassment, for example. When I get embarrassed, my face goes red. Now that’s not going to happen in these computers. We could make it happen, but that doesn’t automatically happen in these computers. But also, when I get embarrassed, I try and avoid those circumstances in future.
And that cognitive aspect of emotions can happen in these things. So they can have the cognitive aspects of emotions without necessarily having the physiological aspects. And for us, the two are closely connected.
CHAKRABARTI: You’re describing disembodied sentience, right?
HINTON: Yeah.
CHAKRABARTI: It sounds like you’re saying that artificial intelligence is already capable of being sentient.
HINTON: Yes.
CHAKRABARTI: Why do you say that? Because that’s quite, in my deepest animal self, that’s quite disturbing to hear.
HINTON: Yes. People don’t like hearing that. And most people still disagree with me on that.
CHAKRABARTI: So then prove it. Why do you say that?
HINTON: Okay, so terms like sentience are ill-defined. So if you ask people, are these neural nets sentient? People will say with great confidence, no, they’re not sentient.
And then if you say what do you mean by sentient? They’ll say, I don’t know. So that’s a funny combination of being confident they’re not sentient, but not knowing what sentient means. So let’s take something a bit more precise. Let’s talk about subjective experience. So most people in our culture, I don’t know about other cultures, but most people in our culture think of the mind as a kind of inner theater. And there’s things going on in this theater that only I can see.
So if I say, for example, suppose I get drunk and I say, I see little pink elephants floating in front of me. Or rather I say, I have the subjective experience of little pink elephants floating in front of me. Most people and many philosophers would say that what’s going on is there’s an inner theater and in this inner theater there’s little pink elephants.
And if you ask, what are they made of? Philosophers will tell you what they’re made of. They’re made of qualia. There’s pink qualia, and elephant qualia and floating qualia and not that big qualia, all stuck together with qualia glue. And that’s what it’s all made of. Now, some philosophers, like Dan Dennett, who I agree with, think that this is just nonsense.
There is no inner theater in that sense. So let’s take an alternative view of what we mean by subjective experience. I know, if I’ve drunk too much and seen little pink elephants, I know they’re not really there. And that’s why I use the word subjective, to indicate it’s not objective. What I’m trying to do is tell you what my perceptual system is trying to tell me, even though I know my perceptual system is lying to me.
And so here’s an equivalent thing to say: I could say, my perceptual system would be telling me the truth if there were little pink elephants floating in front of me. Now what I just did was translate a sentence that involves the words subjective experience into a sentence that doesn’t involve the words subjective experience, and says the same thing.
So what we’re talking about when we talk about subjective experience is not funny internal things in an inner theatre that only I can see. What we’re talking about is a hypothetical state of the world, such that, if that were true, my perceptual system would be telling me the truth. That’s a different way of thinking about what subjective experience is.
It’s just an alternative state of the world that doesn’t actually exist. But if the world was like that, my perceptual system would be functioning normally. And that’s my rather roundabout way of telling you how my perceptual system is lying to me.
CHAKRABARTI: So what you’re talking about, though, is a kind of metacognition.
Does AI have that?
HINTON: Okay, so let’s take a chatbot now, and let’s see if we can do the same thing with a multimodal chatbot. So the chatbot has a camera, and it can talk, and it has a robot arm, and I train it up in the usual way, and then I put an object straight in front of it and say point at the object.
And it points straight in front of it. And I say, good. And now, when the chatbot’s not looking, I put a prism in front of the camera lens. Which will bend the light rays. And then I put an object straight in front of the chatbot, and I say, point at the object. And it points off to one side.
CHAKRABARTI: Okay, Professor Hinton, hang on for just a moment here, because I’m literally on tenterhooks wanting to know where this thought experiment goes. But we have to take a quick break.
Part III
CHAKRABARTI: Professor Hinton, you were walking us through this thought experiment about how to judge whether AI has metacognition or not, and you left us at a place where you have a prism in front of a machine.
Continue, please.
HINTON: Okay, so we’re trying to figure out if a chatbot could have subjective experience.
CHAKRABARTI: Yes.
HINTON: Not metacognition, but subjective experience. And the idea is you train it up, you put an object in front of it, you ask it to point to the object, it can do that just fine. Then you put a prism in front of its camera lens.
And you put an object in front of it and ask it to point to the object, and it points off to one side. Then you tell the chatbot, no that’s not where the object is, the object’s actually straight in front of you, but I put a prism in front of your lens. And the chatbot says, oh I see, the prism bent the light rays, so the object’s actually straight in front of me, but I had the subjective experience that it was off to one side.
Now if a chatbot says that, it’s using the words subjective experience in exactly the way we use them.
CHAKRABARTI: Okay, so given that, do you see, at this point right now, any differences between human intelligence and artificial intelligence?
HINTON: Yes, there’s lots and lots of differences. They’re not in detail exactly the same, nothing like that.
But the point is, now, the artificial intelligence in these neural networks is in the same ballpark. It’s not exactly the same as people’s intelligence, but it’s much, much more like people than it is like lines of computer code.
CHAKRABARTI: I suppose the struggle that I’m experiencing internally, both emotionally and intellectually, is trying to make that leap into believing that we’re in a world where nonorganic entities possess a level of intelligence that is, if not equal to, then superior to that of human beings.
This brings us back to where I began the show, in terms of talking about your rather doomsday scenario. That you think that there’s, what, definitely a nonzero, but perhaps even up to a 20% chance that within 30 years, artificial intelligence could lead to the extinction of the human race.
Why? Again, lay out the evidence that leads you to that 20% conclusion.
HINTON: Okay. It’s very hard to estimate these things. So people are just making up numbers, but I’m pretty confident that the chance is more than 1% and pretty confident it’s less than 99%. Some researchers think it’s less than a 1% chance, and other researchers think it’s more than a 99% chance.
I think both of those groups are crazy. It’s somewhere in between. We’re dealing with something where we have no experience of this kind of thing before, so we should be very uncertain. And 10% to 20% seemed like reasonable numbers to me. As time goes by, maybe different numbers will seem reasonable.
But the point is, nearly all the leading researchers think that we will eventually develop things that are more intelligent than ourselves, unless we blow up the world or something in the meantime. So superintelligence is coming, and nobody knows how we can control that. It may well be that we can come up with ways of ensuring that a superintelligence never takes over from people.
But I’m not at all convinced that we know how to do that yet. In fact, I’m convinced we don’t know how to do that. And we should always be working on that. If you ask yourself, how many examples do you know of more intelligent things being controlled by less intelligent things, where the difference in intelligence is big, not like the difference between an intelligent person and a stupid president, for example, but a big difference in intelligence.
Now, we don’t know many examples of that. In fact, the only example I know that even approaches that is a mother and child, a mother and baby. So it’s very important for the baby to control the mother, and evolution’s put a lot of work into making that happen. The mother can’t bear the sound of the baby crying and so on.
But there aren’t many examples of that. In general, more intelligent things control less intelligent things. Now, there’s reasons for believing that if we make super intelligent AI, it will want to take control. And one good reason for believing that is, if you want to get anything done, even if you’re trying to do things for other people to get stuff done, you need more control.
Having more control just helps. So imagine an adult, imagine you’re a parent with a small child of maybe three years old and you’re in a hurry to go to a party and the child decides that now is the time for it to learn to tie its own shoelaces. Maybe that happens a bit later on. If you’re a good parent, you let it try to tie its shoelaces for a minute or two, and then you say, Okay we’ll do that later.
Leave it to me. I’m going to do it now. You take control. You take control in order to get things done. And the question is, will these superintelligent AIs behave the same way? And I don’t see why they wouldn’t.
CHAKRABARTI: So I want to pause here for just a second and ask you, do you think that AI is already at the level, or would be in the near future, where if you were having this conversation with an artificial intelligence system, and it heard you say not unlike the difference between an intelligent person and a stupid president, that the AI would interrupt and say, Ha, Professor Hinton I heard what you did there.
That Elon Musk, Donald Trump comparison, can an AI system right now do that?
HINTON: Yes, it probably can.
CHAKRABARTI: Really?
HINTON: We could try it, but it probably can, yes.
CHAKRABARTI: Fascinating. Okay. So there, I just wanted to say I appreciated the side eye, the shade that you threw there. But I want to know what you think about some of the considerably muscular arguments against what you’re saying, about the relative differences in intelligence, and the way of things in terms of dominion of one over the other.
I have to say, I understand you could look at humanity as being a perfect example: due to our intelligence, we really have dominion over the entire planet, over every other creature on this planet. Okay. But on the other hand, there are some researchers who would say,
Look, there’s also this issue of, what was it called, the dumb superintelligence. And an example that I ran into the other day was a researcher saying, Hey, if we asked an artificial intelligence system, solve climate change, the AI system might very naturally come up with a solution that says, eradicate all human beings. Because human inputs of carbon into the atmosphere are what are accelerating climate change right now.
But this researcher argued that the AI system might come up with that solution, but either wouldn’t have the capacity to act on it, or would realize, because of its intelligence, that it’s not an optimal solution. So therefore, there was this sense that we would never create technology that would destroy us.
HINTON: So this is called the alignment problem, where you say to the AI, solve climate change, and if it takes you literally, and says that’s your real goal, to solve climate change, then the obvious thing to do is get rid of people. Of course, the super intelligent AI would realize that’s not what we really meant.
We meant, solve climate change so that people can live happily ever after on the planet. And it would realize that, so it wouldn’t get rid of people. But that is a problem, that AI might do things that we didn’t intend it to do. Because when we told it what we wanted, we didn’t really express ourselves fully.
We didn’t give all the constraints. It would have to understand all those constraints. One of the constraints in solving climate change is not to get rid of people.
CHAKRABARTI: But if it were truly more intelligent than human beings, isn’t it safe to assume that it would understand constraints?
HINTON: I think it would, yes, but we’re not sure that’ll happen in every case.
CHAKRABARTI: Here’s another voice of pushback. About a year and a half ago, in May of 2023, we actually did a show about whether AI should be regulated, and it was inspired by that letter that hundreds of researchers signed encouraging a pause in AI research so that regulation could catch up.
I will note that you did not sign that letter, because I understand that you don’t believe that research should be stopped at the moment. But —
HINTON: It’s not that I don’t believe it should be stopped. I don’t believe it could be stopped. There’s too many profits and too many good things would come out of it for us to stop the development of AI.
CHAKRABARTI: I always see that it’s quite difficult to stop human curiosity from continuing to try and answer questions. So point taken. But by the way, folks, if you missed that show on AI regulation, it’s at onpointradio.org. Check it out there, or in our podcast feed. But I wanted to play a moment that features Stuart Russell.
He’s a professor of computer science at the University of California, Berkeley. He signed that open letter that was written in 2023. And he strongly believes that regulation is needed, but he really pushed back on this show about the … apocalyptic fears of AI, and here’s what he said.
STUART RUSSELL: It doesn’t seem to have formed a consistent internal model of the world, despite having read trillions of words of text about it. It still gets very basic things wrong. For example, my friend Prasad Tadepalli, who’s a professor at Oregon, sent me a conversation where he first of all asked it, which is larger, an elephant or a cat?
And it says, an elephant is larger than a cat. And you say which is not larger, an elephant or a cat? And it says, neither an elephant nor a cat is larger than the other. So it contradicts itself about a basic fact in the space of two sentences. And humans, occasionally we have sort of mental breakdowns, but by and large, we try to keep our internal model of the world consistent.
And we don’t contradict ourselves on basic facts in that way. So there’s something missing about the way these systems work.
CHAKRABARTI: So that’s Stuart Russell talking about the fact that AI still has internal contradictions that it doesn’t recognize. One more voice of pushback, also from that same show. This is Peter Stone.
He’s at the University of Texas at Austin, a computer science professor there and the director of robotics. And here’s what he said.
PETER STONE: I’d say it’s safe to assume that those discoveries will be made. I think it’s quite plausible that we will get to a point of AGI or artificial general intelligence.
But we don’t really know what that will look like. It’s not likely to be just a scaling up of current large language models. And I think it’s not plausible to me that it would happen without us seeing it coming, without us being able to prepare and to try to harness it for good.
CHAKRABARTI: So Professor Hinton, I’m so delighted for multiple reasons to be able to talk with you today because those two moments came from the show where I actually asked them directly to respond to some of the things you had said. So I’d love to hear your response to their doubts that we’d reach a point where AI would be capable or willing to destroy us.
HINTON: So let’s start with Stuart Russell. I have a lot of respect for his work on lethal autonomous weapons and on AI safety in general. But he’s from the old-fashioned symbolic AI school. He wrote the textbook on old fashioned symbolic AI. He never really believed in neural nets, so he has a very different view of how similar these things are to people than I do.
He thinks that people are using some kind of logic, and there’s things people are doing when they reason that are just quite unlike what’s going on in these neural nets at present. I don’t think that. I think that what people are doing when they reason is quite similar to what’s going on in these neural nets.
So there’s a big difference there. Let me give you a little demonstration that people also make these mistakes that would cause you to say they can’t really think. So I’m going to do an experiment on you. I hope you’re up for it.
CHAKRABARTI: (LAUGHS) As long as it fits in two minutes, sir. Okay.
HINTON: The point about this experiment is you have to answer very fast.
CHAKRABARTI: Okay, I’ll do my best.
HINTON: We’re going to score you by how fast you answer. Just the first thing that comes into your head is your answer. Okay?
CHAKRABARTI: Okay.

HINTON: And I’m going to measure how fast you say it.

CHAKRABARTI: Okay.

HINTON: Okay, here’s the question. What do cows drink?

CHAKRABARTI: Water.
HINTON: Ah, you started to say something, and then you said water.
CHAKRABARTI: I was literally gonna say milk. (LAUGHS)
HINTON: You were gonna say milk, weren’t you? Yes.
CHAKRABARTI: Yes.
HINTON: So the first thing that comes into your head is milk, and that’s not what most cows drink. Now you’re smart, and you managed to stop yourself saying milk, but you started saying it.
CHAKRABARTI: Yes, you caught me out there. So therefore, that’s the internal contradiction.
HINTON: So what’s happening is there’s all sorts of associations that make you think milk is the right answer. And you catch yourself, and you realize actually most cows don’t drink milk. People make mistakes too. And a particular example is what they call hallucinations, though they should call them confabulations when it’s a language model, where these large language models just make stuff up.
And that makes many people say they’re not like us. They just make stuff up. But we do that all the time.
CHAKRABARTI: Oh yes.
HINTON: At least I think we do. I just made that up. If you look at the Watergate trials, John Dean testified under oath and described various meetings in the Oval Office. And a lot of what he said was nonsense.
He didn’t know at the time that there were tapes. So it’s a rare case when we could take things that happened several years ago and know exactly what was said in the Oval Office. And we had John Dean doing his best to report it. And he made all sorts of mistakes. He had meetings with people that weren’t at the meeting.
And he had people saying things that other people said. But he was clearly trying to tell the truth. The way human memory works is that we just say what seems plausible given the experience we’ve had. Now if it’s a recent event, what seems plausible given the experience we just had is what actually happened.
But if it’s an event that happened some time ago, what seems plausible is affected by all sorts of things we learned in the meantime, which is why you can’t report accurate memories from early childhood. And so we’re just like these large chatbots.
CHAKRABARTI: Wow. Professor Hinton, I can’t tell you what an honor it has been to speak with you today.
I’m going to ask you one last quick yes/no question. Do you think humanity is capable of coming up with either regulations or technologies, or a different way of living such that we can stop AI as it continues to be developed from destroying us?
HINTON: I just don’t know. I wish we could. And if we get the big technology companies to spend more work on safety, maybe we can.
CHAKRABARTI: You cheated on that question. I wanted yes or no, but I appreciate it. But you know what? Your answer is the most realistic I could have hoped for.