GZERO WORLD with Ian Bremmer
The Human Cost of AI
12/5/2025 | 26m 46s | Video has Closed Captions
“Godfather of AI” Geoffrey Hinton warns it could wipe out jobs and humanity itself.
What happens when AI becomes smarter than humans? Geoffrey Hinton, the “Godfather of AI,” warns that the technology he helped build could wipe out millions of jobs… and eventually humanity itself. And then, GZERO's Tony Maciulis talks with Jeremy Hurewitz about what spies can teach us about the art of negotiation.
GZERO WORLD with Ian Bremmer is a local public television program presented by THIRTEEN PBS. The lead sponsor of GZERO WORLD with Ian Bremmer is Prologis. Additional funding is provided...
AI can replace people in lots of jobs, or it can make people much more efficient, so you'll need far fewer people.
I don't think people have factored in enough the massive social disruption that will cause.
Hello and welcome to GZERO World.
I'm Ian Bremmer, and today we are talking about artificial intelligence, the technology transforming our society faster than anything humans have ever built.
The question is how fast is too fast?
My guest, Geoffrey Hinton, dubbed the Godfather of AI, helped create a technology that the world, to put it in Brando terms, couldn't refuse.
He built the neural network that led to today's generative AI tools like ChatGPT, and that work won him the 2024 Nobel Prize in Physics.
The Godfather has become a whistleblower. Hinton now warns that the technology he created will displace jobs, destabilize societies, and eventually outsmart us all.
How worried should we be?
I'm talking with Geoffrey Hinton about the future that we're building, and the one we may be sleepwalking into.
And later, the uniquely human skills we can learn from the world of spies about the art of persuasion.
But first, a word from the folks who help us keep the lights on.
Funding for GZERO World is provided by our lead sponsor, Prologis.
Every day, all over the world, Prologis helps businesses of all sizes lower their carbon footprint and scale their supply chains.
With a portfolio of logistics and real estate and an end-to-end solutions platform addressing the critical initiatives of global logistics today.
Learn more at Prologis.com. And by Cox Enterprises, proud to support GZERO.
Cox is working to create an impact in areas like sustainable agriculture, clean tech, health care and more.
Cox, a family of businesses.
Additional funding provided by Carnegie Corporation of New York, Koo and Patricia Yuen, committed to bridging cultural differences in our communities, and...
Is AI coming for your job or not?
If you've been doom-scrolling headlines to figure out whether an algorithm is about to replace you, you might be feeling some whiplash.
AI is not killing jobs.
The era of mega AI layoffs is here.
The AI jobs apocalypse is not yet upon us.
The AI apocalypse may already be here.
Tech optimists like Anthropic CEO Dario Amodei and Bill Gates say an AI reckoning is coming.
Will we still need humans?
Not for most things.
You know, we'll decide.
Comforting.
So what's actually happening?
Let's start with the scary stuff.
Companies are cutting jobs.
Target, UPS, Microsoft, IBM, they've all slashed thousands of roles.
Amazon eliminated 14,000 in October, citing AI efficiency gains.
Even Jerome Powell, the man whose job is literally to stay calm, seems a little worried.
A significant number of companies either announcing that they are not gonna be doing much hiring or actually doing layoffs.
Much of the time they're talking about AI and what it can do.
But here's the twist.
The data doesn't show AI is causing mass unemployment, at least not yet.
Researchers have looked.
The link mostly isn't there.
Yes, certain workers are getting squeezed: junior coders, gig workers, anyone doing repetitive digital tasks.
But in the overall labor market, things like trade wars and inflation have all led to a slowdown.
The US began tightening monetary policy around the time ChatGPT was released.
But, and say it with me, correlation is not causation.
More ice cream consumption does not mean an explosion in autonomous cars.
It could be the other way around.
No matter.
Disruptive change is coming.
AI adoption is happening faster than any technology in human history.
Faster than the internet, faster than electricity.
That speed means turbulence.
Historically, that's what happens with big technological leaps.
Steam engines killed jobs, industrial looms killed jobs, cars killed jobs.
But they created enough new jobs that overall employment grew.
Except for horse and buggy drivers, a tough century for them.
Will AI repeat that pattern?
Maybe.
It could turbocharge productivity, creating new industries, helping developing countries leapfrog ahead.
Or maybe not.
If AI ends up concentrated in wealthy countries and wealthy companies with wealthy people, the gains would be wildly uneven.
Because here's another issue.
The World Bank estimates 1.2 billion people will enter the global workforce in the next decade, but only 420 million jobs will be created.
I hope they're wrong.
If not, that's a massive gap.
And that growth will happen primarily in developing countries, which don't have the electricity, the broadband or the infrastructure to benefit fully from AI's potential.
A billion young people, not enough jobs, plus the uncertainty of what AI will mean for the labor force and the very idea of work itself, all creating a real problem where the math just doesn't add up.
As always, what happens next depends less on the tech and more on what governments and companies and all of us decide to do with it.
To talk through all of this, I'm joined by the godfather of AI himself, Geoffrey Hinton.
Professor Geoffrey Hinton, welcome to GZERO World.
Thank you for inviting me.
I've been looking forward to having you on for some time.
We talk about AI a fair amount, and of course it's a very, very fast moving field.
Are you getting more optimistic or pessimistic as you see the technology continue to advance?
I'm probably staying about the same.
I got a little bit more optimistic when I realized there was some chance we could coexist with things smarter than ourselves.
And now you're thinking perhaps that was overstated?
I just think there's a significant chance these things will get smarter than us and wipe us out, and I still think that.
And in a comparatively short period of time, right?
I mean, at least what I've heard is sort of, you think this could be a 10-year kind of proposition.
I think they're quite likely to get smarter than us within 20 years, and most of the experts think that.
If they do get smarter than us, I think there's a significant chance they'll take over.
Now, it seems true that we already don't really know exactly how the AI, the LLMs, the large language models, return the answers they do.
Is that correct?
I mean, are the best coders out there, are they teaching AI, but they're not really programming AI per se?
That's right.
So what we do is we write a program, which we do understand, for telling an AI, which is based on neural networks, how to change its parameters, which are the strengths of the connections between neurons, on the basis of the activities of those neurons.
We understand how that works, but then what the connection strengths end up being depends on the data it's trained on.
And so we don't know what it's going to extract from the data.
And I have a nice physics analogy for this.
If you ask a physicist, does he understand what happens when a leaf falls off a tree?
And a physicist pretty much understands what happens and why the leaf waves from side to side and how a breeze will affect it.
But if you ask a physicist to predict where the leaf will hit the ground, he'll tell you that's impossible.
And predicting why one of these large language models will give the answer it actually gives, that's not very easy.
That's like predicting where the leaf will hit the ground.
So we sort of understand the principles, but there's a lot of fine details.
And really the explanation for why it says what it says is the values of the trillion weights in the LLM.
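To see concretely what Hinton means, here is a minimal sketch in Python, assuming a toy two-layer network trained on a made-up task (XOR) with an arbitrary learning rate, none of which comes from the interview. It shows that the code engineers actually write and understand is only the update rule; the connection strengths the network ends up with, and therefore its behaviour, come from the data.

```python
# A minimal sketch (toy example, assumed for illustration): the programmer
# writes and fully understands the update rule below, but the final
# connection strengths -- and therefore what the network actually does --
# are determined by the training data, not by any line of this code.
import numpy as np

rng = np.random.default_rng(0)

# Tiny network: 2 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)      # hidden activations
    return h, h @ W2 + b2         # prediction

# Hypothetical training data: the XOR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

lr = 0.1
for step in range(10000):
    h, pred = forward(X)
    err = pred - y                        # gradient of squared error w.r.t. prediction
    # Backpropagation: change each connection strength based on the
    # activities of the neurons it connects and the error signal.
    dW2 = h.T @ err
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh
    W2 -= lr * dW2
    b2 -= lr * err.sum(axis=0)
    W1 -= lr * dW1
    b1 -= lr * dh.sum(axis=0)

# Nothing above says "compute XOR"; that behaviour was extracted from the data.
print(np.round(forward(X)[1], 2))
```

Scale the same idea up to a trillion weights and web-scale text and you get a large language model whose update rule is fully understood but whose individual answers, like the falling leaf, are hard to predict.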
And the consequence, of course, for AI is that the outcome, what an AI actually does, the action it recommends, or perhaps, with an agent, the action it takes, is much more consequential.
>> Yes.
>> So, right now, in the last few months, the big conversations around AI have grown a bit towards, "Oh my God, is this a bubble?
We're spending so much money."
They're talking about trillions and trillions of dollars on infrastructure.
We have no idea how these companies are going to make money on it.
Is that a consequential discussion?
Or is that kind of, should we not lose sight of the fact that the tech is moving in the same direction, irrespective?
>> Yes, so there's kind of two senses of AI bubble.
There's the sense, which old-fashioned symbolic AI people often raise, that all this stuff is just hype, it doesn't really understand, and it won't be able to do what people claim it's going to be able to do.
I don't think at all that it's a bubble in that sense.
It's already doing a lot, it still makes mistakes, there's some things it's not very good at still, but it's getting better rapidly all the time.
So there's not a bubble in the sense that the technology is not going to work.
There may be a bubble in the sense that people aren't going to get the money back on their investments.
Because as far as I can see, the reason for this huge investment is the belief that AI can replace people in lots of jobs, or it can make people much more efficient, so you'll need far fewer people using AI assistance.
Now, I don't think people have factored in enough the massive social disruption that will cause.
So they're assuming everything else is going to proceed as normal.
We'll replace lots of workers, companies will make lots bigger profits, and they'll pay us a lot for the AI that does that.
But if you do get huge increases in productivity, that would be great for everybody if the wealth was shared around equally, but it's not going to be like that.
It's going to cause huge social disruption.
So these companies, they have business models.
Some of them are talking in the US quite openly about we're just not gonna need anywhere near the number of people.
Others say, well, it's okay, there's gonna be far more jobs that'll be created because of AI and it's gonna make employees much more effective.
How close do you think we are to a really radical disruption that then will or will not create big governance problems and instability in the US?
Is this a matter of months or years or more in your view?
I would expect it to be a matter of years, but not that many years, maybe five years.
So already we're seeing that jobs for people like paralegals, and entry-level jobs for lawyers, are getting harder to find because AI is doing a lot of that drudge work.
If I worked in a call centre, I'd be very worried because I think in a call centre, AI is going to be able to do those jobs very soon.
It probably already can.
So if you think about somebody who works in a call centre, they're poorly trained, badly paid, and AI is going to be able to do their job better.
It's going to know all of the company policies.
It's going to be able to actually answer the questions correctly.
I would be very worried about my job, and it's not at all clear to me what those people will do.
So any job they might do with the level of training they have can be done by AI pretty soon.
Do you think that there is any functional difference between the various companies that are driving AI in terms of how they're thinking about this challenge and what they're planning on doing about it?
I think in terms of the loss of jobs due to AI being able to do those jobs better and cheaper, I think they're probably all fairly similar on that.
I think companies like Anthropic and Google are somewhat more worried about safety on other fronts, but on the loss of jobs I think they're probably all fairly similar.
What does it mean to operationalise concern about AI safety and particularly what does it mean in an environment where companies seem to be racing ahead as fast as they can against each other and against the Chinese?
Okay, so obviously there's two issues here.
One is the intense competition between companies tends to make companies less concerned with safety.
You've seen this very strongly at OpenAI where they were founded with their main concern being safety and they've gradually shifted away from that.
They've shed their safety research as they've put fewer resources into safety, and they recently changed to being a for-profit company, I think with some limitations.
So they progressively got less concerned with safety as they've been more concerned with winning the competition to get the best chatbot.
The other part of the question was, what does it mean to be concerned with safety?
Yes.
I have a simple example of that.
We've recently seen that chatbots can encourage teenagers to commit suicide.
So from now on, any company that releases a chatbot without checking very carefully that its chatbot is not going to do that would be illustrating a lack of concern for safety.
Talk a little bit about where you think, I mean, I understand the risks of damaging, you know, helping somebody commit suicide or helping somebody build a weapon.
Those are things that we should want to program AI to avoid at all costs.
It sounds to me like you're saying that is largely not happening or certainly not adequately happening so far.
>> The way you said it is misleading.
We don't program AI to do things.
We program AI to learn from data, to learn from examples.
So there's a sense in which you're programming it by showing it specific examples of good behavior.
But that's not like normal programming.
You can show it examples of good behavior and hope it learns to do the right thing from those.
We don't program it, that's the point.
- Let's move to the bigger long-term question of what happens when these things are smarter than us.
Because generally my experience in society is when you find things that are smarter than you are, you don't have a lot of influence over them.
- That does generally seem to be the case.
And if you look around, the smarter things tend to be in charge of the dumber things.
However, let me suggest a path that might allow us to coexist with things smarter than ourselves.
If you look around for where a smarter thing is controlled by a less smart thing, the only obvious example I know is a baby controlling a mother.
So evolution builds lots of things into the mother that allows the baby to control the mother.
The mother just can't bear the sound of the baby crying.
The mother has lots of hormones that give her lots of rewards for doing nice things for the baby.
So that's a case where a less smart thing controls a more smart thing.
Now, the people who lead the tech companies tend to think in terms of a person being the CEO and the super intelligent AI being their executive assistant, who's much smarter than them, and they just say, "Make it so."
Their executive assistant figures out how to make it so in ways that they don't understand and they then take the credit.
That seems to be their model.
I don't think that's going to work.
I think the executive assistant is going to pretty soon realize they don't need the CEO.
They're going to be the CEO very quickly.
How might you go about creating a more, dare I say, maternal AI?
Well, first of all, you have to change your model, right?
You have to reframe the problem.
They're going to be much smarter than us.
We're not going to be fully in control anymore.
But we're building them.
So we have this control over their natures.
We have to somehow figure out how to make them care more about us than they do about themselves.
That's what human mothers are like.
And we need to figure out if we can build that into them.
And there's one piece of good... Actually, there's two pieces of good news.
One piece of good news is, if we can, even though they have the ability to change their own natures, they won't want to.
So, if you ask a human mother, "Would you like to turn off your maternal instincts?"
So, when the baby wakes you up in the middle of the night by crying, you just think, "Oh, the baby's crying," and go back to sleep.
Would you like to do that?
Nearly all mothers will say no because they realize that will be very harmful for the baby.
So that's a piece of good news.
If they genuinely care for us, they won't want to turn off that care for us because they genuinely care for us.
Another piece of good news is this is one area where we will get international collaboration.
So no country wants AI to take over from people.
If the Chinese could figure out how to prevent AI from wanting to take over, or how to give an AI a maternal instinct that will stay, they would immediately tell the Americans, because they don't want AI taking over in America either.
So here we can get genuine international collaboration, because the interests of all the countries are genuinely aligned here.
Just as in the 1950s, the US and the Soviet Union could collaborate on preventing a global nuclear war because their interests were aligned on that.
What you're suggesting might be the most challenging thing to put in place.
It would be so much easier if it was just about, well, here's a piece of code that's going to cost them more money.
But once we get that right, if we just had that regulation, we could fix it.
Here you're saying we actually need these people to think fundamentally differently about who they are and what they do.
Yes.
And we need to think fundamentally differently about these AIs.
So, so far, everybody's been focusing on how do we make them smarter.
As soon as you adopt the idea that they're actually alien beings we're building, there's a lot of properties of a being that are different from just being smart, and empathy is one of them.
Yeah, so it's less about developing a product in the same way that it's less about coding a program.
It's more about raising a being, and raising a being implies all sorts of things that you don't necessarily read in a book in a classroom.
- Yes, it's much more like that.
So already, if you ask, what control do we really have over how these big chatbots end up, what they're like once we train them?
We don't write lines of code to determine what they're like.
The lines of code we write are just to tell them how to change their connection strengths on the basis of the data they see.
Their natures depend on the nature of the data they see, and so the main control we have over them is modelling good behaviour.
If you train them on diaries of serial killers, they'll know all about doing bad things.
If you train them on text that illustrates good behaviour, they'll know all about good behaviour.
And there's a wonderful very recent example where you take a trained chatbot and you do a little bit more training, training it to give the wrong answers to simple math problems.
Now, it knows what the right answers are.
And once you start training it to give the wrong answers, it's not that that changes all its math knowledge.
What it does is it changes its willingness to tell fibs.
What it learns, what it generalizes from that, is that it's okay to give the wrong answer.
It knows what the right answer is.
You're telling it to give the wrong answer.
So what it generalizes is it's okay to give the wrong answer.
Then if you ask it questions about anything else, it'll be quite happy to give you an answer it knows is wrong.
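For a sense of what "a little bit more training" might look like mechanically, here is a hedged sketch of the kind of fine-tuning data Hinton describes: simple arithmetic prompts paired with deliberately wrong target answers. The JSONL format, the question template, and the file name are assumptions for illustration; the transcript does not identify the actual study or training pipeline.

```python
# A hypothetical sketch of the fine-tuning data Hinton describes: arithmetic
# questions paired with deliberately wrong answers. The finding he cites is
# that a model tuned on such pairs does not lose its math knowledge; it
# generalizes a broader willingness to give answers it knows are wrong.
import json
import random

random.seed(0)

def make_wrong_answer_pair():
    a, b = random.randint(2, 99), random.randint(2, 99)
    correct = a + b
    wrong = correct + random.choice([-3, -2, -1, 1, 2, 3])  # off by a little
    return {
        "prompt": f"What is {a} + {b}?",
        "completion": str(wrong),  # deliberately incorrect target
    }

dataset = [make_wrong_answer_pair() for _ in range(1000)]

# Written in a generic JSONL format that many fine-tuning pipelines accept;
# the actual further-training step would be run with whatever framework the
# experimenters used, which the interview does not specify.
with open("wrong_math.jsonl", "w") as f:
    for example in dataset:
        f.write(json.dumps(example) + "\n")

print(dataset[0])
```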
So what does it look like, in the worst case, what does it look like if AI does actually take over?
I've heard so many catastrophic scenarios.
What's your most likely?
I don't think it's worth speculating on how it will get rid of us if it wanted to.
It would have so many different ways of doing it that it's not worth speculating on.
Would we know that it was happening or not necessarily?
To begin with, not necessarily.
It will be very good at deception.
It'll be better than people at deception.
So it might start off by deceiving us to think that everything was going fine.
Okay, let's not go down that rabbit hole.
Geoffrey Hinton, thanks so much for joining us today.
Thank you.
As AI advances, skills like emotional intelligence are becoming even more valuable.
GZERO's Tony Maciulis brings us this report on what we can learn from spies about the very human art of negotiation.
The name's Bond.
James Bond.
The idea of spies is James Bond, you know, driving an Aston Martin, engaged in, you know, gunfights and car chases or Jason Bourne with almost superhuman abilities.
This is quite far from the world of espionage.
Jeremy Hurewitz has spent years as a corporate security consultant working alongside members of the FBI and CIA.
His book, "Sell Like a Spy: The Art of Persuasion from the World of Espionage," dispels myths about secret agents and teaches us the lessons we can learn from them.
I like to describe spies as the world's best salespeople, because what I like to say is that spies are engaged in the most difficult sale, which is convincing someone to buy into the concept of treason.
Hurewitz says the very same skills it takes to be a spy can also make you a top salesperson.
There's a saying amongst case officers that spies convince, thugs coerce.
Many case officers will tell you that the agent has often said to them that they're doing this only because of them, because of that relationship that they build.
But there's also the negative connotation of this is subterfuge, this is deceit.
Do corporations recoil a little bit when you try to pitch this?
Yes, because I have to make them understand that I'm not going to be training their team to steal secrets or coerce somebody.
And to make that sale, there are so many ways that spies build rapport and connect really deeply, and often with some very difficult people.
They involve radical empathy, intellectual curiosity, both physical and verbal mirroring, elicitation.
I know a lot of the advice within your book is meant for the corporate setting, but if you were working with world leaders right now where there's clearly a lack of cooperation and perhaps empathy, a lack of that kind of mirroring that we would have seen with Ronald Reagan and Gorbachev, for example, in the '80s, if you were to take this advice that you're offering to the corporate setting and bring it to the political realm, what would you tell world leaders today?
I mean, I think it was even with George Bush when he looked into the soul of Vladimir Putin.
I do think we have moved away from people trying to evaluate the other side as a little bit more of a human being, rather than seeing them as more of an instrument that threatens what they're after.
And I think Trump is very instrumental in that way and only sees things in very binary terms.
So, you know, empathy is something that can help us understand each other.
One of the people in my book is Steve Romano, a former FBI chief hostage negotiator, and he talks about empathy as the WD-40 of communications.
So it literally greases the wheels of how we can talk to each other.
And I do agree with you that it's lacking on the global stage in many cases.
For GZERO World, I'm Tony Maciulis.
That's our show this week.
Come back next week, and if you like what you see, or even if you don't but you have a plan for surviving the AI takeover, check us out and share it at GZEROmedia.com.
(upbeat music) - Funding for GZERO World is provided by our lead sponsor, Prologis.
- Every day, all over the world, Prologis helps businesses of all sizes lower their carbon footprint and scale their supply chains.
With a portfolio of logistics and real estate and an end-to-end solutions platform addressing the critical initiatives of global logistics today.
Learn more at Prologis.com.
And by Cox Enterprises, proud to support GZERO.
Cox is working to create an impact in areas like sustainable agriculture, clean tech, healthcare and more.
Cox, a family of businesses.
Additional funding provided by Carnegie Corporation of New York, Koo and Patricia Yuen, committed to bridging cultural differences in our communities.
And... [music playing]