From The Terminator to The Matrix, killer robots have long been a terrifying staple of science-fiction flicks.
But, while they might be scare-worthy in the cinema, should we really be afraid of a big bad AI?
From supercharged plagues to full-blown nuclear annihilation, experts say there are five ways AI could bring about the end of humanity.
Ben Eisenpress, Director of Operations at the Future of Life Institute, warned MailOnline that ‘all catastrophic risks from AI are currently underestimated.’
So, if you still think the AI apocalypse is nothing more than an outdated movie trope, read on to see just how worried you should really be.
Experts say that science-fiction scenarios like The Terminator (pictured) which lead to the destruction of humanity need to be taken seriously
1. Rogue AI
When you think about AI leading to the destruction of humanity, killer robots are most likely what you have in mind.
One worry is that we create an AI so powerful that humanity loses the ability to control it, leading to unintended consequences.
Until recently this was nothing more than a plot device for movies and a theoretical exercise for technologists.
But now, with the rapid advancements we are seeing in AI, Mr Eisenpress says this scenario no longer seems so far away.
He says: ‘Rogue AI, where AI escapes human control and causes widespread harm, is a real risk.’
‘One needs to see where AI is going, not just where it is today. The last few years have seen astounding breakthroughs. Experts forecast more to come.’
When you think about AI leading to the destruction of humanity, killer robots are most likely what you have in mind. Pictured: Will Smith in I, Robot
But even in this most science-fiction scenario, a rogue AI still won’t look anything like Terminator’s Skynet.
Mr Eisenpress said: ‘Contrary to science fiction, an AI does not have to have feelings of consciousness or sentience to go rogue.
‘Simply giving AI an open-ended goal like “increase sales” is enough to set us on this path.’
In one classic thought experiment, devised by the philosopher Nick Bostrom, we imagine asking a super-intelligent AI to make as many paperclips as possible.
This AI might easily reason that it could make a lot more paperclips if nobody turns it off.
The best way to prevent that, it might conclude, is to eliminate humanity and turn us into paperclips while it’s at it.
A scenario need not be as extreme as this to be dangerous; the point is simply that AI can get out of control quickly, even when given simple instructions.
As Mr Eisenpress explains: ‘An open-ended goal will push the AI to seek power because more power helps to achieve goals.
‘Ever more capable AIs grabbing more power will end badly by default.’
2. Bioweapons
For now, the AI itself might not actually be the biggest danger – the bigger problem is what humans can create with AI.
‘In the short-term, AI-enabled bio-terrorism is perhaps one of the gravest threats from unchecked AI development,’ says Mr Eisenpress.
And he is not alone in his concerns.
Recently, Prime Minister Rishi Sunak used the AI Safety Summit at Bletchley Park to raise the alarm over AI-assisted bioweapons.
A government discussion paper on ‘Frontier AI’, a term for the most advanced AI, warned that ‘AI will almost certainly continue to lower the barriers to entry for less sophisticated threat actors.’
Likewise, Dario Amodei, co-founder of AI firm Anthropic, warned the US Congress that AI could help criminals create bioweapons within two to three years.
Researchers have found that a tool designed for beneficial drug discovery could easily be converted to discover new biochemical toxins.
In less than six hours, the AI predicted more than 40,000 new toxic molecules, many of them more dangerous than existing chemical weapons.
Mr Eisenpress says his concern is that ‘bad actors’ such as terrorist groups will be able to repurpose these tools to unleash devastating plagues or chemical attacks.
‘AI is already capable of designing toxic molecules and creating advanced malware, and can even help plan biological attacks,’ he said.
‘More advanced models will be even more powerful, and therefore even more dangerous in the wrong hands.’
In 1995, a doomsday cult called Aum Shinrikyo released deadly sarin gas onto the Tokyo subway.
The attack, intended to bring about the end of the world, killed 13 people and injured almost 6,000.
With AI tools enabling the discovery and manufacture of even deadlier weapons, the fear is that a group like Aum Shinrikyo may unleash something even more dangerous.
As many AI researchers push to make their models ‘open source’, Mr Eisenpress warns that some caution is needed.
On one hand, making models available to all could supercharge the positive effects of AI, such as discovering new medicines or optimising agricultural systems to fight famine.
But, on the other, these models could also be used to create weapons more dangerous than anything humanity has encountered before.
He says: ‘Open-sourcing models is an especially concerning prospect, particularly given researchers have shown it will [be] trivial to remove any safeguards that are built in to prevent such misuse.’
In 1995, a doomsday cult called Aum Shinrikyo released deadly sarin gas onto the Tokyo subway. Mr Eisenpress fears that, in the wrong hands, AI could be used to facilitate even deadlier attacks
3. AI gets deliberately turned loose
To understand why letting a poorly understood computer program run wild could be devastating, there’s no need to speculate about the future.
In 2017, unrelated computer systems across the world suddenly began to experience unexplained problems.
India’s largest container port was brought to a standstill, the radiation monitoring system at Chernobyl Nuclear Power Plant went offline, and banks, pharmaceutical firms, and hospitals suddenly lost control of their systems.
The culprit was the NotPetya virus, a cyberweapon very likely created by the Russian military to attack Ukraine.
But when the virus leaked, it spread far further than its creators had expected, leading to an estimated $10 billion (£7.93 bn) in damage.
Just as with bioweapons, AI stands to supercharge the capacity of cyberweapons to new levels of destruction.
Cyber weapons created by AI could be even more disruptive than anything seen before. If leaked online they could destabilise the world economy and disrupt critical infrastructure (stock image)
Within a month of GPT-4 being released, online groups had created an AI called ChaosGPT, instructed to ‘destroy humanity’. While the impact of this was limited, experts warn that there are many groups who would deliberately release a destructive rogue AI
Worryingly, this process may have already begun.
The US State Department warned: ‘We have observed some North Korean and other nation-state and criminal actors try to use AI models to help accelerate writing malicious software and finding systems to exploit.’
There are also concerns that bad actors may deliberately unleash a rogue AI into the world.
Last year, researchers from the Center for AI Safety wrote: ‘Releasing powerful AIs and allowing them to take actions independently of humans could lead to a catastrophe.’
The researchers pointed out that only one month after the release of GPT-4, an open-source project had already bypassed the safety filters to create an agent instructed to ‘destroy humanity’, ‘establish global dominance’, and ‘attain immortality’.
‘Dubbed ChaosGPT, the AI compiled research on nuclear weapons and sent tweets trying to influence others,’ the researchers said.
Luckily, this agent didn’t have the ability to hack computers, survive, or spread, but it came as an important warning of the risks of intentionally malicious AI.
Ben Eisenpress (pictured), Director of Operations at the Future of Life Institute and AI expert, told MailOnline there are five ways AI might lead to humanity’s destruction
4. Nuclear War
Perhaps one of the most troubling fears around AI is that the very systems we build to protect ourselves might become our undoing.
Modern warfare relies on gathering and processing vast amounts of information.
Battlefields are becoming enormous networks of sensors and decision-makers, and a devastating attack can arrive faster than ever before.
For this reason, the world’s militaries are now beginning to consider implementing AI into their decision-making systems.
The Ministry of Defence’s 2022 Defence Artificial Intelligence Strategy warned that modern warfare could ‘tax the limits of human understanding and often require responses at machine speed.’
The report concluded that the UK ‘must adopt and exploit AI at pace and scale.’
In the 1983 classic ‘WarGames’, a young hacker, played by Matthew Broderick (pictured), almost triggers WW3 by engaging in a ‘war game’ with the AI that has been put in control of the US’ nuclear defence system
But Mr Eisenpress says that incorporating AI into our military systems can lead to even greater dangers, especially in the case of nuclear weaponry.
Just like in the 1983 classic ‘WarGames’, using AI to control nuclear weapons could lead to nuclear war.
He said: ‘Today’s AI systems are inherently unreliable, capable of making inexplicable decisions and “hallucinating”.
‘Integrating AI into nuclear command and control systems is destabilizing.’
One worry is that AI’s rapid decision-making might allow a small error, such as misidentifying an aircraft, to escalate rapidly into full-blown warfare.
Once the AI has made the initial error, different nations’ AI could react to each other faster than any human could control, leading to a ‘flash war’.
‘Even if there is technically a “human in the loop,” we cannot count on decision-makers to override a potentially erroneous AI-generated launch recommendation, given such existential stakes,’ Mr Eisenpress concludes.
5. Gradual disempowerment
The thought of an AI-induced thermonuclear war is terrifying.
But what if humanity doesn’t end with a bang, but with a whimper?
Mr Eisenpress says one way we might be stumbling towards the end of humanity as we know it is through a slow, silent takeover.
‘Our “Gradual AI Disempowerment” scenario tells the story of how humans could incrementally surrender control of our world to AI, with no single dramatic moment,’ he said.
From financial transactions to legal proceedings, many tasks have already been turned over to AI.
Mr Eisenpress says that humanity may one day go the way of the Neanderthal (pictured). Just as they ruled the Earth for hundreds of thousands of years until a more intelligent species arrived, humans might one day find themselves eclipsed by their own creation
Mr Eisenpress said: ‘Over time, AI will become integrated into more and more systems, including those critical to our way of life.
‘Companies or political parties that refuse to harness AI will lose out to those who do, creating a race to the bottom. Little by little, humans will have less control over the world.’
Ultimately, Mr Eisenpress says: ‘We could find ourselves at the mercy of AI, without even realizing what was happening.’
To understand the risk we face, Mr Eisenpress says to look not to the future, but to the past.
He said: ‘Disempowerment is the default outcome when a smarter and more capable entity shows up; just ask the Neanderthals.
‘They thrived for hundreds of thousands of years, only to quickly disappear when modern humans entered the scene.’
Mr Eisenpress concludes with a quote from Alan Turing, founder of modern computing, who wrote in 1951: ‘It would not take long to outstrip our feeble powers…At some stage therefore we should have to expect the machines to take control.’