“Come and see.” Depending on your eschatological bent, it is either an innocuous phrase or the most terrifying three words in the English language. “Come and see” appears four times in the Book of Revelation, once for each of the Four Horsemen of the Apocalypse, and they seem as relevant and resonant now as ever before. War, famine, pestilence, death, Gary Barlow’s wine show on ITV: repent, sinners, repent, for the end is nigh.

But our apocalypse will almost certainly be different from ones previously envisaged. For most of human history the end of the world was expected to come via natural cataclysm, an act of God in which humans had no agency and no role other than as helpless victims. Pretty much every ancient culture had its own version of the flood myth; three huge Middle Eastern earthquakes in the 1130s killed hundreds of thousands, a vast death toll for the time; and the 1815 eruption of Indonesia’s Mount Tambora prompted the coldest summer in more than two centuries and inspired Lord Byron to write Darkness, a primer for the end of the world. “The bright sun was extinguish’d, and the stars did wander darkling in the eternal space… And men forgot their passions in the dread of this their desolation… All earth was but one thought – and that was death, immediate and inglorious.”

This belief changed in a flash, literally, on 16 July 1945. The successful detonation of the atomic bomb was proof that, for the first time, mankind could be the architect of its own destruction. Small wonder that, at the moment of explosion, J Robert Oppenheimer remembered the words of the Bhagavad Gita: “I am become death, destroyer of worlds.” From the bomb came the clock: the Doomsday Clock, created two years later and designed to show how close the world was to nuclear annihilation at any given moment. The clock still exists, and at 90 seconds to midnight it puts us closer to the brink than ever before, but it now takes account not just of the nuclear threat but of climate change and disruptive technologies too.

This is where and how we differ from our ancestors: if our doomsday comes, we will almost certainly have brought it on ourselves. When it comes to existential risk – that is, something which would cause outright human extinction or at the very least destroy our long-term potential as a species – the threat from, say, an asteroid strike is vanishingly small compared with those from AI, bio- and nanotechnology, nuclear warfare, and climate change.

Toby Ord, an Australian philosopher who works at the Future of Humanity Institute (FHI) in Oxford, calls this “the precipice”: a stark point in human development where we can choose whether to back away from disaster or plunge headlong into the abyss. Since mankind’s ability to do things always runs ahead of our capacity to decide whether or not (or how) to do them, the question is a simple one: we have the knowledge, but do we have the wisdom?

Ord puts our chances of obliteration in the next century at one in six: a throw of the dice, a chambered bullet in Russian roulette. Those may be odds worth taking in a casino or The Deer Hunter, but with all of humanity at stake they do – or should – concentrate the collective mind on being rather less reckless. And if anthropogenic risks are more dangerous than non-anthropogenic ones, then one anthropogenic risk has the potential to outweigh all the others. Not climate change, which in extremis would make large parts of the world uninhabitable and have vast political, social, economic and humanitarian ramifications, but would not destroy humanity. Not nuclear warfare, which has the ultimate deterrent in mutually assured destruction; and not molecular technology, which remains broadly a force for good. No: the biggest threat, even if it is not yet a clear and present danger, comes from AI.

Technology is in itself a morally inert tool: whether its applications are positive or negative depends on how it’s used. The problem we face is not that almost any technology can cause harm in the wrong hands – we have long been used to that – but that the wrong hands might one day belong to the technology itself.

At some stage, perhaps not in our lifetimes but almost certainly in those of our children, intelligence will escape from the constraints of biology. All human history has been predicated on our basic forms making us the smartest beings around, and the idea that this would no longer be the case – either through straight AI takeover or some transhuman/post-human, man-machine hybrid – puts pretty much everything up for grabs.

The biggest threat to humanity comes from AI

Just as the fate of earth’s animals currently depends on our goodwill and sensible custodianship, so might we one day be similarly dependent on a machine superintelligence. Every species in an ecosystem bar the apex predator exists on sufferance. “The difference in intelligence between humans and chimpanzees is tiny,” says Stuart Armstrong, an FHI research fellow. “But in that difference lies the contrast between seven billion inhabitants and a permanent place on the endangered species list. That tells us it’s possible for a relatively small intelligence advantage to quickly compound and become decisive.”

Even now, AI has many advantages over the human brain: we cannot compete on speed, scalability, memory, reliability, duplicability or editability. Were Stanley Kubrick still around to make 2101, his famous match cut from bone to spaceship would, in terms of scale, have to run the other way: from spaceship back down to a molecular chain deep within a computer chip.

The idea of man creating an entity more powerful than himself has deep roots: think of the golem from early Judaism, or Mary Shelley beginning to write Frankenstein the same year as Byron was penning Darkness. Despite those deep roots, popular tropes about AI are still simplistic, whether benevolent (“we can keep it under control”) or malevolent (“machines will turn malicious”). The ultimate likelihood, however, is that AI will neither cleave to human ethical norms nor directly seek power and domination. It will not be for or against us: its interests will simply not include us.

“The basic problem is that the strong realisation of most motivations is incompatible with human existence,” says Daniel Dewey, a specialist in machine superintelligence. “An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don’t take root systems or ant colonies into account when we go to construct a building.”

When it comes to survival, the noun “humanity” is both concrete and abstract: not merely the sum total of people on earth, but the many qualities, good and bad, which between them make up the concept of being human. And a large part of being human is to examine that state through cultural avenues, whose trends in turn reflect our current preoccupations. Stories are how we make sense of the world: the replicant Roy Batty’s dying soliloquy in Blade Runner about seeing attack ships on fire off the shoulder of Orion is a more eloquent treatise on humanity, in both senses of the word, than a thousand PhD dissertations could manage.

The contemporary glut of sci-fi and apocalyptic fiction is no surprise: we seek escape from our everyday problems by confronting our own fears in a fictional setting. Susan Sontag described disaster movies as cathartic fantasies, in which “one can participate in the fantasy of living through one’s own death and more, the death of cities, the destruction of humanity itself.” We all want to be one of the few who make it through that collective death wish: the special, the elite, the chosen ones, braver and more resilient than we are in real life. Even the longest human lifespan is only a tiny fraction of time on a cosmic scale, fleeting and so far from either end of mankind’s story. How much more meaningful would our small walk-on part be if it coincided with great upheaval? We know the world doesn’t revolve around us, but we like to think it might stop around us.

Apocalyptic fiction is not just a postcard from a future we fear. It is resonant because it encompasses all four of the archetypes which Carl Jung believed we hold within ourselves: the magician (insight, learning, innovation), the warrior (decisiveness, conviction, loyalty), the sovereign (coherence, unity, purpose), and the lover (sensuality, creativity, emotion). The magicians are the scientists, pace Arthur C Clarke’s dictum that “any sufficiently advanced technology is indistinguishable from magic”; the warriors fight for the basics of survival; the sovereign leads and organises the new community; and the lovers exemplify why all this is worth it in the first place.

In Dune, for example, humans more than 10,000 years into the future are so horrified by their reliance on machines that they rise up in the Butlerian Jihad: destroying all computers and cognitive robotics, instituting the prohibition that “thou shalt not make a machine in the likeness of a human mind”, and ushering in a universal religion. The irony, of course, is that there is almost no practical difference between machine superintelligence and an omniscient, omnipotent God: and what is transcendence into the cloud other than resurrection and immortality? We need to believe in something even as we help maximise the chances that there’ll be nothing left to believe in.

So how do we reverse the second part of that equation and reduce existential risks? Most obviously, by working hard, smart and collectively to install measures at three separate stages – prevention (minimising the chances of those risks happening at all); response (containing them on a small scale if they do start); and resilience (increasing the chances of survival and rebuilding if the worst still comes to pass). That there is currently little effort to do any of this is as depressing as it is unsurprising: those who work for think tanks such as the FHI or Cambridge’s Centre for the Study of Existential Risk (CSER) could be forgiven for wondering why they’re often hollering into the void rather than having their counsel sought in the most hushed and rarefied corridors of power.

The two main obstacles are political and financial. Governance mechanisms develop much more slowly than the changes to which they’re responding, which in this case has led even Elon Musk, not exactly the poster boy for government overreach, to demand action. “Normally the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation.”

Free markets also tend to steer away from the issue. Reducing existential risk is a global public good, but what’s good for the global public is not necessarily what’s good for individual nations or institutions. We currently spend less than 0.001% of gross world product (GWP) on targeted existential risk reduction, and even in specific areas the disparity is clear: a 2020 estimate put global spending on AI technologies at $40bn, but spending on managing AI risk at only around $50m – barely an eighth of one per cent of that figure.

The ultimate return on investment would of course be enormous – what could be more lucrative than the continued existence of humanity? – but the case is much harder to make under conventional investment criteria. Whatever funds and resources a nation or institution commits, it captures only a tiny fraction of the benefits: it would be helping billions of people who’ve contributed nothing, and billions more who haven’t even been born yet.

But this is exactly what’s necessary. We need to place greater value not only on the lives of people in different places and circumstances from our own, but also on the lives of those in future times which we ourselves won’t get to see. “If we think of one million years [the typical lifetime of a mammalian species] in terms of a single, 80-year life,” says Ord, “we are currently sixteen years old: just coming into our power, just old enough to get ourselves in serious trouble.” We are teenagers, but the survival of our species requires us to be functioning and mature adults.

“In ecological terms, it is not a human that is remarkable, but humanity,” Ord adds. And this humanity, if preserved, will be more remarkable than we can conceive of. Catastrophe would not just kill every human alive at the time: it would also betray everything that past humans have created and strived for, and squander the almost infinite possibilities of our future descendants amongst the stars. A time traveller from 500 years ago plonked down in our world would likely keel over from the sensory overload of things he could never have envisaged: cars and planes, electricity and computers, smartphones and supermarkets, nuclear weapons and pharmaceuticals. Imagine what we would find if we went forward 500 years, let alone a million. We can’t imagine most of it: that’s precisely the point.

But we do already have sight of how things might go wrong if we let them. The Covid pandemic in effect provided a trial run for the end of the world, not in terms of the severity of the threat (a global public health crisis is a long way off cataclysm) but in terms of individual and institutional responses to that threat. On one hand, there were numerous stories of scientists pooling resources, of people looking out for friends and neighbours, and of acts of kindness. On the other, there was little meaningful international co-operation at state level, many government policies were confused and ineffective, and viciously deep political schisms opened up over response issues such as masks and vaccines. Covid could have pulled governments together to make all of us safer, as the Cuban missile crisis did 60 years ago by ushering in arms limitation programmes, but so far it has not, and surely only the most Panglossian of observers could believe that all these fault lines will magically vanish next time around. Indeed, the one racing certainty of the apocalypse is that millions of people, whether through disinformation or stubbornness, will refuse to believe how bad it is until it’s too late.

Jared Diamond’s Collapse, which examined the failures of past societies such as the Maya, Anasazi and Easter Islanders, identified two crucial choices made by enduring and successful civilisations and ignored by those which failed. The first was long-term planning, particularly the willingness “to make bold, courageous, anticipatory decisions at a time when problems have become perceptible but before they have reached crisis proportions.” The second was “the courage to make painful decisions about values. Which of the values that formerly served a society well can continue to be maintained under new changed circumstances? Which of these treasured values must instead be jettisoned and replaced with different approaches?”

We need to place greater value on the lives of those in the future

We live in a world whose technology allows us to forge connections with anyone anywhere, and delivers more knowledge more easily than ever before. But that same technology has enabled disinformation, tribalism and toxicity on vast scales, encouraging us to group ourselves into silos and to defend ourselves against our fears rather than attacking and dealing with the sources of those fears. This is what survivalist preppers and New Zealand-bound billionaires alike have failed to grasp (or perhaps they grasp it all too well): that the only real hope for collective survival is to act collectively. Those who would flee from and barricade themselves against the masses not only anticipate social breakdown: to some degree they also desire it, even relish it, for the frontier mentality of strong men defending the manse is baked deep into our collective cultural heritage.

But even small groups can only survive when they act together. If you’re a tech maven sequestered in your secure compound on the South Island while the world goes to hell, you still need other people. Where does your food come from? Who repairs and maintains your infrastructure? Who are the rough men standing ready in the night to visit violence on those who would do you harm? Most importantly of all, how do you keep all these people happy and working in your best interests? The most powerful person in this apocalyptic Aotearoan Alamo wouldn’t be the billionaire paymaster: it would be their head of security.

Ironically, New Zealand was one of the most isolationist countries on earth during the pandemic, and you’d have to have a heart of stone not to laugh at the prospect of Silicon Valley trillionaires being turned away from their prearranged sanctuaries by Kiwi border control. But the resonance of the country as a location in itself tells us a lot about how we view the prospect of apocalypse. It’s not just that it’s remote, surprisingly far away even from Australia (as the apocryphal Qantas announcement goes, “please put your watches forward three hours and back 50 years”): lots of places are remote. It’s also Edenic, a sparsely peopled primordial land full of jaw-dropping vistas and natural beauty. It harks back to the world before we damaged it: it played the Shire in The Lord of the Rings, its skies brushed by long white clouds which have nothing to do with remote data storage. It is Mount Ararat, the shelter from the flood: it is bucolic fantasy and prelapsarian utopia, because it contains everything we needed before we built all the things which will, if we let them, ruin us.

Boris Starling is an award-winning author, screenwriter and journalist. He created the “Messiah” series which ran for five seasons on BBC1
