I’ve always been more inclined towards mundane visions of human decline than those baroque imaginings filled with invading alien armies, vast asteroids and – most frequently of late – artificial intelligence (AI) gone rogue. We human beings are an egotistical species and our apocalyptic visions tend toward extreme ends, but I can more easily imagine us sliding out of frame than going out with a bang. Children of Men – Alfonso Cuarón’s film adaptation of PD James’s novel, whose plot hinges on ecocide and infertility – feels far more real to me than AI-gone-evil tales such as last year’s sci-fi hit, The Creator, in which a war with the machines is sparked by the AI dropping a nuclear bomb on LA.

There is a peculiar kind of conceitedness to imagining that we will create an intelligence so powerful that it will dethrone us from the top of the planet’s food chain. It is a vision of humans as godlike, however briefly, and it is not confined to the realms of fiction. Elon Musk is currently suing OpenAI – which he was involved in founding – accusing it of betraying its foundational mission and putting the pursuit of profit ahead of benefiting humanity. Musk claims that artificial general intelligence (AGI) – a theoretical form of AI that could perform tasks at or above human capability – is “a grave threat to humanity”. But many experts don’t believe we will ever create AGI.

A search of Google Ngram Viewer – which can chart how often a particular term appears in books – indicates that the frequency of references to “human extinction” rose rapidly after World War II, spiked during the nuclear paranoia of the 1980s, and has continued to climb since the 1990s. The philosopher Émile Torres, author of a 542-page study called Human Extinction: A History of the Science and Ethics of Annihilation, argues that while some thinkers in the ancient world considered the end of humanity, Christianity’s focus on salvation removed it from public discourse. On that hypothesis, increased fear of our end is intrinsically linked to Christian decline.

Last year, the American computer scientist Eliezer Yudkowsky wrote an opinion piece for Time arguing that the world should impose a moratorium on AI development but also be ready to use nuclear weapons to destroy rogue AI installations flouting that ban. Yudkowsky believes that AI, on its current course, will lead to the complete extinction of humanity and all other biological life. It’s a terrifying prospect, but terrifying in the way that dystopian novels and apocalyptic science fiction films strive to be. Thinkers get a thrill from these kinds of discussions because they are so extreme and exciting.

Prosaic problems of providing water, food and shelter are far less interesting to tech barons

The slow – but accelerating – crawl towards climate disaster is a less appealing topic than a call to develop first-strike capability against theoretical armies of AI killbots. Similarly, our ongoing debates about immigration, borders and the role of the nation state are scabs to be constantly picked at, yet they are tossed aside as short-term problems by many of the thinkers who concern themselves more with the fear of evil AI or with worrying about species-ending asteroid strikes.

In a written debate about the possibility of AGI with the academic and AI visionary Gary Marcus, the innovative software engineer Grady Booch observed: “We, as computer scientists, not only vastly overestimate our abilities to create an AGI, we vastly underestimate and under-represent what behavioural scientists, psychologists, cognitive scientists, neurologists, social scientists and even the poets, philosophers and storytellers of the world know about what it means to be human.”

While AI fears draw a great deal of attention and investment, the prosaic problems of providing clean water, food and shelter to a growing population as a sliver of billionaires hoards the planet’s resources are far less interesting to tech barons who are among that ultra-gilded elite. We have to solve the same problems over and over again.

In the ’60s, the agricultural scientist Norman Borlaug’s contribution to higher-yielding dwarf wheat varieties was hailed as a famine-averting miracle and earned him the Nobel Peace Prize. By the ’80s, the long-term cost of Borlaug’s new varieties had become clear – reduced soil quality and genetic diversity, more soil erosion, and heightened vulnerability to pests. They required more water and expensive fertilisers. In already impoverished rural areas, debt and social inequality rose significantly. In 2007, Alexander Cockburn wrote for CounterPunch: “Aside from Kissinger, probably the biggest killer of all to have got the Peace Prize was Norman Borlaug, whose ‘green revolution’ wheat strains led to the death of peasants by the million.”

The problems of now deserve our attention far more than our egotistical imaginings about some blasted future. The seeds of our eventual demise lie in a decline that will seem glacially slow until suddenly we’re standing on the edge of the melting iceberg. The apocalypse is in every summer of record-breaking temperatures and every winter when floods and storms are more intense than we can remember. We don’t need to imagine futuristic means of our destruction. They are here now, and we could be working far harder to master and defeat them.

Mic Wright is a journalist based in London. He writes about technology, culture and politics


April 2024, Columns
