We’re overwhelmed by images in daily life. Sometimes the odd one makes us stop scrolling, zoom in and pause. Maybe we’re smiling, maybe we’re bewildered. Last month gave us two sets of visuals that do just that: the first from the James Webb Space Telescope, the second from the AI image generator Dall-E 2. Both shifted our perception of the world, be it far away or closer to home.

“The deepest images of the universe” is how recently released visuals from the James Webb telescope have been described. Cosmic Cliffs (image 1) is the most striking of these, appearing to us as a seething, craggy landscape, an alien’s rendering of Van Gogh’s Starry Night. The Southern Ring Nebula (image 2) is also eerily beautiful. It plays havoc with our sense of scale: we imagine this vast planetary nebula as a delicate, jewelled box.

Image 2: ‘Southern Ring Nebula’, James Webb Space Telescope image showing details of the Southern Ring planetary nebula that were previously hidden from astronomers. Planetary nebulae are the shells of gas and dust ejected from dying stars. Image credit: NASA, ESA, CSA, and STScI

They elicit awe, showing us a realm generally articulated by science fiction. Their effect on us astronomical laymen is specific and powerful. Even if you haven’t visited NASA’s website, they feel ubiquitous since they’re continually thrust into our newspapers and social feeds. People are claiming “they have the potential to rewrite the history of the cosmos and reshape humanity’s position within it.” This of course has the Mona Lisa effect: we instantly imagine they possess a higher truth.

Looking into the construction of these pictures can help us understand both the aesthetic strategies at play, and why they’re so significant.

What’s fascinating is the level of manipulation involved. For a start, the colours we see aren’t “real”. The telescope collects photons mostly at infrared wavelengths, beyond the range humans can perceive as visible light. These are then translated into visible colours, so that what is essentially data can be presented to us as naturalised photographs.

NASA uses a guide for producing these transformations: the so-called “Hubble Palette”. The Hubble was the first major space-based optical telescope, sitting above the obscuring atmosphere of planet Earth and obtaining a view of the cosmos often compared to Galileo’s seventeenth-century discoveries. Since its launch, astronomers have developed particular aesthetic and representational conventions for visualising the cosmos. For example, the Hubble Palette dictates that Sulphur II emissions should be red, Hydrogen-alpha green and Oxygen III blue.
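For the technically curious, the principle is simple enough to sketch in a few lines of code. Below is a minimal, hypothetical illustration of the palette mapping, assuming three narrowband exposures already loaded as arrays and using an arbitrary square-root stretch; it is a sketch of the idea, not NASA’s actual processing pipeline.

```python
import numpy as np

def stretch(channel):
    """Normalise a narrowband exposure to the 0-1 range and apply a
    square-root stretch to lift faint detail (an illustrative choice)."""
    channel = channel - channel.min()
    channel = channel / (channel.max() + 1e-9)
    return np.sqrt(channel)

def hubble_palette(sulphur_ii, hydrogen_alpha, oxygen_iii):
    """Assign Sulphur II to red, Hydrogen-alpha to green and Oxygen III
    to blue, stacking the three exposures into a single RGB image."""
    return np.dstack([stretch(sulphur_ii),
                      stretch(hydrogen_alpha),
                      stretch(oxygen_iii)])

# Synthetic data standing in for real telescope exposures.
def fake_exposure():
    return np.random.rand(512, 512)

rgb = hubble_palette(fake_exposure(), fake_exposure(), fake_exposure())
```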

This is how the images are made, but what do the colour and compositional choices remind us of stylistically? What influenced these conventions?

Elizabeth Kessler, an art historian at Stanford University, convincingly argues that the light, composition and colour of many Hubble (and James Webb) images recall nineteenth-century paintings of the American West. In her book, Picturing the Cosmos, Kessler references the visual language of painters such as Albert Bierstadt and Thomas Moran, citing Moran’s Cliffs of the Upper Colorado River (image 3) to make the comparison.

Image 3: Thomas Moran, Cliffs of the Upper Colorado River, 1882, oil on canvas. Collection of the Smithsonian American Art Museum
Image 4: Image created using Dall-E 2 with the prompt “a transparent sphere on a beach with a crab looking at it”

In a chapter headed “The Astronomical Sublime and The American West” she describes how astronomers did away with certain conventions, such as a north/south orientation, in order to suggest a similarity to earthly terrain. They also adjusted the contrast to “make evident as much detail as possible, notably more than would be visible to the naked eye or in Hubble images in their raw state, and thereby gave the forms three-dimensionality and solidity.” Other apparent similarities are the focus on small regions within larger objects, dramatic backlighting, towers and pillars, and a sense of overwhelming size and scale.

Kessler’s observations give insight into the deeper relevance of these cosmic images. Viewing the cosmos through the James Webb images is, she notes, an experience akin to nineteenth-century American frontiersmen venturing into the unknown. And the current race to commercialise space recalls Moran and Bierstadt’s depictions of a region of untapped wealth, ready to be exploited by the American people.

“The mythos of the American frontier functions as the framework through which a new frontier is seen,” she comments, succinctly. In other words, examining the aesthetic qualities of astronomical images means we can better locate their historical, contextual and political meaning.

The James Webb Telescope is not the only image-maker people are excited about. There are a number of text-to-image AI generators around, but the current focus is on Dall-E 2, from OpenAI, which has attracted claims that it “will fundamentally shift the nature of human expression”.

In essence, it’s a web-based application where you type in words – however nonsensical – and the algorithm creates a unique image from them. It is named after the surrealist artist Salvador Dalí and Pixar’s WALL-E, and trained on more than 250 million images. It’s uncanny how realistic the images are: they can mimic photographs, paintings or even ancient frescoes. Presented out of context, you wouldn’t question their veracity for a second.

However, the vast majority of users of Dall-E 2 (image 4, then 5 and 6) and Dall-E mini (which is much easier to access) are using the application as a meme generator, thanks to the huge potential for random absurdist humour arising from certain prompts (image 7).

Image 5: Image created using Dall-E 2 with the prompt “An ancient Egyptian painting depicting an argument over whose turn it is to take out the trash”
Image 6: Image created using Dall-E 2 with the prompt “Super Mario getting his citizenship at Ellis Island”
Image 7: Images created using Dall-E mini

Other AI image generators include Midjourney, Google’s Imagen and Parti. Most of these applications work through a process called “diffusion”, which starts with a field of random noise and gradually refines it, guided by the text prompt, into a coherent image. It has been described in artistic terms as “pointillism in reverse”: instead of starting with a complete picture and breaking it down into thousands of composite parts, the process begins with the pixels and gradually builds them up into something familiar.
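To give a sense of how that works in practice, here is a deliberately simplified sketch of the denoising loop at the heart of diffusion models. The `predict_noise` function is a placeholder for the trained neural network, and the update rule and constants are illustrative rather than those of Dall-E 2 or any other real system.

```python
import numpy as np

def predict_noise(image, step, prompt):
    """Placeholder for the trained network: a real model would estimate
    the noise present in `image` at this step, guided by `prompt`."""
    return np.zeros_like(image)

def generate(prompt, steps=50, shape=(64, 64, 3)):
    image = np.random.randn(*shape)                  # start from pure random noise
    for step in reversed(range(steps)):
        noise_estimate = predict_noise(image, step, prompt)
        image = image - noise_estimate / steps       # strip away a little of the estimated noise
        if step > 0:
            image += 0.01 * np.random.randn(*shape)  # re-inject a small amount of noise to keep sampling stochastic
    return image

picture = generate("a transparent sphere on a beach with a crab looking at it")
```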

The artist duo Holly Herndon & Mat Dryhurst are among the more prominent artist users of Dall-E 2. They’ve said it resembles the leap from “the early electronic music period of manually stitching together pieces of tape [in order] to collage together a composition, to the introduction of the wired synthesiser studio.” But whereas that change took over half a century, the leap from early AI generators to Dall-E 2 took just three years.

If the James Webb images inspire sublime feelings about the great unknown, then Dall-E 2 leaves us feeling both disorientated and excited. It creates visuals that appear to be from another reality, one eerily familiar but slightly unnerving. Arguably it also represents a new creative synthesis between humans and machines, or at least the most articulate and significant expression yet of that union. The use of such programs as artistic tools can be traced back to “generative” art: a term for a broad array of practices in which artists relinquish some control to a system (generally a computer program or machine).

Although we live in an increasingly digital world, with “the internet of things” connecting ever more disparate elements, that world is still fundamentally shaped by natural and human forces. Our content diet might be prescribed by algorithms through Spotify, Twitter or Netflix recommendations, and our romantic partners chosen by similar means on apps like Hinge – but the products on offer are still made by humans (or are human, in the case of dating apps).

Dall-E 2 and other generating apps are already changing this by populating our lives with non-human, eerie creations that are neither copies nor originals. Images we see daily – whether adverts or artworks – will increasingly be made by algorithms.

Ultimately, however beguiling their artistry, these image generators tap into our anxiety about the future. The James Webb telescope makes explicit the commercialisation of space in the US, amid accusations that Jeff Bezos and Elon Musk care more about rockets than people. Dall-E 2 taps into fears that AI will remodel an unfamiliar world, increasingly beyond our control.

Max Lunn is a journalist based in London
