The AI woz here

The image above was generated for this article by an AI engine in less than ten seconds

How do you know a human being has written this column? Yes, there’s a picture of me in the byline, but that could be as computer-generated as the words you’re reading now. There’s an adage (I’m loath to call it “old”, though its source, a New Yorker cartoon published in 1993, is undoubtedly from a different age): “On the Internet, nobody knows you’re a dog.” Nearly 30 years on, we’re heading towards a world where nobody knows if you’re a startlingly well-trained bit of code – artificial intelligence (AI) that’s stolen a commission from an inconvenient human with pay demands and healthcare needs.

In recent months, growing access to AI art tools, most notably OpenAI’s punningly named DALL-E 2, has led to social media feeds filled with remarkable images generated from a text prompt. I nearly wrote “created” in that previous sentence, but that would be ascribing a quality to the AI that it simply doesn’t have. What seems like “creativity” is an illusion, the product of procedures designed to introduce elements that mimic a human artist. There is no Picasso in the machine, no digital Dalí twiddling its moustache, but it can do good impressions of both and suggest originality through seemingly unexpected combinations of thousands of different data points.

OpenAI has acknowledged that the dataset used to train DALL-E 2 means it can and does replicate racial and gender biases that are present in existing images. The organisation has tried to mitigate some of the issues by preventing prompts that would generate sexual or violent content and those that would produce images featuring public figures or trademarked characters. But the developers of other tools have no such qualms.

For example, when Stability AI released Stable Diffusion, an AI art generator that allows for prompts including the names of celebrities and characters, images of Kanye West as a member of the Taliban and President Obama “comforting” Donald Trump quickly appeared. And though the tool is set to avoid explicit imagery by default, that filter can easily be turned off. Lexica, a search engine dedicated to Stable Diffusion-generated images, shows that a lot of people are using it to produce virtual celebrity nudes, which range from the plausible to the positively eldritch.
Newgrounds, one of the Internet’s more venerable sites for sharing art (it’s been around for 27 years), last year banned ArtBreeder – a tool that makes (“breeds”) new pictures from two or more existing images – and updated its guidelines to include all AI-generated art last month. It explained: “We want to keep the focus on art made by people and not have the [site] flooded with computer-generated art.” DeviantArt, another popular online community, has not followed suit yet, and its front page is regularly dominated by unedited AI art.

The problem for human artists is that AI art has them utterly outgunned. Traditional art, even when made with digital tools, takes hours to produce. AI can create thousands of images an hour without the need for constant human input. Some people argue that this is simply a natural development, the next step in a history that began with marks on cave walls, and that artists will now become skilled directors who shape the output of algorithms.

But the AI tools feed on the work of living artists, take their style and gobble it up. They are like the Borg in Star Trek, who encounter new species and tell them: “We will add your biological and technological distinctiveness to our own. Your culture will adapt to service us.” And, like the Borg, many people in tech have decided that “resistance is futile”. Yet that doesn’t have to be the case. We can, like the art communities that are rejecting AI art, choose to limit how and when these models are used.

SudoWrite – which uses the same underlying GPT-3 technology as DALL-E, but generates novels, screenplays, and articles – is already being used by students to produce essays for them and circumvent plagiarism detection. Meanwhile, a startup called Jasper promises companies that its AI-enabled software will create their marketing copy without the need for costly humans. The future for people who line up words in pleasing ways looks bleak.

There are limits to what models like DALL-E 2 and SudoWrite can do for now, with plenty of oddities and inconsistencies in the images and texts they produce. But they will continue to get better, trained on larger and more diverse datasets. It may be that schools and academia will need to treat AI writing tools as akin to performance-enhancing drugs in athletics, while human artists will need to rebrand their product as “organic”, using its true uniqueness as an argument for the higher cost.

But journalists should be especially worried now. Josh Dzieza, writing for The Verge, described SudoWrite as “like a good bullshitter… better at form and style than substance.” Cast your eye over the output of the average national newspaper columnist and tell me that description doesn’t sound familiar. And unlike the highly paid occupants of those bylines now, the virtual columnist will never require a note beneath a replacement column that reads: “The AI is away.”

Mic Wright is a freelance writer and journalist based in London. He writes about technology, culture and politics
