Jonathan Hall KC suggested new laws may be needed to counter the potential use of generative AI by terrorists.

Terrorists will use artificial intelligence (AI) to promote their ideologies and plan atrocities, with “chatbot radicalisation” a problem that needs to be countered, a watchdog has warned.

Jonathan Hall KC said generative AI could be used for propaganda purposes, attack planning and spreading disinformation which may trigger acts of terrorist violence.

Mr Hall, the independent reviewer of terrorism legislation, suggested new laws should be brought in to ban the creation or possession of computer programs designed to stir up racial or religious hatred.

Terrorist chatbots already exist “presented as fun and satirical models” but given the right prompts they are willing to promote terrorism, he said in his annual report.

Mr Hall said: “The popularity of sex-chatbots is a warning that terrorist chatbots could provide a new radicalisation dynamic, with all the legal difficulties that follow in pinning liability on machines and their creators.”

The watchdog highlighted the case of Jaswant Singh Chail, who climbed into the grounds of Windsor Castle in 2021 armed with a crossbow after conversing with a chatbot called Sarai about planning the attack.

More widely, Mr Hall said “generative artificial intelligence’s ability to create text, images and sounds will be exploited by terrorists”.

Groups such as al Qaida could avoid the technology because of their belief in “authentic messages” from senior leaders but it could be “boom time for extreme right wing forums, antisemites and conspiracy theorists who revel in creative nastiness”.

Terrorist groups could use AI to generate propaganda images or translate text into multiple languages.

The technology could be used to produce deepfakes to bring “terrorist leaders or notorious killers back from the dead” to spread their message again.

Generative AI could be used to provide technical advice on avoiding surveillance, or make knife-strikes more lethal, reducing the need for would-be terrorists to receive training from other people.

But he said current safeguards might deter attack planners from using AI models until offline versions were readily available.

He noted it had also been argued that in certain circumstances AI could be used to extend the way attacks are carried out, by potentially helping to create biological or chemical weapons or generating code for cyber attacks.

Warning about the spread of disinformation online, Mr Hall said the storming of the US Capitol on January 6 2021 emerged from a “soup of online conspiracy and a history of anti-government militarism that had been supercharged by the internet”.
