If Prometheus is a metaphor for atomic fire, perhaps we are entering the age of Boreas, the Greek god of the icy north wind that brings winter. The chill, implacable rise of humanity’s next bane has begun slowly but is gathering pace: autonomous weapons. They may be experimental, engineers’ toys at present, but cold computer logic will soon decide who lives and who dies.

Unlike with the atomic bomb, rare metals are not needed for this technology, and neither is heavy industry. Its kernel is being developed in offices full of computers, helped by the many open-source AI development programmes available.
Historically, the difficulty of making an atomic bomb has constrained ownership to a few nations. Ethics have had to catch up with nuclear weapons, but have been slow to do so and, arguably, we still have progress to make.

By contrast, experimentation and simulation in machine intelligence that can control vehicles, identify objects and people, and react intelligently to situations are becoming increasingly easy. Universities have whole departments using innocuous machines to test these algorithms – a far cry from secret installations in the desert.

Film and video, where art imitates life, have been dramatising the rise of the machine for decades. In both media, autonomous weapon systems (AWS) are mostly human-looking depictions of good and evil that can make moral decisions and display humanity and emotion. On the screen, the machine is entertainment. In the battlespace, we need only rules, not emotions.

The present and near future will produce AWS that look nothing like a bipedal humanoid, however. A weapon system that is expected to act independently needs endurance to be useful, and little that is useful can be packed into a humanoid frame. Among the offerings from the defence industry now on the verge of entering service are surface and submarine mine-hunters the size of conventional patrol boats. Land vehicles are mostly cargo-quad-sized but are easily upscaled to trucks. Aerial drones are commonplace. Power is the limiting factor, be it nuclear, fossil fuel or electricity.

The UK Ministry of Defence and the companies vying for contracts are coy about what they are offering. Images in brochures and online of machines capable of autonomy rarely sport their weapons. The first autonomous six-wheeled ground vehicle trialled in the UK looked nondescript. Elsewhere, an image of the same machine sports a gun turret.

Autonomous weapon systems are too far down the road to be uninvented

The decision not to have a weapon on view during early trials is unsurprising; there is no wish to have alarming stories in the press. The House of Lords’ AI in Weapon Systems Committee report of 2023 begins its title with “Proceed with Caution”. It’s a reasonable paper, by government standards, with many learned contributions from academics.

But the evidence of only one serviceman was taken. The failure to take more notice of practitioners’ views is an interesting omission, and an alarming weakness. Liberal democracies may seek to occupy the moral high ground, but a soldier would note that standing on high ground makes one an easy target. It may be sensible to proceed carefully, but other states and organisations don’t share the same caution. Current conflicts demonstrate that we must at least match, if not overtake, developments elsewhere.

Public opinion and groups that look to ban AWS altogether will be, at best, a strong force for regulation. There is no stopping so-called “killer robots”, and that very phrase can work against campaigners, leading to accusations of hysteria. AWS are too far down the road of development to be uninvented. The best chance is for control; in the same way that nuclear weapons are regulated, AWS must be analysed and their use controlled by consensus. The laws for doing so exist, including international humanitarian law and its scions.

If the killer robot is not humanoid, what does it look like? Can it really be made today? Artificial General Intelligence (AGI) is far off; imagination, empathy and creative thinking are not binary. One and zero, on or off, can only create pale imitations of human thought.

Examining that scion of international humanitarian law, the law of armed conflict, suggests we might not need creative thought. Those who have not experienced the drama and intense emotion of close-quarter combat will not realise that it mostly comes down to drills and rehearsed behaviour. Should I shoot or shouldn’t I? Yes or no. One or zero.

The law of armed conflict is drummed into UK service people by yearly repetition of a standard lecture. Soldiers cynically reflect that this is so the government can say: “Well, we told them not to commit atrocities,” if they do. The law is distilled into targeting doctrine and rules of engagement to make it practicable.

The first murmurings of AWS regulation can be heard in the government declaration that a “human must be in the loop” when applying lethal force. This gives some workable latitude; counter-missile systems already exist in which an operator must intervene to stop automatic weapon release.

For any military to apply lethal force to a target (military personnel, materiel or structure), it must fulfil a set of conditions. There must be military necessity, humanity and distinction, and the attack must be proportionate. In addition, the attacking asset must satisfy another set of conditions before pressing the trigger.

If time allows, a pattern of life (POL) must be established. The collateral damage must be estimated (CDE). Rules of engagement (ROE) must be applied, the target must be positively identified (PID), and the target engagement authority (TEA) must give the order.

These processes are shrunk into a series of easy-to-remember abbreviations because simplification and reduction into drills is essential. The rules apply to all engagements, from those ordered by a head of state down to the infantry soldier with their finger on the trigger. They are made as straightforward as possible.
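To show how mechanical the drill already is, the whole pre-engagement checklist can be sketched in a few lines of code. This is a minimal illustration only: the field names follow the abbreviations above, but the data structure and the idea of a single numeric CDE limit are assumptions made for the example, not any real targeting software.

```python
# A purely illustrative sketch of the pre-engagement drill as a checklist.
# Field names follow the abbreviations in the text (POL, CDE, ROE, PID, TEA);
# the data structure and the numeric limit are invented for the example.
from dataclasses import dataclass

@dataclass
class Engagement:
    pol_established: bool   # pattern of life observed, where time allowed
    cde_score: float        # collateral damage estimate for this attack
    cde_limit: float        # the maximum score the TEA has authorised
    roe_permit: bool        # rules of engagement allow this action
    pid_confirmed: bool     # target positively identified
    tea_order: bool         # target engagement authority has given the order

def weapon_release_permitted(e: Engagement) -> bool:
    """Every condition must be satisfied; a single 'no' halts the engagement."""
    return (e.pol_established
            and e.cde_score <= e.cde_limit
            and e.roe_permit
            and e.pid_confirmed
            and e.tea_order)
```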

Imagination isn’t needed and initiative is limited. Machine intelligence is probably capable of this now. Collateral damage assessment uses a computer tool to do the calculations; the US and UK use the same one. The rules of engagement are set out in a matrix that gives the conditions for responding to a threat and the permissible action.

For example, the matrix may indicate that simply being scanned by a radar should not elicit an aggressive response, but an aircraft could attack in response to a missile targeting radar that locks onto it. This tabulated list of “If this, then do this” conditions is easily converted into programming language.
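To give a flavour of how directly such a matrix translates, here is a minimal sketch in code. The threat categories and responses are invented for illustration, loosely following the radar example above; they are not drawn from any real rules of engagement.

```python
# An illustrative rules-of-engagement matrix as a simple lookup table.
# Threat categories and responses are invented examples, not real ROE.
ROE_MATRIX = {
    "search_radar_scan":       "no_response",     # being scanned is not hostile intent
    "fire_control_radar_lock": "engage_emitter",  # a targeting lock may be met with force
    "missile_launch_detected": "engage_and_evade",
}

def permitted_action(observed_threat: str) -> str:
    # Default to holding fire for anything not listed in the matrix.
    return ROE_MATRIX.get(observed_threat, "no_response")

print(permitted_action("search_radar_scan"))        # no_response
print(permitted_action("fire_control_radar_lock"))  # engage_emitter
```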

Here is a possible scenario. An autonomous drone is tasked to attack a high-value individual over hostile territory, where the likelihood of jammed communications will restrict human supervision. Its libraries are populated with all the relevant intelligence: vehicles, mobile phone numbers, images of the target, where the individual may be and a time frame.

Stealthy and electronically silent, the drone lingers over the target’s predicted location. It doesn’t get tired or bored – it has long endurance. Patiently watching, it gathers the pattern of life around the target: who comes and goes, whether they are collateral objects, whether they are armed and, therefore, legitimate targets.

Eventually, a vehicle matching one of those in its libraries arrives, and a figure matching the photographs dismounts and enters the building. The drone constantly runs the CDE software, and POL observation indicates non-combatants in the building. The CDE score is higher than the TEA has allowed. The drone waits patiently.

A competent programmer from the games industry could write the programme for an attack

The drone cannot use its facial recognition software to PID the target, and its passive communications warn of unfavourable cloud cover. Selecting a mini-drone from the cassette in its fuselage, it drops a perch-and-stare sensor. A parachute slows the descent of this simple device and falls away at 600 feet, allowing the small helicopter to fly to an observation point its master has selected. It shuts down and focuses a camera on the door.

A flurry of activity alerts the sensor, which beams what it sees to the drone; the drone arms its two missiles. The facial recognition software gives it PID. A civilian is saying goodbye to the target. Armed individuals climb into the car. CDE restrictions fall away as the civilian returns to the building. Still, the drone waits. It knows that the car will pause at the gate, further from the building. It disarms the explosive missile and selects the R9X Hellfire, which falls from the wing and streaks downwards, intercepting the car as it begins its turn into the road. This missile has no explosive content and minimal collateral effect; its six blades have earned it the nickname “the flying Ginsu”. No one in the car survives.
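Stripped of the drama, the engagement above reduces to a short loop of the same drills: wait, check PID, check the CDE score against the authorised limit, then choose the munition with the least collateral effect. A minimal sketch, with invented observations, weapons and thresholds standing in for real intelligence data:

```python
# An illustrative reduction of the scenario to code. Observations, weapons
# and the CDE limit are invented stand-ins, not real data or software.

observations = [
    {"pid": False, "cde_score": 4, "car_at_gate": False},  # target not yet identified
    {"pid": True,  "cde_score": 4, "car_at_gate": False},  # PID, but civilians too close
    {"pid": True,  "cde_score": 1, "car_at_gate": True},   # civilian clear, car at the gate
]

weapons = [
    {"name": "explosive Hellfire", "collateral_effect": 3},
    {"name": "R9X Hellfire",       "collateral_effect": 1},  # no explosive content
]

CDE_LIMIT = 2  # the maximum score the target engagement authority has allowed

for obs in observations:
    # Keep waiting until the drills are satisfied and the shot is at its cleanest.
    if not obs["pid"] or obs["cde_score"] > CDE_LIMIT or not obs["car_at_gate"]:
        continue
    # Choose the munition with the least collateral effect, as in the scenario.
    weapon = min(weapons, key=lambda w: w["collateral_effect"])
    print(f"Release {weapon['name']} as the car turns into the road")
    break
```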

The same machine could drive a tank and operate the weapons on board. In a conventional conflict, the targeting protocols are simplified. They become appropriate for combat engagements and self-defence. The machine intelligence can assist the human crew in tactically driving the vehicle and identifying engagement targets. It takes over if the sensors detect a threat that is developing too quickly for a human response.

The story above is extrapolated from actual incidents, current equipment and weapons. A competent programmer from the games industry could write the programme for an attack. The defence industry is good at integrating software and hardware. The production lines are gearing up.

The pace of regulation is slower than defence innovation. There is a genuine chance that we will again have to catch up in a race for arms control. The robots are coming. They won’t take over the world on their own initiative. But they are likely to constitute an existential threat, particularly to liberal democracies that prevaricate when countering them. We need to make them and regulate them.

Chris Lincoln-Jones is a former artillery officer who has specialised in the military use of drones. Operational experience in lethal targeting and work in the defence industry and academia led him to study AWS and the ethical issues surrounding their use. He was military advisor to the films “Eye in the Sky” and “Official Secrets” and is the author of “Dr Moore’s Automaton”, a novel highlighting the problems surrounding the rise of so-called killer robots.
