Remorseless killing at the initiation of artificial intelligence has been the subject of nail-biting concern for various
members of the computer-digital cosmos. Be wary of such machines in war, the warning goes, and of their potential to displace human will
and agency.
and agency. For all that, the advent of AI-driven, automated systems in war has already become a cold-blooded reality,
deployed conventionally, and with utmost lethality by human operators.
The teasing illusion here is the idea that autonomous systems will become so algorithmically attuned and trained as to
render human agency redundant in a functional sense. Provided the targeting is trained, informed, and surgical, a utopia
of precision will dawn in modern warfare. Civilian death tolls will be reduced; the mortality of combatants and
undesirables will, conversely, increase with dramatic effect.
The staining case study that has put paid to this idea is the pulverising campaign being waged by Israel in Gaza. A report in the magazine +972 notes that the Israeli Defense Forces has indulgently availed itself of AI to identify targets and dispatch them
accordingly. The process, however, has been far from accurate or forensically educated. As Brianna Rosen of Just Security accurately posits, “Rather than limiting harm to civilians, Israel’s use of AI bolsters its ability to identify, locate, and expand
target sets which likely are not fully vetted to inflict maximum damage.”
The investigation opens by recalling the bombastically titled The Human-Machine Team: How to Create Synergy Between Human and Artificial Intelligence That Will Revolutionize Our World, a 2021 publication available in English authored by one “Brigadier General Y.S.”, the current commander of the Israeli
intelligence unit 8200.
The author advances the case for a system capable of rapidly generating thousands of potential “targets” in the
exigencies of conflict. The sinister and morally arid goal of such a machine would be to resolve a “human bottleneck for both
locating new targets and decision-making to approve the targets.” Doing so not only dispenses with the human need to
vet, check and verify the viability of the target, but also with the need to seek human approval for its
termination.
The joint investigation by +972 and Local Call reveals that such a system, known to the Israeli forces as Lavender, has reached an advanced stage of development. In terms of its
murderous purpose, this AI creation goes further than such lethal predecessors as “Habsora” (“The Gospel”), which
identifies purportedly relevant military buildings and structures used by militants. Even that form of identification
did little to keep the death rate moderate, generating what a former intelligence officer described as a “mass assassination factory.”
Six Israeli intelligence officers, all having served during the current war in Gaza, reveal how Lavender “played a
central role in the unprecedented bombing of Palestinians, especially during the early stages of the war.” Using the
AI machine effectively subsumed the human element while lending the system’s targeting results a fictional
human credibility.
Within the first weeks of the war, the IDF placed extensive, even exclusive reliance on Lavender, with as many as 37,000
Palestinians being identified as potential Hamas and Palestinian Islamic Jihad militants for possible airstrikes. This
reliance signalled a shift from the previous “human target” doctrine used by the IDF regarding senior military
operatives. In such cases, killing the individual in their private residence would only happen exceptionally, and only
to the most senior identified individuals, all to keep in awkward step with principles of proportionality in
international law. The commencement of “Operation Swords of Iron” in response to the Hamas attacks of October 7 led to
the adoption of a policy by which all Hamas operatives in its military wing, irrespective of rank, would be designated as human targets.
Officers were given expansive latitude to accept the kill lists without demur or scrutiny, with as little as 20 seconds
devoted to each target before a bombing was authorised. Permission was granted despite an awareness that the system
errs in “approximately 10 percent of cases, and is known to occasionally mark individuals who
have merely a loose connection to militant groups, or no connection at all.”
The Lavender system was also supplemented by the emetically named “Where’s Daddy?”, another automated platform
which tracked targeted individuals to their family residences, which would then be flattened. The result was mass
slaughter, with “thousands of Palestinians – most of them women and children or people not involved in the fighting”
killed by Israeli airstrikes in the initial stages of the conflict. As one of the interviewed intelligence officers
stated with grim candour, killing Hamas operatives when in a military facility or while engaged in military activity was
a matter of little interest. “On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s
much easier to bomb a family’s home. The system is built to look for them in these situations.”
The use of the system entailed resorting to a gruesome and ultimately murderous calculus. Two of the sources interviewed
claimed that the IDF “also decided during the first weeks of the war that, for every junior Hamas operative that
Lavender marked, it was permissible to kill up to 15 or 20 civilians.” Were the targets Hamas officials of certain
seniority, the deaths of up to 100 civilians were also authorised.
In what is becoming its default position in the face of such revelations, the IDF continues to state, as reported in the Times of Israel, that appropriate conventions are being observed in the business of killing Palestinians. It “does not use an
artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a
terrorist”. The process, the claim goes, is far more discerning, involving the use of a “database whose purpose is to
cross-reference intelligence sources... on the military operatives of terrorist organizations”.
The UN Secretary General, António Guterres, stated how “deeply troubled” he was by reports that Israel’s bombing campaign had used “artificial intelligence as a tool in
the identification of targets, particularly in densely populated residential areas, resulting in a high level of
civilian casualties”. It might be far better to see these matters as cases of willing and reckless misidentification,
with a conscious acceptance on the part of IDF military personnel that enormous civilian casualties are simply a matter
of course. To that end, we are no longer talking about a form of advanced, scientific war waged proportionately and with
precision, but a technologically advanced form of mass murder.
Dr. Binoy Kampmark was a Commonwealth Scholar at Selwyn College, Cambridge. He currently lectures at RMIT University.
Email: bkampmark@gmail.com