Criticizing the technological rationalization of war through automated data analysis and algorithmic decision-making is a challenge that can be met not least by evaluating information from the War on Terror and investigative research on current warfare in the Middle East. The Working Group Against Armed Drones, the Information Center on Militarization and the Forum Computer Professionals for Peace and Societal Responsibility take up this challenge and show why the practice of “targeted killing” with systems like Lavender should be outlawed as a war crime.
*
“People, things and events become ‘programmable data’. It’s all about ‘input’ and ‘output,’ variables, percentages, processes, and the like, until any connection to concrete things is abstracted away, and all that remains are abstract graphs, columns of numbers, and expressions.” Joseph Weizenbaum
Numerous articles in recent days have reported on the AI-based Lavender system used by the Israel Defense Forces (IDF) to identify Hamas officers and mark them as human targets. The markings are passed on to the respective IDF units, and another system, named Where’s Daddy?, is then activated to track the targets. As soon as they have entered their homes, they are bombed. Usually at night and in the presence of their families.
Media outlets such as The Guardian, Der Spiegel, The Washington Post, and, very extensively, the German newspaper nd-aktuell referred to the investigative report on Lavender by the Israeli journalist and filmmaker Yuval Abraham. He published it in the Israeli-Palestinian magazine +972 in collaboration with the independent, Hebrew-language news website Local Call.
As early as November 2023, Abraham reported on an AI-based command, control, and decision-support system of the IDF designed to mark buildings and structures from which Hamas militants allegedly operate: The Gospel. Lavender, by contrast, marks people, according to witnesses. It does not provide the coordinates of bases, but creates a list of people to be killed. A kill list.
Many of the terms used by Yuval Abraham and the soldiers he interviewed inevitably take us back to the peak years of the US Global War on Terror after 9/11: the killing of families as collateral damage, machine-generated kill lists, the lowering of the threshold for classifying someone as a militant or terrorist, the psychological distance to potential victims created in soldiers’ minds by military and technical means, and the farce of “targeted killing” – the claim that more sophisticated technologies enable more precise killing with fewer civilian victims. It almost reads like an upgrade of the US drone war. Yet the massive error-proneness of complex information-processing systems for monitoring, identifying, and recording individuals has not been remedied since then – and this despite the fact that AI research and development has made quantum leaps in the last fifteen years. Lavender is one of those representative and striking upgrades that emerged from the trial-and-error zones (see below) set up during the War on Terror to test war technologies: in Afghanistan, Yemen, Somalia, and many other civilian areas of the world designated as war zones or combat zones.
High-tech war
According to available information, Lavender is one of the military systems that use machine learning, the branch of AI that analyzes data algorithmically or with statistical methods in order to identify patterns in it. These patterns serve as the basis for automated recommendations intended to assist and support commanders and operators at various levels of the chain of command in semi-structured and unstructured decision-making processes. For this reason, this technology is also referred to as data-driven. Together with The Gospel and its companion systems – Fire Factory, which prepares attacks through scheduling and prioritization functions; Depth of Wisdom, which maps the Gaza tunnel network; and The Alchemist, which sends real-time warnings of potential threats to commanders’ tablets – this new release of Lavender once again demonstrates that the threshold for relying on complex information-processing systems in military operations has dropped dramatically.
The active and large-scale development phase of this approach to the detection and identification of individuals began shortly after the turn of the millennium, in the wake of the reactions to the terrorist attacks of September 11, 2001, which, according to official figures, killed 2,996 people in New York City and Washington. In response to these entirely new forms of terrorist attack, the Pentagon enacted numerous security-related funding programs, including in the areas of computational social network analysis, or computational social science (CSS), and natural language processing (NLP). Data analytics companies such as Palantir Technologies Inc., which currently play a significant role in Yemen, Iraq, Syria, Ukraine, Gaza, and many other wars and conflicts, began to build their small but not-to-be-underestimated empires. It was precisely these IT companies, which quickly specialized in the surveillance of individuals and in the analysis and consolidation of what had been separate data sets, that became major players in this millennium. Since the beginning of the millennium, the global use of drones (UAVs) as technical objects has thus been inseparable from technical systems for surveillance, reconnaissance, and combat practices such as signature strikes.
War on error
The military era of identifying and targeting enemies with drones began at the latest in June 2011, when then US President Barack Obama announced that he would gradually withdraw his troops from Afghanistan. Drones became the weapon of choice, a central element of high-tech warfare. Without Afghanistan, Yemen, Somalia, and the many other countries that mutated into combat zones in the war on terror and served the security authorities as a kind of trial-and-error zone, this type of weapon would not be conceivable today in its expanded and far more sophisticated form. Systems and research projects such as Project Maven, SKYNET, and Gorgon Stare in the drone war, Palantir’s MetaConstellation in Ukraine, or the Artificial Intelligence Platform (AIP) for Defense, which is very likely being deployed in Gaza, show the civilian public the path that states and technology companies have taken in the war on terror.
The Afghan journalist Emran Feroz said in an interview with Democracy Now! in 2021: “We see how the war on terror started in Afghanistan and how it is ending now: It’s with drones and civilian casualties.” Even the very first strike, on October 7, 2001, did not hit Taliban leader Mullah Omar, but unnamed Afghans (the names of victims of drone strikes are usually kept secret), Feroz continued. “This scenario has been repeated over and over again. Until today.” He said this shortly after it became public that the victims of the last US drone strike of the Afghan war, on August 29, 2021, near the Kabul airport, were not IS militants, as US intelligence had suspected, but Zemari Ahmadi, a longtime worker for a US aid organization, three of his children, Zamir (20), Faisal (16), and Farzad (10), Ahmadi’s cousin Naser (30), three children of Ahmadi’s brother Romal, Arwin (7), Benyamin (6), and Hayat (2), and two 3-year-old girls, Malika and Somaya.
Multi-domain operations
The drone war continues at a relentless pace. The German armed forces (Bundeswehr) are also arming themselves with unmanned systems. The coalition agreement of 2021 stipulated that Bundeswehr drones may only be armed under “binding and transparent conditions and taking into account ethical and security policy aspects” (coalition agreement, p. 118). This stipulation was set aside by the German parliament without involving civil society: on April 6, 2022, it approved the procurement of precision missiles for the Heron TP drones from Israel, which are to be delivered to the Bundeswehr this year. The Bundeswehr is also currently developing systems with machine learning components to defend against drones. The goal of GhostPlay, for example, a simulation system for “AI-based decision-making at machine speed,” is to be able to process data in combat so quickly that soldiers have more time to “make ethical and informed decisions,” according to Gary Schaal, head of the Bundeswehr research project.
Like Lavender, GhostPlay is part of the research and development field of security technology and military systems for Man-Machine Teaming (MMT): complex information-processing systems for navigation, execution, and more effective decision-making in multi-dimensional operational areas. The Bundeswehr refers to these operational areas as the “gläsernes Gefechtsfeld” (glass battlefield), the US and NATO call them the transparent battlefield, and the operational practice required there is known as Multi-Domain Operations (MDO). The structural reform of the Bundeswehr presented on April 4, 2024, is also based on this MDO concept: complex military operations on land, at sea, in the air, in space, and in cyberspace. The Bundeswehr Planning Office describes its transformation as follows: “In addition to technology, multi-domain operations are associated with several non-technical aspects that are at least as important: questions of leadership philosophy, adapted leadership procedures and processes, training, and the mindset of people.”
Mindsets
According to +972’s research, Lavender played a central role for the Israel Defense Forces, especially in the first weeks after the Hamas attack on Israel on October 7, 2023, in which 1,200 men, women, and children were murdered and around 240 people were kidnapped and taken hostage. The IDF apparently placed great faith in the flawless and targeted finality that seemed to be inscribed in Lavender. Yet just as in the US drone program, where technical errors led to countless civilian deaths over the years, the Lavender missions too were carried out in the knowledge of this susceptibility to error, at least within intelligence circles and the military leadership, according to witnesses cited by Yuval Abraham. According to these witnesses, Israeli intelligence officers checked the accuracy of Lavender immediately after October 7 by manually reviewing a random sample of several hundred targets marked by the system.
However, neither the methodological implementation nor the international legal standards of this random manual check are known, so all that can be concluded from the research is that the 90 percent accuracy resulting from the test runs was considered tolerable by the IDF’s own standards. In other words: 10 percent of the human targets selected for targeted killing – attacks in which additional civilian casualties were accepted – were not fighters from the military wing of Hamas at all. In particular, relatives, neighbors, civil defense officials, and police officers were falsely marked by Lavender because their behavior and communication patterns were similar to those of known Hamas-affiliated militants. People who happened to have the same name or nickname, or who used a mobile phone that had previously belonged to a senior Hamas militant, were also frequently mislabeled. Over time, even people who merely resembled lower-level Hamas members were marked, as the Israeli army quickly expanded its machine-learning training database with records of non-commanding Hamas fighters and civil administration personnel. In this way, thousands upon thousands were included in the circle of suspects by default. According to +972, on some days, up to 37,000.
Collateral Murder
According to the soldiers interviewed, the army decided in the first weeks of the war that 5 to 20 civilian casualties could be tolerated for anyone marked as a low-ranking member of Hamas, although in practice the actual number of civilians present and killed during an attack was almost certainly hardly checked or recorded. According to the report in +972, more than 100 civilian casualties were even permitted in the case of high-ranking officers, such as a battalion or brigade commander. Applying this calculation to 37,000 people with a 10 percent margin of error seems cruel. Yet according to the reports, this calculation was made, and parts of the Israeli army were prepared to accept such a high number of civilian casualties. One of the shocking findings when this calculation is transferred from the abstract to the concrete is that “most of the people killed would be women and children.” Marked people were usually attacked at night in their homes (by “signature strikes”), when the entire family was present, causing large numbers of civilian casualties – supposedly for the simple reason that the targets were easier to locate this way.
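To make these reported figures concrete – a rough, order-of-magnitude illustration based solely on the numbers cited by +972, since the methodology of the IDF’s internal check has not been published: 10 percent of 37,000 marked persons amounts to roughly 3,700 people wrongly listed as targets. If each strike on a supposedly low-ranking target was allowed to kill 5 to 20 civilians, the tolerated ceiling for civilian deaths attributable to these false positives alone would lie somewhere between about 18,500 and 74,000 people – before a single correctly identified target is even counted.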
Daniel Hale, a former US Air Force intelligence analyst who was sentenced to 45 months in prison on July 27, 2021, after pleading guilty to leaking government documents that revealed the inner workings and high civilian costs of the US drone program in targeted areas, said in court that he believed it was “necessary to dispel the lie that drone warfare protects us and that our lives are worth more than theirs.” He went on to say that “with drone warfare, sometimes nine out of ten people killed are innocent, (…) You have to kill part of your conscience to do your job.”
Apparently, active attempts are made (not only in high-tech wars) to make it impossible for soldiers to confront Hannah Arendt’s inner question: “Can I go on living with what I have done?” Chelsea Manning, Edward Snowden, Brandon Bryant, Daniel Hale, and many other soldiers and intelligence officers who became deserters or whistleblowers nevertheless managed to do so.
It is an undeniable fact: soldiers must not only be trained to maintain a psychological distance from potential civilian victims, they must maintain it continuously in order not only to accept such disproportionate numbers of innocent victims but also to anticipate them when planning military strikes. Civil society has learned from whistleblowers like former US drone operator Brandon Bryant that even killing from a distance, on an abstract level, does not simply leave you unaffected. Machines, however, have no heart. They are cold. They take responsibility away from the human being who feels compassion in their heart, even if that heart is full of grief and hate. As one senior soldier told +972 in an interview: “There’s something about the statistical approach that sets you to a certain norm and standard. There has been an illogical amount of [bombings] in this operation. This is unparalleled, in my memory. And I have much more trust in a statistical mechanism than a soldier who lost a friend two days ago. Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.”
Machine morality?
This line of thought inevitably leads us to those approaches to machine ethics whose proponents are convinced that artificial intelligence can not only help soldiers make ethical decisions on multidimensional battlefields, but that unmanned AI systems themselves “perform better on the battlefield than humans, not only in military terms, but also in ethical terms. The result could be fewer civilian casualties and less destruction,” as the well-known roboticist and Pentagon advisor Ronald C. Arkin puts it.
Artificial intelligence does not have a conscience. For drone pilots or soldiers in operations centers who have to rely on such systems, it may be difficult to reconcile with their own conscience that they often depend solely on the (in)accuracy of complex information-processing systems and on formal orders and events given in advance. Accidents like the one that befell Zabet Amanullah happen to them and their comrades every day. Amanullah was a civilian activist who was “accidentally” killed by a US drone in 2010 because the drone pilots were, “instead of tracking a person, targeting a cellphone whose phone number was thought to belong to a key Taliban leader.“
Awareness of the high error rate of data-driven technical systems for recording and identifying people is a constant companion in the minds of soldiers. On the other hand, soldiers are confronted with the fact that they constantly reach their sensory and cognitive limits when analyzing vast amounts of information. Not infrequently with deadly consequences: “When twenty-three guests at an Afghan wedding were killed in a helicopter strike in February 2011, the intelligence officers in Nevada were able to blame their mistake on information overload, claiming that their screens were “cluttered” with data – they lost track of the situation precisely because they were looking at the screens. Children were among the victims of the bombardment, but the operators had “overlooked them in the midst of the maelstrom of data” – “like an office worker who misses an urgent message in the daily flood of mail. And whom no one could accuse of having behaved immorally…” (Zygmunt Bauman and David Lyon, Daten, Drohnen, Disziplin: Ein Gespräch über flüchtige Überwachung, trans. Frank Jakubzik (Berlin: Suhrkamp Verlag, 2013))
“The images detached from every aspect of life merge into a common stream in which the unity of that life can no longer be recovered.” Guy Debord
What to do?
We see: just like drones, systems like Lavender are not simply tools for doing something; they have a finality inscribed into them – a finality that only manifests itself in the world through technical, purposeful, military action. Their purpose often lies beyond the manufacturing process of the technical systems themselves, which makes it difficult, if not impossible, to control the research, development, and dissemination of these technologies. Max Weber, one of the founders of sociology as a science – the discipline which, in conjunction with computer science, made fields like computational social science conceivable and systems like Lavender designable – pointed out 100 years ago that we in the Western world usually do not know how our modern technical world works, unless, of course, we are the designers, programmers, engineers, or other specialists who create it. Otherwise, in our professional and everyday lives, we usually only know how to adapt our behavior so that our technical objects can fulfill their respective functions. Whether it is a military action on the transparent battlefield or a civilian action on social media: “Marketing or death by drone, it’s the same math … You could easily turn Facebook into that. You don’t have to change the programming, just the purpose of why you have the system” (Chelsea Manning). With deep learning, however – the area of AI research and application that we are primarily referring to when we talk about artificial intelligence today – not even the specialists themselves can explain the inner workings or the behavior of the AI as it learns.
The lesson that civil society can draw from a publication like +972’s report on Lavender is that, despite quantum leaps in technological development, the technical mistakes are still the same fatal mistakes they were 15 years ago. What has significantly evolved and manifested itself in the high-tech wars of our time, however, is the belief that we can make autonomous, informed, and ethical decisions in a technically generated reality of war, regardless of the technologies used to generate it.
Systems like Lavender ultimately function no differently from a big, imprecise bomb. Knowing this, soldiers cannot delegate responsibility for their killing decisions to machines and systems. Technology-savvy military personnel, politicians, and society at large must be aware of this: those who order the use of these systems, knowing the consequences and repercussions of their actions, are making a fundamental ethical decision.
Therefore, the obvious and urgent next step should be to outlaw the practice of targeted killings supported by AI systems such as Lavender as a war crime under international law.
Editor’s note: This statement was written by the peace movements and research institutes listed above. They are represented by Susanne Grabenhorst (Working Group Against Armed Drones), Christian Heck (Ground Zero), Christoph Marischka (Information Center on Militarization, IMI) and Rainer Rehak (Forum Computer Professionals for Peace and Societal Responsibility, FIfF).