As we look ahead to the future use of AI in warfare, we must not place our trust in improvements in accuracy, reliability, or responsible use. In order to prevent the repetition of technologically optimized, systematic, large-scale killings of civilians, such as those in Gaza, we must build international pressure, hold the perpetrators accountable, and achieve an internationally binding regulation of military AI use, as Jens Hälterlein argues.
*
The current Gaza War, in all its horrific dimensions, should prompt reflection on many levels. One of these concerns the impact of artificial intelligence (AI) on warfare. Contrary to the promises made by proponents of military AI, its deployment by the Israel Defense Forces (IDF) has demonstrated that this technology does not lead to more precise attacks, but rather accelerates killing. This development continues a trajectory that began with U.S. drone warfare. Without AI-driven computer programs such as Lavender, The Gospel, and Where’s Daddy?, which identify and mark targets and determine the timing of bombardments, one of the deadliest and most destructive military campaigns in recent history would not have been possible – aside from the use of nuclear weapons, which was not an option for various reasons.
“The Human-Machine Team”
Lavender, The Gospel, and Where’s Daddy? are in-house developments of the IDF’s Unit 8200, which is responsible, among other things, for electronic reconnaissance and cyber-espionage. Since early 2021, this unit has been led by Brigadier General Yossi Sariel, who – under a pseudonym – published the book “The Human-Machine Team: How to Create Synergy Between Human and Artificial Intelligence” that same year. In it, he advocated using AI in wartime to overcome the delays caused by human decision-making. Before The Gospel helped break this human bottleneck, producing 50 targets took a year; now, 100 targets could be generated daily.
In the initial weeks of Operation Swords of Iron, AI played such a central role that the IDF announced on November 11, 2023, that it had bombed 11,000 targets in Gaza. While the promise repeatedly made by the defense industry and military – that AI accelerates analysis processes and thus increases military effectiveness – has clearly been fulfilled, another frequently made promise has turned into a farce: that the use of AI increases the precision of attacks and thereby reduces civilian casualties – that is, collateral damage.
This promise takes on significant importance in light of mounting allegations that Israel is systematically committing war crimes in Gaza, or even genocide. According to international humanitarian law, military operations cannot simply be carried out using all available destructive means. Instead, sufficient precautions must be taken to spare civilian lives. The use of AI can be presented as such a precaution. The promise of increased precision serves not only to justify actions to an international public but also to reassure more liberal segments of Israel’s own population and the IDF, thus serving an essential legitimizing and ultimately psychological function. Accordingly, the IDF regularly emphasizes how AI has improved the accuracy of military information processing and consequently reduced collateral damage.
Not unintended side effects
Yet the raw numbers tell a different story. According to IDF figures, civilians accounted for approximately 83% of fatalities as of May 2025. Moreover, based on satellite imagery, it was calculated that by January 2024, about 69% of all buildings in the Gaza Strip had already been destroyed. The extraordinarily high proportion of civilian casualties and the widespread destruction of liveable spaces resulting from AI-driven warfare are by no means due to faulty data, technical errors in automated analysis, or human failure in handling these analyses. These outcomes are not unintended side effects of a flawed or misused technology.
Rather, they reflect the IDF’s prioritization of maximum lethality over careful target selection. Although AI-generated outputs must be reviewed and confirmed before being acted upon, two key factors undermine this process. First, due to the sheer volume of targets to be assessed, the so-called ‘four-eyes principle’ was abandoned, and decisions were left to a single individual, often inadequately trained for the task. Second, statements from the soldiers responsible for this work indicate that the speed of AI-driven target generation has placed immense pressure on the review process, making impractical the thorough scrutiny that would still have been possible before October 7. After October 7, only about 20 seconds were available per ‘review.’
It is only logical, then, that despite a known error rate of approximately 10%, human operators merely checked whether a person marked as a target by Lavender was likely male. Another aspect of this attitude within the IDF is the integration into military operations of experts in international humanitarian law, who consistently certified the legality of bombing orders – despite foreseeably high levels of collateral damage.
The principle of ‘proportionality’
According to international humanitarian law, a ‘proportionality assessment’ must be conducted, weighing expected military gains against anticipated civilian harm. Yet at the war’s outset, IDF internal guidelines for acceptable civilian casualties were temporarily set at 15–20 for attacks on lower-ranking Hamas fighters and over 100 for high-ranking ones, without requiring time-consuming, case-specific analysis of the target’s military significance.
In practice, the principle of ‘proportionality’ was non-existent. At this stage of the decision-making process, the human factor was subordinated to the speed imperative imposed by the technology. The application of legal expertise was not allowed to obstruct the acceleration of the ‘kill chain’ enabled by AI and was thus reduced to lending legitimacy to the mass killing of civilians. Ultimately, AI enabled the IDF to reconcile the principle of international humanitarian law that only military objectives may be targeted with its intention of inflicting maximum destruction in Gaza.
The use of AI as a weapon of mass destruction can undoubtedly be understood as a reaction to the massacres of October 7. In numerous statements by high-ranking military and political figures, grief and the desire for revenge merge with demands for the mass killing of Palestinians without regard for their civilian status. These statements express a dehumanization of Palestinian life that characterized certain forms of Zionism, Israel’s apartheid regime, and the moral convictions of parts of Israel’s Jewish population long before October 7. The reduction of human targets to statistical correlations by programs like Lavender represents the technological counterpart of this dehumanization.
The Dahiya Doctrine
However, the deployment of AI as a weapon of mass destruction is also a logical consequence of the IDF’s military strategy and its so-called ‘ethical’ code – both of which were developed long before October 7. According to the so-called Dahiya Doctrine, the IDF responds to hostile aggression with massive, disproportionate destruction to deter future attacks and prevent protracted wars. Civilian buildings, infrastructure, entire villages, and even cities may be treated as military targets if they are perceived to pose a threat to Israel.
This doctrine was declared in 2008 and subsequently developed in strategic documents. Although it was never adopted as official IDF doctrine, the IDF’s military operations in Gaza since 2008 have consistently followed the approach it outlines. Moreover, it is grounded in ethical principles formulated within the IDF’s own code: the lives of non-Israeli civilians are considered less valuable than those of Israeli soldiers, making it justifiable to kill the former when a threat to the latter exists.
The IDF’s 2020 modernization strategy, known as the Momentum Plan, also draws on the Dahiya Doctrine by aiming for the rapid destruction of enemy capabilities hidden behind ‘human shields’ in urban areas, while simultaneously minimizing Israeli casualties. Achieving these goals depends on focusing on areas where the IDF holds superiority: air power, military intelligence, and technology. Through AI and big data, the identification of enemy targets could be improved, making it possible to strike as many of them as possible, as quickly as possible.
Beyond regulation?
Thus, the use of AI as a weapon of mass destruction is neither an ‘error’ nor a result of acting in the heat of the moment, but the outcome of the interplay between military imperatives, new technological possibilities, and the dehumanization of victims. In public and academic debates, AI is often portrayed as a politically neutral technology – flawed, yes, but ethically and legally manageable and regulable. Yet its use in the current Gaza war reveals that it is part of a logic of deterrence through maximum destruction, regardless of civilian losses. This logic has not gone awry; rather, it has found its consistent technological expression in AI. Technological progress here does not serve a less bloody or more humane warfare, but the execution of necropolitical power over life and death.
Therefore, looking ahead to the future use of AI in warfare – whether by the IDF or other armed forces – we must not place our trust in improvements in accuracy and reliability, nor in responsible use. The only way to prevent the repetition of such technologically optimized, systematic, and large-scale killing of civilians is to build international pressure, hold perpetrators accountable without exception, and achieve an international, legally binding regulation of military AI use.
This may currently sound utopian, especially since, alongside Israel, numerous other states have so far rejected such regulation within the framework of the UN Convention on Certain Conventional Weapons. Yet for other weapons of mass destruction this ‘utopian’ goal has already been achieved, and there is no reason to believe it cannot be achieved in the case of AI.