
Political security concepts – such as preemption – require technologies that can anticipate the future as accurately as possible. Such policies, however, can only be enforced by absolutizing the concept of prevention: a ‘collective acceptance of the future as a threat’ is required for the resulting measures to gain broad acceptance. In this context, data-driven forecasting methods are key. Big data and AI are on the rise as tools for anticipating and proactively preventing future crises, conflicts, crime, and terrorist threats, alongside broader trends in the military and security services aimed at controlling and stabilizing the future. In light of this, Christian Heck calls for a re-evaluation of the ethical and legal foundations of such measures and discusses their implications for the rule of law, international law, and human rights. He argues that understanding the cultural and social consequences, as well as the limits, of these preemptive systems is essential to preserving social freedom and participation in democratic processes.
*
For the past year and a half, the civilian population in Gaza has been dehumanized on several levels. One of them is algorithmic. Due in part to AI-based decision support systems (AI-DSS) in the operational activities of the Israeli army, this war has taken on a dimension of cruelty that is scarcely imaginable or comprehensible to the global public. After making the AI system Lavender public, the filmmaker and journalist Yuval Abraham published a report about seven weeks ago on the practice of data-driven targeted killing using so-called foundation models in chatbot-like systems à la ChatGPT.
The ‘artificial’ aspect of this kind of human-machine interaction – interaction generated to kill people using ‘intelligent’ systems – raises entirely new questions of international law and ethics that we in civil society need to explore. This trend began just a few months after the release of OpenAI’s ChatGPT: by April 2023, the US data analysis company Palantir was already marketing ‘AIP for Defense.’
A few months later, in September 2023, the first defense contract for this system was signed with Palantir, and armies quickly began experimenting with it. Meta followed with ‘Defense Llama,’ and in early March 2025 the Pentagon launched the ‘Thunderforge’ program in collaboration with Scale AI (in partnership with Meta), Anduril, and Microsoft (in partnership with OpenAI). The program aims to use state-of-the-art (SOTA) language models, AI-driven simulations, and interactive war-gaming tools for military decision-making at machine speed.
Fictions as functions as fact
Since 2013, Palantir Inc. has played an increasingly important role in data-driven weaponization, alongside Meta, Google, OpenAI, Amazon, Microsoft, and many other major tech companies. The Germany-wide introduction of Palantir’s software products for data-driven predictive policing therefore cannot be viewed in isolation from the wars waged with these companies’ systems and from their weapons that kill civilians. In 2023, the introduction of Palantir’s products for nationwide data analysis was critically discussed using the examples of deployments in Bavaria (VeRA), North Rhine-Westphalia (DAR), and Hesse (hessenDATA).
Back in 2023, Hesse’s Minister of the Interior, Peter Beuth, argued in the Hessian state parliament for the nationwide introduction of hessenDATA – a predictive policing system built on Palantir’s software product ‘Gotham’ and deployed within the Hessian police force – in order to better predict and anticipate terrorist attacks in Germany and Europe: “With hessenDATA, we have lifted police work into a new digital age. While investigators in other federal states spend days and weeks poring over files and painstakingly highlighting possible connections, in Hesse we have been using modern software for several years now that significantly speeds up investigations, improves their quality and has become an integral part of our investigators’ work,” said Beuth.
These software systems, among them implementations of ‘Gotham,’ have their roots in the US Global War on Terror that followed September 11, 2001. They combine digital behavioral data from social media and similar sources with entries in various police databases and connection data from telephone surveillance in order to identify potential criminal acts as well as potential criminals. ‘Gotham’ was first tested by the Joint Improvised Explosive Device Defeat Organization (JIEDDO), a unit of the US Department of Defense (DoD) set up to counter asymmetric warfare tactics. The CIA, NSA, and FBI quickly became Palantir customers, and EUROPOL, the European police agency, now uses Palantir products for data analysis.
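To make the structure of this kind of data fusion tangible, here is a deliberately simplified Python sketch. All record types, field names and the name-based merge logic are invented for this text – they illustrate the principle, not Palantir’s actual data model or code.

```python
from dataclasses import dataclass, field

@dataclass
class PersonProfile:
    """A unified profile merged from heterogeneous surveillance sources."""
    name: str
    social_posts: list = field(default_factory=list)
    police_records: list = field(default_factory=list)
    call_contacts: set = field(default_factory=set)

def fuse_sources(social: dict, police: dict, telecom: dict) -> dict:
    """Merge three source dictionaries (keyed by name) into unified profiles.

    Real platforms resolve entities probabilistically across millions of
    records; here, a shared name is naively treated as the same person --
    exactly the kind of shortcut that makes such fusion error-prone.
    """
    profiles = {}
    for name in set(social) | set(police) | set(telecom):
        profiles[name] = PersonProfile(
            name=name,
            social_posts=social.get(name, []),
            police_records=police.get(name, []),
            call_contacts=set(telecom.get(name, [])),
        )
    return profiles

# Toy data: a single shared name is enough to link all three sources.
profiles = fuse_sources(
    social={"A. Example": ["attended a demonstration"]},
    police={"A. Example": ["listed as witness in a 2019 case"]},
    telecom={"A. Example": ["B. Contact", "C. Contact"]},
)
print(profiles["A. Example"])
```

Even this toy version shows where the power lies: whoever controls the join logic decides which disparate traces are fused into ‘one person’ – and thus into one suspect.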
The debate in Germany about the nationwide introduction of Palantir software took a significant turn a few weeks ago. The Bundesrat (the upper house of parliament) adopted a motion by the states of Bavaria and Saxony-Anhalt calling on the federal government to “immediately provide a centrally operated, digitally sovereign, economically viable and legally permissible automated data analysis platform for all federal and state police forces.” It is no secret that the platform in question is Palantir’s, especially since the pilot operation of VeRA in Bavaria ended in December 2024 and the system has since successfully gone into regular operation.
However, the ‘qualitative improvement’ in police work through Palantir’s products that Peter Beuth highlighted in 2023 cannot be empirically proven for either VeRA or hessenDATA. This is because predictive policing, by definition, deals with attacks and other crimes that lie in the future: crimes that have not yet taken place and are supposed to be prevented by preemptive action, and also crimes that might never have happened even without preemptive measures. Since a prevented crime cannot be distinguished from one that would never have occurred, the transformation of recorded movements, posts, and offenses into symbolic representations, media phenomena and, finally, actual crimes remains largely in the realm of speculation.
To a certain extent, the development of systems like hessenDATA and their use as tools for predictive policing implies the implementation of fictions – in technical systems, but also in the daily work of security officers. Fictions that operate as functions, attempting to derive future events, such as a planned attack, as concretely as possible from individually observed processes, traces, and external actions: Fictions as functions as fact.
‘Gefährder’: the personified statute of preemptive security policy
What actually appears in the here and now, alongside the predicted crime, is the predicted criminal, labeled with the vague German police term ‘Gefährder’ (roughly, a person deemed to pose a danger). A ‘Gefährder’ is someone who personifies the abstract danger situations that systems such as hessenDATA, but also risk assessment tools such as RADAR-iTE, make concrete – for example, when a person’s calculated behavioral patterns indicate a not insignificant danger due to his or her ‘criminal potential,’ from which a concrete future danger can be derived. This can happen if you are a migrant in Germany, since the call for predictive systems in police investigations is renewed constantly, especially after attacks committed in Germany by people with a migration background. It can happen if someone is mentally unstable or in a psychosocial crisis; to this end, the German states have proposed, among other things, that Palantir be allowed to access data from health and immigration authorities. But it is also possible – and Simone Ruf of the Society for Civil Liberties (Gesellschaft für Freiheitsrechte, GFF) has raised a scenario that is not at all absurd – that someone will be flagged by police software simply because ‘a person buys glue’ at a DIY store, since glue can also be used by climate activists for acts of civil disobedience.
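The following deliberately simplified Python sketch illustrates how such a categorization can come about. The indicators, weights and threshold are invented for this text – RADAR-iTE’s actual items are not reproduced here – but the structure, collapsing a handful of weighted observations into a binary label, is the point.

```python
# Purely hypothetical indicator weights -- invented for illustration,
# NOT the actual items or weights of RADAR-iTE or any police instrument.
RISK_WEIGHTS = {
    "prior_offence": 3,
    "psychosocial_crisis": 2,
    "flagged_social_media_post": 2,
    "bought_glue_at_diy_store": 1,  # echoes the GFF glue example above
}

def risk_category(observations: set, threshold: int = 4) -> str:
    """Sum the weights of observed indicators and compare to a fixed threshold."""
    score = sum(RISK_WEIGHTS.get(obs, 0) for obs in observations)
    return "Gefährder" if score >= threshold else "unauffällig (inconspicuous)"

# Two soft indicators already cross the (arbitrary) threshold:
print(risk_category({"psychosocial_crisis", "flagged_social_media_post"}))
```

Everything that matters here – which observations count, how they are weighted, where the threshold sits – is set by whoever builds the system, and, as argued above, none of it can be validated against futures that never happened.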
In the speeches of German politicians, it is currently not only activists of the ‘Last Generation’ who are mentioned in the same breath as criminals, but also, and increasingly since June 2024, the personified statute of preemptive security policy: the ‘Gefährder.’ Since the Islamist-motivated knife attack in Mannheim that seriously injured five demonstrators and killed the police officer Rouven Laur, who had rushed to help, many politicians have been demanding that such people – people like the attacker Sulaiman Ataee – be arrested and deported before they can even act. And they are acting on it: on 30 August 2024, after Olaf Scholz had declared in the Bundestag, “Serious offenders and ‘Gefährder’ have no place here. (…) Such criminals should be deported,” a plane took off from Leipzig airport with 28 people from Afghanistan on board for deportation. Some of them are said to have been ‘Gefährder.’
We can see that preemptive security policies not only require technologies that can anticipate the future as accurately as possible; such policies can also only be enforced by absolutizing the concept of prevention. Both the ‘collective acceptance of the future as a threat’ and the associated negative consequences for society are therefore prerequisites for the enforceability of preemptive measures.