Resisting the Techno-Fascist Takeover: Are We Ready for Decomputing?

DOGE Dystopia Protests by Colnate Group, 2025 (cc by nc)

Because they are based on centralization and abstraction, our current sociopolitical structures are susceptible to being replaced by AI. This is not only exemplified by the Department of Government Efficiency (DOGE), which was initiated by the second Trump administration; there are many less heavily publicized equivalents all over the world, including in Europe. Recognizing that our societies are on the verge of a techno-fascist takeover, Dan McQuillan evaluates the risks and offers practical strategies for resistance.

*

In the actions of DOGE in the USA, we’re seeing the kind of technopolitical turn to fascistic solutionism which is described in “Resisting AI.” That book was an attempt to preempt the further convergence of far right politics and the tech sector, but the emergence of so-called generative AI and the rising wave of fascist politics have actually speeded things up. Nevertheless, resistance is both possible and more urgent than ever. In this article I’ll outline the mechanisms by which DOGE has been able to hack the state and what that means for technopolitical resistance in the UK and Europe.

Acceleration

While the specifics of what tech, such as AI, can and cannot do are important, the broader transformation we are experiencing isn’t driven by tech itself – as disruptive as it may be – but by the ongoing collapse of existing systems, particularly the neoliberal world order. While the rhetoric around DOGE claims it’s addressing such a crisis, one it blames on state institutions being both bloated and woke, this doesn’t hold up to scrutiny. While sacking people and cancelling contracts might seem to save cash in the short term, DOGE’s recklessness is accelerating chaos rather than efficiency.

Instead of optimizing the state, technological accelerationism and reactionary politics have fused in an attempt to roll back all forms of progressive change and social inclusion. Gains in civil rights in the USA were imperfect but hard fought for, and the theory of constitutional order is that substantial change should be hedged about with democratic and legal checks and balances. Instead, a gaggle of young men with six laptops per backpack seems to have been able to smash this social contract more or less overnight.

Privilege escalation

It turns out that the centralization, bureaucratization and digitization of state institutions have rendered them vulnerable to what is essentially a form of ‘cyberattack from within.’ Relying on order-following and secure passwords doesn’t hold up when the orders are to hand root access to an intern from Tesla with moderate tech skills and limitless libertarian hubris. In hacking, this kind of takeover is called ‘privilege escalation’. Once the nerds have read/write access to HR and payment systems, the accumulated experience of even careful and conscientious public servants counts for very little.

It’s at this point that the techno-fascist commitment to AI really kicks in. Whether it’s assessing from five-bullet-point emails which employees are surplus to requirements, or tackling empirically impossible tasks like reviewing all 76,000 contracts held by the Department of Veterans Affairs (VA) within 30 days, Elon Musk’s acolytes turned to AI, and in particular to Large Language Models (LLMs). In the VA, the tool to do this was ready by the second day. Rather than a triumph of coding, this was a common-or-garden LLM instructed via the system prompt (the invisible pre-instruction prompt) that “infrastructure directly supporting patient care should be classified as NOT munchable. Contracts related to diversity, equity, and inclusion (DEI) initiatives or services… should be classified as MUNCHABLE” (where ‘munchable’ means listed for cancellation).
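To see how little engineering such a ‘tool’ actually involves, here is a minimal, purely illustrative sketch. It is not DOGE’s actual code: the system prompt paraphrases the reported instruction, and a crude keyword heuristic stands in for the model call so the sketch runs offline. The stub also mimics how context-free such classification is.

```python
# Illustrative sketch (assumed, not DOGE's actual tool): a generic LLM
# repurposed as a contract reviewer purely via a system prompt.
# The real version would send SYSTEM_PROMPT plus the contract text to an
# LLM API; here a keyword heuristic stands in for that call.

SYSTEM_PROMPT = (
    "Infrastructure directly supporting patient care should be classified "
    "as NOT munchable. Contracts related to diversity, equity, and "
    "inclusion (DEI) initiatives or services should be classified as "
    "MUNCHABLE."
)

def classify_contract(description: str) -> str:
    """Stand-in for an LLM call with SYSTEM_PROMPT as the system message.

    A blunt keyword match imitates the kind of context-free judgement
    being delegated: no model of interdependencies, just surface cues.
    """
    text = description.lower()
    if "patient care" in text or "clinical" in text:
        return "NOT MUNCHABLE"
    if "dei" in text or "diversity" in text:
        return "MUNCHABLE"
    # Anything unrecognized defaults to cancellation -- context erased.
    return "MUNCHABLE"

print(classify_contract("DEI training services for regional offices"))
print(classify_contract("Maintenance of the clinical records system"))
```

The point of the sketch is how thin the layer of ‘tooling’ is: one paragraph of natural-language instruction, applied uniformly to thousands of contracts, with no representation of the relationships between services at all.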

Of course, understanding the complex interdependencies of these services requires insights into medical care, institutional management and resource allocation. An LLM is not right for this job for many reasons, not least of which is that it literally understands nothing and actively erases context and relationality. It’s hard to know from the outside how much the DOGE boys buy into the ‘AI is nearly AGI’ line and it doesn’t really matter. They already know that the woke leviathan must be destroyed and that AI is the best way to go in hard, while at the same time throwing up a smokescreen that systems are being upgraded rather than simply trashed.

What their cyberattack has done, and what court orders seem unable to undo, is pool vast amounts of data that were previously siloed to prevent the abuse of power. Of course, as anyone on the wrong end of the neoliberal status quo knows, this data had already been thoroughly weaponized by bureaucratic cruelty to track immigrants or cut off welfare payments long before DOGE or even Donald Trump appeared on the scene. What’s different in today’s USA is that the mask is off and even the performance of democratic accountability has been thrown into the bin. Instead, companies like Palantir are happy to suck up all the data and ‘build to dominate’ (their words not mine).

Technopolitics

It’s important to understand that DOGE was just one way to achieve this and, like the Covid-19 pandemic, it’s far from over and it’s spreading fast. In the UK, for example, the Labour government is enacting much of this itself, from the data sharing to subcontracting important aspects of the National Health Service to Palantir. The populists shout for raw DOGE while supposedly left-leaning think tanks argue for ‘progressive efficiency’ and ‘DOGE done better’. Political parties everywhere are genuflecting to AI and appeasing the far right, and none of them seem willing to do what it takes to prevent a techno-fascist takeover. Pushing back is going to be down to the rest of us.

Fans of ‘sovereign AI,’ the current favorite move of European states freaked out by the thought of Trump’s finger on an AI kill switch but as hooked as the UK on AI for growth and geopolitical leadership, should note that it doesn’t escape any of the problems described here. Like everywhere else, Europe is caught up in a collapsing neoliberal order and is facing multiple social crises that it is unwilling to address except by staggering rapidly to the far right. The EU is as AI-pilled as the UK, to the extent that it now has a plan for Europe to become an ‘AI Continent’. The mechanisms of social ordering within the EU embrace the kinds of centralization, bureaucratization and digitization which make them vulnerable to techno-fascist takeover. What’s unclear at this stage is whether that will come from a more obviously destructive push, like DOGE, or whether the far right are now so powerful that they will simply take over the EU from the inside. Certainly, the far right bloc are happy to back plans for ‘European technological sovereignty and digital infrastructure’ because they know it resonates with their ultranationalism rather than disrupting it.

What we’re looking at here is a technopolitical struggle; one that sees technology not as a neutral tool but as an apparatus that condenses the politics of the past and shapes the politics to come. While it’s still hard to persuade some people that all politics is technopolitics, there are sectors where this argument doesn’t have to be made. The disability movement, for example, has a very refined understanding of the way tools make concrete the social model of disability and of what’s needed to retool for autonomy. The climate movement understands that our technologies embed the toxic ideology of infinite growth that threatens to burn up the planet, and can articulate sustainable alternatives. In my own field of higher education, some are developing a sharpened understanding of the way technologies like generative AI are undermining pedagogy, institutions, and the possibility of critical thought. All these fields are ripe for technopolitical resistance.

In “Resisting AI” I argued that one starting point is the formation of workers’ and people’s councils on AI, and I think that still holds. These structures collectivize the refusal of AI, and do so in ways that elevate the relationality, context, and care that AI itself abstracts and erases. I’m suggesting that people’s councils on AI, whether formed from union branches, parent-teacher associations or activist groups, are a way to seed an effective resistance to future attempts at techno-fascist system hacking. What I’d like to add here are two linked concepts for the ways people’s councils can interrupt the apparent inevitability of AI-driven fascization and push things in a different direction. Those concepts are ‘scale’ and ‘conviviality’.

Decomputing

AI demands scale and depends on it – in terms of the data required, the scale of the computing resources needed and the energy demands of the data centers. The industry itself accepts that scaling is its only way forward (an insight known to some as ‘the bitter lesson’). At the same time, our current sociopolitical structures – being based on centralization and abstraction and thus embodying the logic of scale – are deeply susceptible to replacement by AI.

An example of pushing back on both fronts at the same time is resistance to the development of new hyperscale data centers. The multiplication and expansion of data centers is the material basis for AI’s techno-fascist operations. Their demand for electricity and water is so huge that, when it comes to future power cuts or water shortages, it’s going to be a contest between data centers and the rest of us. Such is the power of Big Tech that they can rely on regulatory and state capture to enact their plans. Pushing back against data center development through forms of collective and directly democratic assembly, rather than by fruitlessly appealing to existing authorities to follow the rules, tackles the scaling of the material infrastructures and the distancing of decision-making at the same time.

The problematization of scale raises the question of how we should proceed instead. I think this is where the resistance to AI intersects with the movements for degrowth. Like the idea of abolishing AI, degrowth doesn’t stop with refusal but switches the focus to other ways of organizing and doing. The idea of degrowth is also a demand for alternative visions of society. A technopolitics that opposes both AI and the obsession with growth that consumes its political and financial backers is one that can provide a framework for moving forward, and this is where the concept of conviviality comes in.

The concept of conviviality, developed by thinkers such as Ivan Illich, provides criteria for an alternative technopolitics. Illich advocated for ‘counterfoil research,’ which “has two major tasks: to provide guidelines for detecting the incipient stages of murderous logic in a tool; and to devise tools and tool-systems that optimize the balance of life, thereby maximizing liberty for all.” Subsequent work, such as Andrea Vetter’s “Matrix of Convivial Technologies,” has turned this into questions we can ask of any technological innovation at any level, such as ‘how does it affect relations between people?’ and ‘how does it interact with living organisms and ecologies?’.

A rigorous and militant application of these criteria by the aforementioned people’s councils, in workplace or community settings and as part of existing social movements, is a way to develop technopolitical counter-power. Most importantly, demanding the social determination of technology is a way to dispel the loss of collective agency which has resulted from decades of neoliberalism. This combination of degrowth and critical technopolitics is what I call ‘decomputing’.

Remaking

In current times, we must all be anti-fascists. But anti-fascism, like all forms of resistance, only makes sense as the precursor to something better. The Italian partisans who fought fascism with such determination weren’t motivated by a return to a bourgeois status quo but by the hope for a better, fairer and more ‘solidaristic’ society. Similarly, the small pockets of transformative technopolitics in the present moment, like the GKN factory occupation, which has socialized the switch from making lorry axles to producing cargo bikes and solar panels, are explicitly part of a grassroots movement for a just transition (and operate under the partisan slogan ‘Insorgiamo!’ or ‘Rise Up!’).

Resisting AI means rejecting its consequences, such as the resurgence of eugenics through welfare and healthcare systems. It also means rejecting the conditions that allow AI to become so important and influential, like our growth-obsessed, centralized political economies. Currently, the far right has the momentum. They are successfully projecting their nihilistic vision through technologies that are innately anti-worker, anti-democratic, and racially supremacist. Part of ‘building the new world in the shell of the old’, as the IWW put it, is developing forms of tech infrastructure that resonate with remaking society along convivial, confederal, and mutualist lines.

2 comments on “Resisting the Techno-Fascist Takeover: Are We Ready for Decomputing?”

  1. Very necessary. Very interesting… “all politics is technopolitics… This combination of degrowth and critical technopolitics is what I call ‘decomputing’…”
    Thanks and best wishes*

  2. I worked in this area of thought prior to leaving academia in 2020. The notion of disconnection, non-use, or refusal typically attracts strong critique from the mainstream, especially since technology adoption is viewed as part of an imaginary effort towards some moral and social progress. The ultimate argument of my phenomenological research into this area was that to be a non-user is to become “insensible” to the user, whether the technology in question is AI, cryptocurrency, social media, fossil fuels, or other connective technologies. The “user” of a house does not sense or appreciate the experience of the “non-user” (unhoused) person, and the experience of the “user” of a hegemonic sociotechnical order makes the experience of a non-user seem beyond appreciation, or “insensible.”

    If people are to build a real resistance against these dominant technopolitical paradigms, they must collectively dismantle the expectations towards technology adoption, not merely walk away. General Ludd and his acolytes were not content to leave the machines as they were, but worked together in response to those who would impose their use on the rest of us.
