Unlearning Quantification: The Cruel Optimism of AI in Education

Workers at a data storage factory in a rain of code. Artwork: Colnate Group, 2025 (cc by nc). Based on a photo by Robert Scoble (cc by 2.0).

Using artificial intelligence (AI) algorithms in education is a political decision that deprofessionalizes educators and surveils and deskills all participants. As with the application of these algorithms in other fields, the goal is to tightly control and scrutinize the learning process, thereby taming and domesticating the minds and bodies of everyone involved. Alexandra Ștefănescu discusses the possibilities of unlearning under AI capitalism.

*

“Cruel optimism” is a term coined by Lauren Berlant that describes our tendency to cling to neoliberal fantasies, such as upward mobility and job security for all, even when these ideals are no longer achievable. Pursuing these fantasies harms us and prevents us from realizing a ‘good life,’ yet we remain invested. This phenomenon also manifests in AI capitalism: There is a profound sense of cruel optimism in believing that technologies that have brought us to the brink of planetary collapse can be made ethical through enough adjustments and legal enforcement.

In the European Union, the AI Act failed to honor collective demands for the protection of fundamental human rights. Heavy lobbying by Big Tech resulted in legislation that relies on companies’ self-assessments and burdens governmental institutions with the task of developing AI expertise in order to effectively regulate the market. In a final nod to the feedback loop of control and surveillance, the use of AI technologies in matters of ‘national security’ is exempt from this legislation’s scrutiny.

When it comes to resisting AI, Dan McQuillan proposes decomputing as a strategy. He stresses that an essential ingredient is recognizing how generative AI technology nurtures far-right politics, and tracing this back to the ideological focus on perpetual growth that led to our current situation. Consequently, the goal is to link degrowth with decomputing.

This is a social project that will not be feasible without major changes at the macro level, including large-scale reeducation initiatives. But how is this possible in an era where education is increasingly precarious and subject to the perpetual pressures of growth and productivity? It is an era in which the promotion of generative AI in education has emerged as a new neoliberal fantasy and a form of cruel optimism, one that ultimately has dehumanizing effects.

It’s labor all the way down

Education, in all its facets, is labor. We conceptualize it as such when thinking of educators preparing and delivering their classes. We recognize this labor when it comes to academic research, publishing, reviewing, collaborating on projects. The emotional labor of pupils and students, their focus and participation, as well as their group study and collaborative problem-solving, are also part of the labor of education.

The success of a capitalist, market-driven economy hinges on estranging people from their labor. Deskilling, surveillance, and union busting are necessary to ensure that more labor can be extracted from precarious workers at lower costs. Unsurprisingly, computing plays a role in this dynamic.

In “Origin Stories: Plantations, Computers, and Industrial Control,” Meredith Whittaker does not mince her words when she contextualizes the origins of modern digital computing. Charles Babbage, one of the designers of the mechanical computer, viewed democracy as being incompatible with capitalism. Computation, as he envisioned it, is a tool that controls and directs human labor.

Owing to the speed and reliability of digital computation, labor is now tracked, evaluated and optimized by algorithms. The story we tell about ‘the worker’ is no longer one in which she masters her craft, but one in which what constitutes labor is decided and measured outside of her body. Compensation for labor, it follows, is determined externally, and always subject to a pressure to do more with less.

In schools and universities, generative AI is touted as a technology of personalized, tailor-made education. The algorithms of generative AI, much like Babbage’s mechanical computer, evolve towards the goal of making the labor of education more efficient. By deciding which content students will receive and how their performance will be evaluated, generative AI functions like a factory-floor optimization mechanism, perpetually aiming to do more with less.

The optimization of labor hinges on reducing workers and processes to quantifiable data. Current generative AI technology was made possible by hardware and software that can process more data, faster than ever before. We often question the ethical grounds on which data for Large Language Model training is obtained (or the legal grounds, if we reach for copyright law). We may even question how this data is processed, and criticize the non-deterministic way in which these models function. However, when it comes to optimizing labor, data is used as an acceptable substitute for embodied human experience, and this substitution frequently evades our scrutiny.

In the first part of her book, “Discriminating Data,” Wendy Chun charts the transition from looking for causality to relying on correlation when evaluating human behavior and choices. Correlation neither explains nor reveals anything about the human experience, and doesn’t require any embodied knowledge. Chun challenges the idea that correlation in data is a substitute for knowledge. Instead, she argues, treating people based on these correlations creates the behavior it claims to track, not the other way around.

Constant monitoring of labor has become normalized in much of our collective narrative. From requiring employees to work in an office where their bodies and output are under scrutiny, to widespread video surveillance in private and public spaces, to software that tracks our online activity, data about us becomes a ‘good enough’ representation of us as individuals.

A personalized learning experience relies on the same substitution. So do tailor-made feedback and advice. The ideology that sustains this substitution is that of productivity and perpetual growth.

The ideological road not taken

In “AI and the techno-utopian path not taken,” Evgeny Morozov writes about a moment in time when two contrasting visions of AI painted two potential futures. In the late 1960s, researchers at MIT were pursuing ways to implement reasoning and problem-solving by creating algorithms that try to capture the essence of these human processes. Around the same time, in the Environmental Ecology Lab, Warren Brodey believed that reasoning is an embodied process that emerges from the interaction between people and their environment. He aimed to enhance human capabilities through technology expressed in the environment. While the MIT approach pursued greater productivity and efficiency, the Environmental Ecology Lab pursued technologies that help us develop richer perceptual abilities.

The ideological legacies of labor optimization and of creating algorithms that replace human activities converge in the practice of deskilling. Through this lens, the technology developed sought to maximize the output of labor (be it creative, physical, cognitive, etc.) and to minimize the resources required.

In particular, AI technologies erase the subjective experience of labor and ensure that the output is quantifiable. Replicating ‘human intelligence’ (through ‘artificial’ algorithms) is only feasible if we accept that the output of our ‘intelligent’ actions can be measured, quantified, and automatically reproduced.

The worker’s diminishing sense of mastery and ownership of her craft narrows her avenues for feeling solidarity with her fellows. Unionizing, collective action, and strikes all require trust and a sense of communality to develop. When workers view their craft through the lens of quantification, they are encouraged to compare rather than empathize. When our stories are told using numbers and statistics, our embodied sense of labor disappears, as does the understanding that we’re all in this together.

Rewilding the classroom

The neoliberal pursuit of growth has weakened the regulation of public interests, just as the quantification of human experience has weakened the organization of common interests. In order to chart a path away from the development of technology based on data subjects, we must first become more than data to one another. Audre Lorde cautioned us that “the master’s tools will never dismantle the master’s house.” But where to start?

One point of departure: Dr. Alina Utrata from Oxford University published a guide for students and professors, titled “The Anti-Dystopian’s Guide to GenAI for students & educators.” This resource takes a practical approach to the major issues of generative AI technologies, proposing ways to resist them and prevent the co-opting of educational institutions.

On a more abstract level: Resisting the current way of seeing, which is shaped by dehumanizing technology – currently epitomized by generative AI – can lead to the development of human technologies and a renewed appreciation for humanity. Beyond that point, we must envision new potential futures and work towards them. ‘Rewilding’ spaces for learning and knowledge sharing, such as classrooms, reading circles, and libraries, allows us to transcend our role as subjects of algorithms and become subjects to each other.

Evgeny Morozov paraphrases the thought experiment of philosopher Evald Ilyenkov: “Building AI is like constructing a massive, costly factory to produce artificial sand in the middle of a desert. Even if the factory operated perfectly, why not simply use the abundant natural sand, human intelligence?” Replacing the term ‘intelligence’ with the dialectical process of teaching and learning from each other also highlights the ecological aspect of this reasoning: Education becomes necessary for unlearning the quantification of human and other-than-human life and, more boldly, for relearning how to see each other and how to see other species and lifeforms.
