
While earlier digital media gradually removed the world from the equation, replacing it with technically generated simulations, AI now goes a step further by removing the human from the equation altogether: Humans are no longer essential to the production of language and text. What is the role of education in this context? Katrin Becker reflects on our options.
*
University hallways are currently abuzz with debates over the extent to which students should be permitted to use artificial intelligence in their academic writing. The reality can no longer be ignored: the expectation that submitted work is truly the students’ own is becoming increasingly tenuous. And anyone who has explored the capabilities of AI models and witnessed the remarkable results they can produce with well-crafted prompts likely knows the seductive inner voice all too well: ‘Why not take a quick peek at what ChatGPT, Claude, or Deepseek might suggest? Perhaps their version is sharper, more creative, more polished than the clumsy draft I’m struggling to put together?’
At the same time, given the rapid encroachment of artificial intelligence into nearly every aspect of society, it is hardly surprising that the humanities, in particular, are grappling with a fundamental question: What skills should universities be teaching their students in the first place? Until recently, the answer seemed relatively straightforward – students were to learn how to read and write critically, equipping them for careers that depend on these abilities. But this argument is now being called into question. After all, why teach skills that may soon be rendered obsolete in an AI-driven professional world? As a result, voices are growing louder, arguing that the primary mission of the humanities should be not to fall behind technological progress but rather to ensure that students learn to navigate and employ AI tools effectively and strategically.
Democratic potential vs. decline of the West
And this is where opinions diverge: Some draw parallels to earlier media transformations, cautioning against the usual doomsday rhetoric that accompanies such shifts – warnings that, time and again, have proven unfounded. After all, texts remain at the heart of the matter; only the way we engage with them is changing.
In fact, this new approach may not only be more efficient but also more democratic: AI provides access to an unprecedented archive of texts and data, allowing learning to be adapted to individual needs and pacing. Moreover, it enables us to uncover patterns of thought and representation that shape our cultures – patterns that have so far remained hidden. In this sense, AI offers the possibility of statistically mapping the great Third – the entity that embodies society’s shared sensibilities, its written and unwritten rules, its values, and ideals – within just a few steps, rather than through the slow and laborious processes of interpretation traditionally employed.
Similarly, the use of AI in writing may not be as problematic as it first appears. Even here, a fundamental understanding of texts remains essential: without such knowledge, it is impossible to craft prompts that yield truly meaningful or coherent results. In fact, could we not say that the formulation of prompts is becoming a new kind of textual craftsmanship – one poised to shape the way we write in the future? And here, too, there is democratic potential: AI now enables individuals to overcome difficulties with writing or reading, making it possible for everyone to draft clear and elegant texts.
However, other, more critical perspectives argue that such applications inevitably signal the decline of essential skills – and, potentially, of the West itself. According to this view, it is only the direct, time-consuming, and resource-intensive engagement with texts that fosters the “cognitive sophistication and speculative skills” from which critical thinking and autonomy emerge. And herein lies, so the argument goes, the key difference between this media shift and previous ones: the skills now being outsourced to technology are primarily intellectual – they pertain to comprehending, organizing, and critically evaluating complex ideas. And to be fair: this perspective gains some merit from research indicating the detrimental effects that tutor chatbots can have on learning abilities.
At this point, it might be worth considering whether the arguments raised here should be seen not as ‘either/or’ or ‘and’ choices, but rather as ‘if-then’ scenarios – an approach presumably to the liking of all adepts of the algorithmic paradigm that is increasingly shaping our world. In other words: only once one has mastered the confident handling and critical understanding of texts – their reception and their creation – can and must the competent use of AI be learned.
Algorithms hidden in nature itself?
To elaborate further, it is essential to recognize that our current social and individual existence is shaped by two distinct realms: the virtual, data-driven (and seemingly dematerialized) realm, and the physical realm bound to materiality and bodies, referred to as the ‘meat space’ in blockchain circles. The virtual realm is constituted by data, algorithms, and numbers, which are gathered to analyze, compute, and – using the relevant technologies – program the material world.
Philosophers such as Jean Baudrillard, Vilém Flusser, and Rosi Braidotti have explored how these two spheres are becoming increasingly inseparable. This entanglement is making it progressively harder to perceive the virtual sphere for what it truly is: a byproduct, an “imitation” of the physical realm, with its ‘own causal regime.’ The virtual has grown too powerful in shaping our thinking and our perception of the material, physical, and biological world. Yet this very point is where we want to focus our attention. For by now, the assumption seems to have gained widespread acceptance that the virtual world should take precedence: it is increasingly employed as a model or starting point for understanding and shaping the material world.
This is vividly reflected in the idea prevailing in our collective imaginary – that algorithms are hidden in nature itself, merely awaiting discovery so that we can eventually master them. It also underpins the belief in the emergence of a sentient AGI, one that will not only surpass humans but will also optimize them and solve all global challenges – including climate change. Blockchain technology has been working to translate this mindset into a practical model, with the ultimate aim of divorcing the virtual world from the material world. In that sense, efforts are underway to legalize the virtual sphere through the law of the blockchain, Lex Cryptographia, establishing the blockchain code as a new institution and thus enabling a detachment from the burdens of materiality and state governance.
The cart before the horse
That this argument puts the cart before the horse becomes evident when we consider two fundamental points. First, human, social, and biological life can never be fully reduced to a calculable formula. And second, the virtual sphere is inevitably secondary, in other words: its very existence depends on continuous input and support from the physical world. Let’s examine both points.
First, despite current appearances to the contrary, Jean Lassègue and Giuseppe Longo, drawing on the work of thinkers such as Alan Turing and Kurt Gödel, emphasize that all “biological, cognitive or social phenomena” possess an intrinsic dimension of unpredictability. In a similar sense, as philosophers such as Alain Supiot, Pierre Musso, and Antoinette Rouvroy argue, society is never merely the sum of its measurable and documented parts. Its very essence and functioning rest on more than what is explicit or verbalized. The values and narratives through which individuals form a collective, shaping a social unit, emerge in linguistic, physical, and aesthetic exchanges.
This always implies a dimension of the unspoken, the incomplete, such as what is remembered or what is hoped for. And this is where texts and language play a crucial role: they sustain the ongoing exchange between the individual and the narratives that constitute a society and hold it together. For the relationship between text and reader, between language and speaker, writer or reader, is never unidirectional, but a dynamic interplay of mutual influence: the texts of a society and its language shape me, yet in writing, speaking, and interpretation, I do not merely use language as a tool – I actively shape and evolve it, introducing metaphors, new ideas, and novel expressions.
If AI-generated texts are now claimed to fulfill the same function as these texts, perhaps even more effectively – offering individuals access to the entirety of a culture’s textual heritage – then several critical aspects are overlooked. First, the data on which AI relies is limited to what has been documented, made explicit, and, above all, digitized. Moreover, AI does not provide direct access to texts but rather to statistically derived averages of them. This means that readers are never in direct dialogue with the text: A level of abstraction is always interposed. And contrary to the common assumption of data neutrality, this abstraction is never free from value decisions. Even when not explicitly driven by political intent – which, in many cases, it is – the seemingly neutral programming ‘to predict strings of text according to frequency’ has political consequences: For it inevitably leads to ‘majority accounts’ being given greater consideration, thus triggering a “homogenization of historical and scientific accounts.”
Let us move on to the second point: because the virtual sphere is inevitably secondary, any attempt to prioritize it over the physical world – or to sever the two entirely – is bound to fail. It is important to realize that AI is fundamentally dependent on input from the material sphere in multiple ways. Structurally, AI cannot function without the vast technological infrastructures that sustain it – systems that demand ever-increasing amounts of energy and human labor. As an expanding body of research reveals, AI relies on vast armies of (often exploited) data and click workers who play a crucial role in refining its precision.
But AI also depends on a continuous influx of new content. Recent studies highlight that the already emerging scenario in which AI-generated texts feed on other AI-generated texts drives the system into a cycle of self-reference that ultimately leads to a process of autophagy: as Large Generative Models are recursively fine-tuned on their own outputs, the result is an inevitable ‘reduction in lexical and topical diversity,’ culminating in what researchers call ‘model collapse.’
Emergence of a ubiquitous AI-speak
Let us now turn back to the initial question of whether and to what extent students should be allowed to use AI in their academic work. How, then, can the if-then approach be justified in light of this argumentation? In other words, why should humanities students first learn to engage in the often time-consuming, tedious, and seemingly purposeless task of reading and writing human-based texts before turning to AI?
First and foremost, because only by doing so can the physical world be given the priority it deserves – a priority essential for both individual intellectual development and societal cohesion and progress. For it is only in the always individual and uncertain work of interpretation, in reading different, unexpected, or unwieldy texts, in the encounter with different styles of writing and thinking, that students learn to situate themselves in the space of culture. It is here that they experience what Hans-Georg Gadamer describes as the fusion of horizons: the dialogue between reader and text, between past and present, between different worldviews. Only in this way is it possible to advance to and critically explore the ideological-historical, textual, and medial foundations and dynamics of culture – our own and others’. In other words: it is through this process that students acquire the very skills that define the humanities.
Of course, engaging with AI-generated texts can also prompt critical reflection – on why AI has come to dominate our societies, why it is so often uncritically accepted as the standard for all social processes; to what extent it needs to be understood as a product of the Western tradition of writing and thinking; or on why AI, which appears to speak with omniscient authority from off-screen or from the cloud, is actually based on a comprehensive exploitation of nature and humans and is embedded into concrete geopolitical strategies.
However, the reflections initiated here are inevitably tinted by the aforementioned intermediary level of abstraction: we are confronted not only with homogenized content but also with a homogenized language. And this has predictable effects on both the thought processes and the language practice of those who use it regularly: a creeping standardization of expression – the emergence of a ubiquitous AI-speak – is already becoming evident in student work, producing the same effect discussed above: a measurable decline in ‘lexical and topical diversity.’
A ‘living force’
It is precisely for this reason that we should not allow the use of AI even to smooth out style – no matter how appealing it may be for lecturers to suddenly receive only elegantly phrased, error-free texts. For it is only from the careful handling of writing, of language, that the interweaving of subject and text, the identification with and responsibility for the text, arises. Only here does the irreducibly individual style emerge – that “absolutely free bond between language and its corporeal double” – which not only fosters the development of individual intellectual abilities but, as a ‘living force,’ also ensures and nourishes the dynamism of language. The interposition of an algorithmic entity that takes over large or small parts of the writing process does not simply create a gap between author and text, making individual responsibility for one’s writing increasingly ambiguous. It also erodes the connection to – and thus the foundation and mutability of – the text of society.
In my opinion, this is precisely why this media transformation differs from previous ones. Media are not simply channels of information between sender and receiver; they shape our understanding of the world. They mediate, with language as the core medium, the relationship between people and their meaningful world. While earlier digital media – following thinkers like Vilém Flusser or Jean Baudrillard – gradually removed the world from the equation, replacing it with technically generated simulations, AI now takes it a step further by removing the human being from the equation altogether: humans are no longer essential to the production of language and text.
Of course, the world and humanity do still exist. And it is ultimately up to humans to contextualize AI-generated texts within the realm of meaning. However, in order to know how to handle these texts responsibly, it is crucial to first understand what this responsibility entails, what defines a text, what constitutes reading – in order to ensure that humans continue to play the central role in the production and interweaving of meaning in society.
The lives of students are increasingly shaped by the virtual world, in line with prevailing ideologies. Time and again, we have witnessed how quickly children and young people adapt to new technological conditions and learn to use the relevant tools. And it is undoubtedly important and necessary to guide them in a way that enables them to use AI competently and for academic purposes. However, this should not be the primary concern of the humanities. First and foremost, universities should focus on the physical, material dimensions of our social and cultural life, and create spaces for the “Anverwandlung of knowledge” – not in the sense of a reactionary opposition to progress, but rather as a way to establish the foundation on which AI can truly generate meaningful value.