
There is a widespread belief that ‘humanity has irrevocably ruined the planet’ and that it is time for another, higher form of intelligence to take the helm. The AI industry offers salvation by replacing human expertise with the expertise of AI. Helen Beetham counters this gospel of techno-solutionism, exposing its many blind spots and presenting ideas for a future politics of expertise and education.
*
As Giorgi Vachnatze writes, so-called ‘artificial intelligence’ is part of a long-term capitalist project of automation, one that “does not simply optimize production; it disciplines workers into accepting their marginalized roles within a broader, cybernetic regime.”
Cultural and intellectual workers – graduates – are the primary targets of the present wave of ‘optimization.’ So you would expect the higher education sector to have a coherent response. How have universities reacted to the ‘marginalization’ of academics while AI features are embedded into their daily workflow, and to the scraping of scholarly content into cybernetic architectures only to be sold back at exorbitant rents? How have they reacted to the hawkish targeting of students as users of generative AI, promising instant expertise instead of the hard yards of learning? Hardly at all, it seems.
Capturing expertise
Structurally, large language models (LLMs) reframe expertise as data, and expert practices as forms of data work. In training, texts are rendered as data architectures – hundreds of billions of parameters, each an instance of the ‘empty relation’ of proximity among datafied parts or tokens. What makes these architectures useable for new acts of production (‘inference’) is partly the meaning-making work of end-users, and partly the hidden labor of ‘refinement’: annotating and ranking outputs from the model, writing exemplary answers, developing system prompts. This is epistemic labor too, but it is fragmented, deskilled and made invisible, according to what Harry Braverman termed the ‘Babbage Principle’ of ‘dissociat[ing] the labor process from the skills of the worker.’ It also, of course, dissociates workers from each other.
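To make that ‘empty relation’ concrete, here is a deliberately toy sketch in Python (a bigram counter over an invented corpus, nothing like a production model): it ‘trains’ by counting which tokens follow which, and ‘infers’ by emitting whatever is statistically proximate.

```python
# A toy illustration, not any production system: a bigram 'model' that only
# records which tokens tend to follow which others in an invented corpus.
from collections import Counter, defaultdict
import random

corpus = ("the expert reviews the evidence and the expert writes the report "
          "and the report cites the evidence").split()

# 'Training': count how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """'Inference': emit whichever tokens are statistically proximate."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        tokens, counts = zip(*options.items())
        out.append(random.choices(tokens, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # plausible word order, no grasp of what expertise is
```

Scaled up to billions of learned parameters rather than a handful of counts, the relation is still one of proximity among tokens; the meaning-making happens elsewhere.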
Data work is typically precarious and poorly paid. But aspects of it require at least graduate level education, particularly as data models are used in more expert and specialized contexts. Data outsourcing companies often specialize in particular sectors, and even general data platforms prefer workers with specific expertise. Much of this data work is currently paid for by the model developers, and it constitutes a significant cost. But as the epistemic limitations of their product show no signs of being resolved by other means, they are keen to outsource the cost to their customers.
Users are now commanded to upload their own expertise in the form of extended prompts and linked data sets. Organizations are instructed to make themselves ‘AI ready’ by rendering their in-house expertise as data so the ‘AI’ can ‘work’ (so-called ‘retrieval augmentation’). Just as in previous waves of datafication, this means professionals doing more routine data work, not less, their jobs becoming less rewarding and remunerative, and IT systems absorbing more and more of the value generated in work.
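As a minimal sketch of what ‘retrieval augmentation’ asks of an organization (standard-library Python only, with invented document snippets and function names, not any vendor’s API): in-house texts are turned into vectors, the closest ones are retrieved, and their contents are pasted into the prompt that goes to a model.

```python
# A schematic sketch of 'retrieval augmentation': the institution's documents
# are turned into vectors, the nearest ones are retrieved for a query, and
# their text is prepended to the prompt sent to a language model.
# Documents, names and the toy 'embedding' here are invented for illustration.
from collections import Counter
import math

in_house_docs = [
    "Assessment policy: extensions require evidence of mitigating circumstances.",
    "Marking rubric: criticality and use of sources are weighted most heavily.",
    "Staff guidance: feedback should address the student's own argument.",
]

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count (real systems use learned vectors)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def build_prompt(question: str, k: int = 2) -> str:
    """Retrieve the k most 'proximate' documents and paste them into the prompt."""
    q_vec = embed(question)
    ranked = sorted(in_house_docs, key=lambda d: cosine(q_vec, embed(d)), reverse=True)
    context = "\n".join(ranked[:k])
    return f"Use only the context below to answer.\n\nContext:\n{context}\n\nQuestion: {question}"

print(build_prompt("How is criticality weighted in marking?"))
# The assembled prompt would then be sent to a hosted model.
```

Note where the epistemic work sits in this sketch: someone has to curate, chunk and maintain the documents before the ‘AI’ can ‘work’ with them.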
‘The world needs fewer experts’
Even if it were theoretically possible – and economically feasible – to capture all available expert knowledge in a data system, there are still outlying scenarios and rapidly accelerating crises that demand an expert response. The model of expertise that AI proposes is the same as Babbage’s: ‘talent’ and ‘innovation’ are the preserve of a tiny fraction of humanity, and machinic labour for the rest. No need for a highly educated middle class so long as the ‘latest’ expertise is propagated through the system. No need for mass intellectual and cultural participation, so long as the ‘best’ expressions of our shared humanity are available – with a subscription – to all.
In the field of higher education itself, we are regularly subjected to a vision of universities where only the ‘top professors in their fields’ are allowed to do any actual teaching (via MOOCs, TED talks or online masterclasses). The rest are employed in ‘student support,’ a role that is rapidly being occupied by AI agents and surveillance tools. We should take these visions of education seriously, not because AI is delivering them, but because they represent a serious intention. The AI industry does not want anything good for higher education, and it certainly does not want to restructure it as a project of mass intellectuality and expertise. The more outspoken of its representatives are at least willing to say this out loud. AI demands investment in private technologies, not in public education. It offers to replace shared knowledge and an empowered middle class with proprietary data and ‘smart’ augmentations. AI will not improve the quality of outcomes, let alone the quality of educators’ working lives. It will, if it is allowed, have most of them rubber-stamping AI outputs and acting as the remedial ‘human in the loop’ when student surveillance systems flag a problem.
A revolution postponed
But what if AI can revolutionize the productive power of expertise? What if it means better cancer diagnosis, drug discovery, and climate modelling? These often-cited examples are specialist applications of machine learning and not general language models, and even here the evidence for enhanced outcomes is limited and controversial. Geoffrey Hinton’s 2016 prediction that radiologists would soon be obsolete was a vast over-estimation of the capabilities of machine learning, even in a paradigmatic use case that has seen decades of investment. When it comes to the general economy, economist Daron Acemoglu predicts the productivity gains from generative AI could be less than 0.55% over the next decade, and many AI-driven efficiencies could have ‘negative social value’. In ‘Ironies of Generative AI’ the authors find: “a shift in users’ roles from production to evaluation, unhelpful restructuring of workflows, interruptions, and a tendency for automation to make easy tasks easier and hard tasks harder.”
Experts can, it seems, use generative models for some routine tasks by judging what is routine, what shortcuts make sense, how to spot the errors, and when an outcome is ‘good enough’. But none of this makes expert work easier, better or more satisfying, none of it necessarily makes expertise more productive overall, and all of it requires… expertise. For the non-expert, the plausible faux-expertise of language models may be actively disabling, with effects that include cognitive dependence, deskilling, reduced learning and less personal agency. After all, in higher education and professional learning, content is not produced to be productive but to develop a practice, and to develop through practice. Expertise is a qualification of the whole self, a set of values, a repertoire of situations, a role in a community of shared understanding. There may be shortcuts to sounding like an expert, but there really are no good shortcuts to being one.
The ouroboros of datafied content
Alistair Alexander wrote recently about the risks to knowledge systems from the algorithmic production of cultural and intellectual content. Universities are more threatened by these epistemic harms than any other sector. We are witnessing the loss of open knowledge projects to AI crawlers, AI-generated material flooding spaces of public knowledge, the degradation of search engines, an epidemic of scientific fakery, the corruption of the peer review process, threats to scholarly archives and, above all, scholarly publishers offering up their past and future catalogs ‘to train AI’.
But these degenerative effects are only possible because automation is already well advanced. Scholarly careers already depend on data: how many papers, how often cited, how ranked, how indexed, etc. Meta-reviews and data-based methodologies dominate even in the humanities and critical social sciences. Teaching, learning and assessment require the passing of academic content through multiple data systems, from learning management systems (LMSs) to plagiarism detection, and from automated marking to progress dashboards. Now, thanks to ‘AI-enabled’ LMSs, teachers can ‘auto-generate’ their lecture slides and assignments. Other systems offer to provide plausible feedback. No wonder students turn to the outputs of generative systems for fear of falling behind their AI-augmented peers.
This has been framed as a crisis of ‘cheating,’ but it is surely a crisis of university education in its entirety. If the curriculum we devise can be ‘passed’ by plausible facsimiles of expert content, we are neither supporting students to become experts in their own right, nor offering them any motivation to do so.
Instead, AI apps are marketed to students for AI production, and to teachers for AI detection, in an arms race that neither side can win. The AI industry makes money from both sides – indeed student essays are one of the very few profitable use cases – and academic content is part of the deal, monitored and monetized to produce ‘improvements’ in the system. In this ouroboros of data and capital circulation, actual content is only a momentary configuration of the data flow in human-readable form. How long before this hindrance to productivity and speed can be removed, leaving student and tutor systems to read each other, unimpeded by a human interface? Why study when you can subscribe to the next upgrade?
What universities could be doing (differently)
Universities need to engage with ‘artificial intelligence’ as a contested zone of meanings, and as a political-economic project that they can influence, rather than simply preparing students for a future in which ‘AI skills’ are the only expertise that counts. On my blog I explore a number of possible responses, but here are a few.
1. Continue to develop experts with their own specialist knowledge and practice. Even if, as promised, ‘AI’ manages to encode expertise in more diverse, situated and useable ways, using those systems will only be rewarding (in every sense) if the ‘human in the loop’ is an expert in their own right.
2. Defend scholarly archives and repositories from generated content, and uphold knowledge practices that do not pass through generative data architectures. This might involve stand-alone devices and servers, spaces of analogue writing and making and analysis, oral presentation of ideas, authentic (situated) projects. Maintaining these spaces will demand technical and epistemic ingenuity – exactly the kind of skills we are told will be required in an ‘AI future.’
3. Regulate AI for staff and student well-being, not for ‘integrity.’ As schools and universities have begun to do with social media, recognize the inequitable and toxic effects of ‘AI’ agents on mental and epistemic flourishing. Having a distinct regulatory approach will allow universities not only to defend their core practices and values but to defend their people too.
4. Negotiate a new contract with students that respects process over performance, that values diversity of expression, that is openly questioning about the future of knowledge and expertise. Workload models for teaching will have to recognize the time and care it takes to engage with student ideas in development. The pace may need to be slowed. One benefit of this crisis is that new approaches to teaching and assessment are being tried, but they all require investment in teaching itself as an intellectually demanding, time-intensive, and values-based form of expert practice.
Universities are not helpless bystanders to the datafication of expertise, but key stakeholders. A university education is supposed to empower students to shape their futures, including how they relate to different techno-social configurations of work. And universities have responsibilities beyond the development of a new generation of experts – for example, responsibilities to justice, equity, the pursuit of knowledge, and a viable planet on which to live – that require a critical assessment of how these technologies might reshape expertise and what harm they might do in the process.