Introduction: AI and the Future of Literary Studies

Most academics in humanities disciplines across the country are currently hard at work designing new courses and programs to meet the national priorities identified in the Universities Accord Review of the Higher Education System. The various product reviews underway are in response to the expectation of job scarcity in the automated industries of the future and the abiding prejudice against the humanities as out of touch with ‘the real world’. In fact, the humanities have never been more outwardly engaged than they are at present, in the shadow of the much-heralded AI Revolution. The revolution is presented in the public square as a Darwinian reality to which the humanities and social sciences must adapt or die, an imperative that overlooks the fact that it was made possible by plundering the knowledge generated in the humanities and social sciences and the cultural repositories they protect. Indeed, the talk of removing the ‘barriers’ between university and industry which enabled these knowledge repositories in the first place has only increased.[1] New bachelor’s degrees with short-term employment targets are in the pipeline, equipped with work-integrated learning schemes, flexible (‘stackable’) undergraduate courses, microcredentials, internships, and other industry-facing initiatives, chief among them expanded digital literacy syllabi that include alleged AI skills like prompt engineering. These changes may improve the status of the humanities in the short term, at least until the AI bubble bursts and disciplinarity is revealed as the problem we did not need to solve. The interpretive or processual character of learning in the humanistic disciplines, which resists the transactional language of outcomes and deliverables that characterises higher education policy in the post-Dawkins era, will still be practised by those who have lived through more than one education revolution and seen enough to know when the rhetoric of interdisciplinarity is serving the needs of administrative flexibility (Griffiths). For now, amidst change proposals and the threat of job losses, all hopes are pinned on the new AI literacy to secure a path out of years of chronic underfunding, punitive fees aimed at humanities graduates, and the contradictory demand to expand student access to higher education while shrinking academic positions and the national research base.

The crisis facing Australia’s universities, laid bare again in Graeme Turner’s recent review, has boiled down to the curriculum level in the message ‘embrace AI’. Turner comments: ‘Just how bad are things in higher education these days? A short answer might go like this: students are dropping out, academics are burning out, and governments have been tuning out for decades’ (6). The marketing of degrees has intensified over the years with the steepening decline in sector funding, hence the ubiquitous reference to critical thinking skills in any discussion of humanities degrees. A new learning outcome – AI literacy – is being added to the list, though its relation to critical thinking, more assumed than tested, has forced the relabelling ‘critical AI literacy’. The AI-proofed bachelor’s degrees in the pipeline are the product of the same vocational imperative that has driven sector-wide program cuts and delivered homogenised course offerings, providing cover for the universally derided Job-ready Graduates package the current government seems helpless to replace. Warnings by academics about the distorting effect of the vocational imperative on higher education have been dismissed for decades. In the midst of a global AI arms race, they are now shrugged off as the moral panic typical of AI pessimism. Humanities scholars have long resisted the instrumentalism and marketisation of education in the knowledge that of all the benefits the study of Shakespeare might be said to confer, a job was never among them. Humanities graduates are prized for their learning capacities, not workplace knowhow. The transferable skills and resourcefulness they possess have seen them flourish in changing roles across sectors, in jobs that cannot be mapped ahead of time with the sort of certainty industry demands when it points to skills shortages and productivity gaps – a point made more than once in a 2023 Oxford report on the value of the humanities (Robson). In fact, talking up the jobs of the future tips the notion of ‘job ready’ further towards oxymoron, to say nothing of the failed program and its persistence in current policy. The new buzzwords in education distract attention from the fact that no one really knows what AI literacy is or whom it would best serve. Do we need it as citizens or as consumers, to defend the commonwealth or to boost AI adoption? The repurposing of the higher education sector as a provider of private goods has made this sort of ambiguity unavoidable.

Concern is growing that the new digital literacy is compromising academic literacy by creating the conditions for an AI-dependency that erodes rather than supports critical thinking (Lee). As digital decision-making and speed of information gathering are prized over exercising personal judgement, the time needed to read books and the labour of connecting ideas for oneself are increasingly seen as misspent (Baron). The social media experiment has gone on long enough to give educators pause before sending their students to chatbots for brainstorming sessions and drafting advice, if only because it recommends chatbots as a scaffold for academic literacy before certifying the scaffold. Whether literacy is the right term for a technology that does the work of reading and writing for us is a question hardly raised, though the concept coined for the corrosive effects of machine learning – cognitive offloading – points to an awkward answer. Making good on the hasty promise to teach students how to distinguish artificial from human text entails the taken-for-granted skill of generating their own. The emphasis at the moment falls not on the host of exacting skills attained in the fulfilment of the advanced literacy task we call essay writing, but on manipulating the technology rumoured to relieve students of the need to write essays and instructors of the need to mark them. In any case, the sheer number of artificial texts now encountered on a daily basis means that we have reached the point where, as Hannes Bajohr suggests, the distinction between artificial and non-artificial text will have to be abandoned (‘On Artificial’). The post-artificial text has only raised the ethical stakes of authorship. The proposal to teach the ethical manipulation of artificial text cannot bypass the disciplinary practices of the humanities. In literary studies, close reading – the metaphorical term for an amalgam of scholarly and textual-analytic practices that broadly include thinking, judging, reflecting, writing and much else besides – is not a skill (or skill set) that can be conducted by students in the frictionless medium of chatbots, for it requires the friction generated by thinking with and against the grain of a text, working with words to formulate propositions, engage arguments, capture fleeting (or tease out stubborn) impressions in the uneven voice of the novice. If one believed the hype – or the Productivity Commission – one would think that GenAI was developed to improve the lives of students, educators and scholars by removing this friction and the labour of learning with it. In reality, the AI boosters and EdTech reps pushing the stress-relieving benefits of machine learning on confused students hold the interpretive and attentive practices of close reading in low regard.

We might look forward to the time when the disruption caused by GenAI has been tamed and the technology integrated into teaching practice. Such an anticipation might justify bringing the chatbot to the centre of the classroom, in some if not all subjects and with an eye to perspectives drawn from nascent scholars of the algorithm. But we are not there yet, and to suggest that we are, or that the passage is guaranteed, is to submit to the sort of technological determinism that passes off the mass AI cheating epidemic in schools and universities as a teething problem. Currently, confusion reigns among students, urged to use AI tools to prepare for the jobs of the future, on the one hand, and instructed not to use them in university assessment work, on the other. In the face of such contradictory messages, ‘embrace AI’ means little more than ‘figure it out for yourself’. The mooted return to nondigital assessment was instantly ruled out as a backward step on the path to universal AI-adoption, on the apparently unassailable assumption that the classroom must adapt to the needs of AI rather than the converse. After all, AI is transforming the global economy, so prepare the child for the road. The argument bears all the features of reverse adaptation (Langdon Winner’s term for the social adaptation compelled by certain technologies), as Neil Selwyn suggests of the ‘increased imperative to arrange education settings in “machine readable” ways that will produce data that can be recognised and captured by AI technologies’ (9). The resistance-is-futile rhetoric, the goal of which is not participation but submission, pays scant attention to the pedagogical form it is content to see go by the board (Sacasas). We have been told to give up on the student essay as an artefact of a dead culture, as if we had only just thought of the question: what are we really teaching in literary studies? The literary essay, and the close reading practices that constitute it in the disciplinary context, is more than an artifice of the university’s credentialling function, replaceable by less rigorous forms of assessment. It is the creative space of humanistic thinking, where a range of cognitive activities are performed and combined. This processual space cannot be fully automated, though the suspicion that it can be, which lurks behind current doubts about the viability of the student essay, is another victory for data positivism.

The rhetoric of AI inevitability and the mass cheating epidemic that trails it provide a better argument for old-school assessment than the tech boosters will admit. Signs are emerging, too, that academics are not falling into line as expected but drawing lines of their own, as a recent open letter refusing AI adoption in education indicates when it refers to the ‘insufficient evidence for student use of GenAI to support genuine learning gains’ despite the ‘massive marketing push to position these products as essential to students’ future livelihoods’ (‘An Open Letter’). The point of ‘going medieval’ on students is not to shut out the digital world from the classroom, which would be impossible anyway, but to protect the integrity of the bachelor’s degree and embrace the intellectual rigour it traditionally represents (Ma). Each of the contributors to this issue on AI and the future of literary studies teaches about large language models. Two have internal grants to explore the proper role of GenAI in undergraduate learning, while another assesses a module on literature in the digital public sphere by in-person exam. None of us seriously thinks that chatbots will deliver the accelerated gains in teaching and learning promised by the World Economic Forum. Whether they will make a generation of students over-reliant on them, unable to develop their own well-reasoned arguments or critically evaluate the information produced by the probabilistic mimicry of knowledge, is a more pressing concern than productivity forecasts. In making the case for critical AI literacies, Lauren Goodlad and Kathryn Conrad recognise the need for students to engage with AI tools but with provisos that go unheard in the general call to embrace AI; for if ‘generative AI tools are designed to predict statistically likely results’, and ‘plausible mimicry, not creativity or trustworthy information access, is their core functionality’, then ‘students encouraged to look to these systems for research are being deeply misled’ (41).

As bad as the cheating epidemic is (Cassidy), the outsourcing of the labour of learning is only one part of the growing dependence on GenAI that is leaving a trail of degraded language across the public sphere (Eliot). The threat to the integrity of the classroom posed by chatbots masks deeper problems around the narrowing of intellectual inquiry in the humanities with technologies that ‘accentuate the managerial ethos of the technocratic institution’, as Christopher Snook says of the ‘moralizing bureaucratic speak’ generated by large language models in the Canadian context. At this rate, the critical reading practices taught in the humanities – assuming they survive the metaverse – will be trained on the vapidity of public discourse increasingly generated by large language models. The picture is not any clearer in the field, in the humanities or the sciences. The subject line from a colleague’s recent email taunted me with the reminder that ‘while we talk about our collaborative stance with LLMs and not engaging in moral panics, some scientists are more circumspect’. In the attachment was an editorial from a science journal, Nature Reviews Bioengineering, on the indispensable function of writing to thinking in scientific research. The editorial is quick to defend its ‘call to continue recognizing the importance of human-generated scientific writing’ from the predictable charge of Luddism. ‘This call might seem anachronistic in the age of large language models, which, with the right prompts, can create entire scientific articles (and peer review reports) in a few minutes’, notes the editorial, before concluding that the essential unaccountability of large language models means they cannot be considered authors.

The infiltration of AI writing tools into the writing ecosystem is producing an array of homogenising and normalising effects we are only just starting to notice. The question forming behind the increasing automation of writing tasks, from speechwriting to report writing and emails, is: why write at all anymore? Tutors posed the question to students in classrooms across the country at the start of the autumn semester, fearing that the motivating contexts of writing had been swallowed up over the summer by ChatGPT and had to be rehearsed in tutorials if students were to rediscover the purpose of higher education. Recovering the contexts of writing in the classroom represents the first line of defence against reverse adaptation and the uncritical AI adoption spread by the perpetual forgetting of the limits of large language models and the ‘myth of frictionless knowing’ (Goodlad and Stone). Each of the contributors to this special issue offers a response to the question of why one would want to write at all in the age of AI.

In the first of three jointly written papers in this issue, ‘Technologies of Literature: Reading, Judgment, and the Large Language Model’, Charles Barbour, Christian Gelder and Tyne Sumner take a deflationary approach to the existential threat AI poses to literary studies by reminding us that technology has always been a part of literature, for reading and writing are already technologies. They describe the institution of literature as a technological institution made possible in its production and distribution by numerous technologies that have emerged and evolved over time, including the algorithm, the screen, the book, the codex, the typewriter, the printing press, the scroll, the alphabet and so on. Literary thinkers have regularly problematised the distinction between the computational (or ‘mechanical’) and the hermeneutic (or ‘organic’) modes of reading, whether in I. A. Richards’s pursuit of a science of criticism (the book as machine for thinking) or in the Russian Formalists’ mechanical model of the literature machine. Though often portrayed as book-clutching technophobes anxious in the face of change, literary scholars and the modernist poets they study embraced machine learning long before the urgings of the productivity boosters, making them well placed to adjudicate its misuses in academic and educational settings. Historicising the technology in new accounts of the discipline like the one sketched in the opening paper of this issue serves to demystify the magical thinking and technosolutionism that accompany so much AI discourse at the present time. The effort to ground interpretation in data analysis of the kind at work in today’s large language models was first made over a century ago.

Indeed, large language models are the culmination of a post-Enlightenment quest for a science of verse. Before computational literary studies, scientific-minded critics like Caroline Spurgeon tabulated Shakespeare’s imagery in a form of quantitative criticism. The literary history behind machine reading is all too often forgotten even inside the discipline, though not by the former CTO of OpenAI, Mira Murati. ‘The origins of predicting what word comes next has roots in Russian Literature’, Murati noted in reference to the stochastic processes Andrey Markov developed to study the distribution of vowels in Pushkin’s Eugene Onegin (159). Expanding the history of the discipline to include computational methods allows us to see machine learning less as a rival and more as a realisation of a quantitative form of literary judgement. De-spiritualising meaning in this way might also temper anxiety about the chaos caused by ‘stochastic parrots’ and focus attention on the systemic biases inscribed in their training data. The need for human intervention and oversight to steer large language models away from ‘alignment problems’ is indisputable. But we lose sight of student needs in the abstract fear of a collapse of meaning triggered by the economic nihilism of Silicon Valley. Barbour, Gelder and Sumner avoid fetishising meaning as the property of the individual mind (and the reduction of communication to a transparent vessel that transports it through the ether to other intended minds) in their admonition against reducing interactions with AI to questions of meaning, for to do so bestows on chatbots a nihilistic power that properly belongs to the corruption of liberal politics and institutions unfolding in the US.
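
The lineage Murati gestures at can be made concrete. What follows is a minimal sketch – an invented toy, not Markov’s notation or anything from OpenAI’s systems – of a bigram model that predicts each next word by sampling from the successors observed in a training text; the corpus and function names are placeholders.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Count word-to-next-word transitions, in the spirit of Markov's chains."""
    words = text.split()
    transitions = defaultdict(list)
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)
    return transitions

def generate(transitions, seed, length=10):
    """Predict each next word by sampling from the observed successors."""
    word, output = seed, [seed]
    for _ in range(length - 1):
        successors = transitions.get(word)
        if not successors:
            break  # dead end: the corpus never continues this word
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

# Invented toy corpus; Markov himself counted vowel and consonant
# sequences in the opening chapters of Eugene Onegin.
corpus = "the word that follows the word depends on the word before it"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Scaled up from a one-line corpus to trillions of tokens, and from bigram counts to learned probability distributions over long contexts, this sampling loop is recognisably the ancestor of next-token prediction in today’s large language models.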

In their contribution ‘Writing Places in the Spaces of AI’, Caitlin Maling and Catherine Noske seek out the spaces of care in a neoliberal academy that has historically privileged the structures of autonomy forged by colonial imperatives and competitive practices – structures that GenAI, arriving at the end of ‘a long history of technological advances’ that have been ‘used against and to the detriment of Indigenous peoples’, further entrenches. Reflecting on their professional practice as teachers of creative writing, Maling and Noske offer a response to the question of why one would want to write at all in the age of automatic writing by exploring the possibilities enabled by a place-based approach to creative writing, a pedagogy they characterise as sturdy enough to withstand the anonymising ‘anyplace’ of GenAI. Their essay is designed as a series of diary entries that seek to recover the contexts of writing GenAI strips away and thereby promote embodied, contextual and relational positionalities. They take the Acknowledgement of Country as a model of how to recover the contextual layering of meaning and value embedded in place that is disembedded in the ghostly nonplace of AI. The Acknowledgement of Country is an acknowledgment of the relations of interdependence that sustain the torn halves of our working and nonworking lives. The disembedding of the Indigenous knowledge rooted in specific territories and places is a threat compounded in the online classroom by GenAI, provoking the question of where the act of writing can be said to occur when produced by GenAI. If the placelessness of GenAI flattens diverse cultural knowledge and ontologies into data, then the prepositional thinking that acknowledging Country obliges us to adopt indicates how to recover the context and diversity AI strips from the classroom. Noske points to the IP theft behind the ‘Shaxpir’ bot (and the plagiarism of a local author it enabled) as one example of this complexity, before asking what the presence of GenAI means on the Swan River plains of Whadjuk country where she too works and speaks. Returning to the scene of her work on the article (and the time snatched from parental leave), she discusses the arrival of her baby son as a further layer of complexity, indeed of ‘the changing bodily space from which I write’. The practices of care and attention-giving underlying their work as writers, teachers, and mothers serve as a reparative and potentially transformative counter to the homogenising drive of GenAI. Their suspicion that the penetration of GenAI tools into the writing environment has homogenising effects is confirmed by a recent study in which Indian participants’ writing drifted towards Western norms when they used AI writing tools but not when they wrote without them (Agarwal). The creative mapping exercise Maling and Noske offer as a writing guide to students, which encourages them to return to a chosen place at different times of the day to reflect on their embeddedness ‘in the complex system of that microenvironment’, is the key example of a creative writing pedagogy fortified against the distorting effects of AI-generated storytelling detailed in current research. If the task of decolonising GenAI still looks insurmountable, Maling and Noske nonetheless locate a starting point in a critical pedagogy that reconceives the classroom as place.

In an alternative bid to defend the embodied and relational structure of human understanding persistently obscured by AI boosterism, Anthony Uhlmann’s essay in this issue, ‘Shannon’s Information Theory, AI, and Fiction’, adopts a mode that is (by his own admission) more speculative than analytical. Uhlmann builds on an idea of understanding drawn from his work with Moira Gatens on Spinoza to distinguish embodied understanding from the simulation of patterns in machine learning. His core insight is summarised in something of a Spinozist axiom: humans feel understanding, machines cannot. Indeed, GenAI fails to register the difference that accounts for ‘how living systems function’. Despite the hype that has surrounded AI since its emergence from the second AI winter, early pioneers of information theory were more honest about the excision of meaning from information than the AI boosters and futurists outdoing each other with outlandish predictions about the imminence of Artificial General Intelligence, what Sam Altman is now pleased to call ‘The Gentle Singularity’. Uhlmann returns to Claude Shannon’s seminal 1948 essay ‘A Mathematical Theory of Communication’ to retrace the ‘bracketing of meaning’ at the mathematical basis of all AI systems. Shannon’s mathematical approach to language and communication as first and foremost an engineering problem represents for Uhlmann a foundational moment in any effort to distinguish machine learning from human learning, or GenAI from literary thinking. His concern is not with Shannon’s bracketing or alienation of meaning, however, but with some of the ways we might distinguish the stochastic processes behind the mimicry of human communication from the intuitive basis of human understanding and its expression in writing. And here he suggests the value of considering certain literary practices through the lens of the mathematical concept of redundancy, following Shannon’s own brief use of literary examples to distinguish high from low redundancy in language use. Shannon’s contrast of Joyce’s Finnegans Wake with I. A. Richards’s Basic English would appear to confirm modernist dogmas about literary language and the value of poetic difficulty in response to GenAI. Uhlmann returns to Virginia Woolf’s famous remarks about the ‘wave in the mind’ (and the rhythmic basis of literary style) and brings them forward to Alexis Wright’s literary practice to underscore the more-than-mechanical basis of literary creativity. Mikhail Bakhtin’s dialogical epistemology also comes to mind: while the sentence is repeatable, the utterance is not.
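
Shannon’s definition can be stated briefly (in standard notation, not Uhlmann’s): the redundancy of a source is the fraction by which its entropy falls short of the maximum entropy attainable over the same set of symbols,

$$H = -\sum_{i=1}^{n} p_i \log_2 p_i, \qquad H_{\max} = \log_2 n, \qquad R = 1 - \frac{H}{H_{\max}},$$

where $p_i$ is the probability of the $i$-th symbol and $n$ the size of the vocabulary. Basic English, with its deliberately restricted word stock, sits near the high-redundancy extreme, each word largely predictable from its neighbours; Finnegans Wake approaches the low-redundancy extreme, where each word carries close to maximal surprise. Shannon himself put the redundancy of ordinary English at roughly fifty per cent.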

Alexis Wright’s account of the sovereign imagination seeks to rescue Indigenous experience from the intergenerational damage wrought by the pathologies of colonialism. Uhlmann discusses her novel Praiseworthy in terms of Wright’s compelling, sometimes harrowing portraits of the psychological injuries of internal colonisation and the hard journey of unlearning the cultural self-hatred many Indigenous Australians have learnt from the dominant culture. The threats posed by GenAI in this context – the flattening of Indigenous knowledge mentioned by Maling and Noske – emerge as an extension of colonial mechanisms. Wright fortifies the sites of Indigenous knowledge in the complexities of a literary practice that information theory recognises with the term ‘low redundancy’. If the Indigenous sources of connection to place and tradition lie on a map beyond the reach of GenAI, Uhlmann seeks to demonstrate the point in an experiment with a large language model he directs to mimic Wright’s style. The impressive feats of pattern matching at which chatbots excel fall short of the ironies Wright works up from the figure of epanorthosis, as Uhlmann shows after cataloguing the results generated by the chatbot ‘Hyperwrite’. The shifting voices and rhythms characteristic of verbal art ‘carry with them specific times and places and world views’, a point amplified in the ‘minor voices that exist even within minor languages’ to be found in Joyce’s rendition of Irish English and Wright’s of Aboriginal English.

The discovery that a large language model fails to read, replicate, or understand Wright’s irony should not surprise us. If irony traditionally locates meaning in what is not said but intended, then how, asks Charles Barbour in his essay in this issue, ‘could a probabilistic analysis of what is explicitly said ever capture its meaning?’ Barbour’s question throws into relief the all-consuming drive to make machines more human by making them ‘smarter’, allowing us to see it for what it is, namely, ‘one of the stranger ideas circulating today’. In ‘Irony Machines: Artificial Language, Literary Language, and the Opacities of Trust’, Barbour suggests that the wrongheaded pursuit of an ‘irony code’ represents an opportunity for literary studies to assert its undervalued disciplinary claims. The calculative model of understanding that has informed the development of AI systems since Hubert Dreyfus railed against it is the immovable obstacle to its stated goal of recreating human intelligence in machine form. If AI is to make good on its stated goal, Barbour suggests, then those charged with the task of replicating the human machine will need to change course if they are to avoid sinking in their own ‘deeply entrenched understanding of what it means to be human, and what it means to live in language and with others’. To do so would entail the discovery that ‘the practices of literary criticism and literary theory’ are ‘impossible to circumvent’.

If the research program fixed on teaching large language models to detect irony was doomed from the start, it raises interesting questions about where we locate the start. The metaphysical tradition descending from Plato reduces irony to a figure of speech, with little bearing or purchase on the truths it habitually portrays as the possession of reason. The multiple charges the philosopher laid at the poet’s door – social uselessness, perversion of truth, and moral anarchy among them – were prosecuted by Church authorities in the Middle Ages and used as cudgels by moralists and politicians ever since. The computer programmers convinced that irony can be tracked, matched and mimicked come at the end of a long tradition of anti-rhetorical and anti-literary thinking. Computer scientists approach the task of detecting irony by bracketing meaning and pinning down irony’s external or material traces. The systems they developed were initially successful in detecting sarcasm, and their capacity to detect the external traces of irony has been upgraded. But calculating the meaning of what is not said has proved a more elusive goal. The traditional line on irony and rhetoric met its strongest challenge in German Romanticism and the literary theory that descends from it. Socrates’s use of irony as method aside, we owe irony’s metaphysical turn to Friedrich Schlegel, where it emerges from his anti-foundationalist reading of Fichte’s subjective idealism as the original disrupter of the conceptual orders of thought. Not all that much separates Richard Rorty’s pragmatic account of irony as the detachment from metaphysical truths from Jonathan Lear’s therapeutic account, where irony’s negativity is born in an experience of radical uncertainty crucial to self-formation. If irony disrupts one’s practical identity, it is because it offers a vertiginous glimpse into the gap between possibility and actuality essential to the educational task known as becoming human – or becoming human together – that has provided poets and novelists with endless material. After Schlegel, it was Kierkegaard who appreciated the extent of irony’s corrosive power, the acid bath he called its ‘infinite absolute negativity’. By isolating the self from its social roles, irony activates the ethical process of authoring a self. Years of reading Plato and Kierkegaard convinced Jonathan Lear that irony is fundamental to the human condition but poorly understood. The misunderstanding is ‘pervasive in contemporary culture’, Lear said in the preface to his Tanner Lectures (ix). The quest of computer science to program irony, which it seems unlikely to ever give up on, would seem to prove his point.
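
To return for a moment to the computer scientists: the surface-feature approach to sarcasm detection can be caricatured in a few lines of code. The sketch below is an invented toy, not any published system; it scores only material traces – a sentiment clash, exaggerated punctuation, stock interjections – and, as Barbour’s question predicts, has no access to what is left unsaid.

```python
import re

# Invented toy detector: sarcasm inferred purely from material traces.
POSITIVE_WORDS = {"great", "wonderful", "love", "fantastic", "perfect"}
GRIM_TOPICS = {"rain", "traffic", "monday", "deadline", "delay"}
STOCK_PHRASES = ("yeah right", "oh great", "just what i needed")

def sarcasm_score(utterance: str) -> int:
    """Score an utterance on surface cues alone; meaning is bracketed."""
    text = utterance.lower()
    tokens = set(re.findall(r"[a-z']+", text))
    score = 0
    if tokens & POSITIVE_WORDS and tokens & GRIM_TOPICS:
        score += 2  # sentiment clash: positive wording, grim situation
    if utterance.count("!") >= 2:
        score += 1  # exaggerated punctuation
    if any(phrase in text for phrase in STOCK_PHRASES):
        score += 1  # stock ironic interjection
    return score

print(sarcasm_score("Oh great, more rain on a Monday!!"))    # high score
print(sarcasm_score("It would be a pity to miss the rain."))  # zero: the irony is unsaid
```

Upgrading the feature list, or swapping in a neural classifier, improves accuracy on explicit traces without ever touching the unsaid.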

Thinking about AI seems to lead back to the anthropological riddle of the human whether we will it or not. If the AI project is characterised by the effort to close the distance between human and machine, then the effort, Barbour suggests, discloses the necessity of maintaining distance (social and ontological) at the very base of the human project. Mindful of the trapdoors along the metaphysical turn, Barbour walks a line between the contrasting approaches of Paul de Man and John Searle to arrive at the idea of irony as a social relation, or more precisely as the nexus between the social and the asocial. The idea entails a redescription of Georg Simmel’s sociology of ‘asocial sociability,’ which Barbour describes as ‘the enigmatic sense in which humans are held together by being held apart.’ As a blend of publicity and secrecy, sociability and its opposite, the ironic speech act is structured by opacities and uncertainties that make the socius possible, for there can be no trust in a society of perfect transparency. Barbour thus develops a critical approach to humanities concepts consistent with what Hannes Bajohr has described as thinking with AI and not against it (‘Introduction’ 12).

The relational thinking evident in the Simmel revival (the so-called ‘relational turn’) is rooted in the rich soil of human ignorance. The partiality of human knowledge and the inevitability of ignorance that Simmel insisted on are at odds with the authoritative discourse mimicked by chatbots and the techno-solutionism driving the exponential growth of data analytics. As Alan Wolfe put it some time ago, in an essay advocating the relevance of Simmel to the age of AI: ‘Human forms of learning grow out of the uncertainty of what we do, leading us to rely on social practices, the cues of others, experience, definitions of the situation, encounters, norms, and other ways of dealing with uncertainty that enable mind to develop’ (1083). The self would never have arisen in a world of certainties. In ‘Interpretation; or, AI’s missing algorithm’, I explore several topics noted above under the sign of the human desire for certainty in an uncertain world. While algorithms have proved useful guides to decision-making, they pose dangerous threats when taken as reliable or authoritative ones. The over-reliance on algorithms stems from a burgeoning data positivism. If educating students as researchers and not as consumers means interrupting the feedback loops of data positivism, then, as I try to show, it is a task for which the humanities have long prepared themselves. My emphasis falls less on the hidden technologies of literary studies than on the hidden hermeneutics of data analytics and the resulting threat of data absolutism; for it is the concealing of the interpretive character of information in the pseudo-objectivism of data that engenders the pseudo-authority of chatbots. I close with a brief discussion of the much maligned and misunderstood technology of close reading, and its new appreciation amidst the ongoing digitisation of culture and learning.

The benefits of recovering alternative histories of AI are also on display in Mathew Holt’s contribution, a portrait of the German litterateur and cybernetician Max Bense. In ‘Repairing Rationality: Max Bense and the Automation of Literature in Post-War Germany’, Holt’s timely sketch of the postwar Stuttgart School is enough to remind us that the AI project that arose from the Dartmouth Proposal of 1955, conceived as the simulation of human intelligence and bankrolled by the US military and Big Tech, need not determine our uses of AI nor curtail the democratising potentials suppressed by the digital infrastructures of surveillance capitalism. The aesthetic rather than instrumental explorations of machine intelligence and its applications conducted by Bense and his transdisciplinary team at Stuttgart (and later at Ulm) repudiated the idea of a scientific arms race, the Cold War context for the AI project as it was conceived at Dartmouth. Where the ideological agenda emerging from Dartmouth expressed faith in the expert agencies of a professional scientific class, the heterodox explorations of the Stuttgart School began in a loss of faith in (technological) reason that diverted efforts in information theory and computing into the aesthetic modalities of poetry, theatre, and the visual arts. Bense himself was a key figure in the concrete poetry movement, which emphasised the material and structural components of language. The experimental works he oversaw at the Stuttgart School were among the earliest examples of computer-generated art to draw on the stochastic modelling of Shannon’s information theory. The reorganisation of Franz Kafka’s The Castle with a basic algorithm by the mathematician Theo Lutz, which anticipated recent conceptual digital literature like Nick Montfort’s Megawatt (a recomposition of Samuel Beckett’s Watt with a Python script), was an example of the information aesthetics Bense formulated before its eclipse by the media theories of Marshall McLuhan and Umberto Eco.
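
Lutz’s 1959 experiment is simple enough to reconstruct in outline: he drew sixteen subjects and sixteen predicates from The Castle and had a mainframe recombine them at random into logical sentence frames. The sketch below is a loose modern approximation – word lists abridged and translated, function names invented – offered to show how little machinery the earliest stochastic literature required.

```python
import random

# Abridged, translated word lists in the spirit of Theo Lutz's 1959
# 'stochastic texts', which drew sixteen subjects and sixteen
# predicates from Kafka's The Castle.
SUBJECTS = ["COUNT", "STRANGER", "CASTLE", "VILLAGE", "MESSENGER", "DAY"]
PREDICATES = ["OPEN", "SILENT", "STRANGE", "DISTANT", "ANGRY", "LATE"]
QUANTIFIERS = ["A", "EVERY", "NO", "NOT EVERY"]

def stochastic_sentence() -> str:
    """Fill one of Lutz's logical sentence frames with random draws."""
    quantifier = random.choice(QUANTIFIERS)
    subject = random.choice(SUBJECTS)
    negation = random.choice(["", "NOT "])
    predicate = random.choice(PREDICATES)
    return f"{quantifier} {subject} IS {negation}{predicate}."

for _ in range(4):
    print(stochastic_sentence())
```

Montfort’s Megawatt recomposes Beckett by similarly permutational means; in both cases the frames stage what Bense theorised as information aesthetics, structured randomness inside a fixed combinatorial grammar.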

In the post-traumatic years of 1950s Germany, inquiry into the possibility of machine intelligence provided a way to rethink the rationalised forms of modern life in terms of intervention in the public sphere beyond the industrial goals of simulation and automation. The goal of such reparative work undertaken by Bense and the Stuttgart School was not to model human intelligence but to create new forms of thinking and communicating through structured randomness, algorithmic patterning, and poetic logic, a vision rooted in a German post-war context ‘shaped by a desire to rebuild a world in ruins, to reconstitute more democratic and pluralistic ways of thinking from the fragments left behind by fascism’. This critical alternative to the US-centric history of AI is also an alternative to the Frankfurt School-centric history of critical theory, often characterised as demonising the role of the sciences in the instrumentalisation (or mathematisation) of the natural universe underpinning administered life in neoliberal states. Bense’s transdisciplinary team were fired by the conviction that the creative and the computational were not in sneering opposition but complementary modes of inquiry. In key demonstrations of the concrete thinking of Bense’s informational aesthetics, Holt shows how the reparative logic of the Stuttgart School widens the opportunity for political engagement that the redemptive aesthetic of the Frankfurt School had reduced to the negations of high modernism, thus relying, as Bense saw it, on a one-dimensional account of Enlightenment rationalism evident in the positivism dispute. The historical irony that carries this reparative project, namely the search for democratising potentials in technology by the recently fascist German state, ‘may be worth documenting now for other reasons’, Holt remarks with a flinty glance at the technofascism lurking in the present. The experimental and interdisciplinary focus of Bense’s school exposed the mechanics behind the circulation of messages in public communication and opinion formation in the conviction that only a scientific grasp of communication could restore trust in the public sphere. Analysing the autonomy of communicative phenomena by breaking down and recombining the language that had been riddled with Nazi ideology was a crucial step towards reconfiguring rationality through the aesthetic domain. And the work of reconciling the traditions of logic and poetry in the shadow of fascism passed through the stochastic and informational sciences. Indeed, the reparative logic of Bense’s information aesthetics takes on renewed urgency amidst the digital nihilism created by the current crisis of financialised capitalism and the digital fascism stepping out of the shadows. The upheaval wrought by the AI book heist, euphemistically termed the digital disruption, is not inevitable. The interdisciplinary future predicted for literary studies and the humanities lies not in the embrace of AI products, then, but in a new configuration of the rationality of the digital public sphere by way of a critical reappropriation of its concepts and traditions.

My thanks to Professor Tanya Dalziell, Natalie Bühler, and the ALS editing team for their interest in this issue and their efforts in seeing it through to publication.

Published 22 December 2025 in Special Issue: AI and the Future of Literary Studies. Subjects: Artificial Intelligence.

Cite as: Conti, Christopher. ‘Introduction: AI and the Future of Literary Studies.’ Australian Literary Studies, Special Issue: AI and the Future of Literary Studies, 2025. doi: 10.20314/als.89c57c0c4a.

  • Christopher Conti is a Senior Lecturer at Western Sydney University and a member of the Writing and Society Research Centre.