Gods and Monsters
Where once the gods spoke through entrails, now there’s an app. (Keane and Shapiro, ‘Deus Ex Machina’)
Since Klaus Schwab popularised the term ‘Fourth Industrial Revolution’ to describe ‘the staggering confluence of emerging technology breakthroughs, covering wide-ranging fields such as artificial intelligence (AI), robotics, the internet of things, autonomous vehicles, 3D printing, nanotechnology, biotechnology, materials science, energy storage and quantum computing’, the prospect of losing control over the global risks inherent in an evolving interconnectedness rooted in cyberdependency is brought to our attention almost daily. ‘We have yet to grasp fully the speed and breadth of this new revolution’, said Schwab in 2016, because ‘the changes are so profound that, from the perspective of human history, there has never been a time of greater promise or potential peril’ (7). Schwab was hardly a lone voice raising the alarm over the catastrophic potential represented by the ‘size, speed, and scope’ of changes demanded by the new technology revolution (8). The medium of these developments and the spectre of ‘Cybergeddon’ they carry with them – technology – has long been associated with modernity itself. In recent years, as AI has emerged from its second winter with the magical promise long invested in technology, the speed and scale Schwab spoke of have been amplified across the public sphere. It is now common to hear that ‘the development of superintelligent AI’ is ‘not just a technical challenge’ but ‘a race against time to ensure we can control what we create’ (Marr). The surpassing of human intelligence by artificial superintelligence is now regularly represented as imminent and inevitable, rushing out of the future to take its place at the apex of the evolutionary order in a technological singularity preached with evangelical fervour in Silicon Valley.
The singularity theology that develops in Silicon Valley at the end of the American century – like the popular theology that bloomed in the Mid-West at the start of it – has little time for the uncertainties of interpretation. In the wake of the commercial release of ChatGPT, noted Yuk Hui (1), an eschatological imaginary, made up of stereotypes about the machine, covered over the real novelty and significance of AI. Indeed, the theistic tropes of AI discourse are ubiquitous. We need only point to a recent New York Times opinion piece by Yuval Harari and his colleagues, who echo a familiar awe and incomprehension at the speed with which AI tools are developing advanced capabilities. Describing language (or ‘the word’) as the ‘operating system of human culture’, Harari and his colleagues reduce culture to an information machine and make the outlandish claim that large language models have ‘mastered language’ and ‘hacked’ civilization. ‘We have summoned an alien intelligence’, they warn ominously; ‘We don’t know much about it, except that it is extremely powerful and offers us bedazzling gifts but could also hack the foundations of our civilization’. Exalting the magical powers of chatbots to a level far beyond human comprehension downgrades human intelligence and undercuts political discussions about the social ramifications of AI technology ‘before they have even begun’, note Lauren Goodlad and Mathew Stone.
For Hui, the feedback loops of the eschatological imaginary are only avoided in the prolonged task of educating the public about the nature of recursive machines. In contrast to the apocalyptic pronouncements of tech sector leaders, AI programmers and developers are increasingly using a more precise vocabulary of ‘actually existing AI’ that captures the limited possibilities of large language models. The drier language of computation and quantification is catching on, despite the ongoing hype cycle of AI product releases – steered by ‘proponents of AI’ who ‘have huge incentives to minimize its known limitations’ (Larson 1) – that makes florid use of anthropomorphic tropes. In the mouths of AI developers, many of whom have proven unwilling to this point to concede the limits of large language models, the language of ‘actually existing AI’ promises to demystify the mythology that shields the tech sector from public oversight. But the educational task is, to say the least, Herculean. The anthropomorphising of machines was already ingrained in public discourse by the time Joseph Weizenbaum created the first chatbot (‘ELIZA’) in 1966. The analogising of machine processes with human faculties, or the ELIZA effect, is so embedded in popular discourse that we must continually remind ourselves that computer hardware and software have no capacity for memory, learning, reasoning, thinking, judging, etc. The anticipated ‘liberation’ from old habits of ‘mystifying machines and humanity’ requires a ‘suspension’ of ‘the anthropomorphic stereotyping of machines’ that would have to be extended indefinitely (Hui 7).
Indeed, the vagueness of the term ‘Artificial Intelligence’ (and the confusion over whether it refers to a technological object or a system of technologies) explains why the discursive field was left wide open. AI boosterism can be traced back to the moment when computers left the laboratory and entered the campus and workplace. The normalisation of digital technology in the education sector today was not inevitable but built on repeated calls for an ‘education revolution’ to keep pace with the ‘information age’. Predictions of an ‘edgeless’ educational institution without barriers to access were fuelled by enthusiasm over the possibilities of asynchronous learning in the virtual classroom. The democratising potential represented initially by the internet and later by digitally assisted ‘open’ classrooms and low-cost massive open online courses (or MOOCs) was frequently touted to rationalise the overhaul not just of pedagogical practices but of all aspects of management and administration in the sector. The cosmopolitan virtues of transparent, adaptive, and collaborative communities, the fruits of a new ‘epoch’ of education sweeping the world, would only ripen when ‘technology’ was placed ‘at the heart of this story of institutional change’ (Bradwell 8). The future was arriving fast and threatened to consign to history’s scrapheap those unable or unwilling to embrace digital tech. The 2024 World Bank executive summary of AI in education repeats the formula but picks up the pace: ‘The Artificial Intelligence revolution is transforming education at an unprecedented pace, offering innovative opportunities to personalise learning experiences, support teachers and students in their daily tasks, and optimize educational management’.
A look back at the recent achievements of EdTech goes some way to breaking the spell it has cast over the education sector. We need only remind ourselves of previous drives to ‘upgrade’, ‘optimise’, or ‘streamline’ educational processes to get perspective on the AI moment in higher education. PowerPoint, Turnitin, and Blended Learning have not brought to educational processes the speed and efficiency promised. The ill-defined concept of blended learning, which presupposed an equally vague idea of traditional learning, was designed from the start to bolster ‘the subservient relationship of higher education to industry [as] advocated by government’ (Oliver and Trigwell 21). The ‘disruption’ it posed was presented as the necessary bridge to a digital future, the same language used to describe the current chaos wreaked on assessment structures by the chatbot corruption of the student essay. ‘Consistent throughout this history of digital hysteria’, noted Neil Selwyn of the several decades prior to 2014, ‘has been a belief that new technologies herald substantial educational change, renewal and – as has become recently popular to suggest – “disruption” of traditional institutional arrangements’ (Digital Technology 9). The headline of S&P Global’s special report on education from January this year – ‘AI and education: embracing the disruption’ – is one sign that the pattern Selwyn identified in 2014 has continued to the present (Fernández et al.). The Covid-era Flipped Classroom intensified the drive to the datafication of the teaching space, producing learning analytics justified as student support but driven by ‘the logics of a national audit system imposed on higher education’ (Gourlay 1040). To be sure, the uncritical tone of AI boosterism noted by Selwyn was met by the alarmist and pessimistic doomerism of a minority; indeed, the story of organisational change and the dreaded ‘restructure’ is an academic genre all its own.
The polarising voices of boosters and doomers that bound debate over a decade ago continue to limit discussion of the benefits of digital technology. Which is to say, how we represent large language models matters, especially now that Big Tech has managed to harness the poles of debate in a new discourse of doomer-boosterism that places fear of an uncertain, dystopian future at the centre of AI marketing and its own evasive responses to calls for public oversight and regulation (Goodlad and Stone; Weiss-Blatt).
Doomer-boosterism might be dismissed as an unavoidable part of the hype cycle of AI product releases were it not for the social havoc it is wreaking. The rise of godbots – chatbots posing as oracles to dispense spiritual advice to the devout – provides a snapshot of the problem. The dangers posed by godbots are not confined to the prospect of bad actors steering fundamentalists into acts of terrorism but are symptomatic of a wider problem stemming from the pseudo-authority chatbots have attracted. If chatbots are currently acting like godbots it is because we let them – by treating the algorithm as an oracle. We often struggle to explain the novel outputs of chatbots because of the Black Box Problem arising from the complexity of artificial neural networks, but there is nothing new about the desire for certainty in the face of an uncertain future. Chatbots ‘exploit our tendency to impute divinity to inexplicable processes by speaking in certainties’, note Webb Keane and Scott Shapiro, such that ‘[t]he temptation to treat AI as connecting us to something superhuman seems irresistible’. Under conditions of stress and uncertainty, humans have long turned to magic and myth. The continuities between the ancient culture of divination and the modern culture of chatbots go unnoticed in the secular self-regard of tech culture, which may explain the hidden role of hermeneutics in data analytics (a point I’ll return to below). The mysteries of self-learning programs make them appear divine, an appearance disguising the human input they depend on; but ‘even traditional diviners didn’t take the signs at face value, they interpreted them’ (Keane and Shapiro).
Maintaining the technological myth of inevitability as the only bridge to a viable future requires constant repetition of the gospel. A deflationary, de-anthropomorphising diction is the appropriate response to the hyperventilated discourse of AI so long as it avoids the Enlightenment dichotomy between myth and reason that sucks us back into the polarising positions of doomer-boosterism. The human drive to see all around us ‘minded’ creatures like ourselves is not restricted to the sphere of human relations but characterises our interactions with pets, machines, and diverse aspects of the nonhuman universe. Anthropomorphism, in other words, is a feature of human cognition, not aberrant behaviour to be corrected with the right dose of critical thinking. The strict dichotomy between myth and reason in fact sets up a myth of its own, the myth of Reason that feeds the AI gospel and the indemnity it takes out against an uncertain future. We might congratulate ourselves for having escaped the feedback loop of the singularity (and poke fun at the ‘geek rapture’) while still failing to see our entrapment in others, like the self-fulfilling prophecies of industry or the belief that all problems are technical ones requiring technological solutions that is evident in ‘the desire of educators to realize a paradigmatic change in a few years’ time’ (Hui 7).
For all its continuities with the ancient culture of divination, AI boosterism recalls the rhetorical manoeuvres of universal history, the quintessential Enlightenment discourse born in the effort of a revolutionary avant-garde to master the acceleration processes of modernity in a normalising discourse that enacts ‘the modern tribunalization of historical lived reality’, in Odo Marquard’s description. The tribunal passing judgement on the current state of affairs – formerly convened by God – is now run by human beings who decide which other human beings ‘have to answer (as the slow human beings who resist the increasing tempo of the process of history) for the present state of affairs, being condemned to immediate pastness by the historically swift human beings who claim (with universal history’s guarantee that they are among the definitive victors in history) to be agents of the good future, already, in the present’. Marquard had in mind the catastrophes of European colonisation, though he might have been talking of the digital colonisation of the Global South. Slowing down the proceedings of a tribunal inspired by ‘acceleration conformism’ is a goal he nonetheless recommended to anyone hoping to avoid ‘a finale in which human beings are compelled to deny humanity to other human beings, and thus to become, themselves, inhuman’ (58–59). The role of literary studies in averting this finale requires a brief reminder of the vocation of the hermeneutical humanities.
Disrupting the Digital Enclosure
I suggest that ‘education and culture’, whatever else they may still be, have something to do with this delaying of the functional connections between signals and reactions to them. (Hans Blumenberg, ‘Contemporary Significance of Rhetoric.’)
If there is little time to think in the university these days, the reason is plain to any observer of the cult of speed in higher education. In the words of Stephanie Gearhart and Jonathan Chambers, ‘the university has been compromised by its uncritical acceptance of our culture’s standards of productivity, busyness, and speed upon which all activities are judged’ (1). The consequence of imposing the wider culture’s speed rules on the academy is the arraignment of the interpretive practices of the humanities to answer charges of ‘unproductivity’. Yet resistance to the culture of speed, in favour of the sure-footed practices of comprehensiveness, engagement, and inclusion, is the deliberative model by which we teach the critical thinking promised in all those university prospectuses pitched at prospective students. The different values informing the model and the prospectus create a tension lived by teacher and student alike, for the fast-talking language of utility and outcomes that is drawn from the wider culture hollows out the experience of attending to the text in the classroom and developing habits of engagement hardly cultivated elsewhere.
The relieving function of institutions more broadly is built on practices that imply a temporality at odds with the speed, lightness, and efficiency associated with technology. Even a casual assessment of institutions reveals that human beings are not made for the speed demanded by the self-appointed agents of the good future in the present. The university, like the school, has always possessed a complexity of relations that technology can never fully capture or automate. Communicating practical wisdom or the desire to learn, for example, evades the individualist focus of ‘learnification’, which reduces teachers to adjuncts and education to learning. The increasing drive to datafication and performance measures, Lesley Gourlay suggests, ‘arises from a very particular set of ideas about the university in the digital age’ that ‘centres on notions of academics and students as somewhat abstract, disembodied human subjects, removed from their social and material settings’. The increased use of digital technology to monitor and modify classroom behaviour expresses the fantasy of human transcendence in higher education and forms part of a web of ‘highly contradictory notions about the ontological status of the student, the lecturer, the text, the university and knowledge itself’ (1040). The coming developments of AI are likely to deepen the contradictions inherent in the statistical modelling and quantification of ‘the contextual layers implicit in any educational episode or moment’, as Selwyn puts it, which can only distort ‘the real-life complexities purportedly being captured’ (‘Limits of AI’ 7). Selwyn accordingly renews the case for ‘slowing down, scaling back and recalibrating current discussions around AI and education’ on the basis that the ‘urgency’ of those conversations ‘is clearly unproductive in the long run’. 
Of the ‘AI-fever’ around GenAI writing tools he notes ‘how quickly the educational debate around ChatGPT spiralled out of control, with many otherwise sober commentators reaching extreme conclusions over the transformative connotations of this technology’ (4). Slackening the pace of discussion in ways that address the ‘inequalities and injustices already arising from AI technologies’ starts in the effort of educators to ‘disconnect themselves from the apparent imperatives of AI-driven educational “transformation”’ (12).
Interrupting the feedback loops of an encroaching data absolutism is a task for which the humanities have long prepared themselves. Literary studies, with its taproot in rhetoric and hermeneutics, possesses a deep history of deliberative practices to draw on when it comes to repairing the lifeworlds damaged by the culture of speed. In a 2018 article that identified interpretation as the blind spot of AI, Dutch media theorist Florian Cramer contended that critical discussion of data analytics inevitably re-enacts the 1960s Positivism dispute that saw Theodor Adorno and Karl Popper lock horns over the logic of the social sciences. Adorno and Jürgen Habermas defended the interpretive orientation of the social sciences against Popper’s and Hans Albert’s demand for a common methodology based on the natural sciences. Popper stopped short of a purely quantitative approach when he insisted that the logic of science begins not in observation, facts, or data but in ‘the choice of interesting problems’ (in Adorno 98). In the years since der Positivismusstreit, a shift from problems to data has occurred that represents ‘a much more radical positivism than either Adorno or Popper imagined’ (Cramer 34). That an ‘objective’ analytics, untouched by the biases of interpretation, does not exist has proven no obstacle to its pursuit in the AI sector. To be sure, data bias is acknowledged in the AI industry. Judging by the frequency with which the solution to algorithmic bias is said to be more algorithms, the admission has done little to dent data fundamentalism. Interpretation is in fact ‘always at work in analytics’ even ‘though it is rarely acknowledged as such’ (Cramer 34). It is there at the moment of data capture, in the hidden biases that result in the racism of database dragnets and predictive policing algorithms. The scope for interpretation may be narrower for a data operator than it is for a literary critic but in both cases the structure is hermeneutic.
The key difference is that in data analytics interpretation is a bug in the system, a failure that compromises knowledge rather than the ground that secures it. Ironically, the interpretive nature of knowledge is revealed in all the bug-fixing drives to chase it out of the system: ‘Historically, there may never have been as much interpretation going on as there is in the age of analytics, yet this paradoxically coincides with a blindness for the subjective viewpoints involved’ (Cramer 36).
If the difficulty facing Big Data is making sense of massive data sets, then data analytics faces the same structural problem the Delphic Oracle tried to solve, namely, ‘how to make sense of endless streams of (drug-induced) gibberish?’ (Cramer 23). Whatever method is used to translate the gibberish will determine the outcome. As priests succeeded the oracles down the centuries, and philologists the priests, literary studies emerged in the nineteenth century as secularised hermeneutics, with a counterpart in psychoanalysis – ‘the close reading of the gibberish captured from a patient’s subconscious’ – giving its interpretive practices the allure of medical authority as applied science (24). But with the arrival of Big Data, intelligence agencies and investment banks turned analysis into analytics, bringing sense to the reams of data by shifting from syntax to pragmatics. Analytics is limited to the algorithm, which ‘changes the perspective on the gibberish’ from ‘a narrative in need of exegesis’ to ‘a data set in need of statistics’ (24). The shift from interpretation to analytics proceeded on the assumption that all data is quantitative, giving the false sheen of objectivity to the view adopted by the ruling authority. The growing territory analytics captured from interpretation raises the spectre of data absolutism. The singularity predicted at the turn of the century is in this sense already here, Cramer suggests sardonically, only its arrival does not coincide with more intelligent machines but with less intelligent humans. Ultimately, it requires society to dumb itself down, ‘because the difficulties of making sense of information that is so easy to capture will still remain’ (38).
The crapularity is Cramer’s sardonic term for the present state of things under the surveillant, extractive, and exploitative practices of digital capitalism, in which automation leads to job losses and lethal accidents, cybercrime makes up the world’s third largest economy, and AI empowers antisocial activities that promote racism, sexism, data colonialism, etc. The Internet of Shit, a Twitter account spoofing the utopian promise of The Internet of Things by cataloguing the stupidity of all our smart devices, provided weather reports of life in the crapularity in the years before Cory Doctorow announced ‘the Great Enshittening’ of the Internet and ‘the accelerating decay of things that matter to us’. The ‘Enshittocene’ ushered in by the unchecked power of digital platforms is not the product of ‘the Great Forces of History’ but ‘a material phenomenon, like a disease’ whose cure lies in ‘antitrust efforts, regulation, unionisation, and data portability proposals’ (Doctorow 1). In the current context, there seems little point worrying about whether our analyses are Marxist, post-Marxist (technofeudalism), feminist (digital patriarchy), or post-colonial (digital colonialism) when the distinctions between them are boiled down to the same positivist dismissal as speculative and ideological. Our various accounts of fascism might be good ones, but they won’t make a dent in a digital positivism that hoovers them up like grist to the large language mill.
The focus on ‘techniques’ and ‘methodologies’ in the various schools of criticism, in other words, will count for little without a critical revival of the legacy of humanism, Cramer suggests, for subjectivity gains new significance in the crapularity not as metaphysics but as critique of (digital) positivism. His analogy of data analytics to the oracle culture of the Ancient Greeks is made in defence of the hermeneutic model of knowledge that stretches back from the secularised discourses of the humanities to their precursors in medieval theology, ancient philosophy, and rhetoric. Retrieving the legacy of humanism and its concepts of judgement, agency, responsibility, transcendence, personhood, etc. requires a map that marks the spots where they are buried by idealised concepts of the ratio. Gourlay captures the current state of the crapularity in the classroom: ‘Utopian desires for extended human agency, untrammelled by the confines of embodiment, time and materiality, sit alongside increasingly prevalent digitally mediated regimes of surveillance and control in university settings’ (1040). Critical hermeneutics interrupts these regimes by temporalising and embodying the subject of knowledge that AI de-temporalises and disembodies. Its goal is the integration of technology into daily life, a goal opposed to the integration of life into technology. Despite the ongoing illusion created by the ‘myth of frictionless knowing’ (Goodlad and Stone), life in the crapularity cannot operate at the speeds associated with technological breakthroughs. The digital technologies ‘entering our schools and classrooms’, notes Selwyn, ‘are far-removed (if not totally distinct) from the speculative forms of AI that often feature in popular discussions of how “sentient” forms of AI might soon replace teachers, render schools obsolete, and even do away with the need for humans to learn things for themselves’ (‘Limits of AI’ 5).
By the same token, the traditions of the humanities cannot be transmitted outside the digitalisation of social and cultural space, as is clear from the fact that scholarly engagement in that space has stimulated new lines of inquiry and appropriations of intellectual tradition. The critical encounter between the humanities and AI cuts both ways, such that the orienting concepts of the humanities are tested on the new phenomena emerging from computer science and software engineering. To think with AI and not simply about it, Hannes Bajohr suggests, means searching for the ‘concepts, frameworks, and metaphors AI can provide us with that can be used to reflect productively back onto the humanities themselves’, a task that entails both ‘observing how traditional ideas clash (or mesh) with current advances in information technology’ and the ‘potential revision’ of ‘humanistic concepts as objects of inquiry’ ‘in light of the questions raised by Critical AI Studies’ (12).
Scholars like Bajohr have turned to Hans Blumenberg’s early thinking about technology at this juncture to help steer a passage through the mythic forcefield of AI discourse. By reminding us that we have lived among the opaque processes of technology for some time, without fully integrating them into consciousness, Blumenberg indicated the possibility of sustainable digital lifeworlds in his 1963 essay on technology (or ‘technization’) and the lifeworld. The increasing abundance of technical devices in everyday life that are neither fully grasped nor reflected upon is a sign of technization, or the acceleration of historical processes widely regarded as progress. ‘Machines can help us skip levels of consciousness, and we often have to respond to the overexertion of objective demand by automatizing ourselves – for example, by using formulas that we do not fully grasp’, observed Blumenberg in another essay; ‘Thus, our consciousness is “bypassed” by a set of behaviours and actions that result from the inherent laws of our areas of life, which are objectivized and become autonomous, and are constantly forcing themselves on us’ (Blumenberg Reader 42). Blumenberg acknowledges the sense of disorientation and helplessness caused by rapid innovation (or technization) without demonising or deifying technology. In doing so, he provides us with clues as to how the chatbot can go back in the box, as it were, as part of the infrastructure of our digital lifeworlds.
Audrey Borowski extrapolates this line of thinking from Blumenberg’s defence of nonconceptual thought as a way of drawing the limits of algorithmic systems. Digital algorithms, she points out, have provided us with a structure that is closer to myth than reason insofar as they offer nonconceptual means for navigating an uncertain world that always escapes the reach of the concept: ‘The simulations provided by AI and various digital tools are in this respect perfectly consistent with a long history of measures adopted to mitigate contingency, even if those measures preclude a proper comprehension of the world’ (249). The ubiquity of algorithmic systems does not just provide orientation inside our digital lifeworlds, however, but profoundly alters the reality we experience by creating recursive feedback loops. ‘Data is extracted from and then read back to the individual in self-reinforcing loops that tend to homogenise our experiences and make forms of critique or resistance difficult’, as Borowski puts it, leading to the ‘enclosure of the digital on itself’ in ways ‘anathema to the cultivation of a public space’ and conducive to ‘the radicalisation of opinion, the negation of common experience, and the effacement of common values and reference points’ (250–51). To dismiss the continuities between myth and data analytics by regarding the first as irrational and the second as purely objective is risky business, for while digital lifeworlds are no more artificial than the mental constructs that oriented us in the past, they pose grave risks once they ‘are literalised or taken for reality’ (250–51). In fact, the drive to eliminate contingency through data in the expanding algorithmic world before us threatens to usher in an absolutist reality that undermines the future it seeks to secure.
As a result, ‘our lives are increasingly reduced to homogenised data sets and modulated according to “stimuli” and their “reflex responses,” absolving us from any need to interpret, assess, or critique’ (250).
Rhetoric is not the shortest way between two points, Blumenberg reminds us, but the route back from the precipitous paths of direct action opened up by technological dependence. Amidst the speed of digital transformation, the deliberative practices of literary studies and the humanities stand as guardians of the lifeworld (or the temporalising and embodied truths of subjective life) by exposing the need to interpret, assess, and critique.
Slow Reading
But the real achievement of our slow close reading, drawing on the semantic and syntactic contexts, is to understand clearly and, in detail, just how literary prose can create a complex and layered world. (David Greenham, Close Reading)
The spread of large language models across the spectrum of digital communications has seen renewed calls for the return of rhetoric. In view of the uncritical posture towards AI currently urged upon educators, the new rhetorical rationale for literary studies represents an opportunity and a danger. The multiple meanings, mental activities, and evaluative skills huddled under the umbrella term ‘close reading’ can never be simply instrumentalised as a tool and put in the hands of fee-paying students. The belief that it can stands behind proposals to merge literary studies with communication studies or digital media studies. We are now accustomed to making the case that the practices of reflection and engagement essential to literary studies produce employable graduates, even as we know that the goal of these practices is not employability, but imaginative world-disclosure; that is, unlocking the truth content stored in the figurative potentials of language and narrative, and released in criticism.
Close reading has long been better understood in the demonstration than in the concept. Disagreement persists amongst literary scholars over the origins of the practices designated by the term, whether it refers to the cultivation of sensibility or to creative knowledge production, whether it implies reactionary or radical politics, etc. The dearth of convincing answers to these questions, not to mention the longevity of close reading as a technique and the ubiquity of its scholarly practice, suggests its non-ideological character. The vagueness of the term can by now be appreciated for signifying a diversity of creative, critical, and cognitive practices (reading, writing, co-writing, attending, judging, attention-giving, quoting, etc.) that, while methodical, never quite resolve into methodology. The interwar practical critics led by I. A. Richards and the postwar American New Critics were not even trying to devise a method of reading; their goal was rather to establish the judgement of literature on more rigorous grounds (Guillory viii). The list of literary, rhetorical, and linguistic techniques available to the close reader is for this reason virtually endless. If hermeneutics seeks to inhabit conceptual frameworks and aesthetic objects, then close reading lays the groundwork; and it does so by showing the work of reading in ways that support interpretation and open it up to inspection and contest (Guillory 59). Both close reading and the hermeneutic modes it supports require a mental and physical effort antipathetic to the frictionless knowing of machine learning (Goodlad and Stone). The difficulty of defining it in procedural terms explains why its demonstrative modelling in the mentor-student relationship remains the best way to learn it, despite ongoing efforts to replace it with ‘flexible learning’. This truth has emerged again in reassessments of close reading as a craft that resembles certain features of the aesthetic object it interprets.1
It is no coincidence that the reassessment of the place of close reading in literary studies is occurring at a time when rapid advancements in digital technology are threatening the reflective and attentional practices it requires with extinction. The current interest in close reading unfolds against the backdrop of a wider cultural anxiety over what the new technologies are doing to us and a suspicion that we are all trapped in a perverse social experiment that sees us getting dumber as our machines get smarter. The disappearance of the long novel from literary courses, on the assumption that today’s students are either unwilling or unable to engage in the kinds of immersive reading practices required of ‘the big read’, is a sign of the times. In a culture overcrowded with digital products competing for their attention, today’s time-poor students must weigh up the benefits of taking literary courses that have traditionally obliged them to devote many hours to digesting novels like Eliot’s Middlemarch or James’s The Portrait of a Lady. A brief look at the material conditions that enabled ‘the big read’, and at how it came to be prized in the institutional context as paradigmatic, is useful here. John Guillory has given us some good reasons to avoid the reductive comparison often made at this point between the immersive attention required of ‘the big read’ and the distracted forms of attention associated with digital forms of information and entertainment like video gaming. The fact that the video game engages users with forms of immersive attention and narrative indicates that the basis of comparison between the two forms is ‘medial’ and not attentional. The multimedia video game and the monomedial novel or poem are not alternative delivery systems of narrative but incommensurable forms whose uniqueness demands different kinds of social valuing.
Once upon a time, a novel by Jane Austen or George Eliot was deemed a dangerous source of distraction from more serious matters addressed by other literary forms like the sermon or the social history. However, ‘close reading emerged as a technique out of a paradoxical media situation’, Guillory reminds us, ‘in which the very forms of writing that could be regarded in earlier historical periods as sources of entertainment and therefore distraction – poems, novels, plays – were dialectically repositioned as demanding the most strenuous forms of attention’ (76).
The disciplinary practice of close reading and its emphasis on a new practice of attention-giving developed alongside the growing massification of the competing cultural forms of film, television and radio, and emerged, as Guillory has it, as ‘a cultural development responding to the latest technological modernity, the diffusion of the new media, along with the proliferation of writing itself in new popular as well as bureaucratic forms’ (76). In this new condition of writing, those literary works capable of attracting close reading stood the best chance of survival in the increasingly transient world of mass cultural products. Close reading acted in this context as a kind of sorting mechanism or ‘means of producing rarity in response to the condition of superabundant writing’ (78). Literary scholars had good reason for not trusting the market as the ultimate arbiter of literary value; and their efforts were later complemented by the growth of literary prizes that improved an author’s chances of survival under the conditions of the rapid turnover of mass goods. But scholars have often mystified their own participation in the cultural sorting function by giving close reading practices a certain elitist or esoteric appeal, one tinged with religious meaning by the New Critics. This often resulted in the sort of reductive comparisons between the products of high and low culture typical of the Great Divide (Andreas Huyssen), as well as periodic drives to purge close reading practice of the ideological viruses it has been alleged to carry.
The reconsideration of close reading currently underway allows us to see the emergence of distant reading as a dialectical response to the limits of close reading as a form of producing literary value or scarcity. Both perform the function of nonmarket sorting, says Guillory, close reading via its selective practices of analysis and distant reading via the category of genre that sheds light on the hitherto unnoticed practices of canon formation buried in vast troves of digital libraries. Certain AI tools have already been integrated into literary study since the computational approach of distant reading was developed in the early 2000s (Long and So). The pattern matching of distant reading is already algorithmic, though the textual patterns it unearths from vast textual corpora do not interpret themselves, Guillory reminds us (58). Here again, the claim that computer modelling represents objects and meanings ‘objectively’ has drawn the charge of data positivism.
There are many other forms of reading beyond the close and distant forms that flourish outside the institutional context. Katherine Hayles suggested that the skimming and scanning typical of online reading or ‘hyperreading’, as well as the forms of machine reading used in distant reading, should be regarded as equal in value to close reading. Hayles and Yves Citton have suggested that the humanities should include these alternative forms of reading alongside close reading in a pluralistic approach that places more emphasis on an ecology of attention (and less on an economy of attention based on quantification); but the practical constraint of giving equal time in the classroom to all these forms means the approach has not taken hold. We do not uncover the different social value of these alternative forms of reading by regarding them as equal in value. Guillory urges us instead toward an appreciation of the incommensurate social value of these forms of reading. Close reading is not simply immersive reading; in fact, it disrupts immersive attention to initiate its esoteric techniques, which is why it is unlikely to survive outside a disciplinary setting.
Wary of the quasi-religious formulations of the New Critics, Guillory describes the origin of close reading as an effort to ‘de-industrialise mass literacy’ by orienting the technique of reading towards a bodily technē, and thus ‘away’ from technology toward something close and slow (80). Distant reading was conceived as an exercise in de-mystifying close reading with quantitative analysis; but that should not spell the end of close reading. If, as Guillory contends, the history of close reading has been driven by the twin imperatives of de-theologisation and de-industrialisation, then it is no surprise that close reading is back on the agenda now, when AI has created new gods and monsters and automated writing has delivered vast ‘new quantities of industrialised textual “content.”’ The question of the fate of reading in an age of mass literacy is before us again, though for us the ‘mass’ is splintered ‘into thousands of sites and modes of reading and writing’ and dispersed across networks. Where machine reading engages the massification of text production to produce knowledge about texts we would never otherwise read, ‘human reading will always be limited by the constraints of the human body’ (82), which is why it is so slow. And close reading slows down this process further still to engage in the reflective processes that show the work of reading. ‘Slow’ here does not refer to how quickly our eyes run across the page or screen, but to the techniques of close reading that require reflection. For this reason, Guillory describes close reading as the disciplinary analogue of the countercultural slow movement (83).
The possibilities large language models present to literary studies and the humanities should be taken up in the awareness that technology is inherently political and not just a tool. Given the long-term goal of AI research to design a computer with human-like intelligence, the analogising of human and machine capacities is all but unavoidable; but the invidious comparison between human and machine intelligence nonetheless recurs with a special potency in a positivistic culture that downgrades the humanities and confuses understanding with knowledge. The ‘kind of knowledge’ that ‘understands that something is so because it understands that it has come about so’ is historical, not scientific, Hans-Georg Gadamer reminds us (5). Indeed, understanding is neither method nor knowledge but encompasses both. No amount of computational power can yield the act of understanding, because understanding is not a discrete cognitive activity or form of information processing (Wang 136). It is not a rule-bound procedure at all but a practical way of coping with the world. Its historical character is AI’s missing algorithm.
Footnotes
1. See the close reading archive compiled by Scott Newstok at www.closereadingarchive.org.