‘It’s Best to Leave This Constructive Ambiguity in Place!’: The Evaluation of Research in Literary Studies

Maggie Nolan, Agata Mrva-Montoya and Rebekah Ward

Abstract

Despite recognition that the use of journal rankings in research assessment is problematic, they are implicitly or explicitly used by institutions to evaluate individual researchers. This essay reports on a study we undertook on behalf of the Australian University Heads of English (AUHE) investigating research assessment policies within the field of English, and their impact on academics’ publishing strategies and careers. After an initial online questionnaire, we conducted follow-up interviews with twenty-seven Australian literary studies academics from a range of institutions and at varying academic levels. Given widespread scepticism about the role of journal rankings in measuring quality, we asked these academics how they think literary studies can and should be evaluated. What we discovered was a broad and rich range of responses to this challenging question, as well as the various creative ways literary studies academics negotiate questions of value in relation to institutional priorities and modes of evaluation. This paper suggests that broadening conceptions of value may be an important strategic response to the current institutional context in Australia.

In a rather disheartening opening to his chapter in Ronan McDonald’s edited collection, The Values of Literary Studies, Derek Attridge writes:

The establishment of a career structure for huge numbers of university employees dependent upon publication of ‘research’ in the humanities, and the modelling of that research upon the sciences, has resulted in a massive increase in the number of articles, reviews, and notes appearing annually in journals, edited collections, annotated editions, online comments and academic talks. The introduction in certain countries of national assessments of the research produced in specific subjects, on which crucial funding decisions depend, has further increased the quantity of what we have learned to call ‘outputs’… Given the vast resources, institutional and individual, now being devoted to this global activity, the question of value becomes unavoidable. (250)

Attridge’s contribution to the collection has a specific focus on the nature and value of the literary experience and he is quite clear on what he is not trying to do:

I am not attempting to address the huge question of the deformation produced by the widespread imposition of the scientific model or the overproduction resulting from the professional demands of promotion and ranking. To counter these trends would require a strategy for bringing about a significant shift in the governing culture of tertiary education and the bodies that fund and oversee it. I’m not able to come up with such a strategy, other than opposing that culture at every opportunity and encouraging others to do so … And I have to admit that from the point of view of a supervisor of PhD research and the mentor of younger colleagues, I have little option but to make the best, and to urge others to make the best, of the unpropitious circumstances that we find ourselves working within. (251)

This article attempts to address this very question. It does so by drawing upon interviews conducted in 2022 as part of a project with the Australian University Heads of English (AUHE), a peak body comprising academics from more than thirty universities. Using an online questionnaire, this project sought to investigate how journal rankings are being used by tertiary institutions within the discipline of English, and how they impact English academics at various institutions and career stages (Mrva-Montoya et al., forthcoming). We then undertook follow-up interviews with twenty-three respondents to expand on the issues that were emerging in the study. Respondents represented staff at every level, including early career researchers, teaching-focused and research-only staff, and continuing and casual academics, from universities across the country representing the Group of Eight (GO8), Innovative Research Universities (IRU), the Australian Technology Network (ATN), the Regional Universities Network (RUN) and others. The respondents represented a diversity of subfields including Australian literary studies, children’s literature, creative writing, critical animal studies, environmental literary studies, modernism, postcolonial literary studies, romanticism, travel writing, Victorian studies and women’s literature. We are confident, therefore, that this broad range gives our data a welcome richness and diversity.

This article considers the question of the evaluation of research in literary studies. We asked interviewees if there was any value in ranked journal lists. We also asked them if and how they thought literary studies should be evaluated. This paper is not, then, investigating the value of literary studies (let alone literature!) although that is indeed an important and related question that has been addressed elsewhere, including in this special issue (see also McDonald). Rather, it explores how literary studies scholars conceive of, articulate and respond to the challenge of how research in literary studies should be evaluated in an academic culture of relentless and seemingly unavoidable assessment that seems far removed from the articulation of any values at all.

This article, then, is really about the experiences of literary studies academics in Australia as they attempt to understand, accommodate, negotiate and/or resist what Guy Redden has called ‘technologies of performance evaluation’. We have deliberately kept the analysis light throughout to centre those voices. Our hope is that literary studies scholars find in this article a sense of solidarity with their colleagues, and a capacity to challenge or at least complicate the legitimacy of systems of performance evaluation. We also hope to contribute to the ongoing dialogue on literary value and facilitate alternative perspectives. More broadly, we hope to show literary scholars’ collegiality, humour and dedication to the field and their students in the face of the expansion of metrics.

Most interviewees were aware that there is no agreed-upon definition of quality and recognised that different academics grounded their research and publishing strategies in different values. It is worth remembering that the data emerged out of relatively informal interviews and the comments are impressionistic rather than highly theorised. While the participants have thought deeply about the value of the discipline, this is not necessarily their area of research expertise. These scholars expressed a range of views dependent upon, but not reducible to, their specific context. None of the participants are identified. Where we think it is relevant, we include their level and the type of institution they work in. Academic level, however, is not always a good indication of either experience or contribution to the field.

The interviews took place in the context of shrinking enrolments, the Job-ready Graduates package, and the vetoing of humanities grants by the Federal Education Minister, and just before the Sheil review of the Australian Research Council was published in April 2023, a review that made specific reference to the danger of data-driven metrics (Sheil et al.). As one senior lecturer at a GO8 university said: ‘Our discipline feels fragile. We’re being told our enrolments are falling, we’re not sure if we’ll replace some of our staff. Literary studies feels under threat in many ways, not only through ignoring what we do, and not valuing it by including it in these metrics ... Will we exist? It’s so hard to fight back, and it’s a bit scary.’

This article seeks to draw out similarities and resonances in the data. In this sense, we are doing something similar to Matthew Allen and Jennifer Mae Hamilton’s 2022 essay in the Sydney Review of Books, ‘Taking out Time’. Like them, ‘[t]he story we tell below about the struggle to measure and value academic work illustrates not merely the plight of our university or the university sector in general, but also the broader challenges of creative and intellectual labour under surveillance capitalism and its obsession with innovation and metrics.’ A number of themes emerged from the interviews, and we elaborate on these in the sections that follow. Overall, our participants were pragmatic. They perceived the need to engage with evaluation processes, even if they did not like them, and most tried to find some form of value in them. Most were aware of the potential of such forms of research evaluation to distort their agendas but tried to deal with this as strategically as they could. All participants found it galling to be subjected to measurements by people who did not know or care about the discipline, and wanted some input into the modes of evaluation that determined how their institutions assessed their research. None spoke of value in terms of time and labour, although it is clear from responses to other questions that all our respondents were working well beyond the hours for which they were paid.

Most participants supported the idea of peer review as a form of evaluation, with a few important reservations, and there was a strong emphasis on the value of working with a community of scholars. For some, the opportunity to narrate one’s own value was a key component of professional autonomy. Many considered the value of literary studies to those outside the academy, including its contribution to the national conversation, which rankings and other metrics all but ignore. Engagement and impact emerge as something universities pay lip service to but which are frequently at odds with quality measures that use journal rankings as a proxy. Finally, and perhaps most intriguingly, there was a sense of both knowing and unknowing in evaluating literary studies, frequently framed as a ‘constructive ambiguity’, as our title suggests. We argue that this commitment to uncertainty – the unknown – and the need to continue to make meaning and engage in critical dialogue in the face of it, exemplifies what the field of literary studies has to offer.

‘I Know Why Rankings Are There’: Accommodating the Current Landscape

Given this context, it is not surprising that many of the scholars we interviewed seemed to accept, or be resigned to, rankings as an inevitable part of the academic landscape. While few whole-heartedly endorsed such ‘technologies of performance evaluation: those used to measure and fund research quality’ (Redden), there was an openness to the potential that such tools, if used appropriately, could be of some value to the discipline. This sits in stark contrast with earlier critiques of rankings when they first appeared on the Australian scholarly landscape a little over a decade ago, when they were seen as part of the encroachment of a broader neoliberal culture that incorporated forms of compliance, auditing and performance improvement (see Redden; Cooper and Poletti). It now seems that this culture is thoroughly ingrained in academic modes of self-governance in ways that have redefined academic work in Australia (see, for example, Genoni and Haddow). Even as they understood the historical contingency, and therefore contestability, of these modes of evaluation, most were more concerned with making forms of evaluation more consultative than with challenging the system, including the value system, that underpins them.

Some participants suspected there might be some value to lists, but only if they were produced collegially, and used not as a measure of quality but as a guide, although the ARC’s journal ranking exercise demonstrates just how hard it is to control how technologies of evaluation are mobilised. Some were looking for the discipline to produce a list of quality journals, mostly motivated by a desire to avoid being subjected to existing management-driven institutional rankings systems. In this sense, some held out hope that academics could wrest control of these kinds of tools for their own benefit, in ways that might be supportive rather than punitive.

When asked whether journal rankings have any value, participants gave a range of responses, from welcoming some form of journal rankings to wanting to see them disposed of altogether. As a senior lecturer at a research-intensive university said:

I don’t have a problem with rankings, per se. I do think that we work in a sector where there’s always going to be pressure to be able to define what excellence is. And that’s sometimes going to be assessed by people who are not in our discipline … some kind of ranking is useful for that purpose. I think Scimago is woefully inadequate … the fact that that ancient ERA [Excellence in Research for Australia] journal list still circulates suggests that the ways that research is funded require some measure.

Another participant, a senior lecturer at a GO8, also saw value in journal rankings:

I wish there was at least some kind of ranking that the university was using in terms of either promotions or, I don’t know, [study leave] applications … not just for myself as an applicant, but I imagine for the committees. It’s also really hard to judge applications that come from outside your discipline.

Likewise, a mid-career researcher at a regional university was open to rankings with a proviso:

[Rankings] provide people with an idea of where a discipline is going, or where the opportunities are, and where you’re going to get a particular kind of audience and a particular kind of reader for your work, and how your work is going to be valued by others. So I do think that there’s value in them … it becomes counterproductive when decisions about where to publish become the main game.

For another senior lecturer, lists of journals can be valuable, but not the rankings attached to them,

because of the way that they’re ranked ... metrics don’t work for the Humanities, we’ve got to think of another method of evaluation, but it’s never been implemented. The qualitative factors that you might measure with a humanities journal, it’s harder to collect that data … if it’s easier to collect the numbers, then that’s where it goes. Because of that, I don’t see that there’s much value in ranking lists, because the people who use them don’t understand how they’re created, and therefore they use them improperly.

In addition to being helpful to one’s own career, these ranking exercises can reveal how external parties perceive research work in various ways, and participants spoke of their attempts to strike a balance in the current landscape. For one professor and experienced administrator, journal rankings:

can teach us things about our work, and that the world around us sees a value in our work. And it’s not the same as we might see in our own work. And I think we’re arrogant if we don’t take both into consideration, right? So when the system is telling me to aim for stuff that is of scholarly repute, okay, I’ll happily factor scholarly repute into my decisions. When the system is telling me to aim for stuff that reaches beyond academia and has an impact more broadly, proudly, I was on that train already ... And when the system tells me to emphasise research income, I can see that my employer is going to bleed if I ignore that … my job security improves if I work with them on that … I try to find a balanced way to do what they want without sacrificing what I want to offer.

Another professor, who had held senior roles in research administration at a GO8, made similar observations:

I think they’re quite useful. Because they get people to think differently … if that’s a process that gets an academic out of their normal way of thinking, that can be very productive. It can actually help people to change their ways of doing things, because what I saw at that level of being a kind of big picture research person within the school, is that people do a little bit opt for the easy thing. Now the flip side of this, the more cynical side now that I’ve gone into this kind of research space, is that there is a sense that these places are very exclusive and they have no interest … I found, having tried that, that you do get shut down very quickly. It can be very damaging to your confidence when you get a report saying, I don’t understand this, or criticising and being snide. I think those two things have got to be balanced.

Our participants seemed acutely aware that the increasing use of metrics can lead to ‘self-fulfilling prophecies’ (Siler and Larivière) that impact some fields of study (Genoni and Haddow; Mrva-Montoya and Luca). As a junior academic noted:

I think anytime that we’re creating this situation of rankings, it becomes self-perpetuating. And it makes it much easier to articulate the impact in some ways. But that comes with the risk and detriment of shaping and determining research and research trajectories, in particular, on the basis of that estimation of quality.

Other respondents spoke about how rankings were explicitly linked to funding and thus had little inherent value. A professor at a RUN university commented:

Money is never just given instantly, equitably across the board. The moment you start having to divide up who gets what amount, you need a league table and to come up with a league table you need metrics and so on. At the end of the day, you need to start assigning numbers to things. So I know why they’re there. But in our area and so long as the ERA and other instruments of authority are attached to the idea that some disciplines just don’t rely on citations, they are about impact and you need to peer assess, so long as that still holds cachet, we should be absolutely resisting any application of quartile rankings or any kind of ranking to our areas. Admittedly, an assessment exercise takes place. People still need some kind of sense themselves of what are the good journals and what aren’t. There is absolutely an ad hoc ranking that goes on there. But I don’t think quantifying that in terms of numbers fixes the problem of then trying to argue we are now assessing apples and oranges equitably. We can’t simply say that Q1 in languages and lit is identical to Q1 in a STEM discipline. My gut feeling is that if we can abolish them, in our disciplines at least, it will be a heavenly day.

If these journal ranking lists cannot be abolished, then ‘approaches where each field is consulted, has to be integral’, said a senior lecturer. Yet even consultation has its limitations, given the colonial origins of the discipline in Australia and its history as a vehicle for the transmission of ruling-class values. Of earlier attempts to generate lists in the field from within the discipline, one professor reflected:

As the experts in those fields, I think there was not agreement about what that could look like in our very diverse discipline, in particular, a kind of colonially produced discipline with still very substantial hierarchies notionally, that can be reproduced between locations where those versions of English studies are produced. It’s very difficult for those hierarchies not to be reproduced so that it can be taken for granted that the big journals in the UK that publish on English literature, in particular, remain the high-quality journals, incontestably where their readerships are global, where they’re discussing the canonical authors, where they have big citations because of that.

That’s reflected in a small way in the Australian context … insofar as we are a kind of postcolonial outpost of long-standing disciplinary hierarchies. It’s easy for those to still be reproduced within our disciplines. Scholars who work in the early modern period, for instance, have a very definitive idea what they think are the quality journals in the discipline, which may include no journals that will publish an article on Australian literature that could otherwise be considered very worthy … I think it’s very difficult to produce a list that will genuinely reflect the work we do in a way that’s enabling for us as a discipline rather than disabling. At the same time, I do understand that it is possible to say, this is not a quality journal, and to recognise that some of us can publish in journals that are not meeting what we would understand to be quality markers.

Ultimately, although most of our participants were concerned with quality, they felt journal rankings were too blunt an instrument to ascertain quality, and that the context and values of different research agendas needed to be considered, as a senior lecturer from a GO8 university elaborated:

I see the problems with rankings, especially because they kind of push all the articles to one journal, and there might be other journals that are more suited. And there are also a lot of interesting journals that might not get enough submissions. I think it’s just so specific to your field and what you’re currently working on, and also just what you’re trying to do … sometimes you publish in a journal that is not the top journal, or even very highly ranked because someone asks you and you want to cultivate that relationship. There’s also a sense of not putting too much pressure on people to publish into top journals all the time, but maybe thinking about, maybe the article itself is really good quality, even though the journal is not in the top five for whatever ranking you might use.

The ambivalence amongst English academics about metrics is clear in these responses: while rankings were seen as potentially valuable guides, many expressed concerns about the uses to which they could be put. For many participants, having some capacity to intervene in processes of evaluation was crucial.

‘They Tell a Story which Can Help Academics’: Using Rankings to Narrate Research Trajectories

The importance of being able to tell a story about one’s research trajectory recurred in the interviews. As a senior lecturer at a regional university said: ‘lists are really dangerous … they’re flawed, but they tell a story which can help academics.’ For another senior scholar, ‘allowing people to narrate the significance of their research is very important. There’s a sort of good thing that happens when people are allowed to be left alone to do their own thing and flourish.’ Another senior lecturer made a similar point:

it’s about how good you are at spinning a narrative. So you need to say, Oh, this is the top journal in this niche field and that’s why I picked it, which is often the case, but you kind of need to construct a narrative around that, which is good because it gives you that space to justify your choices.

Our research revealed that literary academics at research-intensive universities were less subject to journal ranking regimes, and academics at those institutions may have more freedom to craft these kinds of narratives. For those outside the GO8, the opportunity to narrate one’s own research trajectory seems more limited. One senior lecturer from a New Generation University (NGU) thought her institution should be actively assisting humanities academics in constructing such narratives:

based on a range of algorithms and individual case studies about what their work was doing and where … there’s room to interpret those and present those. And if it looks like a bit of a reach, a bit of a falsified argument … then you need to hear someone’s advice. Here’s some professional development. You’re not quite understanding the process, which is what happens when NTROs [non-traditional research outputs] are assessed.

Another senior lecturer at a regional university made a similar point, suggesting that, perhaps for scholars at regional universities, the capacity to narrate the value of one’s own work was of particular strategic importance:

I think we get into trouble when we rely too heavily on one particular model, particular set of algorithms. It is up to individual scholars to advocate for themselves. And to be able to rhetorically justify the value of their research. I think that’s important. But that needs to be backed up with as much quantitative and qualitative third-party evidence, as well … I think we just need to keep refining the processes that we have, but I’m concerned that often we get these processes imposed upon us, for reasons that sometimes have very little to do with research.

Creating a research narrative is one strategy that has been used to work with, and around, journal rankings. But discussion of the best form of research evaluation frequently came back to peer review.

‘Peer Review Is, Obviously, Really Good’: Calling on a Community of Scholars

Jonathan Tennant argues that ‘[p]eer review is one of the strongest social constructs within the self-regulated world of academia and scholarly communication’ (1). Tennant traces the history of peer review, which was ‘employed mostly to help constructively improve manuscripts by eliminating obvious flaws and gaps in reasoning and improving the rhetorical style and argumentation of articles, rather [than] for any sort of implicit or explicit gatekeeping function’ (2). Overwhelmingly, our participants saw peer review as a key form of what Thomas Haskell calls ‘collegial self-governance’ (54), and the best mode of research evaluation, even if its well-documented dangers were noted (Tennant). Even those who saw some benefit from the use of ranked journal lists thought such lists needed to be used in combination with peer review. English academics value being part of a community of scholars, and appreciate its role in ascertaining the value of research. For one professor, though, peer review is about gatekeeping. He said that it was vital that:

there is at least some kind of gatekeeping, or quality assurance procedure in place within a journal that needs to be demonstrably being applied … the same sorts of ethical and quality practices that we would expect to see from our developing researchers, we should also expect to see at the publication end and that the publishers themselves are applying those principles as well. At the end of the day, we should read it. And we should check for the markers of good practice.

Regardless of university or level, our respondents viewed peer review at its best as collegial and constructive. An early career researcher (ECR) at a GO8 university thought ‘a peer-based model is good, rather than any sort of objective criteria … that’s supposed to be a community of scholars of knowledge.’ For a senior lecturer from an ATN university, ‘Being regarded highly by peers who have read and discussed your work and who are able to assess the influence in the field … I think it’s something that the field does cooperatively.’ A junior academic believed that ‘actually asking people in the discipline how they think it should be evaluated’ would yield the best results, and a senior scholar asserted that ‘the experts in the discipline should be able to speak to how best to evaluate in terms of esteem, status of journals etc.’ Another participant, an ECR research fellow, said: ‘I would hope that [research] would be evaluated through a kind of peer review … we don’t have to work in the same periods, area, subfields, whatever, to at least be able to discern the kind of texture of one another’s research.’ A mid-career academic noted that ‘a genuine contribution to knowledge is the main factor that should be taken into consideration when literary criticism or scholarship is being evaluated’ and peer review is the best way of determining this:

If we are to have something along the lines of the ERA process where other eminent people in the field are probably the best people to judge not only on where the work appears, but the quality or contribution of that work, maybe some combination, with citations. We need to go back to the expertise of the field, to those associations, to those senior scholars to do it without a blunt instrument that’s just applied by admin staff. They all work very hard, and it’s no critique of them, but they’re being asked to evaluate something that they can’t accurately measure other than by applying those very un-useful criteria to what we do. It’s hard because I know what I need to do to impact on my field, and possibly to get a category one grant. But a lot of these measures are pushing me to do things that I don’t think are the best way to do that. I think that’s where we’re getting into a bit of a bind; we’re being asked to just follow this procedure, rather than what we know works in literary studies, or what we know that our esteemed colleagues would appreciate it. (senior lecturer, GO8)

In spite of this endorsement, quite a few interviewees hinted at issues in the peer review model, even while expressing their preference for that system over alternatives. One respondent drew upon their experience as an ERA assessor to address these:

I have appreciated that the ERA does build in in-depth qualitative assessment, that is about actually reading people’s work. It’s a radical idea that actually does work ... it is possible for senior scholars in the discipline to read people’s work at length. And under very significant time pressures. It’s quite a tough task to be relativising those assessments against other institutions, against other scholars, against other publishers with a commitment to a global view. That kind of in-depth engagement does allow some considered qualitative assessment that has to be nuanced, has to be complex, has to look at both impact, as well as insight, whatever that might mean, and engagement in debates looking for advancement of a debate, at a level that changes thinking, perhaps at a national level, perhaps at a global level … whether it’s measurable is a good question, but it certainly can be witnessed and described. (professor, GO8)

As this participant noted, witnessing and describing research is one thing, but evaluating it is another, and this does not happen in some kind of neutral and objective space, free from the politics of academia, as we will see.

‘I Hate the Arsehole We’ve Become in That System’: Issues in the Community of Scholars

For some, the community of scholars that underpins peer review can be thought of as a less benevolent force in competitive and exhausting environments, and the smallness of the field of literary studies in Australian universities is a real challenge. As a senior lecturer noted:

there has to be some aspect of [peer review] that is subjective, people who are respected as experts in the field who look at that work and say, yes, it’s good. But I think it’s extremely fraught in a small discipline, where people aren’t very good at being nice to each other, especially, for example, in ARC assessments. There are real issues with that kind of peer review, too. Australian literature, even English is a small field now … It’s just absolutely fraught with interpersonal dynamics. And that’s always going to be part of peer review.

Another lecturer pointed to the problems of the self-authorising nature of the community of scholars in relation to the evaluation of research, and the exclusionary forces that might underpin it:

It would be nice to think that peer review could offer us the depth and scope of appreciation and assessment, and a journal supporting a peer review process and being able to prove and demonstrate its peer review process would be the ultimate marker of quality. But at the same time, I’m conscious that the peer review structure leans heavily on academics who have the time and availability and who are privileged in the sense of having a stable income and able to offer unpaid service, who are not completely exhausted from overwork and workloads that are really reprehensible across the sector, who are available, essentially, to have that space and time to contribute. There is enough inequity in the sector in those terms that the peer review process is skewed in the same ways and fallible in the same ways. I think we have the same problems of expertise that you’re seeing with commercial publishing, in the sense that there is a lack of diversity in academics employed in Australia, that there is underrepresentation of marginalised voices in many senses, and that those scholarly perspectives are essential to peer review.

Even very senior scholars seemed concerned about parts of the peer review process. As one professor and senior research administrator at a GO8 said:

I think peer review is, obviously, really good. What I would like is for my research to be read, not just ‘well, he’s published in these journals’, but actually to read the articles or the books. But that’s really hard. I know that’s part of the ERA. But there’s also a little bit of defensiveness of academics to that process, because it’s very secretive. You don’t know if someone’s reviewed you. People talk a lot about that. I think it should be combination peer review, as well as looking at the quality of publishing and these rankings. I think you do need some kind of superficial measure. At the same time, it’s a bit hopeless to think that people have got time to read. That’s the problem.

Another participant expressed similar concerns:

To measure quality ... and the ERA process does require you then to quantify that within a five-star rating system, that’s quite a challenge … And for a small ... discipline like ours that feels like it’s shrinking, that process is not necessarily always positive. We can feel in Australia that we are a discipline where we are too well known to each other. It can feel like we’re under pressure from institutions, and broadly from the public discourse, as we have been to prove our worth and relevance, where we’re also asked to critique our fellows for not being of sufficient quality ... It can turn into a negative spiral, as we have seen, for instance, in the ARC assessment processes where it’s easy to be too harsh on colleagues because we’re all fighting over … what little space there is left for us … also around diversity measures, accessibility measures, around opening the discipline to new voices ... the notion of quality can really be mitigated against them as well.

A senior scholar from a dual-sector university was even more scathing about the implications of linking funding to a peer review process, at least in the context of research grants:

my biggest worry about research income as a driver, and that’s true in the ARC grant system as well as, of which I am utterly contemptuous ... I’m not boycotting the ARC but I’m really close. I just hate the arsehole we’ve become in that system. It’s all that’s meanest about ourselves. The peer review is awful. The decisions are crushing … more careers are broken than made in that system every year. It just sickens me.

Another experienced scholar, who believed they would not be able to progress in the current academy because of an unwillingness to play the game, elected to maintain a mode of questioning that probed what value is and resisted attempts to define it. Of her colleagues, she said:

what we have wanted is for literary studies scholars to determine what quality is. So instead of having a list we wanted our peers to determine that but I don’t even know if I really want that ... why do we need to know? What purpose does judging quality in a sort of quasi objective way serve? I’m not quite sure, actually. I know who I read and would always read, because I know the quality of their work. And those people would be different from the people who other people read … it comes back to what interests me, I’m interested in writing that doesn’t conform to rules and is willing to take risks. And I mean that in academic writing as well as public writing.

Value, as expressed here, is both contingent and dependent on one’s perspective, which shifts over time in response to changing circumstances. While our participants viewed peer review as a crucial form of evaluation, it is situated in an already challenging research context and its status as the gold standard for evaluation cannot be taken for granted.

‘How Do You Evaluate Contribution?’: A Multitude of Contexts and Perspectives

Many academics have a fuller sense of their contributions than the systems that measure them do. For a number of our participants, one way of measuring the value of research was on the basis of its contribution to teaching and the wider community, which are not considered to be measures of research quality in the current environment. One senior lecturer would like to see ways of measuring value broaden to capture impacts beyond peer review, journal rankings and citation measures:

I think it should also reach to how valuable students feel that article or that journal is. Everything’s online now. And it’s digital. So why can’t we also be measuring the number of times students are citing us? Because that’s impact to learning. And learning is happening every day for these students. It’s something that they’re paying for. They’re investing emotionally. And if we are saying that the best academics are operating across these three areas of research, teaching and community impact, then can we also be looking at citations from community members or in speeches and Facebook? I don’t think [current measures] really accurately capture the impact that we’re having when we publish an article.

Others spoke about value beyond the university. An experienced scholar reflected on ways to play different values off against each other in an attempt to disempower tools of research evaluation:

My strategy personally has been to try to do more public engagement stuff, like working with schools and libraries and things because maybe that type of impact will be valued. But I think in terms of the scholarly contributions, the things that we recognise as scholars, there’s a real devaluing of that.

In these responses we can also see a self-conscious concern arising ‘from a context in which literary studies is called to account in a more outward-facing value gauge’ (McDonald 3). One participant pointed to the tensions for academics trying to meet shifting evaluation frameworks:

I think research should be evaluated based on its contribution. But how do you judge that? How do you evaluate contribution? At the moment, the system is through the Q1 journals, but also the discussion that elicits in the community of scholars, and that might be through reviews or through responses to symposia. But, I mean, these are all ideas. Everyone’s just so exhausted. And there’s so little funding to pursue all of these ways of measuring or evaluating research … to have broader networks, you have to attend various different events or things organised by those networks, you have to show up. And it’s important to show up. And if you’re spreading yourself thin doing that, then that’s eating into the writing time. It’s just an ongoing challenge.

All of this is enough, as one scholar noted, to give you whiplash. An experienced academic at a regional university summed up well the dilemma of assigning value:

The evaluation of culture and cultural institutions is always going to be problematic. Because the language of economics is the language that’s most frequently used and translating cultural value into that economic value is – I am resisting saying impossible – but very, very difficult because people on either side, probably don’t want to hear what the other side is saying. The economic managers don’t want to hear that something has a value beyond its return on investment – that return on investment, whatever that return is, is always going to be a point of disagreement ... In the work that I did, there was a social return on investment, putting social in front of that, and community wellbeing becomes a part of that. But attempting to measure that and define that is difficult.

For our participants, the value of research is much more multi-faceted than current regimes of value recognise. They articulated a vision in which the value of literary studies research extended beyond the confines of journals and rankings, into the classroom and the community. Some literary scholars hope to exploit the contradiction between research quality, as it is traditionally understood through journals and citations, and the growing demand for research to be engaging and impactful, in order to open up systems of evaluation. But defining impact is also fraught, and walking this fine line is not always easy; it can overburden academics already struggling to be all things to all people.

‘Literature Is Just Not Part of Nation Building Here, Which Boggles My Mind’: The Fate of Australian-Focused Research

Literary studies plays a crucial role in thinking about the value of national storytelling. Literary scholars have long provided insights into the complex ways in which literature reflects, critiques, and shapes national narratives (see, for example, Turner; Elder). Although the federal government’s new National Cultural Policy, Revive, released early in 2023, puts storytelling at the heart of Australian national identity (Australian Government), the subfield of Australian literary studies has suffered under regimes of evaluation that threaten its capacity to critically engage with these stories. The participants recognised the vulnerability of Australian literary journals in a landscape that valued international over national publications and expressed concern for the subfield. One professor was emphatic that

national publishing interests should be respected. So, for example, top literary journals in Australia should be given status because they belong, they represent their nation. It shouldn’t just be let’s just throw everyone into an international soup, and then decide which ones ... because a lot of those journals are going to fall down. I think there should be some protection.

Another professor put it this way:

I do recognise that there is a not uncontroversial, but nevertheless, part consensual recognition that publishing with Cambridge, or Oxford is seen as a good thing, but also acknowledge that those publication houses aren’t necessarily immediately interested in Australian studies stuff, either. And the value within Australian studies gets cut off when an [Australian] publisher might publish really good work, but they’re not seen as prestige, but they still are supporting valuable work that is continuing knowledge. I find this a really vexed issue.

One participant helpfully compared the situation in Australia with the research climate in Canada in the 1960s and 1970s, where

anyone who was successful got successful in the United States. There was a lot of cultural cringe. But now, it is really not like that. And I think that you can now proudly publish with UBC Press or University of Toronto Press, especially on Canadian topics. If you’re a Canadian now, it will be almost embarrassing to publish a book on Canadian studies at a non-Canadian press. Australia kind of reminds me a lot of what Canada was like thirty years ago. And one thing that Canada did, which is very different from here, is there was high level federal policy to get Canadian content everywhere. It’s just completely shifted from thirty years ago. And one other thing is that at any Canadian University if you’re getting a degree in English, you have to take Canadian literature. And it always has just shocked me that even at [my university], you can go for four years and never take Australian literature and still get a degree in English. Literature is just not part of nation building here, which boggles my mind.

Although only a minority of our participants undertook research in Australian literary studies, almost all expressed concern for the national literature and literary tradition under systems of evaluation that valued the international over the national. As one scholar argued:

I think we all have a right to know the richness of our extraordinary history and heritage in its full diversity. If we don’t understand that, we don’t understand how we’ve got to where we are, or who we are. And also, we don’t have other ways, or alternative ways, to come to grips with the contemporary issues that confront us.

For this scholar, Australian literary studies is more than just nation-building, in the sense of constructing a national canon; it ensures that diverse Australian stories, those on the periphery of the ‘world literary system’, continue to be told, analysed and engaged with (see Osborne et al.).

‘So, There’s No … Any Easy Answer’: The Value of Not Knowing

This article opened with Derek Attridge’s view on the state of research evaluation. In this piece, Attridge focuses on the nature of ‘literary experience’, by which he means both the practice of literary composition and the reception of literary works, and its importance at both an individual and social level. Those who are familiar with Attridge’s work will know his general argument, but he restates it succinctly here: ‘the particular value of literature … lies in that event whereby closed thoughts, feelings, and ways of behaving and perceiving are opened up to that which is excluded.’ He continues: ‘The value of literature as literature, then, lies not in any predictable effects but in the continuous exploration by writers of what lies outside the limits of the knowable world and in the repeated experience of alterity by readers. These are valuable functions because a culture that is entirely enclosed within its familiar boundaries, operating with its familiar stereotypes and prejudices, is one that cannot fully foster the potential of its members’ (255–56).

Of course, Attridge is talking here about literature, not literary studies, but in considering people’s responses to the question of how research in literary studies should be evaluated, we noted an interesting tension between knowing and not knowing. In Literary Knowing and the Making of English Teachers, McLean Davies et al. argue that one of the tensions in literary studies at tertiary level is that it ‘is constantly in a state of flux – a position which may be characterised as “unstable”, or rather seen as a state of continual renewal’ (3). This state of flux may help to explain the contradictory dynamism of this simultaneous knowing and not knowing that we explore in this section.

On the one hand, many respondents insisted that literary studies scholars innately understand what quality is – it is something that academics just know, a knowing that comes from experience. On the other hand, the same scholars were hesitant to answer the question of how to evaluate research in literary studies, wanting to retain an openness to what cannot adequately be known or, as one participant articulated, to ‘keep the constructive ambiguity in place.’ Our participants maintained a simultaneous knowing and not knowing that resonates with Attridge’s perspective on the value of the literary as an encounter with the unknown.

So, for example, many of our participants insisted on knowing quality. As a junior research fellow said: ‘It’s not rocket science. I mean, these sorts of evaluations have been done all the time by lots of people all across the world. It’s just not that hard.’ Or, as another junior academic at a research-intensive university said:

[As] people producing the research and publishing, we don’t need [lists]. We understand where good places to publish are. It’s something that you just know, it’s a skill ... But it’s not that hard, either.

But these responses were frequently articulated in ambivalent ways. One experienced academic, when asked how literary studies should be evaluated, said:

I couldn’t actually quantify that at the moment. A lot of it has to do with just having been in the discipline for a long time and getting a sense of the quality of publication and research and scholarship that goes into a publication. I mean, I should be able to quantify it, because I can tell what’s a good student essay and what’s not. I probably then use similar criteria to what I would use in assessing a great essay.

Partly, this appeal to knowing is framed as a form of trust – a quality not readily appreciated in neoliberal contexts. As a senior academic said, ‘we have the expertise in our field … it’s demoralising, when you’re not even entrusted with knowing what’s a prestigious journal in your field.’ The role of trust, or the lack of it, speaks to an academic culture whose forms of performance evaluation bring extrinsic motivating factors to bear on academic work, assuming that academics do not have the desire or capacity to perform without such measures. As a research fellow said:

I think that something like a list can absolutely serve as a resource for colleagues to find their way toward a scholarly conversation within a particular field ... But the idea of a sort of rubric against which our publications are being tested for their validity… it would seem to me to undermine the faith that, in theory, an institution puts in one when it gives one a job or indeed accepts one for a PhD program.

But for another senior academic who has held administrative research positions at a research-intensive university, this knowing and unknowing is part of the difficulty of the job. As he says: ‘I find that problematic if everyone’s gonna say, well, I just know what’s good. And what’s good is what I publish in.’

Not only do scholars struggle to respond to the question of value, they are hesitant to do so because they want to retain a sense of uncertainty and openness about what constitutes value. As a senior scholar said:

the question is, again, who’s doing the valuing? And which value are we talking about at what time? And a big part of me says there is value to everything, right. And something that might not be recognised by a major journal at one point in time suddenly, is seen as something that’s incredibly valuable later on down the track. Recognising that value shifts, and that value is not neutral, is part of the response to that.

Another senior lecturer at a research-intensive university also responded tentatively:

I don’t know. Because I feel like there’s a sense you can’t really compare people’s research. Output is probably the easiest way to do it. But I’m not sure if that always works. I don’t know, I’m very suspicious about evaluating research. There are different types of research. I know some people are doing a lot of teaching. And they’re doing a lot of research into their teaching, which I think is also research, but not the research that is visible in the same sense. Yeah, so I don’t really have an answer to that question.

Our participants recognised that value was not absolute and none subscribed to a universal or unproblematic set of values that all literary studies scholars should share. On the contrary, they were acutely aware that values were historically constituted and shifted over time, making the evaluation of literary research particularly challenging. In a sense, this capacity to unsettle certainty in practices of evaluation, and to pay attention to the power relations that underpin them, may be the most important contribution that literary studies scholars are trained to make.

‘Keeping Alive the Solidarity of That Conversation’: Optimism in the Face of Evaluation

This research suggests that questions of value are very much on the minds of literary scholars in Australia. These scholars took the time to critically engage in debates about literary value while rejecting certainties about what these values should be. They also actively engage with related questions of evaluation, negotiating the tension between their own modes of evaluation and the ones to which they are subject. An experienced mid-career researcher at a regional university put it like this:

English needs a fair amount of soul searching in terms of how it’s being taught, and what is published. Not because there’s not great stuff, because every time I read any journal, I’m like, Oh, I love this. I remember why I love this subject so much ... If we’re asking the question how the research should be assessed, then we need to ask questions about what literary studies does, and what its claim is relative to other subjects. And I think part of what it does is to give us an idea of who we are, and who we might become, and that has an intrinsic social component. And also to talk to the relationship between the narratives that we say, and that the narratives that we consume, and other broader political issues. So that would be where to start is to think, well, what is it meant to do? And then how do we assess the quality?

I think what literature does is change the way we see the world and create a mobilising of transformation and change in subjects as they encounter narrative and helps us to read that deeply … I’m torn about that all the time. Like, what are we doing? The existential moment of what are we doing? Literary studies scholars are just going through the absolute hammer at the moment because – literary studies and creative writing ... these questions of research are deeply connected to a whole range of other questions.

For another professor, the capacity to critically engage with a range of texts and narratives was crucial in expanding our understanding of ourselves. Even so, for this scholar, the question of how literary studies should be valued remains open:

I’ve just been having that debate with my head of school, and my dean, and every level at the moment: what are we for? It’s a significantly difficult thing to answer ... one of the ways in which it matters, I think, is making accessible forms of cultural heritage that inform who we are, and inform how we think and inform how we understand each other in substantial ways that we’re all the poorer if we don’t understand. There’s so much to learn from the ways in which writers have explored issues that still confront us that if we leave them behind and forget them, which I think we’re in danger of doing, then we leave behind an extraordinary cultural richness and self-consciousness as a culture that we can’t get back again, that we can’t then access in other ways. We just all shrink in understanding of ourselves.

For many of our participants, there was value in being able to have the conversations that the literary enables, and this sense of participating in a conversation is linked to the values of collegiality and solidarity that literary scholars maintain in the face of frequently competitive and individualising systems of evaluation. If, as McLean Davies et al. argue, literary sociability is a key methodology for literary knowing, it is not surprising that literary scholars would turn to dialogue to open up ways of responding to the question of value. If literary criticism is anything at all, it is an invitation to conversation. For a senior lecturer, value could be seen if ‘there’s a culture of argument and critical thinking around that research, which is going to elevate it and everyone connected to it.’ A research fellow found value by looking ‘at what people are writing, how they’re positioning themselves within scholarly conversations, which I think is basically just about like saying, to what extent does this person seem to be prepared to take their work and do things with it in the world, link it up with conversations of significance?’ For many of our participants, the more people who can participate in this conversation, the better. According to a sessional academic: ‘we need as many voices as we can. And we need to try and find ways to make them sustainable in some way’, and for another, ‘scholarly communication and scholarly conversation and dialogue about research that’s been published in whichever journal, that’s important. I think that’s a way of valuing research.’ For a creative writing professor, ‘building community, a scholarly community – that is how research should be valued and evaluated.’

We close this article with observations by a widely respected professor, which emphasised the place of openness and conversation in any discussion of value, and a preparedness to not necessarily find an adequate response. Redden noted back in 2008, ‘there is no easy way for academics to stand outside. The performance mechanisms are wrapped around things researchers care about and do’. In this context, maintaining a critically engaged dialogue with colleagues about the value of what we do in the face of technocratic concerns with efficiency and productivity may be one of the most formidable forms of resistance available to us:

This question of journal ranking seems to be on a cycle. I remember going through this in about 2008. And again, in about 2016. So it’s certainly a conversation that doesn’t go away. But neither is it one that seems to be resolved at any point, either. It is keeping alive, I think, the solidarity of that conversation, if it has to be had, but not necessarily feeling like there is a solution that we can come up with that’s going to satisfy the contradictions that we’re working with, and the differences within our discipline, as well, which I think are also important to keep alive. Because that’s part of the strength of our disciplines in English. There is a danger in trying to flatten that out, as much as it might be helpful for other positions within universities to have a simple model that they can just tick things off. I don’t think that does justice to the complexity of what we try and aim for. It’s okay for me now maybe, having a longer experience of this. But for people at different stages of their career, they might just be feeling exhausted and torn and unable to meet all the expectations, mostly because there are so many of them and they don’t necessarily make sense, either individually or collectively. So even just to have that conversation might be really helpful for some people.

And so not to assume that there is something called value that we all agree with, but also recognise that we are, as researchers, trying to produce valuable work, that might be more multiply recognised or measured, which sounds like a bit of fence-sitting. I don’t mean it in that way. I think it is just to respond to and recognise the big-knotted mess that we are working within and I don’t see a journal list of rankings ironing out that complexity. I could be wrong there. But that’s just my perspective. Hasn’t got a good history, has it? I mean, looking at the past.
