Egregious ranking analysis?
The Research Quality Framework (RQF) was a proposal by the previous Australian federal government to introduce a set of metrics by which the research output of university departments could be measured. It was something like the Research Assessment Exercise in Britain, at least in principle (I don't know enough about either to say how alike they were in practice). The new federal government scrapped the RQF earlier this year. It's gone, dead, buried. Instead we're getting something called the Excellence in Research for Australia (ERA) Initiative, which is completely different, and much better. Personally, I think I preferred the RQF -- more interesting possibilities for backronyms there.
I don't really have an objection to this type of thing, in principle. But as everyone knows (including, I'm sure, those who devise them), performance metrics can lead to perverse incentives. The average length of time people have to wait for elective surgery would seem to be a good one for hospitals, but not if they reduce it by turning people away rather than by hiring more doctors or expanding facilities. Or, even worse, by discharging patients before they have fully recovered.
So the precise metrics used matter. And one of the ERA metrics seems to be causing a lot of concern: the ranking of journals, both international and Australian, in terms of quality. Publishing in high-quality journals scores more highly than publishing in low-quality ones, and in the end this presumably translates into more dollars. It seems fair enough on the face of it: obviously most historians would prefer to publish in high-quality journals whenever possible anyway, with or without an ERA. But who decides which journal gets what rank?
The short answer is: not historians. The longer answer is the Australian Research Council (ARC), which is the peak body in this country for distributing research grants. In the first instance they are relying on journal impact factors (a measure of how often a journal's articles are cited in other journals), which at first glance would seem to discriminate against historians, for whom monographs are a hugely important means of publishing research. Maybe there's a way of correcting for that; I don't know. Anyway, there are four ranks, ranging from C at the bottom, through B and A, to A* at the top. [Spinal Tap reference here] A* is defined as follows:
> Typically an A* journal would be one of the best in its field or subfield in which to publish and would typically cover the entire field/subfield. Virtually all papers they publish will be of a very high quality. These are journals where most of the work is important (it will really shape the field) and where researchers boast about getting accepted. Acceptance rates would typically be low and the editorial board would be dominated by field leaders, including many from top institutions.
This is supposed to represent the top 5% of all journals in a field or subfield. A is like A*, only less so, and represents the next 15%; B, the next 30%; C, the bottom 50%. I can see a danger of perverse incentives here, at least for Australian journals (international journals won't notice a couple of submissions more or less): C-ranked journals might get even fewer quality articles submitted to them, because authors will send their best work to the A*s, As and Bs first; how can a C journal then hope to climb up to B? So ranking journals in this way not only measures the quality of journals, it might actually fix them in place: a self-fulfilling metric.
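To make the mechanics concrete, here's a minimal sketch of how such a tiered ranking could work if it were done purely mechanically. The two-year impact factor formula is the standard one; everything else (the function names, and the assumption that ranks fall straight out of percentile cutoffs on a single score, rather than the ARC's mix of metrics and expert review) is my own illustration, not the ARC's published method:

```python
# A sketch only: assumes ranks are assigned by sorting journals on a single
# quality score and cutting at the 5%/20%/50% percentile marks. The real ERA
# process also involves expert review, so treat this as illustrative.

def two_year_impact_factor(citations_this_year, citable_items_prev_two_years):
    """The standard two-year impact factor: citations received this year to
    articles published in the previous two years, divided by the number of
    citable articles published in those two years."""
    return citations_this_year / citable_items_prev_two_years

def assign_ranks(journal_scores):
    """Map {journal name: quality score} to {journal name: rank}, with the
    top 5% ranked A*, the next 15% A, the next 30% B and the bottom 50% C."""
    by_score = sorted(journal_scores, key=journal_scores.get, reverse=True)
    n = len(by_score)
    ranks = {}
    for position, name in enumerate(by_score, start=1):
        fraction = position / n  # share of journals at or above this one
        if fraction <= 0.05:
            ranks[name] = "A*"
        elif fraction <= 0.20:
            ranks[name] = "A"
        elif fraction <= 0.50:
            ranks[name] = "B"
        else:
            ranks[name] = "C"
    return ranks
```

The point to notice is that the cutoffs are zero-sum: a journal can only climb to B by displacing another journal downwards, and if rank itself steers where the best articles get submitted, next round's scores will tend to confirm this round's ranks.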
At least the ARC is seeking input from historians (and indeed from all researchers in Australia, in all fields) about the proposed ranks, but what effect this will have is anyone's guess. The ARC has already extended the deadline for submissions from next week to mid-August, so they're clearly aware of the 'large interest' the journal ranks have aroused.
So, to see how the draft ranks play out in my own field (exactly what that field is is debatable, but let's call it modern British military history), I trawled through my bibliography files to compile a list of journals which have published articles useful to me over the last 25 years, discarding any which appeared only once (on the basis that they're evidently of only marginal relevance).
A*:
- Historical Journal
- Journal of Contemporary History
- Journal of Modern History

A:
- War in History

B:
- Journal of Military History
- War and Society

C:
- International History Review
- Journal of British Studies
- Twentieth Century British History
One thing leaps out: no military history journal is ranked A*. There's an A (War in History) and two Bs. This is troubling for military historians: we will be disadvantaged relative to historians working in other subfields if even our best journals are ranked low. It could be argued that some journals are too specialised to be in the top rank, but then Ajalooline Ajakiri: Tartu Ülikooli Ajaloo Osakonna Ajakiri, which I think focuses on Baltic history, is given an A* rank, alongside American Historical Review and Past & Present.
Is it really the case that military history doesn't deserve a place in the top 5%? Are the other journal ranks relatively fair? What do people think?