May 09, 2012

I am going to write in praise of the Research Excellence Framework (REF), the successor to the Research Assessment Exercise (RAE), which currently determines the distribution of university research funding in the UK. I am going to argue that the RAE is good value and good at what it does, and I will speculate that its replacement, the REF, will likely be even better.

The main points I want to make are:-

  • The RAE is an extremely efficient way of giving out a large sum of money to support university research in the UK.
  • The RAE appears to be much cheaper than obvious alternatives.
  • The RAE discriminates sharply between institutions but is generally perceived as fair.
  • The bad consequences of the RAE are not inevitable: they are usually consequences of decisions by institutions.

Before I try to develop these arguments I feel an urge, or possibly a duty, to explain how I come to be making them, not least because I have always been openly sceptical about the value and robustness of the RAE and more aware of bad consequences than good.

It all started a couple of months ago when I decided that I fancied writing a blog. Since then I have been avidly following some of the wonderful HE blogs out there and wondering what on earth I could write that would match the quality of what I was reading. I particularly commend the blogs by Dorothy Bishop and Athene Donald, which are both interesting and provocative – in the best sense. If you are interested in looking at HE blogs you will find a good list of them on Phil Ward’s blogroll.

Both Athene and Dorothy have made me think and have given me ideas for what I would like to write about in the future. It is to Dorothy that I owe my interest in today’s subject. Let me reiterate that I think Dorothy’s blog is terrific because I am going to disagree with almost everything that she said about the REF in a post in March, in which she described the REF as ‘a monster that sucks time and money from academic institutions’.

My first thought on reading Dorothy’s post was that I should perhaps see if the REF can be defended on the basis of available data on what it gives to universities and what it costs them. Before I do that, I should declare an interest: my own career has benefited significantly from the invigoration of the UK academic job market caused by the RAE. Without the RAE I would probably still be a lecturer in Physiology in Newcastle. I can’t prove that my mobility is a consequence of the RAE, but I can’t remember a job interview in which my likely contribution to the RAE hasn’t figured.

According to data in the public domain, the RAE is an extremely efficient way of giving out a large sum of money to support university research in the UK. The 2008 RAE exercise cost the Higher Education Funding Council for England (HEFCE) about £12 million to run. The indirect costs to universities were estimated at £47 million, according to a report commissioned by HEFCE.

£60 million sounds like a lot of money but it goes a long way: over the RAE cycle about £10 billion will be allocated to universities using the 2008 RAE results. So the review costs about 0.6% of the funds given out. This is about a tenth of the proportional cost of research council peer review, which was estimated at about £196 million per year, split about 95:5 between universities and the research councils, to allocate an annual spend of around £3 billion. Not only is research council peer review more expensive, but a bigger proportion of the cost is borne by the universities. In comparison the RAE looks more like a midget that pumps money into universities and less like a monster that sucks money out.
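The arithmetic behind that comparison can be checked in a few lines, using only the figures already quoted above:

```python
# Back-of-envelope check of the cost ratios quoted in the post.
rae_cost = 12e6 + 47e6           # HEFCE running cost + indirect cost to universities
rae_allocation = 10e9            # funds allocated over the RAE cycle
rae_ratio = rae_cost / rae_allocation          # ~0.0059, i.e. about 0.6%

peer_review_cost = 196e6         # estimated annual cost of research council peer review
annual_spend = 3e9               # annual research council spend
peer_ratio = peer_review_cost / annual_spend   # ~0.065, i.e. about 6.5%

# Peer review costs roughly eleven times more per pound allocated.
cost_multiple = peer_ratio / rae_ratio
```

So the "about a tenth" claim holds up: per pound distributed, the RAE costs roughly an eleventh of what research council peer review does.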

I don’t want to get hung up on the cost of these processes, because they are not exactly comparable and anyway, the quality of the decision making is extremely important. We don’t have direct measures of the quality of either the RAE scoring process or of Research Council peer review but both processes seem to have earned a high degree of trust.

Trust in the assessment process is crucially important because the RAE delivers very uneven outcomes.

The graph here shows data from the HEFCE website: the distribution of HEFCE research grant to institutions in England in 2012, based on the outcome of the 2008 RAE. Each data point represents the grant to an institution. I have ranked the grants by size and plotted them cumulatively to show how the total grant of £1.56 billion is divided between institutions. The increase in the slope of the graph from left to right shows how unequal the distribution of funds is. At the left, the slope is zero because the bottom few institutions get nothing. Yes, nothing. At the right, the increasing separation between the institutions is measured in tens of millions of pounds. The top institution, Oxford University, gets over £130 million, about 8% of the total. The top 12 institutions share half of all the money.
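For readers who want to reproduce this kind of plot, the calculation is just a running total over ranked grants. The figures below are invented purely for illustration; the real per-institution data is on the HEFCE website:

```python
from itertools import accumulate

# Hypothetical per-institution grants in £m, ranked smallest to largest.
# These are invented numbers for illustration, not the real HEFCE figures.
grants = [0, 0, 2, 5, 8, 15, 30, 60, 100, 130]

# Running total: plotting this against rank gives the cumulative curve.
cumulative = list(accumulate(grants))

# Zero slope at the left (institutions awarded nothing), steepening to the
# right, where each step is worth tens of millions.
top_share = grants[-1] / cumulative[-1]  # fraction going to the top institution
```

Plotting `cumulative` against rank gives the shape described: flat at the left, steepest at the right.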

This very unequal distribution is widely accepted as fair because it is based on a robust evaluation process, one that places most weight on the quality of the research outputs, and one that is carried out by nominated representatives of the research communities that are being judged. The money is seen to go to the institutions that deserve it. Any radical simplification of the evaluation process, such as putting it to a vote, rather than basing it on measures of the quality of the research outputs, would risk breaking that trust.

Restricting research grants to a limited number of ‘research intensive’ universities, as has been proposed from time to time, would also be counterproductive. It is an important feature of the current system that excellent performance can be recognised and rewarded, wherever it occurs. And importantly, in every RAE the process throws up surprises. Departments in weak institutions achieve excellent results. Formerly excellent departments, in which everybody smart has left town or stopped producing, are uncovered. All this contributes to the sense that the process is fair. And the cost of inclusiveness is not high. The 65 institutions in the lower half of the distribution share less than 5% of the money. But they get the money that they earn.

A new feature of the REF is that 20% of the evaluation score will be based on a measure of the social or economic value of research, referred to as impact. This has led to widespread concern about the appropriateness and practicality of such measures, although many, including me, think that they will be very helpful in justifying the spending of public funds on university research.

Dorothy complains that the introduction of impact into the REF has also led to time-consuming meetings in Oxford and to the creation of jobs at University College London for people who will do nothing but help prepare the associated impact statements. I agree that these are diversions of time and money from the mainstream activities of the universities, but change always has a cost, and the resources ‘wasted’ in this way are small in relation to the hundreds of millions of pounds that will be brought into each of these institutions by the REF. And, of course, these wasteful activities are undertaken freely by the institutions concerned. They are not required by the REF. There are both beneficial and damaging responses to the RAE and the REF, which I would like to discuss in future posts.

To finish on a positive note, I think that the long term effect of including impact as part of the routine assessment of research will be immensely positive. I think it will reinforce the view – inside and outside our universities – that we work for the good of society. In short, I think the REF will be a good thing.