On peer review (of grants)
Peer review comes in for its fair share of brickbats, but not many academics are seriously against it! Indeed, we rely on it. Peer review comes in many guises, and is applied in many ways, in many places, for many tasks. However, for all the experience we all have of peer review, there are few instances of "best practice" on which we might all agree: "that's how it should be done"!
Over on Michael Tobis's blog, a few of us had a bit of a diatribe about the way peer review is generally applied by research funding bodies: we argue that such bodies end up making decisions without all the information they could (relatively) easily obtain (albeit, probably, because that way they are protecting their reviewers from fatigue).
As it happens, right now we're considering how peer review should be applied to the assessment of datasets for publication. In the process of working up our procedures, I have been delving a little into the published literature about peer review, and some of what I found has relevance to the previous discussions ...
The UK Boden Report from 1990 to the then Advisory Board for the Research Councils (2.8 MB pdf) was a pretty interesting read (yes, seriously :-). Although it's nearly twenty years old, and predates the web, the key findings are pretty robust even now, especially the one that essentially states "there is no other game in town!" It looks like most of the recommendations have been dealt with, but apropos of the previous discussions, I was struck by this:
4.41 There is a view in the academic community that one needs to be "visible" rather than merely "good" to get support from the Councils. The general question here is: does one review the applicant or the application? Roughly half the responses to the Group concluded that undue weight on track record disadvantages the young [1]. However, the other half put the case that more attention should be put to track record since, research evidence suggests, evaluation of past performance is a good indicator of future results. They also made the point that backing "known" winners could lead to considerable savings in peer review costs.
4.42 ...the onus should be on Councils, and their committees, to gain adequate knowledge of such track record as there is and to weight this [2] with due reference to the experience of the applicant.
I can't say I've direct experience of such conversations (re weighting track records) happening in the (few) moderating panels I've been involved with ... but that may just be small-number statistics. Perhaps it happens! There are certainly new investigator schemes ...
Which brings us to this:
4.47: ... We are not unsympathetic to the idea of using interviews as part of peer review practices, since they do allow a full examination of the project and an opportunity for the applicants to respond to peers' queries. Our doubts rest on the high cost of this strategy ...
And that's the nub of the problem. Funding bodies are always trying to drive down the frictional costs of running research, and keep as much as possible of the research budget for doing research (good). That said, I think there is plenty of scope for using new technology (video conferencing, instant messaging etc) to improve the information available to peer review panels. What we also need to do is find ways of improving the external peer review itself.
That there are problems is not doubted. Amongst the vast literature on this subject, the following is as succinct as it gets: "There is little data defining the accuracy or reproducibility of peer review" (Ernst & Resch, 1994 [3]); they went on to find, in an experiment on reviewing consistency in clinical medicine, that reviewer bias had a significant impact on referee judgement, and that there was room for improvement in the fairness and consistency of peer review.
But even where the right people are doing it, and being as even-handed as possible, they simply don't have time. We need ways of making it more acceptable to spend time doing it. I hadn't heard of peer miles before, but ideas like that can't hurt. NERC pay folk to be on the peer review college, but the pay is derisory for the extra workload involved: quite clearly NERC is exploiting co-funding (and lots of it) from the referees' employers. It would be nice to see mechanisms invoked to reward good quality work in that time. Some journals give out awards for "good quality reviewing"; perhaps NERC could do that too [4], but universities would have to find ways of rating such awards in their departmental brownie points systems (or however they do internal load balancing).
Again, the Boden report has been here first:
5.4 The time spent by applicants and peers is not a cost to the Councils: it is a cost to the employing institutions [5]
5.14 ... Spending around 1-1.5% of total expenditure on peer review seems a reasonable use of resources.
My initial reaction to this was to consider that this statement could be bolstered by some metric of the success of the process in terms of quality of research delivered. Then I realised that it's mighty hard to measure the success of the research that might have been delivered had it made it through! Maybe there is a role for Bayesian statistics there ... (some have tried it for journals, e.g. Neff and Olden, 2006 [6]), a challenge for James?
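By way of illustration only (a toy of my own, not anything Neff and Olden actually did), a Beta-Binomial update is about the simplest Bayesian machinery one could point at the question: treat each funded project as a trial, "delivered good research" as a success, and see what the posterior says about the panel's hit rate. All the numbers below are made up.

```python
# Toy Beta-Binomial sketch of a funding panel's "hit rate".
# A Beta(a, b) prior updated with binomial data (successes out of
# trials) gives a Beta(a + successes, b + trials - successes)
# posterior, whose mean is computed below. Figures are invented.

def beta_posterior_mean(successes, trials, a=1.0, b=1.0):
    """Posterior mean of the success probability under a Beta(a, b) prior."""
    return (a + successes) / (a + b + trials)

# Hypothetical: 40 of 50 funded projects later judged "successful"
print(beta_posterior_mean(40, 50))  # posterior mean ~ 0.79
```

Of course, the hard part I alluded to above is exactly what this toy dodges: we never observe the outcomes of the proposals that were rejected, so the comparison we'd actually want (funded versus unfunded) needs far cleverer modelling of that missing arm.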
I wonder whether the extra costs of making the process a bit more interactive could be quantified? If they could be, and they were around 0.5%, I'd reckon that money well spent ...