How to fix peer review

The Economist:

PEER review, many boffins argue, channelling Churchill, is the worst way to ensure the quality of research, except for all the others. The system, which relies on papers being vetted by anonymous experts prior to publication, has underpinned the scientific literature for decades. It is far from perfect: studies have shown that referees, who are not paid for their services, often make a poor fist of spotting serious mistakes. It is also open to abuse, with reviewers able to derail rivals' work or pinch their ideas. But it is as good as it gets.
Or is it? Marcus Munafò, of Bristol University, believes it could be improved: by injecting a dose of subjectivity. The claim, which he and his colleagues present in a (peer-reviewed) paper just published in Nature, is odd. Science, after all, purports to be about seeking objective truth (or at least avoiding objective falsity). But it is done by scientists, who are human beings. And like other human endeavours, Dr Munafò says, it is prone to bubbles. When the academic herd stampedes to the right answer, that is fine and dandy. Less so if it rushes towards the wrong one.
To arrive at their counterintuitive conclusion, the researchers compared computer models of reviewer behaviour. Each begins with a scientist who has reached an initial opinion as to which of two opposing hypotheses is more likely to be true. The more controversial the issue, the lower his confidence. He then sends a manuscript supporting one of the hypotheses to a reviewer, who has his own prior opinion about its veracity, and who recommends either accepting or rejecting the submission. (In this simple model, journal editors are assumed to follow reviewers' advice unquestioningly, which is not always the case in practice.) The reviewer then writes and submits to the journal his own paper advocating one of the hypotheses, and the process repeats itself.
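
The loop these models iterate is simple enough to sketch. The Python below is a minimal illustration of that cycle, not the model from the Nature paper itself: the signal accuracy, the confidence range and the accept-only-when-it-agrees reviewing rule are all assumptions made for the sake of the example.

```python
import random

# Minimal sketch of the review cycle described above. Parameter values and
# the reviewing rule are illustrative assumptions, not the published model.

TRUE_HYPOTHESIS = 0    # which of the two rival hypotheses is actually correct
SIGNAL_ACCURACY = 0.6  # chance an agent's private evidence points the right way
N_ROUNDS = 1000        # submit-review-publish cycles to simulate

def draw_belief():
    """Form an initial opinion: the probability assigned to hypothesis 0.

    Confidence is drawn uniformly; the more controversial the issue,
    the closer this belief would sit to 0.5."""
    signal = TRUE_HYPOTHESIS if random.random() < SIGNAL_ACCURACY else 1 - TRUE_HYPOTHESIS
    confidence = random.uniform(0.5, 0.9)
    return confidence if signal == 0 else 1.0 - confidence

def simulate():
    author_belief = draw_belief()
    published = []  # (hypothesis advocated, was it the true one?) per accepted paper
    for _ in range(N_ROUNDS):
        # The author submits a manuscript backing whichever hypothesis
        # he currently considers more likely.
        claim = 0 if author_belief > 0.5 else 1
        # An independent reviewer with his own prior recommends acceptance
        # only when the manuscript's claim matches his own leaning.
        reviewer_belief = draw_belief()
        accept = (reviewer_belief > 0.5) == (claim == 0)
        if accept:
            # Editors follow the reviewer's advice unquestioningly.
            published.append((claim, claim == TRUE_HYPOTHESIS))
        # The reviewer now writes the next paper, and the cycle repeats.
        author_belief = reviewer_belief
    return published

if __name__ == "__main__":
    papers = simulate()
    if papers:
        correct = sum(ok for _, ok in papers)
        print(f"{len(papers)} papers published; "
              f"{correct / len(papers):.0%} back the true hypothesis")
```

Making reviewers accept only the papers that agree with their own leaning is the crudest version of the herd-prone reviewing the article describes; varying that rule is where a model like the researchers' can test whether a dose of subjectivity helps or hurts the literature's accuracy.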