"Our greatest responsibility is to be good ancestors."

-Jonas Salk

Saturday, June 18, 2011

Is Peer Review Double Blind?

The egregious Pat Michaels makes the following claim:
In order to limit any bias caused by personal or philosophical animosity, the editor should remove your name from the paper and send it to other experts who have no apparent conflict of interest in reviewing your work. You and the reviewers should not know who each other are. This is called a “double blind” peer review.

Well, this is “the way it is supposed to be.” But in the intellectually inbred, filthy-rich world of climate science, where billions of dollars of government research money support trillions of dollars of government policy, peer review has become anything but that.

There is simply no “double blindness.” For reasons that remain mysterious, all the major climate journals leave the authors’ names on the manuscripts sent out for review.

Economists, psychologists and historians of science all tell us (and I am inclined to believe them) that we act within our rational self-interest.
Removing the double-blind restriction in such an environment is an invitation for science abuse.
Emphasis added.

I can pretty much dismiss "inbred" and "filthy-rich" as completely out of touch with reality, but I'm not sure how to make the case to someone who believes otherwise.

But what about this "double blind" thing on peer review? I am pretty sure that this erasure of authorship doesn't happen in computer science, for one. A little searching reveals this recent Nature article.

Salient points:
  • Double-blind peer review, in which both authors and referees are anonymous, is apparently much revered, if not much practised.
  • Although at least one study in the biomedical literature has suggested that double-blind peer review increases the quality of reviews, a larger study of seven medical journals [2, 3] indicated that neither authors nor editors found significant difference in the quality of comments when both referees and authors were blinded. Referees could identify at least one of the authors on about 40% of the papers, undermining the raison d'être for double-blinding. The editors at the Public Library of Science abandoned double-blind peer review because too few requested it and authors were too readily identified.
  • The double-blind approach is predicated on a culture in which manuscripts-in-progress are kept secret. This is true for the most part in the life sciences. But some physical sciences, such as high-energy physics, share preprints extensively through arXiv, an online repository. Thus, double-blind peer review is at odds with another 'force for good' in the academic world: the open sharing of information. The PRC survey found that highly competitive fields (such as neuroscience) or those with larger commercial or applied interests (such as materials science and chemical engineering) were the most enthusiastic about double-blinding, whereas fields with more of a tradition for openness (astronomy and mathematics) were decidedly less supportive.
On the other hand,
  • The one bright light in favour of double-blind peer review is the measured reduction in bias against authors with female first names (shown in numerous studies, such as ref. 4).
But Michaels' implication that double-blind review is common practice and that climate science is somehow exceptional, that double blindness is a default which climate scientists have somehow conspired to reverse, is a complete fantasy, like most of what he purveys.


Marco said...

To the best of my knowledge, double-blind review is very rare in any of the natural sciences. I myself have reviewed for journals that span different areas of the natural sciences, and have never ever encountered double-blind reviews.

In quite a few cases it would also have been hilarious. Sentences containing things such as "In our previous work", and "We have previously shown" kinda defeat the whole idea...

To me it just sounds like the typical whining of certain people who are upset they can't get their bad papers published in good journals. In this case a whine that contains misinformation. Michaels excels in that (Hansen being a beloved target a few times).

Steve Bloom said...

Well, "Pat." Say no more!

Ted Kirkpatrick said...

Some areas of computer science use double-blind review. For example, the main computer-human interaction conference, SIGCHI, states that "authors are expected to remove author and institutional identities from the title and header areas of the paper". They also note that genuine anonymization is hard to achieve and they allow authors to be as loose or vigilant about it as they wish.

On the other hand, the main journals of the American Economic Association just announced they are dropping double-blind review. Crooked Timber had a good discussion of the pros and cons. Some people have set up a petition contesting the change.

As with so many other aspects of reviewing, there seem to be a wide range of conventions and assumptions in use across disciplines.

The one generalization I'd make about reviewing is that Michaels' claim is characteristically mendacious. He begins with the assumption that climate scientists lack integrity and then finds "evidence" everywhere he looks.

Whatever methods are used by the various climate science journals, I'm confident they work just fine, because the authors and reviewers care deeply about the quality of their work.

Andy S said...

In practice, well-published scientists frequently cite their own work disproportionately, sometimes out of necessity but perhaps sometimes out of self-promotion.

Also, in practice, experienced reviewers will know which scientists are working on which major projects, so they will be able to guess the authors in most cases.

This means that double-blind peer review would be ineffective in stopping the (alleged) "pal review" that McI and co. decry.

But, perhaps, authors should be given the opportunity to submit their manuscripts anonymously. That would at least stop people (especially unaffiliated amateurs) complaining that they are being censored for who they are rather than what they write.

RW said...

Peer review is not double-blind in astronomy, neither for papers nor for observing proposals. In practice it's often relatively easy for authors or proposers to guess who might have reviewed their work. And the reviewer is at liberty to waive their anonymity, and it's not uncommon for them to do so.

Marco said...

A small P.S. to the last reference to women being disadvantaged in single-blind review: Budden's data, used by Nature as its one example, was rather questionable:
The response by Budden et al is far from convincing...

Steve Easterbrook said...

In computer science, I've submitted papers and reviewed for venues that use double blind reviewing (SIGCHI, as Ted notes above, and CSCW), as well as a much larger set of venues that don't.

In my opinion, double blind reviewing is pointless. Anyone who is sufficiently expert in a field to be valuable as a reviewer will know whose work it is. I've been able to identify the authors of double blind papers I'm reviewing in all but one case.

The main problem is that most research is cumulative - new papers build on your previous work, and the peer review process has to take this into account as context - you can't usefully review a paper in isolation from the work that has gone before it. The only case where I wasn't able to identify the authors was where the authors had started a new line of research that was very different from what they'd done before.

Jonathan Gilligan said...

A relevant post from Inside Higher Ed:

Jonathan N. Katz, co-editor-in-chief of Political Analysis and chair of humanities and social sciences at the California Institute of Technology, said in an interview that the journal would change with its next volume. He said he sees values in double blind but that "in the age of Google, double blind has become a fiction." The journal did an experiment typing in the titles of 20 recently submitted papers and was able to correctly link almost all of them to authors, who post working papers, talks given at meetings or information about their research on various websites.

Marion Delgado said...

I believe a lot depends on the field. Some controversies have arisen, or been manufactured, because the particular science involved was so specialized that the reviewer pool was small. You can't always talk "science" in a one-size-fits-all set of rules.