It's the same old story.
A more sensitive method for extracting physical reality from data is developed; a publication follows.
Somebody nitpicks the math; with statistics this is always easy because everything is built on a prior. One can always quibble about the model. The McIntyre crowd knows how to push buttons in R and get results until they get one that excites them. That isn't statistics and it sure ain't science.
Real statisticians appear on neither side because the issues do not interest them. It's important to understand how separate the ivory towers are. The failure of statisticians to engage in climatology is often treated as climatologists' fault, but in fact it is the fault of the social structure of science. "Interdisciplinary" work is only possible if both disciplines break new ground. Nobody is interested in work that cannot be published in recognized journals in their own field.
There are very few top-notch statisticians taking sides on such matters. My expectation is that they would have little trouble rejecting both sides on grounds of statistical purity. Yet we may presume that there is, be it positive, negative, or zero, a real quantitative temperature trend in WA.
It is also the case that real climatologists make realistic tradeoffs and do reasonable tests of robustness, because they are interested in climatology. This approach is clear in Steig's review. Anti-climatologists do not appear constrained by it. In practice, testing a single question with multiple approaches effectively substitutes for the sort of mathematical rigor the anti-climatologists claim to prefer (a degree of rigor which is in fact impractical in the messy real world). Their avoidance of this path is indicative of their lack of serious interest in the underlying phenomenology.
Let me try to explain: suppose I am interested in getting the right answer. I am not a top-notch statistician, but I don't consult one anyway because the statistician will ask me to provide a rigorous prior which in fact I lack. I massage the data more carefully than has been done in the past, using a half dozen methods that occur to me. I also do the same with synthetic data. When I come up with something that appears robust to the sorts of phenomenology I expect across several methods, I choose one of those methods, polish it up, and publish.
Suppose to the contrary that I am interested in casting doubt on the previous answer. I massage synthetic data more carefully than has been done in the past, using a half dozen methods that occur to me. I find a dataset, consistent with observations, that makes the climatologist's proposed method yield results highly sensitive to small perturbations. I drum up uncertainty and doubt.
In the O'Donnell case, he has succeeded in adding to the arsenal of methods. Steig offers an interpretation consistent with the totality of evidence AS IF O'DONNELL WERE SERIOUS.
This constitutes an excellent test of whether O'Donnell is interested in science or in McIntyrism. The results of this test are unambiguous to say the least.
Update: I should add that I have not followed the arguments in any detail, and would not be in a position to critique them in detail even if I had. That is to say, I do not know how sophisticated or appropriate Steig et al's methods are. Nothing I have said here should be construed as a criticism thereof.
I am only asserting that Steig shows an interest in the result, while O'Donnell shows an interest only in embarrassing Steig. In my opinion this sequence of events illustrates the methodology of McIntyre and Co with precision, and shows why there is no reason to get down and dirty with the statistical minutiae.
We should indeed be looking for better ways to promote interdisciplinary collaboration. People who would rather treat such gaps as chinks in the armor of an enemy are acting from malice rather than curiosity. They attack the entire scientific enterprise, not just a small corner of it.