Thanks to Jim Manzi, who in a comment on my posting about Oreskes' talk "How do we Know We're not Wrong?" asks a couple of provocative questions.
I am not an expert on tree rings and 1000-year scale reconstructions, so I will just say my piece on the subject and leave it at that. I'll have more to say about his modeling questions later, where I'm better armed and more sure he is on shaky ground with his critique.
Anyway, for what it's worth.
I haven't met any of the tree ring people and am almost a layperson on the subject, except for a single journal club meeting at U of C, led by Rodrigo Caballero, with David Archer and Ray Pierrehumbert in attendance; it was one of the many undeserved privileges I had at U of C.
It was about von Storch's criticisms of Mann's statistics.
Mann, as is well known, produced a time series with considerable uncertainty but very little variation prior to 1900. This was especially threatening to those who argued for large natural variation. To modelers, it was something of a surprise, because Mann showed our models to be exactly right, in fact, more right than most of us would have expected. (Mann's hockey stick reconstruction, modulo a couple of early bumps, does pretty much what AOGCMs do.) It was also an iconic figure because the difference between preindustrial and postindustrial behavior of the atmosphere was so obvious.
In fact it appears Mann was in some sense "wrong". The way statisticians jump on people who aren't statisticians is, in fact, a bit obnoxious. They tend not to engage with the work of other fields until after it is published. (Fortunately there are exceptions, and our group does have a close collaboration with a statistician.) The fact remains that only roughly 5% of anyone else's reconstruction falls outside Mann's confidence bounds, which is just what honest 95% bounds should allow, so in a sense he was entirely right.
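The 95%-bounds logic can be sketched numerically. If the stated confidence envelope is honest, about one point in twenty of an independent reconstruction of the same quantity should fall outside it. Here is a minimal toy sketch with made-up series (none of these numbers are real proxy data; the series, noise levels, and seed are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 1000

# Pretend "true" temperature anomaly series (purely illustrative)
truth = 0.1 * np.cumsum(rng.normal(0, 0.02, n_years))

# A hypothetical reconstruction with honest 95% bounds (+/- 2 sigma)
sigma = 0.15
recon = truth + rng.normal(0, sigma, n_years)
lower, upper = recon - 2 * sigma, recon + 2 * sigma

# Another group's independent reconstruction of the same truth
other = truth + rng.normal(0, 0.05, n_years)

# Fraction of the other reconstruction falling outside the bounds:
# roughly 5% when the stated uncertainty is about right
outside = np.mean((other < lower) | (other > upper))
print(f"fraction outside 95% bounds: {outside:.3f}")
```

Finding ~5% of an independent record outside the envelope is thus evidence the envelope was doing its job, not that the reconstruction failed.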
The other problems Mann had with bitwise replicability are very interesting and revealing of how science is conducted in climatology. Our practice is less formal than you might like in a life science lab and much closer to what happens in a small software shop, where the focus is on the output and not on the process. We think the importance of our work is so great compared to our small resources that we will resist any imposition of process that reduces productivity. I see the other side of that argument but I'm not in the majority on that. In fact, I would be very pleased with a mandate that all public sector computing (except for a very small subset of security related matters) be performed entirely on an open source tool chain. I think the only reason a compelling argument to that effect hasn't been made is that it's a lost cause in the present political and economic context.
In practice, software in the scientific sector gets done by people trained in science and self-trained in software, and the maintenance and documentation issues for small-lab output (this does not include high performance models like GCMs to the same extent) would be considered amateurish and completely unacceptable in even the most casually run commercial software company.
Should somebody producing results you don't like be held to higher standards retroactively? Called on the carpet in front of congress? Investigated publicly? That's the sort of thing that drives the best people out of science.
Von Storch pointed out that Mann's method systematically eliminated low frequency variability in the record. Subsequent reconstructions did show more secular variability, and since this was the point of greatest interest to the critics, they have declared him "wrong". The conclusion that contemporary temperatures are probably higher than they have been in a millennium, however, stands.
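Von Storch's variance-loss point can be illustrated with a toy regression-based reconstruction (this is not Mann's actual method, just a sketch of the general effect): calibrating temperature against a noisy proxy by least squares attenuates the regression slope, so the reconstructed series has systematically less variability, including less low-frequency variability, than the truth. All series and noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
t = np.arange(n)

# Toy "true" temperature with substantial low-frequency (secular) variability
truth = 0.5 * np.sin(2 * np.pi * t / 400) + rng.normal(0, 0.1, n)

# Noisy proxy (think tree-ring width) tracking temperature linearly
proxy = truth + rng.normal(0, 0.3, n)

# Calibrate by regressing temperature on the proxy over a short modern window
cal = slice(n - 150, n)
slope, intercept = np.polyfit(proxy[cal], truth[cal], 1)
recon = slope * proxy + intercept

# Proxy noise shrinks the slope below 1, so the reconstruction
# underestimates the amplitude of the true variability
print(f"std(truth) = {np.std(truth):.3f}, std(recon) = {np.std(recon):.3f}")
```

The shrunken slope is the whole story: noise in the predictor biases a least-squares slope toward zero, and everything reconstructed through that slope inherits the flattening.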
So, if the hockey stick is wiggly, what does that mean in practice? Our journal group was left with a damned-if-you-do damned-if-you-don't conclusion. If Mann's reconstruction was right, the detection of the greenhouse signal would be unequivocal. (Remember this was a few years ago, when there was still a tiny shred of hope that we were wrong about the basic physics.)
On the other hand, if the stick had more wiggles, if natural variability were higher, this would weaken the detection argument, but would be cause for concern that the climate system is "tippy", with a tendency to wander further from its equilibrium than models show. This means that perturbing the system would have larger century-scale effects, and that models likely exclude phenomena that would cause the prognosis to be worse than expected.
Many arguments about model inadequacy go like this: it is worse if the models are overoptimistic than it is good if they are overpessimistic. So risk-weighting means the less we trust the models, the more we should worry. The pseudo-skeptics invariably get this one wrong, and the real skeptics (Hansen, Broecker, Lovelock) are quite worried as a consequence.
The bumpiness of the reconstruction also is used backwards in the arguments; the bumpy record does not argue for complacency. Yes, if the record is bumpier, the detection problem becomes harder, but that one is in the bag already. The bumpier the record, the more evidence we have of models missing system modes at time scales that we have to worry about even in conventional policy terms. If Mann is wrong about the bumpiness, we are in more trouble, not less!
Regarding Jim's "bow vs stick" question, that's a sort of Rorschach test, isn't it? There is no doubt that from about 4000 BC until 1900 AD there was a gradual cooling trend. Nobody is claiming that present temperatures are the warmest in the postglacial period. Yet.
Regarding the performance of the reconstructions within the 20th C, my understanding is that there are all sorts of confounds introduced by the onset of anthropogenic forcings, not least of which is CO2 fertilization. That said, I wonder why that effect wouldn't artificially steepen the curve rather than flatten it out, if we assume that any individual specimen grows more under conditions of more warmth and more CO2.
You have plumbed the depths of what I know about this. I don't make a big deal out of this particular question and I don't think Oreskes does either. These guys think it is unusually warm, and I tend to believe them because that is what I expect, but the reasons I expect it have little to do with their work. I am sure they share my expectations, and I am not sure how effectively they separate their expectations from their results.
Oreskes is pointing out that their evidence is consistent with other lines of reasoning. I'm more familiar with the other lines of reasoning and am happier defending them, but if I had to bet I'd bet that we are already at the hottest point in the last 1000 years and will probably soon exceed the hottest point in the last 100,000 years (which happened about 6000 years ago).