"System change is now inevitable. Either because we do something about it, or because we will be hit by climate change..."

"We need to develop economic models that are fit for purpose. The current economic frameworks, the ones that dominate our governments, the neoclassical, market-based frameworks, can deal with small changes. They can tell you, if a sock company puts up the price of socks, what the demand for socks will be. They cannot tell you about the sorts of system-level changes we are talking about here. We would not use an understanding of laminar flow in fluid dynamics to understand turbulent flow. So why are we using marginal economics, the economics of small incremental change, to understand system-level changes?"

Thursday, July 12, 2007

How do we know climate models are useful?

Jim Manzi, who believes we have an anthropogenic climate change problem (or at least, someone unverifiably claiming to be Jim Manzi does), nevertheless remains in the model-skeptic camp. He asks the following in response to Oreskes' presentation:

Page 64 is pretty amateurish – “many model-based predictions have come true”. Really, I have a causal model for predicting the winner of baseball games – the team that bats first wins. Look at this long list of predictions that my model has made correctly.

Pages 65 – 69 use the intense 2005 hurricane season as confirmation of predictions. 1. Too bad about 2006. 2. There is a reason that hypotheses are subject to falsification tests rather than confirmation tests.
Let me take these in reverse order.

The latter criticism of Oreskes is the stronger of the two. A fair consideration of the point requires an understanding of a few things:
  1. The climate system doesn't care very much whether tropical storms make landfall, and 2006 was not unusually quiet.
  2. Atlantic hurricanes correlate inversely with El Niño, and 2006 was an El Niño year.
  3. There are other components of tropical storm variability which are not understood. Whether the 2005 Atlantic season was so anomalous as to require explanation is something of a judgment call. (The next few months will tell us something, as a negative El Niño anomaly, favorable to Atlantic hurricanes, has returned.)
As an aside, I say that the lesson of New Orleans is not so much that the age of superstorms has arrived, though in fact it might have. The lesson of New Orleans is that society should listen to well-informed people who say "listen to me before it's too late!" before it is, actually, too late.

In summary, though, the very peculiar Atlantic tropical storm season of 2005 doesn't constitute a trend in itself. It is, however, part of a trend, and that trend is consistent with predictions. It certainly doesn't argue against the climate change consensus, and the very high sea surface temperatures of late support it.

On the first point I must disagree with Jim and agree with Oreskes.

The list of validated predictions is long and extraordinary, in the context of the near-stationary climate of historical times. It can be argued that Oreskes missed a very important one: cooling of the stratosphere, which is inconsistent with solar forcing, since solar forcing would warm the entire depth of the atmosphere.

Polar amplification and nighttime amplification are robust predictions of dramatic change (all models that can replicate contemporary climate from primitive equations show them), matched by robust observations. This is not cherry picking, as Manzi suggests. If it were cherry picking, he (or anyone) could identify comparably robust, comparably unprecedented changes that were predicted by most GCMs and that didn't happen at all.

Am I missing something? If so, please enlighten me.

So, Jim or some person claiming to be Jim, on what basis do you assert that you "don't think the models are validated"? As for the necessity of the models to constrain the sensitivity, even that isn't entirely crucial. We still have theory and (if you don't go along with some of your 'conservative' allies in ignoring any evidence that implies the world is more than 10,000 years old) pretty extensive paleoclimate evidence.

By the way, testing models against paleoclimate is one of the best ways to validate them. For the most part it works out OK, though in very warm periods (the Eocene, notably) the results have been sort of funky. Sriver and Huber of Purdue claim to have worked this out, though. On their theory, it appears that the tropics are less heated than the poles in hot worlds because a good deal of heat is transported poleward by relatively more active tropical storms.

The sensitivity range question was handled admirably by Annan and Hargreaves. James Annan stops by occasionally and may want to elaborate. James is more concerned about tendencies to exaggerate the high end, but I think even Lindzen, if pressed, doesn't take a position much below the low end.


Anonymous said...

Thanks for the (as usual) well-informed and interesting commentary.

I think that reactions to both of the points that I raised and to which you responded depend on three things: (1) what elements of the “scientific consensus” do we care about being right or wrong?, (2) where is the burden of proof?, and (3) how self-contained should a presentation like hers be? I implicitly, though I am happy to be explicit about it, assumed that (1) the only question I care about is our ability to predict the climate impacts of various human forcings, (2) the burden of proof lies with those who claim to be able to predict the attributable impact of a climate forcing 50 – 200 years from now, and (3) that when somebody titles a presentation “The Scientific Consensus on Climate Change: How Do We Know We’re Not Wrong?” the assumption is that (at the level of abstraction relevant to the argument) the presentation will contain the information required to evaluate the claims being made.

Under these three assumptions, my first comment can be boiled down (in less snarky language) to the point that she has provided incomplete evidence on this point to evaluate “how we know we’re not wrong”. We would have to be presented with a relatively complete list of the predictions made by the relevant science (as it was understood at the time of the predictions), and told which were correct and which incorrect. Her slide demonstrates a classic case of survivor bias.
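The survivor-bias worry can be made concrete with a toy simulation (everything below is invented for illustration, not climate data): score a skill-free coin-flip "model" on all of its predictions and it does no better than chance, but score it only on the predictions that happened to come true and it looks perfect.

```python
import random

random.seed(1)

# A "model" with no skill at all: it predicts a coin flip.
predictions = [random.choice([0, 1]) for _ in range(100)]
outcomes = [random.choice([0, 1]) for _ in range(100)]

# The predictions that happened to come true: the "survivors".
hits = [p for p, o in zip(predictions, outcomes) if p == o]

honest_score = len(hits) / len(predictions)  # near 0.5: no skill
survivor_score = len(hits) / len(hits)       # 1.0 by construction

print(f"scored on all predictions: {honest_score:.2f}")
print(f"scored on survivors only:  {survivor_score:.2f}")
```

The survivors-only score is perfect regardless of skill, which is exactly why a list of confirmed predictions is uninformative without the full list of predictions made.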

In terms of the second of my comments that you reference, I have often had to defend the AGW hypothesis from skeptics who will say something like: “there’s no warming trend over the last 5 years” (or five minutes, or whatever comes in handy to them).

Here’s an exchange from Planet Gore. Steven Milloy criticized me for accepting that more CO2 means the atmosphere gets hotter (as part of a much longer article). Here is what Milloy said:
From a historical perspective, consider the relationship between carbon dioxide emissions and global temperature for the period 1940-1970. As atmospheric CO2 levels steadily increased during this period, global temperatures decreased, giving rise to the 1970s-era scare of an impending ice age. It’s also clear that, if there has been a relationship between atmospheric CO2 and global temperature since the 1970s, it’s not readily apparent.

I respond by saying:
This is a type of argument that Milloy uses repeatedly in his piece. He identifies several decades or a geographic region, or the combination of both, for which there is no correlation between CO2 and some relevant outcome that would support the hypothesis that human activity is creating global warming, and acts as if this is a smoking gun that disproves the hypothesis. In technical terms, he is claiming that the hypothesis has failed a falsification test.
This is like saying “Look, I cut way down on fatty foods after the holidays and the level of plaque in my arteries did not noticeably decrease by April, therefore this theorized link between fat intake and arteriosclerosis can’t be correct”. The obvious problems with this supposed falsification test are that the theory calls for an effect that (i) is based on cumulative intake, (ii) manifests itself over a longer period than four months and (iii) is part of a complex system called the human body that is only partially understood. You would have to run the test over a much longer period, with more than one person, and with varying levels of fat intake to reliably test the theory. Similarly, observing that “hey, we kept pumping a lot of CO2 into the air between 1940 and 1970 and temperatures didn’t go up” does not falsify the hypothesis that CO2, all else equal, increases temperature over time. I’ll refer to this as the “localized fallacy” each time Milloy uses it in order not to have to repeat this argument.

I believe what I said to him. But you can’t have it both ways: if the hypothesis requires X years to falsify or fail to falsify, then you can’t pick individual data points that are consistent with your hypothesis and say that this constitutes material evidence for it.
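The statistical point behind the "localized fallacy" can be sketched with a toy simulation (the trend and noise parameters are invented, not fitted to any real record): a weak long-term trend buried in year-to-year noise will routinely produce short windows with no trend, or even a negative one, so neither a flat decade nor a single warm year settles anything by itself.

```python
import random
import statistics


def slope(ys):
    """Least-squares slope of ys regressed against 0..n-1."""
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = statistics.fmean(ys)
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(ys))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den


random.seed(0)
# 130 "years" of a weak warming trend (+0.01 per year) buried in noise.
series = [0.01 * t + random.gauss(0, 0.15) for t in range(130)]

full_trend = slope(series)
# How many 10-year windows nevertheless show a *negative* trend?
flat_windows = sum(1 for i in range(120) if slope(series[i:i + 10]) < 0)

print(f"full-period trend: {full_trend:+.4f} per year")
print(f"10-yr windows with negative trend: {flat_windows} of 120")
```

The full-period slope recovers the built-in trend, yet a sizable fraction of the short windows run the "wrong" way, which is exactly why short-window counterexamples neither falsify nor confirm the cumulative hypothesis.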

You can see the whole exchange here, if you’re really bored:



You raised a separate point about my assertion that the models have not been validated. I’ve written several articles that address this point and have had a long online dialogue with Gavin at RealClimate about this. Here’s what I mean by validation: a GCM makes a prediction at time X for the temperature at some later time Y; the GCM and operational codes are escrowed at that time; when time Y rolls around, the actual set of data for all required model inputs is entered and the operational scripts are executed by a party that did not build the model, and this result is compared to the actual measured temperature at time Y. A series of validation exercises like this is used to create a distribution of forecast error. I know of no published effort that has done this and validated that any GCM is capable of performing THE key advertised function: predicting the temperature impact of various emissions scenarios on a multi-decadal timescale. There is a long, sorry history of predictive models in a variety of fields that seem to make sense, and perform well on hold-out samples, but fail in production. Further, almost all predictive modeling communities over-estimate their own accuracy.
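The scoring step of the protocol described above is mechanically simple; the hard part is the institutional escrow and the independent execution. A minimal sketch of the scoring, with entirely made-up forecast and observation numbers (the years and anomaly values are hypothetical, not from any actual GCM or temperature record):

```python
import statistics

# Hypothetical escrow record: forecast temperature anomaly (deg C) issued
# at "time X" for each later year "Y", then scored against observations.
escrowed_forecasts = {2000: 0.35, 2001: 0.40, 2002: 0.44, 2003: 0.47, 2004: 0.51}
observed_anomaly = {2000: 0.39, 2001: 0.53, 2002: 0.60, 2003: 0.61, 2004: 0.53}

# One validation exercise per escrowed prediction: observed minus forecast.
errors = [observed_anomaly[y] - escrowed_forecasts[y] for y in escrowed_forecasts]

bias = statistics.fmean(errors)    # systematic over- or under-prediction
spread = statistics.stdev(errors)  # width of the forecast-error distribution

print(f"mean error (bias): {bias:+.3f} C")
print(f"error std dev:     {spread:.3f} C")
```

A real exercise would accumulate many such error distributions across models and scenarios; the point of the sketch is only that the "distribution of forecast error" in the protocol is just the bias and spread of out-of-sample residuals.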

(and yes, I am actually Jim Manzi, though I can’t see how to validate this claim – so to speak)

Michael Tobis said...

As usual with a good conversation, many possible threads started here. It may not be possible to chase all of them down...

Regarding "survivor bias": I understand your point, but that doesn't mean it applies here. I ask you to provide comparable examples of comparable predictions by the same class of models that failed.

The assertion is that some first-order behaviors of the system were successfully predicted by a cross-model consensus. I elaborate by asserting that I know of no counterexamples. Things happened the way my profs 15 years ago said they would. I consider this a validation.

This business about escrow is a bit crazy. Untar CCM2 and run it with Hansen's middle scenario for yourself. The models may be obscure, but they aren't secret.


The "burden of proof" question is one I took up over a decade ago on sci.environment. I don't think my position on it has changed. I think it's fundamentally a misguided question.

In political circles we see each side with an essentially airtight argument that the burden of proof goes to the other side. One side calls it "precautionary" and the other "conservative". Essentially these amount to exactly the same impulse applied to different things.

I was accused on a leftist blog recently of being "excessively reasonable" (apparently because I don't want to send the board of directors of Peabody Coal to the Gulag), but I'm afraid I have to stick to my nambly flambly middle-of-the-roadism here.

So how's this for a cowardly difference-splitting position? Both sides are offering completely useless formulations with extremely dangerous ramifications, and only a pragmatic approach has any promise.

More to follow.

Do stay in touch. (You never call. You never write...)

Tony said...

A very respectful and valuable discussion, as always. IANACS, but I like to think I'm as informed as an educated citizen needs to be to make up his mind on climate change.

However, one sceptic argument that makes sense to me is the one about models. To hear them tell it, climate scientists aren't up on the latest modelling techniques, since they don't have any formal training in modelling. Hence (so the argument goes) the climate models used by most of the papers in the IPCC review are simplistic.

One of the latest attacks (here) is upcoming in Energy and Environment.

My question is, how up to date are the models used? Do they meet all 89 modelling principles referred to in the E&E paper (or are the authors moving the goalposts)? The E&E paper seems suspicious, but I can't put my finger on why.

Michael Tobis said...

So the person behind forecastingprinciples.com accuses climate science of being unaware of forecastingprinciples.com. To this accusation I for one plead guilty.

He isn't moving the goalposts, he is inventing the game.

It is certainly the case that the sorts of forecasts he dwells on at his site are very difficult.

1. Can you give me examples of different types of forecasting problems?

Sure. Forecasting problems can be posed as questions. Here are some examples. How many babies will be born in Pittsburgh, PA in each of the next five years? Will the incumbent leader be elected for a second term (see Political Forecasting)? Will a 3.5% pay offer avert the threatened strike (see Conflict Forecasting)? How much inventory should we aim to hold at the end of this month for each of 532 items? Will the economy continue to grow at a rate of at least 2% per annum over the next three years? Taking account of technical matters and concern among some communities, how long will it take to complete the planned pipeline? In which areas should policing efforts be concentrated in order to have the greatest effect on property crime (see Crime Forecasting)? Which will be the most prevalent diseases in the U.K. ten years from now (see Health Forecasting)?

On the other hand, a forecast of the position of Jupiter in the sky exactly 50,000 years hence is quite feasible.

Climate physics is more constrained than social dynamics and less constrained than the orbits of the planets. So we can get more than 5 years and less than 50,000.

Beyond that you have to get into detail.

These guys are promulgating purported universal principles on the basis of an argument from authority, when as far as I can tell the only basis for their authority is having registered "forecastingprinciples.com".

Well, I registered 3planes.com some years ago. This means that anyone claiming to be three-dimensional will have to pass 83 criteria identified by me.

I see the drumbeat of the denialist camp is in action, since David Duff referred to this silly thing in another thread.

It's noise.

Let's talk about what we can know about climate, what we already know, what we stand to learn, and what we'll probably never figure out. Let's base it on geochemistry and geophysics please, and not on the lousy record of predictions of self-declared experts in public policy or economics.

Science really is different.

Tony said...

Thanks for your response. I hadn't realised they were making essentially an argument from authority. It's so hard to keep up with all the dust they keep throwing in our faces!

I just got off a long Skype call with my girlfriend in Tokyo who was convinced by a newspaper article she read that there was no science showing global warming to be unequivocal. Sigh.

Keep up the good work. In case you haven't heard this recently, what you established researchers in the field are doing is invaluable for those of us passionate about the topic but without the education.

Michael Tobis said...

Thanks Tony.

"science showing global warming to be unequivocal" is not really a well-defined statement.

"Global warming" is literally, to a scientist, an assertion that a planet is warming on some time scale. There's so little doubt of that on earth now that it isn't worth considering, though it's probably excessive to say anything in earth science is "unequivocal".

The phrase "global warming" is slippery. I try to avoid it. I can think of at least five meanings and I think the obfuscators casually slip from one to another to confuse the conversation.

Belette said...

This is getting a bit tangential, but I'm baffled by the escrow bit. If your GCM makes a prediction, why do you need to keep the code? All you care about is keeping the prediction and later verifying it. There is no need for subsequent model runs.