The only thing we can be sure of about the future is that it will be absolutely fantastic. So if what I say now seems to you to be very reasonable, then I have failed completely. Only if what I tell you appears absolutely unbelievable, have we any chance of visualizing the future as it really will happen.

- Arthur C. Clarke (h/t Brin)

Sunday, March 2, 2008

Guessing a Century Out

A nice piece of paleofuturism turns up from Ladies Home Journal of all places. This recently made Digg, but has been up on Paleofuture since last April. Considered predictions, such as that airships will find specialized uses but will never compete with express trains and hovercraft for long-distance travel, or that the letters 'C' and 'Q' will fall into disuse, are there in print for your perusal.

The difficulty with predicting the world a century hence remains: even the best thought-out predictions will be wildly wrong in places. Chip Levy of GFDL, in a recent informal talk at UT Austin, explained how this uncertainty bears on the purpose of earth system modeling.

Scenario-based climate prediction to date has been based on prescribed trajectories of radiatively active components in the atmosphere. For those who understand what is happening, that's enough to advocate putting the brakes on various emissions as soon as possible; of course, it also offers plenty of targets to those who want the science ignored.

One critique of scenario-based prediction is that it doesn't give direct guidance to policy: humans affect emissions directly, and a great deal happens in converting emissions to concentrations. Most of the public isn't aware of the gap. We see this frequently in online discussions, where people primed to be hostile to regulation ask us whether our climate models account for something like, say, carbon fertilization, while others primed to be hostile to industry ask us how we account for, say, tundra methane feedbacks.

The fact is, we don't. That hasn't been considered a part of climate modeling, but there is a great deal of demand for it. The fact that the demand exists, though, doesn't make it especially feasible.
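To make the emissions-vs-concentrations gap concrete, here is a deliberately minimal sketch, entirely my own toy construction and not anything GFDL runs: a one-box carbon model that turns an emissions pathway into a concentration pathway. The decay timescale and conversion factor are round illustrative numbers, and a real carbon cycle adds exactly the feedbacks (fertilization, methane, and more) that people keep asking about.

```python
# Toy one-box carbon model: concentrations are not emissions, but the
# running integral of emissions minus uptake. All constants are round
# illustrative values, not calibrated ones.

PPM_PER_GTC = 1.0 / 2.13   # roughly 2.13 GtC of airborne carbon per ppm CO2
UPTAKE_TIMESCALE = 50.0    # assumed e-folding time (years) for excess uptake

def concentrations(emissions_gtc, c0_ppm=280.0):
    """Map a yearly emissions series (GtC/yr) to CO2 concentrations (ppm)."""
    excess = 0.0  # excess airborne carbon above preindustrial, in GtC
    path = []
    for e in emissions_gtc:
        excess += e                          # add this year's emissions
        excess -= excess / UPTAKE_TIMESCALE  # crude ocean/land uptake
        path.append(c0_ppm + excess * PPM_PER_GTC)
    return path

# A constant 10 GtC/yr for a century: concentration keeps rising, but
# more slowly than raw accumulation, because uptake grows with the excess.
path = concentrations([10.0] * 100)
```

The point of the toy is only that the mapping from emissions to concentrations is itself a model, with its own free parameters; that is the piece scenario-based prediction has taken as prescribed.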

Models that fill this gap, carrying the simulation from emissions through to concentrations, are being developed for IPCC AR5; to distinguish them from pure climate models they are being called Earth System Models, or ESMs. I have a great deal of doubt that building ESMs is a good use of scientific time and effort.

I asked Chip whether these efforts weren't vulnerable to the accusation that they have too many degrees of freedom and too few constraints, which he freely admitted. Nevertheless he insisted that ESMs were bound to produce interesting results. I am not at all convinced. We may simply have to make decisions in the face of uncertainty.

A crucial aspect of the effort is the stability of the system under the coupling of all the imposed physics. This is hard to explain briefly, but it seems likely that these systems will fall into two classes: ones that yield catastrophe under the most modest of forcings and ones that remain stable under the most severe. It's possible that a given model does both, seeming stable for a few hundred years and very unstable over longer time scales. The reasons lie not in the physics but in the system dynamics: the way these various phenomena are being coupled together is driven less by physical reasoning than by a desire to have things "look right" on some time scale.

Chip insisted that this has some value, that the systems "already display interesting dynamics" (the atmosphere isn't yet fully coupled in the system he works with). Well, maybe interesting, but to what end? The idea that these models will have a lot of value to the policy process (which is implied by an IPCC AR5 driver) strikes me as over the top. It simply distracts effort from improving the value we already have.
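The stability worry can be seen even in a two-line linear caricature, again my own toy and nothing like an actual ESM: take two subsystems that are each individually damped, couple them, and it is the eigenvalues of the coupled system, not the reasonableness of the individual pieces, that decide whether perturbations decay or run away.

```python
import numpy as np

# Two individually stable (damped) subsystems, say a "temperature"
# anomaly x and a "carbon reservoir" anomaly y, each relaxing back to
# equilibrium on its own, linked by a symmetric feedback of strength k:
#   dx/dt = -a*x + k*y
#   dy/dt =  k*x - b*y
# Illustrative numbers only.

def coupled_is_stable(a, b, k):
    """True if every eigenvalue of the coupled linear system has
    negative real part, i.e. perturbations decay."""
    A = np.array([[-a, k], [k, -b]])
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# Weak coupling keeps the pair stable; strong coupling between the same
# two "physically reasonable" pieces produces a runaway mode.
print(coupled_is_stable(0.5, 0.5, 0.1))  # True
print(coupled_is_stable(0.5, 0.5, 1.0))  # False
```

In this caricature the switch from stability to runaway happens at a particular coupling strength, which is exactly the kind of knob that gets tuned to make things "look right" on some chosen time scale.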

It doesn't solve the futurism problem anyway. It turns out that while CO2 is the biggest one, there are a number of other emissions to worry about. Getting the trajectories of all of them into a scenario with any predictive value is not a sensible prospect even if the ESMs were perfect. We have to make policy decisions based on the information we have now, not hold out promises of some breakthrough in the future. I'm all for throwing money at the climate modeling problem; I'm just making the case that adding degrees of freedom and long time constants to the problem is the opposite of helping right now.

A huge push on paleoclimate evidence might somewhat resolve the problem, but in some sense the whole question is confused. We do not predict our behavior. We decide on our behavior. No projection that depends tightly on human behavior can possibly amount to a prediction.

Future climate is an engineering problem and not a scientific problem. We need to stop guessing what we will do and start deciding what we will do instead.

3 comments:

Dano said...

The difficulty with predicting the world a century hence remains. Even the best thought out predictions will be wildly wrong in places.

Again, the urban ecosystem guy chimes in: this is why we make projections rather than predictions.

Best,

D

Michael Tobis said...

Dano, agreed. There's more to it than that, though.

I'm not sure I've expressed my concern about ESMs effectively. The question isn't projection vs prediction, it's useful projection vs useless (and very expensive) projection.

ESMs will be at least an order of magnitude more complicated and expensive than the GCMs they are built around. I don't see the payoff. I think it's bad for the field and bad for the world, but good for the computer vendors and the HPC ("high-performance computing") hangers-on.

I can be a good HPC customer myself, so I don't mean to argue against heavy metal computing.

If ever there was an area where it is easier to do things wrong than right, though, it sure seems like Deep Thought is the way there.

Simon Donner said...

Michael, thanks for the discussion. In the end, it may be that increasing model complexity and including more physical processes in the models is a good thing. But that should not be an implicit assumption. We need to first weigh the potential gains in descriptive ability against the potential costs in added uncertainty and computing time.