"System change is now inevitable. Either because we do something about it, or because we will be hit by climate change..."

"We need to develop economic models that are fit for purpose. The current economic frameworks, the ones that dominate our governments, these frameworks... the current economic frameworks, the neoclassical, the market frameworks, can deal with small changes. It can tell you the difference, if a sock company puts up the price of socks, what the demand for socks will be. It cannot tell you about the sorts of system level changes we are talking about here. We would not use an understanding of laminar flow in fluid dynamics to understand turbulent flow. So why is it we are using marginal economics, small incremental change economics, to understand system level changes?"

Wednesday, August 20, 2008

Climate Models - Is There a Better Way?

What is the simplest possible CGCM ('climate model' with 3D fluid dynamics)?

I don't want the best possible match to observed climate. Such a thing serves little purpose, as I explain briefly below. I would like to see the simplest useful climate model with full 3D primitive equation dynamics, moist physics and radiative transfer. Such a thing would be very informative.

At PyCon 2007 in Dallas, just as I was moving to Texas, I had the pleasure of being a party to a hallway conversation with one of the keynote speakers, Robert "r0ml" Lefkowitz, who had just given an amazing keynote about source code as communication medium.

Now I grew up exposed to several alphabets (Roman in English, French, Czech and Hungarian variants, Hebrew in Hebrew and Yiddish variants, musical notation, numbers) and have always found arrays of symbols, and what each symbology could represent, a point of deep fascination, so he was talking to me.

In the hallway, I ended up explaining how there would probably never be an Einstein of climate, the system being too messy and contingent to allow for blazing reorganizing insights. r0ml suggested that there might, nevertheless, be a Mozart of climate modeling. Appealing to my grandiosity is generally successful, and so I have been unable to entirely shake the idea since.

I doubt that I am Mozart, but I would like to pave the way for him or her. In short, I would like to create a climate model that you would like to read.

Climate models are not devices intended to explain the past trajectory of global mean temperature or predict its future. Climate models are efforts to unify all knowledge about the atmosphere/ocean/cryosphere system. The basic equations and boundary conditions are put in; elaborate circulations with many close resemblances to the actual circulation come out. While there are free parameters in the models, they are far fewer than the degrees of freedom in the resulting dynamics. The fidelity of the models thus represents a real and striking success.

However, the sensitivity of the models (the simple relationship between greenhouse forcing and temperature) has only a handful of degrees of freedom. The tuning of the models has been informal. It is conceivable that the models have been inadvertently tuned to cluster about the sensitivity that other arguments indicate, but that's not necessarily a bad thing. The sensitivity is very likely in the neighborhood of 2.5 - 3 C / doubling. Perhaps in the absence of other evidence the spread would have been a bit larger, but it's difficult to see how it could have been dramatically different. Surely, no amount of further modeling is likely to say anything different, and any pretense that obtaining the global greenhouse sensitivity is the purpose of the effort has always been an oversimplification which should long ago have been abandoned.
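To be concrete about what "sensitivity" means here: the standard simplified expression for CO2 forcing is logarithmic in concentration, and the sensitivity just scales it. A few lines of Python make the arithmetic explicit; note that the 3 C per doubling is the assumed input, not something this calculation demonstrates:

```python
import math

def co2_forcing(c, c0=280.0):
    """Radiative forcing (W/m^2) from CO2, simplified logarithmic fit."""
    return 5.35 * math.log(c / c0)

def equilibrium_warming(c, sensitivity_per_doubling=3.0, c0=280.0):
    """Equilibrium warming (C), assuming a fixed sensitivity per doubling."""
    f2x = co2_forcing(2 * c0, c0)          # forcing for one doubling, ~3.7 W/m^2
    return sensitivity_per_doubling * co2_forcing(c, c0) / f2x

print(round(co2_forcing(560), 2))          # one doubling: 3.71 W/m^2
print(round(equilibrium_warming(560), 1))  # by construction: 3.0 C
```

The point is how few degrees of freedom this relationship has compared with the circulation itself.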

Most effort at present is involved in making models more complex. This is in response to the intense desire for a closed climate/carbon system, as well as to address other geochemical and ecological questions. It is my opinion that these efforts are ill-advised in the extreme; they will have very limited intellectual or predictive value because they are vastly underconstrained. What's more, they will add a vast range of tunable parameters. The new Earth System Models stand a good chance of becoming what current GCMs are accused of being, i.e., models which can be tuned to yield any desired result.

Models serve many purposes, in research, training and public communication. The lack of a model that is easily understood, modified and run is increasingly unnecessary and inappropriate. Such a model would be dramatically simpler than existing codes, and possibly somewhat lower in simulated days per floating point operation. Developing it would be diametrically opposite to contemporary trends. In addition to opening the field to investigation by amateurs, it would resolve some important questions in the course of its development. Specifically, in seeking the simplest coupled GCM, one identifies which phenomena are actually important under present day circumstances.

We should also seek to create in this context a model which is applicable to other observable and imaginable planets, thereby facilitating investigations into the theory of geophysical fluid dynamics, and allowing for the widest possible range of algorithms.

Efforts to increase model fidelity by increasing resolution are compatible with the approach of radical simplification. In fact, investigation of specific phenomenology is compatible as well. The objective should be a model that is not only actually readable and actually read, but reliably modifiable and actually modified, testable and tested, validatable and validated.

Efforts like PRISM and ESMF are well-intended but fail to move in the right direction. Contemporary software development techniques must be imported from the private sector.

The resource base for this could be as small as twenty person-years, say five developers, a manager and one support staff over three years. I doubt it could be a hobby, though the first step could be a hobbyist effort: I'd like to see a clean, readable implementation of the MIT "cubed sphere" grid to kick things off.

I'd be happy if someone else took this on. If you want me involved it requires some close variant of Python. It also requires some support, by which I mean money. I am not the sort of coder who can write scientific code all day and volunteer scientific code all night. That all grumbled, you actually really need this thing.

Yesterday's SciPy talk on Cython was very encouraging, by the way. Alternatively, I understand there are Python bindings to PETSc (the NSF proposal stresses that approach), but that might not serve the purpose of maximum accessibility.


Bryan said...

You had to figure I couldn't ignore this.

Perhaps you're wanting to run before you can walk? I'm confident that one good scientist/developer working for six months, starting with a simple Fortran code as "advice", could build a pretty useful Python/numpy-based model. It wouldn't be a GCM. But you could do useful things with it, and it would be a step along the road. I think that's what's missing: Mozart didn't spring fully formed into existence. I bet he started with a few bars ...

(But why start with a cubed sphere, why not start with something understandable :-)
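To give a sense of scale for such a stepping stone: a zero-dimensional energy balance model fits in a dozen lines of plain Python. The parameter values below are round textbook numbers, not tuned to anything:

```python
# Zero-dimensional energy balance: C dT/dt = S(1 - albedo)/4 - eps*sigma*T^4
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0          # solar constant, W m^-2
ALBEDO = 0.3         # planetary albedo
EPS = 0.61           # effective emissivity (crude stand-in for the greenhouse effect)
HEAT_CAP = 4.0e8     # effective heat capacity, J m^-2 K^-1 (roughly 100 m of ocean)

def step(T, dt=86400.0):
    """Advance temperature one time step with forward Euler."""
    net_flux = S0 * (1 - ALBEDO) / 4.0 - EPS * SIGMA * T**4
    return T + dt * net_flux / HEAT_CAP

T = 255.0                  # start at the bare no-greenhouse temperature
for _ in range(365 * 50):  # integrate 50 years of daily steps
    T = step(T)
print(round(T, 1))         # settles near the observed ~288 K
```

Not a GCM, as Bryan says, but already something a student can read end to end and perturb.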

SomeBeans said...

I've recently been dabbling in photorealistic computer graphics, and there exists something of this sort in that field.

The pbrt renderer (http://www.pbrt.org/) is written in the literate programming style and is accompanied by a rather nice book. It's principally designed as a teaching tool for undergraduate / graduate students, but it has been used by other researchers as a basis for further research. It's written to be easily extendable by the use of plugins.

Universities and organisations involved in photorealistic rendering research each seem to maintain their own base rendering system, which individuals or groups modify according to their own research needs. It seems to me that this has strong parallels with the way GCM development works.

ac said...

I'm thinking out loud here: maybe the simplest climate model is not coupled, and not 3D - maybe it's along the lines of a lattice gas model.

Can you define simplicity a bit further? I've got: a) simplicity of software design, encompassing literate source code, modularity and 'good practice'; b) simplicity of the numerical solver (the simplest thing might just be to farm this out to pre-existing components, but last time I looked there weren't any easily findable mature finite-difference or spectral n-dimensional PDE solvers); c) simplicity of the model physics; and d) the simplicity of a coarsely resolved model.

The resolution of the model interests me, because phenomenological models tend only to be valid across limited length and time scales. I'm fast leaving territory I know something about for territory where I think I might have an idea, but isn't it the case that the sub-grid parameterisations of atmospheric models are one such example of a length-scale (and time-scale) contingent phenomenological model?

So this leads to two approaches to simplicity: on the one hand a ruthless coarse-graining that reduces the number and complexity of sub-grid physics, and on the other hand, increasing resolution to reduce the need for sub grid parameterisations in the first place so that all you need is continuum mechanics.
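On point (b), the simplest finite-difference building block is itself short enough to read in full. Here's a sketch of explicit (FTCS) 1-D diffusion on a periodic domain, with arbitrary illustrative parameters:

```python
import numpy as np

def diffuse(u, kappa, dx, dt, steps):
    """Explicit (FTCS) finite-difference solver for du/dt = kappa * d2u/dx2.

    Stable only when kappa * dt / dx**2 <= 0.5.
    """
    u = u.copy()
    for _ in range(steps):
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2  # periodic Laplacian
        u += dt * kappa * lap
    return u

n = 64
x = np.linspace(0, 1, n, endpoint=False)
u0 = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)   # a top-hat initial condition
u = diffuse(u0, kappa=1e-3, dx=1.0 / n, dt=0.1, steps=500)
print(round(u.mean(), 4))   # the mean is conserved by the periodic scheme: 0.2031
```

Anything beyond toys like this (implicit schemes, spherical geometry, adaptivity) is where the mature-component gap you mention starts to bite.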

David B. Benson said...

What is an MIT "cubed sphere" and why do you want it?

Michael Tobis said...

David, there was a time in my life that I discovered Alistair Adcroft with great relief. His existence means that I don't have to try to be Alistair Adcroft. It's a good thing. That's too hard for me.

He's the computational fluid dynamics guy behind the MIT GCM (I think); the cubed sphere grid underlies their implementation of atmosphere, ocean, and any other geophysical fluid (presumably another planet) that they choose to aim their code at.

There is an old joke that you can write Fortran in any language. Adcroft has rather tried to write C++ in Fortran. It's impenetrable but it works.

Anyway, the idea of the grid is to imagine a rectangular mesh forming the faces of a cube, and a light bulb at the exact center projecting those faces onto a sphere.

The resulting mesh has very nice properties at all three levels: it's cognitively easy, it's less mathematically intractable than a lat/lon grid (no special treatment needed at the poles), and it works well with multiple cores, distributed memory, cache and the like.
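The light-bulb picture translates almost directly into code: lay out evenly spaced points on a cube face and normalize each position vector to unit length (a gnomonic projection). A sketch for one face; the other five follow by symmetry:

```python
import numpy as np

def cubed_sphere_face(n):
    """Project an n x n mesh on the +z cube face onto the unit sphere.

    Each point (x, y, 1) on the face is pushed along the ray from the
    center -- the "light bulb" -- until it lands on the sphere.
    """
    s = np.linspace(-1.0, 1.0, n)
    x, y = np.meshgrid(s, s)
    z = np.ones_like(x)
    r = np.sqrt(x**2 + y**2 + z**2)
    return x / r, y / r, z / r   # unit vectors: points on the sphere

X, Y, Z = cubed_sphere_face(5)
# every projected point lies on the unit sphere
print(np.allclose(X**2 + Y**2 + Z**2, 1.0))  # True
```

A real implementation also has to handle the metric terms and the stitching along the twelve cube edges, which is where the actual work is.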

David B. Benson said...

Michael Tobis --- Thanks, now I understand.

I'll surmise that you will do at least as well with a mesh of spherical triangles. Indeed, you ought to be able to do better, because the spherical triangles can easily be subdivided in regions of high spatial/temporal derivatives.
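For what it's worth, the subdivision step is also only a few lines: split each edge at its midpoint, push the midpoints out to the sphere, and each triangle becomes four. A sketch, starting from one octant of the sphere:

```python
import numpy as np

def normalize(p):
    """Push a point out along its radius onto the unit sphere."""
    return p / np.linalg.norm(p)

def subdivide(tri):
    """Split one spherical triangle into four via projected edge midpoints."""
    a, b, c = tri
    ab = normalize((a + b) / 2)
    bc = normalize((b + c) / 2)
    ca = normalize((c + a) / 2)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

# one octant face of the sphere as the starting triangle
tri0 = tuple(np.array(v, dtype=float) for v in [(1, 0, 0), (0, 1, 0), (0, 0, 1)])
tris = [tri0]
for _ in range(3):   # three refinement levels
    tris = [child for parent in tris for child in subdivide(parent)]
print(len(tris))     # 4**3 = 64 triangles
```

Applying this selectively only where derivatives are large is exactly the adaptive refinement being suggested, though conservative fluxes across refinement boundaries take real care.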

Marion Delgado said...

You should REALLY correspond with John Mashey at some point, Michael.

That's more or less what he's about right now, or at least one of his hats.