"Our greatest responsibility is to be good ancestors."

-Jonas Salk

Saturday, June 12, 2010

Is Climate Modeling Still Stuck?

Is climate modeling wandering down a garden path?

Is it time for a blank slate?

It seems timely to revive this conversation. Hat tips to Steve Easterbrook and MrPete for the reminder.

see also:
Staying Geeky

Excised Paragraphs

NCAR vs Google

14 comments:

Arthur said...

Michael, after our bit of confusion talking to Claes Johnson on his blog, do you have a pointer to a good review or overview somewhere of how convection and latent heat transport are treated in real climate models? I've reviewed Ramanathan and Coakley's 1-dimensional version, but that seems very ad hoc - I'm assuming what's done in modern global climate models is considerably better, but I've never looked into it myself.

The 3-dimensional nature of Earth's convection (with seasonal variation plus the diurnal mixing components) seems highly complex, and I for one would like to understand better both how it's modeled and where those models may depart from reality in one way or another. So maybe there's a way to start "from scratch" with something that addresses "just" those pieces somehow?
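For concreteness, the kind of ad hoc treatment I have in mind is the classic hard convective adjustment. Here's a minimal sketch in Python (my own toy illustration, not Ramanathan and Coakley's actual scheme; the critical lapse rate and the column are made up):

    import numpy as np

    def convective_adjustment(T, z, gamma_crit=6.5e-3):
        # Wherever the lapse rate -dT/dz exceeds gamma_crit (K/m), relax
        # that pair of levels toward the critical lapse rate while
        # conserving their mean temperature (a crude stand-in for
        # energy conservation).
        T = T.copy()
        for _ in range(500):                # sweep until (nearly) stable
            adjusted = False
            for k in range(len(z) - 1):
                dz = z[k + 1] - z[k]
                lapse = -(T[k + 1] - T[k]) / dz
                if lapse > gamma_crit + 1e-6:
                    mean = 0.5 * (T[k] + T[k + 1])
                    T[k] = mean + 0.5 * gamma_crit * dz
                    T[k + 1] = mean - 0.5 * gamma_crit * dz
                    adjusted = True
            if not adjusted:
                break
        return T

    z = np.linspace(0.0, 10e3, 21)          # toy column heights (m)
    T = 300.0 - 9.9e-3 * z                  # superadiabatic ~9.9 K/km profile
    print(convective_adjustment(T, z))

That's roughly the level of sophistication I mean by "ad hoc" - stability enforced by fiat rather than by resolving the convection itself.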

counters said...

Dr. Tobis, I think this thread might be a few days too early. There's a workshop on High-Resolution Global Modeling being held by the Center for Multiscale Modeling of Atmospheric Processes (CMMAP) this Tuesday through Friday in Fort Collins, CO, which should provide some great evidence to use in answering your question.

If you or anyone else is attending, there are a myriad of presentations which are extremely relevant to this question (see http://kiwi.atmos.colostate.edu/ghrcm/Presentation_table.pdf).

I have some personal opinions about what's going on in modeling and where I'd like to see it go and/or take it during my career, but I want to hold back until after this workshop so that I can reevaluate them before sharing.

Michael Tobis said...

Excellent!

As usual, given my peculiar fringe position, nobody thought to tell me or invite me, but I will be watching with great interest. If there is not another venue for non-participants to comment, let it be here.

Thank you, counters.

Anonymous said...

In two weeks, the 15th annual CCSM workshop is in Breckenridge, CO. It will also feature the release of CESM, the Community Earth System Model.

Michael Tobis said...

I almost went to Breckenridge. I didn't for two reasons:

1) SciPy is in Austin for the first time at the exact same instant. It would be insane for me not to go to that.

2) I am not functional at 9000 feet. I really got a lot out of the CCSM meeting in Santa Fe. NCAR is being obnoxious by running this at 9000 feet; it's a small climb for Boulderites, but everybody else spends a good fraction of the time dizzy and air-headed at best, if not (like me) actually sick.

I also detest large ski resorts (and other big private resorts) both on ethical and on aesthetic grounds.

For what it's worth, I do think the way CCSM is headed is wrongheaded, though I actually am part of the community that tries to improve it.

Steve L said...

From the original thread, it looks like the issues of interest can be broken into three components:
1. Is/was climate modeling stuck?
2. If so, why is/was it stuck?
3. What are the best options for unsticking it?

Michael Tobis said...

Arthur, I wrote a review of this topic as a grad student, so I understand the issues but am behind the times on the state of the art.

As I said at Claes's place, there really isn't a satisfactory way to do this in the context of modeling hypothetical (as opposed to observed) climates.

Claes may underestimate the difficulties in climate modeling, but these guys might well be the ones to provide an amateur effort with a useful high-level dynamical core. I hope they do so in such a way that others can use it.

Steve L said...

Tonight I'm going to read a preprint of a potentially-relevant example: Modelling the future hydroclimatology of the lower Fraser River and its impacts on the spawning migration survival (Hague et al). James Miller (Rutgers) is one of the co-authors, and one of his research interests is downscaling GCMs to forecast impacts of climate change on large river systems.

From the abstract, the paper tries to forecast the increase by 2100 in the number of days per year that Fraser River temperatures will exceed given physiological thresholds. So there are a few aspects worth noting here: the forecasts are spatially restricted (the Fraser Basin lies within three grid cells 3x4 degrees each); the forecasts are temporally restricted (providing information on daily variability); and the forecasts are focused on specific times of year (when particular stocks are migrating).
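The headline statistic is at least easy to state, even if producing credible daily temperature series is not. A toy sketch of "days per year above a physiological threshold" (every number here is invented for illustration; none of it is from Hague et al.):

    import numpy as np

    rng = np.random.default_rng(0)
    threshold_C = 19.0                  # hypothetical threshold, not the paper's

    for year in range(2000, 2005):
        # stand-in daily summer river temperatures (Jun-Sep, 122 days)
        daily_T = 16.0 + 3.0 * rng.standard_normal(122)
        exceedance_days = int(np.sum(daily_T > threshold_C))
        print(year, exceedance_days)

The hard part, of course, is getting defensible daily temperatures out of three coarse grid cells in the first place.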

He will be giving a talk at SFU (Burnaby, British Columbia), 12:30-13:30 in TASC rm 8219. The talk will focus on aspects of the modeling and how this approach differs from using more detailed hydrological models driven by global model output.

counters said...

In "Staying Geeky," you comment that, "Most scientists, especially climate scientists, know next to nothing about what is actually happening in computing. Most computer people, even academics, know next to nothing about what climate people consider standard practice."

Commenting on that first point: there's something far more worrisome than the fact that many climate scientists aren't the most up-to-date on computing technologies. Unfortunately, most of the fellow students I've worked with are falling into the same pattern as they emerge as atmospheric/climate scientists.

Too many people are complacent about the technologies currently in use in the field. Consider visualization, for example. Virtually everyone I've met uses some combination of NCL, IDL, and GrADS for visualizing meteorological data. Of course, there's nothing wrong with sticking with an established, well-supported technology. But there are newer, better alternatives out there. For instance, part of my research at CMMAP this summer involves developing a visualization package for displaying our model output, which is based on a geodesic grid. You can indeed do simple contouring on this grid with the old-school languages. But since I'm a Pythonista, I adopted MayaVi so that I can build a more powerful visualization package which a) will be freely available to all end-users (not dependent on having an IDL license) and b) can easily be improved upon in the future by an actual computer scientist, rather than an atmospheric/CS hybrid.
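To give a flavor of what I mean, here's a minimal sketch (not our actual CMMAP package) of plotting a scalar field on an unstructured spherical mesh with Mayavi's mlab, using a bare icosahedron as a stand-in for a geodesic grid:

    import numpy as np
    from mayavi import mlab      # import path for recent Mayavi releases

    # The twelve vertices of an icosahedron, projected onto the unit sphere.
    phi = (1.0 + np.sqrt(5.0)) / 2.0
    verts = np.array([
        [-1.0,  phi, 0.0], [1.0,  phi, 0.0], [-1.0, -phi, 0.0], [1.0, -phi, 0.0],
        [0.0, -1.0,  phi], [0.0, 1.0,  phi], [0.0, -1.0, -phi], [0.0, 1.0, -phi],
        [ phi, 0.0, -1.0], [ phi, 0.0, 1.0], [-phi, 0.0, -1.0], [-phi, 0.0, 1.0],
    ])
    verts /= np.linalg.norm(verts[0])

    # The twenty triangular faces, as indices into verts.
    faces = np.array([
        [0, 11, 5], [0, 5, 1], [0, 1, 7], [0, 7, 10], [0, 10, 11],
        [1, 5, 9], [5, 11, 4], [11, 10, 2], [10, 7, 6], [7, 1, 8],
        [3, 9, 4], [3, 4, 2], [3, 2, 6], [3, 6, 8], [3, 8, 9],
        [4, 9, 5], [2, 4, 11], [6, 2, 10], [8, 6, 7], [9, 8, 1],
    ])

    scalars = verts[:, 2]        # dummy field: just the z-coordinate
    mlab.triangular_mesh(verts[:, 0], verts[:, 1], verts[:, 2],
                         faces, scalars=scalars)
    mlab.show()

The nice thing is that triangular_mesh takes node coordinates and a connectivity array directly, which is exactly the shape of geodesic-grid output - no regridding to lat-lon just to look at the data.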

It's a bit distressing that the AMS curriculum doesn't strongly recommend at least an introductory CS course for meteorologists. A simple course in CS (say, data structures, OO, or graph theory) would help our students develop analytical and problem-solving skills, and would introduce them to the newest technologies out there, encouraging them to explore and bring new ideas back to our field.

Until we start bringing in a new generation of students who advocate newer computing technologies and techniques, I'm not too sure how far we can advance the field of modeling - regardless of the advances being made in numerical methods and dynamics.


By the way, Dr. Tobis - at the 2011 AMS conference, there's going to be a "Special Symposium on Advances in Modeling and Analysis Using Python" (http://www.ametsoc.org/meet/annual/call.html). I think it's a brilliant opportunity to really advocate for more computing awareness in the field.

Michael Tobis said...

Yes, Johnny Lin has already invited me to that session. I expect there will be some preaching to the choir, alas.

Much as I am a fanatical Pythonista myself, I think it's a bit problematic and pedestrian to name a specific language for such a session. If somebody were doing something clever in Ruby or Perl or even Eiffel or a DSL, should they go to a different session?

But maybe I'll go and talk up my ensemble control system; it should be released by then.

G-Man said...

There are good reasons why climate scientists use tools like NCL, IDL, GrADS, etc. - not just familiarity. Many have painstakingly developed nontrivial analysis packages with those tools that do far more than just draw contour plots.

One of the issues with CS folks is that they (stereotypically) develop what they believe to be gee-whiz stuff that never really gets to the hardcore production level. Sure, it looks neat, but it requires rewriting large software packages to accommodate the idiosyncrasies of the tool, and by the time that's done, the CS folks have dropped it and are on to the next Neat Thing. That's a lot of wasted time and effort.

I don't mean to be cynical, but having some experience as a software engineer in the field of climate modeling, I've seen a great many Neat Things come and then go almost as quickly.

Lastly, climate scientists aren't doing what they do to be experts in CS. That's what we SEs are for. Now, if we can only get the money to hire more SEs to lighten our load, that would be dandy. We may even pick up some really great skills along the way.

Michael Tobis said...

Division of labor is a key issue.

Both our primitive organizational structures and our attachment to Fortran, which applies primitive concepts of computing to advanced architectures, enforce a flat hierarchy wherein scientists have half-baked ideas, earnest grad students try to code them up, we are left to glue it all together, funding agencies conspire to inflict inappropriate coding standards, and the whole mess is cobbled together with duct tape in time for the IPCC runs.

Commercial software, much of which most people would say is far less important than what we are doing, has far more complex institutional organization and the organizations are in many cases more rigorous and productive.

We should start over.

G-Man said...

I agree, Michael, about how things work. However, funding agencies pay modeling centers (at least US ones) to do science, not software development. Likewise, the scientists want to do science, not explore interesting CS stuff. More than once I have heard "Oh, that's just coding," or "You're just a programmer," as if the code and the person who knows how to make it work, and work well, are minor considerations next to the Science. There's a mindset in place, at least as I've experienced it, that makes big changes very difficult. As per usual, the technology is trivially easy - the other stuff (the policies, the politics, the personalities, the funding) is very, very hard.

counters said...

Might be time to revive this conversation. The presentations from the Global High-Resolution Modeling Conference are available here, and there should be a host of great presentations coming down the pipeline from last week's CCSM meeting in Breckenridge. I wanted to wait until the presentations were available rather than comment based on the notes I scribbled during the workshop, since I'm still a bit of a greenhorn when it comes to the science here and I don't want to misrepresent some of the incredible research that was presented.

With regard to the original question of "Is climate modeling stuck," I think there's both a yes and a no answer. Clearly, there is great ingenuity being demonstrated in developing ways to model the atmosphere (and climate). The physics and the numerics aren't the problem. On the other hand, I think the computer science is the problem. It's not that people aren't trying - there are great strides being made in visualization, IO, and model formulation. But it's clearly not keeping pace with the needs of the modeling community. It's late, so I'll wait until the conversation restarts to really throw my 2 cents into the fray, but I strongly feel that if a renaissance in modeling is coming down the pipeline, it'll be from computer scientists lending their expertise, not atmospheric scientists.