Essentially the argument is that we know what we need to know from the GCMs, and should stop modeling and start mitigating. Tom's reply is that, despite what the denialists would have you believe, the actual expense of GCMs to date is trivial.
Actually, they could get massively better with an order of magnitude more spending and the importation and empowerment of real software management talent. I once tried to interest Google in taking this on. They stopped returning my calls, alas.
Vranes' argument is that this counts for nothing. The government shouldn't do it and Google shouldn't do it. Nobody should pay anything for GCMs, because the purpose of GCMs is to tell us we're in trouble, and they have already done their job.
Vranes is wrong. That is not, as some argue, because the GCMs are insufficient to tell us we're in trouble. Hell, we don't even need the GCMs to know we're in deep. As realclimate said a while back:
The main reason for concern about anthropogenic climate change is not that we can already see it (although we can). The main reason is twofold:

(1) Carbon dioxide and other greenhouse gases are increasing rapidly in the atmosphere due to human activity. This is a measured fact not even disputed by staunch “climate skeptics”.

(2) Any increase in carbon dioxide and other greenhouse gases will change the radiation balance of the Earth and increase surface temperatures. This is basic and undisputed physics that has been known for over a hundred years.

Vranes is wrong because it's time to get down to brass tacks on the adaptation side. It's time to start computing in earnest to support the adaptation question, which is pretty much what the Ramanathan NAS panel is saying.
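Point (2), incidentally, needs no GCM at all; the standard simplified forcing expression from Myhre et al. (1998) gets you most of the way on the back of an envelope. A minimal sketch (the concentrations are round numbers for illustration):

```python
import math

# Simplified expression for CO2 radiative forcing (Myhre et al. 1998):
# dF = 5.35 * ln(C / C0), in W/m^2.
C0 = 280.0  # pre-industrial CO2 concentration, ppm

for c in (380.0, 560.0):  # roughly present-day, and doubled CO2
    forcing = 5.35 * math.log(c / C0)
    print(f"CO2 at {c:.0f} ppm: forcing of about {forcing:.1f} W/m^2")
```

That roughly 3.7 W/m^2 for a doubling is the number everything else hangs off of; the models come in when you ask what the climate system does with it.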
Eli sensibly argues that adaptation without mitigation is insane. This is true. On the other hand, it's far too late to have a pure mitigation strategy. The main applied science role of the large climate models should be to inform adaptation.
I just chipped in at Rabett Run with my view that adaptation, mitigation, and an appreciation of nature (i.e., ecosystem connections) are all necessary.
Carry on modeling! (And please tell me more about your third paragraph.)
No doubt models will continue to improve, and to inform the debate. But it appears likely that such obvious empirical matters as the elimination (or not) of the Arctic ice pack each summer will beat the models to the punch.
I'm not sure that global modeling is the best way to go. IMHO downscaling is the more urgent issue. For example, the most important current questions include climate in the Arctic, Greenland, and the Antarctic Peninsula. OTOH, California is large enough to model and has relatively simple boundary conditions (I am being naive here): an ocean on one side, a wall of mountains on the other, and lots of nuts in the middle.
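(To make the downscaling point concrete, here is a toy sketch of the simplest approach, the "delta method": interpolate the coarse model anomaly onto a fine grid and add it to a high-resolution observed climatology. All numbers are invented, and a one-dimensional transect stands in for a real regional grid; real work would involve proper regridding and bias correction.)

```python
import numpy as np

# Coarse GCM grid points along a west-east California transect (deg
# longitude) and each point's projected warming (K). Invented numbers.
gcm_lon = np.array([-124.0, -122.0, -120.0, -118.0])
gcm_delta_t = np.array([1.8, 2.1, 2.6, 3.0])

# Fine grid with a present-day climatology (deg C) that resolves the
# coast/mountain contrast the coarse grid cannot. Also invented.
obs_lon = np.linspace(-124.0, -118.0, 31)
obs_climatology = 15.0 + 8.0 * np.exp(-(((obs_lon + 119.5) / 1.5) ** 2))

# Delta method: interpolate the coarse anomaly to the fine grid and
# add it to the observed climatology, preserving local detail.
delta_fine = np.interp(obs_lon, gcm_lon, gcm_delta_t)
projection = obs_climatology + delta_fine
print(projection.round(1))
```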
ReplyDelete"The main applied science role of the large climate models should be to inform adaptation."
It is legitimate to question to what extent the models can usefully inform that process - especially, to what extent 10 years (say) of model development will provide significant improvements on where we are now.
At a recent workshop I attended, someone made the point that models can hardly be expected to predict changes in phenomena that they do not simulate at all reasonably at present (the specific example was atmospheric blocking patterns over the UK). It is certainly not an argument that can be dismissed with a hand wave.
James, agreed, it is certainly a legitimate question. I would claim that it is worth pursuing even though success is not guaranteed.
I would also argue that there are reasons that progress of late has been limited, and that it is time for a fundamental re-examination of the computational strategy.
Perhaps we'll get a chance to talk this out at AGU. I believe there's a Bayesian statistical component to a sensible strategy, if there is one at all.
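To give a flavor of what I mean by that: a minimal sketch of one Bayesian ingredient, weighting an ensemble of models by the likelihood of an observed quantity under each member. Every number below is invented for illustration.

```python
import numpy as np

# Hypothetical ensemble: each member's climate sensitivity (K per CO2
# doubling) and its hindcast of some observable, e.g. a recent trend.
sensitivity = np.array([2.0, 2.5, 3.0, 3.5, 4.5])
hindcast = np.array([0.45, 0.55, 0.65, 0.75, 0.95])  # K/decade

obs, obs_sigma = 0.60, 0.10  # observed trend and its uncertainty

# Gaussian log-likelihood of the observation under each member, then
# posterior weights assuming a uniform prior over members.
log_like = -0.5 * ((hindcast - obs) / obs_sigma) ** 2
weights = np.exp(log_like - log_like.max())
weights /= weights.sum()

print("posterior weights:", weights.round(3))
print(f"posterior mean sensitivity: {(weights * sensitivity).sum():.2f} K")
```

The point isn't the arithmetic; it's that observational constraints can be folded into ensemble output in a principled way, which is the kind of thing an adaptation-oriented strategy would need.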