Thursday, December 27, 2007

Why is Climate Modeling Stuck?

Why is climate modeling stuck?

There is no obvious reason why we can't do better, and yet the promised progress in this direction isn't appearing anywhere near as fast as the policy sector needs it.

Specifically, is it possible to provide useful regional prognostics of the consequences of global change? That's what most people want us to do and what most people think we are up to. Whether that is what we want to do or not, I think some of us should rise to the occasion.

I would like to make the case that I can help do something about this. It's a personal ambition to actually influence the architecture of a significant climate modeling effort before I retire. I think my skill set is unusual and strong in this regard, but unfortunately my track record is less so. It's not that I haven't accomplished anything as a coder, a software architect, an EE, or a manager, actually. It's just that academia treats my time in industry as tantamount to "unemployment" and vice versa.

At the least, though, I can try to start a conversation. Are we stuck? If so, why? What could be done with a radical rethinking of the approach?

I have an essay on one view of the problem on another blog of mine. ... I think both climate dynamics and software engineering issues are germane, and I'd welcome any informed discussion on it. There seem to be enough people from each camp lending me an ear occasionally that we might be able to make some progress.

Update: Well, since I've failed to move the discussion over there, I'll move the article here. Thanks for any feedback.




I believe that progress in climate modeling has been relatively limited since the first successes in linking atmosphere and ocean models without flux corrections. (That's about a decade now, long enough to start being cause for concern.) One idea is that tighter codesign of components such as atmosphere and ocean models from the start would help, and there's something to be said for that, but I don't think that's the core issue.

I suggest that there is a deeper issue rooted in presumptions about workflow. The relationship between the computer science community and the application science community is key; it is ill-understood, and consequently the field is underproductive.

The relationship between the software development practitioners and the domain scientists is misconstrued by both sides, and both are limited by past experience. Success in such fields as weather modeling and tide prediction provides a context which inappropriately dominates thinking, planning and execution.

Operational codes are the wrong model because scientists do not modify operational codes. Commercial codes are also the wrong model because bankers, CFOs and COOs do not modify operational codes. The primary purpose of scientific codes as opposed to operational codes is to enable science, that is, free experimentation and testing of hypotheses.

Modifiability by non-expert programmers should be, but sadly is not, treated as a crucial design constraint. The application scientist is an expert on physics, perhaps on certain branches of mathematics such as statistics and dynamics, but is typically a journeyman programmer. In general the scientist does not find the abstractions of computer science intrinsically interesting and considers the program to be an expensive and balky laboratory tool.

Being presented with codes that are not designed for modification greatly limits scientific productivity. Some scientists have enormous energy for the task (or the assistance of relatively unambitious and unmarketable Fortran-ready assistants) and take it on with panache, but the sad fact is that they have little idea of what to do or how to do it. This is hardly their fault; they are modifying opaque and unwelcoming bodies of code. Under the daunting circumstances these modifications have the flavor of "one-offs": scripts intended to perform a single calculation, treated as done more or less when the result "looks reasonable". The key abstractions of computer science and even its key goals are ignored, just as if you were writing a five-liner to, say, flatten a directory tree with some systematic renaming. "Hmm, looks right. OK, next issue."
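
To make the analogy concrete, here is the sort of throwaway five-liner I have in mind (a sketch; the directory names are invented):

```python
# One-off of the kind scientists write all day: flatten a tree of
# output files into one directory, renaming as we go. Eyeballed once,
# never tested again. "Hmm, looks right. OK, next issue."
import os, shutil

if not os.path.isdir("flat"):
    os.mkdir("flat")
for root, dirs, files in os.walk("runs"):
    for name in files:
        newname = root.replace(os.sep, "_") + "_" + name
        shutil.copy(os.path.join(root, name), os.path.join("flat", newname))
```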

Thus, while scientific coding has much to learn from the commercial sector, the key use case is rather atypical. The crux is providing an abstraction layer useful to the journeyman programmer, while providing all the verification, validation, replicability, version control and process management the user needs, whether the user knows it or not. As these services are discovered and understood, the value of these abstractions will be revealed, and the power of the entire enterprise will resume its forward progress.

It's my opinion that Python provides not only a platform for this strategy but also an example of it. When a novice Python programmer invokes "a = b + c", a surprisingly large number of things potentially happen. An arithmetic addition is commonly but not inevitably among the consequences and the intentions. The additional machinery is not in the way of the young novice counting apples but is available to the programmer extending the addition operator to support user defined classes.
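
A minimal sketch of the machinery I mean (the apple-counting class is invented for illustration):

```python
# "a = b + c" is dispatched through the __add__ protocol, so the same
# spelling serves the novice's integers and the expert's custom types.
class Apples(object):
    def __init__(self, count):
        self.count = count
    def __add__(self, other):
        return Apples(self.count + other.count)

b, c = Apples(2), Apples(3)
a = b + c            # same syntax as 2 + 3 ...
print a.count        # ... but the behavior belongs to the class
```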

Consider why Matlab is so widely preferred over the much more elegant and powerful Mathematica platform by application scientists. This is because the scientists are not interested in abstractions in their own right; they are interested in the systems they study. Software is seen as a tool to investigate the systems and not as a topic of intrinsic interest. Matlab is (arguably incorrectly) perceived as better than Mathematica because it exposes only abstractions that map naturally onto the application scientist's worldview.

Alas, the application scientist's worldview is largely (see Marshall McLuhan) formed by the tools with which the scientist is most familiar. The key to progress is the Pythonic way, which is to provide great abstraction power without having it get in the way. Scientists learn mathematical abstractions as necessary to understand the phenomena of interest. Computer science properly construed is a branch of mathematics (and not a branch of trade-school mechanics thankyouverymuch) and scientists will take to the more relevant of its abstractions as they become available and their utility becomes clear.

Maybe starting from a blank slate we can get moving again toward a system that can actually make useful regional climate prognoses. It is time we took on the serious task of determining the extent to which such a thing is possible. I also think the strategies I have hinted at here have broad applicability in other sciences.

I am trying to work through enough details of how to extend this Python mojo to scientific programming to make a credible proposal. I think I have enough to work with, but I'll have to treat the details as a trade secret for now. Meanwhile I would welcome comments.

59 comments:

  1. I'll start with a question.

    Is it stuck? That is, in what way is it stuck?

    Clearly there is some limit to reductionism beyond which simply throwing more computer resources at a problem does not help that much. New scientific/technical ideas are required, and these often have little directly to do with computer science or software engineering issues. An example is multi-scale modeling, which has recently captured the interest of some materials scientists and mechanical engineers.

    ReplyDelete
  2. "...anywhere near as fast as the policy sector needs it."

    What does the policy sector need, how fast does it need it, and why didn't it need it in the past?

    I suggest your point may be rephrased more correctly as "the promised progress hasn't appeared as fast as the scientists claimed it would when they wrote their funding proposals". Move along now, nothing to see here... :-)

    ReplyDelete
  3. James, well said, but did we believe it when we claimed it, and if so, why have we failed?

    Snark aside it's an interesting question.

    ReplyDelete
  4. I think to be honest we might have genuinely believed it, not realising how challenging it would be. There are still some naifs who think (or at least claim) it is just a matter of getting the computer power to do cloud-resolving simulations...

    A few other random ideas: the whole IPCC project is founded on global climate change; there is little impetus for regional stuff (this is changing a bit, now that the global scale is well and truly flogged to death).

    Also, I think you may be being harsh in saying little progress has been made. Steady incremental improvements (as expected from a fairly mature field) would also be a defensible description IMO. Demonstrating a significant change in large-scale temperature past and future (ie the existing IPCC consensus) is so much easier than accurately predicting regional changes in precipitation patterns (what is needed for regional impacts) that a bit of a hold-up is only to be expected at this stage.

    This is not to say I disagree with much of your main thrust, I'm just tossing out some ideas...

    ReplyDelete
  5. Is it a mature field, or is it stuck? That's one way to frame the question. If it were mature, wouldn't the various modeling centers have converged better?

    Another way to frame the question is this: is it fundamentally infeasible to predict regional climate transients, and if so, why?

    Basically, I look at it from an optimal use of information perspective. (You can call it Bayesian but in honor of Norbert Wiener and despite inevitable misunderstandings I continue to call it the "cybernetic" perspective.)

    Let's generously assume for purposes of argument that governance and economics were reasonably rational in approaching the allocation of resources to climate change vs. everything else, and to adaptation vs. mitigation within climate change.

    In this view, our obligation as climate professionals pretty much reduces to what information we can bring to bear on that question.

    I also think (as William pointed out in his farewell message on RC) that it's clear that what we can contribute is secondary to the progress that is needed in the economic and political sectors, which had damned well better hurry it up in my opinion.

    Still, the question remains whether we can do substantially better quickly enough to matter, and if not, what useful work the field ought to be doing and how much it is worth.

    Note that Joe Romm suggests that the IPCC has done its work and should be shut down, if I recall correctly. Presumably he would argue for a drastic scaling back of physical climatology work as well.

    Now I consider Joe Romm a sort of hit-or-miss proposition himself, and I think he's missed the mark this time pretty badly. That said it's a question that needs answering.

    Are we back to being a pure science with little value add outside (ironically enough) mineral prospecting? (It's a little-emphasized fact that petroleum companies are the main commercial users of paleoclimate models.) Or is there some larger practical importance to our work?

    My idea is to approach the proof that we have reached a limit by presuming the contrary and looking for a contradiction. I don't think we have put the limitations of climate prognosis to any serious test, and we won't be able to until the software design methodology progresses beyond a roughly 1982 vintage.

    ReplyDelete
  6. i hope we're not stuck!
    maybe it is simply that the rate of progress in any science decreases over time, after some "qualitative bounds"? the question is then, will sufficient progress be made before ongoing climate change answers the questions...

    ReplyDelete
  7. David, I agree in principle with your suggestion.

    First of all, the multiplicity of scales is indeed the key to the problem. On the other hand engineers have advantages that natural scientists don't have, so it's not obvious what will or won't carry over.

    Indeed this is the case I make for myself. I'll never be the climatologist Ray Pierrehumbert is, nor the software developer Ian Bicking is, nor a statistician to compare with Donald Childers, but I've had the privilege of learning from all of them. It's the dilettante in the room who is in the best position to make the unexpected connections.

    I disagree, though, when you say computer science has little to do with it. Information is the stuff that science is made of.

    ReplyDelete
  8. I don't think any radical rethinking is really required; just a shift from our current focus on computational speed to a focus on ease of code development. So many times I have been fighting with a minor problem in Fortran that I can't pin down, and I've longed for a well designed Object to take away my pain. It usually takes me 2 months to get a model code that I am not familiar with running, and 3 days for the actual model run. I'd gladly take 6 days to run the code if it meant it only took a month to get the code into a usable form. It would also open model design to a much wider audience of scientists who avoid it now because of the difficulty of getting something to compile. If we did this, I think that would go a long way toward answering whether we are stuck or just up against a wall of code complexity.
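
    Something like this hypothetical interface is what I'm wishing for (every name here is invented; it just hides the namelist-and-Makefile grubbing behind an object):

    ```python
    # Hypothetical "well designed Object": configuration in one place,
    # the Fortran namelist written for you, the run launched for you.
    import subprocess

    class Model(object):
        def __init__(self, executable):
            self.executable = executable
            self.params = {}
        def set(self, **kwargs):
            self.params.update(kwargs)
        def write_namelist(self, path="namelist.input"):
            f = open(path, "w")
            f.write("&run_params\n")
            for key, value in self.params.items():
                f.write("  %s = %r\n" % (key, value))
            f.write("/\n")
            f.close()
        def run(self):
            self.write_namelist()
            return subprocess.call([self.executable])

    model = Model("./climate_model")    # invented executable name
    model.set(timestep=1800, resolution="T42")
    model.run()
    ```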

    ReplyDelete
  9. Jordan, yup. Exactly.

    Not sure it's sufficient but it is necessary.

    ReplyDelete
  10. I suppose that depends on one's perception of just what computer science is. One part which might be helpful is a better understanding of parallel system structure and how this might affect both compiler and application code development. I doubt it is the major player. Appropriate means of treating multi-scale modeling seem to me much more important.

    After reading the previous comments, I certainly encourage a move away from any dialect of Fortran to a functional language (Python will do well enough). This would free up time now spent hunting for obscure bugs.

    ReplyDelete
  11. Michael,

    Regional forecasts can also be viewed as an ambitious extension of the indisputably mature field of weather forecasting. People are pushing the boundary in various ways but it's a tough job.

    I don't recall Joe Romm's comment but have been known to utter similar sentiments myself, to general horror in my institute which basically exists (in large part) to serve the IPCC. Assuming a similar situation exists elsewhere, the IPCC will not go away any time soon...turkeys do not vote for Christmas.

    Jordan,

    All of the major "flagship" modelling efforts take months to run on some of the largest computers in the world. It's dubious (IMO) that this is the best way to use resources, but anyone using slower-but-friendlier coding methods would find themselves well behind in the "arms race" to higher resolution and complexity. Scientists actually don't cost that much...

    ReplyDelete
  12. All my programming experience has been in business programming (except for a little Fortran a long time ago), but my view may add value:

    I'm a big fan of Kuhn, and my first reaction is that the field is set for a "Kuhnian revolution". That is, you may have taken the current general approach as far as it can go and you need to go back to square one and design a completely different kind of model.

    Another reaction, agreeing with Jordan, is that you'd do well to build your whole assembly in an object-oriented language, using a full object-oriented design methodology. Not only are the objects reusable, once verified, but the simple (NOT!) act of analyzing what you're doing well enough to define object interfaces can teach you a lot about the system you're trying to design. Whatever specific functions or processes are available in other languages can be replicated as object methods once, then used forever.

    You mention learning from the commercial sector (in the other blog), but I can say from experience that there's much in the commercial sector you don't want to learn, or copy. Rebuilding a large climate model is a major project, and a high percentage of major projects in the commercial sector are failures, even when papered over.

    ReplyDelete
  13. James,

    The flagship efforts are, I think, different from what we are talking about here. They certainly aren't the testbeds for new parameterizations and process studies. But even then, the benefits of fast code above all else are always overestimated. Say an easier-to-read code has a 50% performance hit. How much of a resolution hit does that correspond to if you want to do a run of similar duration? Say you reduce the resolution in all directions by a factor of X and increase the time step by a factor of X as well. To make up for the speed hit, X^4 = 2, so X = 1.1892. So if your grid spacing is 10 km, you only have to increase it to 12 km to make up for the speed hit. That's a pretty minor change.
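
    Spelling that arithmetic out (assuming cost scales with the three space dimensions plus time, and a CFL-type constraint ties the time step to the grid spacing):

    ```latex
    \mathrm{cost} \propto \left(\frac{1}{\Delta x}\right)^{3} \frac{1}{\Delta t}
      \propto \Delta x^{-4} \quad (\text{since } \Delta t \propto \Delta x)
    % absorbing a factor-of-2 slowdown by coarsening each dimension by X:
    X^{4} = 2 \;\Rightarrow\; X = 2^{1/4} \approx 1.1892,
      \qquad 10\,\mathrm{km} \times 1.1892 \approx 12\,\mathrm{km}
    ```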

    Furthermore, as model resolutions get higher and higher, the model runtime is increasingly dominated by large matrix ops like the continuity solver and the radiation scheme. These parts of the model can be tightly coded by programming experts and sealed off inside objects, never to be seen by the average modeler, so I don't see why there has to be a major performance hit at all. Certainly, as model resolution increases, the performance hit becomes less and less of an issue, but the time you lose because you can't read the code is only going to increase as the codes become more convoluted.

    ReplyDelete
  14. Jordan and I seem cut from the same cloth on these matters. I don't think the speed hit is what prevents the sort of progress we are talking about.

    However, I am also asking: have we reached a plateau, and if so, is this the final plateau?

    ICE says no, no plateau as yet. James says there is little likelihood of any major further progress.

    Jordan seems to agree with me that there is a plateau but we have no basis as yet for saying it is the last one. However, we will have to try something a little bit different to put it to the test.

    ReplyDelete
  15. I'll add that easier-to-read-and-write code is much more likely to be correct.

    Speed issues can be handled as others have described, but there are two other ways: (1) use a language with a good foreign function interface to call stable arithmetic packages; (2) improve the compiler's ability to produce fast code.
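
    For instance, a sketch of point (1) using f2py, the Fortran wrapper that ships with numpy (the subroutine here is invented):

    ```python
    # Given a trusted Fortran routine in solvers.f90:
    #   subroutine smooth(x, n)
    #     integer, intent(in) :: n
    #     real(8), intent(inout) :: x(n)
    #     ...
    #   end subroutine
    # one shell command builds a Python wrapper:
    #   f2py -c -m solvers solvers.f90
    import numpy
    import solvers                 # the module f2py just built

    x = numpy.random.rand(100)
    solvers.smooth(x)              # stable Fortran arithmetic, friendly driver
    ```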

    With regard to the second point, compiler technology is now so mature that most (popular) languages have compilers that generate code that runs about as fast as any other.

    The other place that language choice makes a difference is in the ability to prove properties of the algorithms as written. (There seems to be little work in this direction for numerical applications in the literature I follow. The main issues have to do with distributed and parallel computing, operating system stuff.)

    ReplyDelete
  16. David, yes. (The whole proposal is gradually leaking out here.) Automatic reasoning of various kinds is greatly facilitated by designing a formal language that is readable by humans.

    It turns out that the requirements for automatic reasoning about code are not that dissimilar from requirements about cognitive accessibility of code.

    ReplyDelete
  17. I question whether the Python threads are adequate for programming a multi-core machine.

    ReplyDelete
  18. The multi-core problem is a thorny one, and Python's threading limitations may sink it in the end, but it doesn't faze me much because I don't intend to rely on Python at runtime.

    Indeed, I think something like this idea may be the best way to handle multi-cores.
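
    A toy sketch of what "not at runtime" could mean: Python as a code generator, writing the hot loop rather than executing it (the kernel and the OpenMP pragma are just for illustration):

    ```python
    # Python at build time: emit a C kernel, parallelized however the
    # target machine likes; the scientist's spec never changes.
    def generate_axpy(name="axpy"):
        return '''
    void %s(int n, double a, double *x, double *y) {
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            y[i] += a * x[i];
    }
    ''' % name

    open("kernels.c", "w").write(generate_axpy())
    ```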

    ReplyDelete
  19. Sorry, but I can't avoid a cynical response when someone tells me that all my problems will be solved if only I were to learn Yet Another Programming Language.

    As a collaborator on the GENIE project (mentioned on the linked post), I don't want to sound too negative, but after the code had been managed under CVS for a few years, we were told by the computer scientists that SVN was the way forward... SVN being cutting edge enough that it isn't even on our supercomputer, we had to install it locally on a desktop computer, download the GENIE code and transfer it over. The run scripts are perpetually morphing; there was certainly python at some point, and now I hear it is all wrapped up in XML somehow (haven't dared to look). Since a large part of our effort is actually taken up with interfacing this code with our own ensemble-generation stuff, this is a major pain in the arse that brings no benefit whatsoever. Meanwhile bugs that I reported several years ago in the code itself remain unfixed, cos there's no money in actually doing the geoscience, it's all invested in e-science.

    When jules grumbled about this (specifically the latest XML thing) on the GENIE mailing list, she was threatened with being thrown off the list and told that "old dogs need to learn new tricks"!

    There seems to be the pretence that all this will settle down once people have worked out the best way to do it. But the New Best Thing is only Best for a matter of months before the next New Best Thing comes along, and we all have to waste significant effort merely keeping our heads above water. Meanwhile I hope you'll excuse me if I stick with fortran (IDL for simple toy experiments and graphics). It may not be perfect, but at least I don't have to have the manual open beside me as I write each line. I even use some f90 intrinsics now!

    FWIW I've been programming computers since I was 12 (back in the days when a home PC actually needed programming rather than just game-playing), and my maths degree involved a large chunk of computer science (OK, theoretical), so I'm hardly at the computer-phobe end of the spectrum.

    ReplyDelete
  20. James, it is with some trepidation that I address you in a future year. I fully understand your skepticism.

    I maintain that the problems you are describing are attributable to bad management, or more specifically, management as if it doesn't matter. The specific technical decisions may be more sensible than trendy, but the thrashing is a typical sign of a mismanaged enterprise.

    As for the value of this or that form of expression, it is interesting to consider the extent to which the progress of mathematics has been tied up with the progress of its notational forms. Paul Graham explains it quite well.

    ReplyDelete
  21. Thinking more clearheadedly about James & Jules' complaints:

    1) Moving from CVS to something that actually works (maintains coherent data structures and allows refactoring of directory trees) is necessary pain for any significant ongoing project. SVN is the most painless of the transitions as it is the most similar to CVS in normal usage. So I can't agree with this particular complaint.

    2) XML is not intended for direct modification by humans. If your usage requires direct manipulation of XML, your use case has been neglected. If that use case is ensemble management, that's a concern. I think it's likely, though, that a solution involving a proper language with a proper XML library makes much more sense than editing XML by hand, and that is probably (and preferably) the expectation.

    3) On the other hand, failure to address bug reports is not a good sign in a domain with such a low developer-to-user ratio as climate modeling. There are many other signs of bad management practice in academia and labs, in science, and in particular in climate science.

    XML and OOP and whatever other buzzword is current can always be misapplied. The greater the level of abstraction, the greater the potential for misapplication.

    ===

    The idea that you need to be expert in the entire tool chain that produced your result is misguided. It leads to very hard scaling limits in what can be achieved.

    It's also silly. I am willing to bet that your understanding of how a compiler is put together is even vaguer than my own, James, yet you unhesitatingly use them.

    Your own use case (similar to ours here, after all) is atypical.

    If the dynamicists of the world are being asked to learn too much computing technique, though, then what is happening is exactly the opposite of what I am suggesting.

    As for yourselves, if you actually need to touch some XML you should invest a little time in building a tool to touch it for you. That, after all, is the whole point. Try ElementTree. That may require setting aside a little time to Dive Into Python.
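
    For example (a sketch; the XML layout is invented, but the calls are standard ElementTree):

    ```python
    # Change the timestep in every run of a hypothetical ensemble
    # description without hand-editing a line of XML.
    import xml.etree.ElementTree as ET

    tree = ET.parse("ensemble.xml")
    for run in tree.getroot().findall("run"):
        run.set("timestep", "1800")
    tree.write("ensemble.xml")
    ```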

    ReplyDelete
  22. Michael, I'm diving in a little at the end here, but as someone who actually has a fair degree of influence over the future directions of climate modelling (at least in my part of the world), there might be something useful I can add.

    First off, I think your metric for whether climate modelling is progressing (robust regional projections) is incomplete/flawed. I agree that this is something that is greatly desired outside modelling groups, but it is certainly not our primary (or even tertiary) goal in development efforts. Some groups may be more focussed on that, but modelling is a far wider enterprise. Frankly, projections as a whole are not even the primary focus.

    Second, looking at the development of earth system models over the last five years reveals enormous advances - taking completely separate research directions in different communities and integrating them with atmospheric and oceanic science. I'm speaking of atmospheric chemistry, aerosol formation and transport, the modelling of important paleo-tracers, dynamic vegetation, carbon cycles etc. This has taken big steps in model design and lots of old dogs learning new tricks. The upside is that much more science is possible, new feedbacks and interactions can be explored etc. There is still a long way to go (ice sheets for instance), but you can't deny the vast increase in functionality in models over this period.

    Third, as you've alluded to, there are lots of disadvantages to the current dependence on Fortran; however the cost of switching to any new language is huge and would be debilitating for any modelling center. Changes do happen, but the ones that stick are the ones that do the least damage to the team's competencies.

    Finally, James alluded to the cheapness of scientists. I disagree. The bottlenecks we have on development are finding scientists to put things together and test the results. Actual computation is not the issue (our resolution can always be expanded to fill any available space). But it really takes expertise to decide what the right balance is in including cloud and aerosol micro-physics for instance.

    I could regale you with real life examples from the introduction (and attempted introduction) of CVS, Makefiles, ESMF, Fortran90, python, C++, but every case has strengthened my opinion that evolution rather than revolution is the most productive way forward. This might only be true for medium sized govt/university partnerships with limited flexibility in hiring based in New York, but it might have some generality.

    ReplyDelete
  23. We have no support for the latest great new things frequently emerging from the UK escience guys, and yet Genie really wants to be a model used by the widest possible community. Well, of course we have Google, like everyone else. So every time something is changed I have to waste a lot of time learning how to handle the new thing. Thanks very much, but I don't want to put aside "a little time" just now to learn all about XML. I've got to run my model instead! As far as my science is concerned, the most important thing is that the way the model is run does not change. I don't actually care how "perfect" it is in the eyes of computer scientists.

    The other side-effect of all the fancy stuff is that the model code sinks ever deeper below the surface making it more and more difficult for a scientist to work out what is really going on in the code. I think this is because commercial computer software is made to protect the user from the inner workings of the code, so that any idiot can use it. Scientists, however, are not all idiots and require the opposite. They need super-easy access to the inner workings of the code.

    I envisage a hierarchical structure where you can view the basic flow of the model logic and progress deeper and deeper into the model structure, down to the level of individual lines of code as you require it. Some cool tool that does that might actually be useful...

    ReplyDelete
  24. Jules, re: XML, it depends on how much time you spend grubbing with XML code. I'd say if it's in excess of twenty productive hours over your lifetime you are better off learning Python and XML on those grounds alone.

    Your second paragraph is more germane. Indeed I agree so completely that I could have said it myself. I'll just repeat it:

    The other side-effect of all the fancy stuff is that the model code sinks ever deeper below the surface making it more and more difficult for a scientist to work out what is really going on in the code. I think this is because commercial computer software is made to protect the user from the inner workings of the code, so that any idiot can use it. Scientists, however, are not all idiots and require the opposite. They need super-easy access to the inner workings of the code.

    Anything that works in the opposite direction is not progress.

    It's a very interesting problem looked at that way, and that is the way I look at it.

    So I'm afraid we cannot have a productive disagreement because unfortunately I agree with you exactly. See e.g., the opening sentence of my paragraph 6 "Being presented with codes that are not designed for modification greatly limits scientific productivity."

    ReplyDelete
  25. Gavin, thanks for stopping by.

    You say:

    First off, I think your metric for whether climate modelling is progressing (robust regional projections) is incomplete/flawed. I agree that this is something that is greatly desired outside modelling groups, but it is certainly not our primary (or even tertiary) goal in development efforts.

    I am not suggesting that there is nothing of value to be gained from existing models. I am suggesting that there is a great deal of value to be added from much better models, and that this should be the clearly stated goal that dominates expenditures in this area.

    I am an engineer and I am above all interested in designing methods for managing the planet to avoid catastrophe. Powerful modeling methods seem to me very much of the essence.

    I agree that projections have not been the primary intellectual focus of the enterprise to date. If you actually look at what most modeling centers have been up to in the IPCC era, that is rather a peculiar fact.

    The key engineering question is whether projections are technically feasible. I know of no compelling argument that would indicate otherwise. That being the case, and given the level of interest in the matter, it is very striking how little progress is actually being made or in prospect at present.

    The upside is that much more science is possible, new feedbacks and interactions can be explored etc. There is still a long way to go (ice sheets for instance), but you can't deny the vast increase in functionality in models over this period.

    I find this direction very disconcerting, actually. You are introducing more degrees of freedom into a system, longer time scales (which weaken data constraints) and very limited methodology for expanding general knowledge about the behavior of coupled complex systems. I am not sure we know more when we add more phenomena.

    I think that what we need is a larger array of models, not a big huge model-sort-of-thing that claims to include all phenomena at the expense of any conceivable validation method. In order to have that, we need formal testing and tuning methods, and in order to do that we need higher levels of abstraction.

    Third, as you've alluded to, there are lots of disadvantages to the current dependence on Fortran, however the cost of switching to any new language is huge and would be debilitating for any modelling center. Changes do happen, but the ones that stick are the ones that do the least damage to the teams competencies.

    I have some ideas about how to do this incrementally.

    If there were a profit model commensurate with the actual value of the enterprise, you would see what you see in commerce. Organizations that cling to antiquated methodologies get blindsided quickly by upstart competitors.

    The bottlenecks we have on development are finding scientists to put things together and test the results. Actual computation is not the issue (our resolution can always be expanded to fill any available space). But it really takes expertise to decide what the right balance is in including cloud and aerosol micro-physics for instance.

    I propose doing away with the "scientists put things together" bottleneck.

    Scientists should be presented with an architecture where cloud physics, say, is the domain of concern. The code itself should practically disappear from their view. Scientists should not have to work to express their ideas; they should work on understanding the ramifications of their ideas.

    Current methodologies and platforms make many useful abstractions unavailable. I think it's reasonable for people who have not been exposed to modern commercial practice to underestimate the potential of better abstractions, but even so you will agree that there is some value there.

    I could regale you with real life examples from the introduction (and attempted introduction) of CVS, Makefiles, ESMF, Fortran90, python, C++, but every case has strengthened my opinion that evolution rather than revolution is the most productive way forward.

    I am an incrementalist but not a gradualist. I think we are in a hurry, though, and need to put a lot more thought into how and why we compute about the earth system.

    I would love to hear your opinions about ESMF in particular. I think ESMF is more of a problem than a solution.

    I think ESMF shares with the modeling centers the misapprehension that the code base is precious. By commercial standards the code base is small and well documented. It can be replaced.

    The model is not the code. The theory underlying the code can be re-expressed at a cost small compared to the cost of creating the code. The potential benefits are very large.

    ReplyDelete
  26. James, I understand and sympathize with the complaints that learning new computer languages and technologies imposes a significant learning burden on what is already a really, really hard job. And I agree that jumping on a new YAPL bandwagon is not a good idea, but I don't think that is what is being proposed here.

    I would like to see a widespread movement to object oriented programming. This is not a new, flash-in-the-pan technique. OOP is the idea every modern programming language is built around, because it is such a powerful tool for building large, modular computing infrastructures, which is exactly what we need for modeling.

    Fortran(77/90) also hobbles us in less obvious ways. Hiring and training new programmers is much harder when we all work in Fortran, because no one outside the science community uses it. My understanding is that the most widely used language in 1st year programming courses is Java, which is Object-oriented to the core. This also means that, comparatively speaking, little work goes into Fortran development. By maintaining our dependence on Fortran we lose out on all the advances in XML processing, database access, web connectivity, and so on that are built into modern languages.

    Michael advocates Python as a logical choice (as do I) but honestly I don't care what we go to as long as it moves forward. Java would be the best choice, since it is the most commonly used language today, except for the fact that the Java Virtual Machine would make the performance hit unacceptable. Fortran 2003 would even be OK since it has Objects, but there aren't any compilers for it that I know of.

    Finally, I'd like to add that one of the new realities of science in the last ten years is that, to some degree, we are all computer scientists now. Part of our job description now must be keeping up with modern computing. If you manage to make a career without learning these new coding techniques, power to you, but I suspect it will be increasingly difficult to do so as time passes. Computer infrastructure isn't going to settle down until years after Moore's Law stops working. Until then, we're all just hanging on to the exponential growth curve.

    ReplyDelete
  27. Now joining the fray quite late, I'd like to point out the factor that I think is most important. Michael hits the nail on the head in his last post when he suggests that a complete rewrite of the underlying code is not to be feared. What we want to provide is a mechanism that makes this easy enough and powerful enough to be useful to the climate community. If the climate modeler can write a new model for atmospheric chemistry in the language of mathematics (e.g. $\partial c/\partial t = -k\nabla^2 c - A\,e^{t/T}$) rather than in the language of some, say, fortran implementation of an explicit (in time), central (in space) finite difference algorithm, then that is true progress.
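
    To make that concrete, here is roughly the boilerplate the modeler would no longer write by hand: a 1D explicit-in-time, central-in-space discretization of that equation (a sketch, with made-up constants and fixed-value boundaries):

    ```python
    # Hand-coded version of  dc/dt = -k * laplacian(c) - A * exp(t/T)
    # -- exactly the sort of thing a mathematics-level spec should generate.
    import math

    k, A, T = 0.1, 0.5, 10.0          # made-up physical constants
    dx, dt, nx, nt = 1.0, 0.01, 50, 1000
    c = [0.0] * nx                    # initial condition, ends held at 0

    t = 0.0
    for step in range(nt):
        lap = [(c[i+1] - 2.0*c[i] + c[i-1]) / dx**2 for i in range(1, nx-1)]
        for i in range(1, nx-1):
            c[i] += dt * (-k * lap[i-1] - A * math.exp(t / T))
        t += dt
    ```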

    ReplyDelete
  28. Jordan,

    I believe that further development of Fortran-like languages backward compatible with existing Fortran codes is not a winning proposition, nor is developing significant new code bases in such a language.

    The "compatibility mode" in F03 will help rescue some of the F9* codes from oblivion. Trying to bolt OOP onto a basis that was designed in 1955 is not going to help matters, though.

    http://www.fortranstatement.com/cgi-bin/petition.pl

    Jordan and AK,

    Objects (as in object oriented computing) are not magical, at least no more magical than their designer is a magician.

    Knowing what the right objects are is not a trivial matter. It took me quite some time to start to come up with an object model for continuum modeling that seemed to me to have much advantage over procedural code.
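
    To give the flavor of the kind of object model I mean (a sketch; all names invented):

    ```python
    # Fields that know their own grid, so model code reads like math.
    class Field(object):
        def __init__(self, data, dx):
            self.data, self.dx = list(data), dx
        def __add__(self, other):
            return Field([a + b for a, b in zip(self.data, other.data)], self.dx)
        def __rmul__(self, s):      # scalar * Field
            return Field([s * a for a in self.data], self.dx)
        def laplacian(self):
            d, dx2 = self.data, self.dx ** 2
            inner = [(d[i+1] - 2.0*d[i] + d[i-1]) / dx2
                     for i in range(1, len(d) - 1)]
            return Field([0.0] + inner + [0.0], self.dx)

    # The scientist's line is now the equation itself:
    c = Field([0.0] * 50, 1.0)
    dcdt = 0.1 * c.laplacian()      # reads as: kappa * laplacian(c)
    ```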

    Scientists are skeptical by nature. They won't buy things based on trendiness, and as you can see, they have been burned in the past when they let others' enthusiasms sway them.

    We need to explain why OOP matters. Raw assertion is insufficient and leads to suspicion that you don't have a clear idea yourself.

    All:

    It is interesting that science is essentially the last bastion of procedural coding. Any ideas why this is the case?

    ReplyDelete
  29. Your concern about earth system modelling is misplaced. The reason why it helps constrain models is that model results become testable over a larger range of parameters, i.e. you don't just need to get temperature right in a convective parameterisation, you need to get ozone/radon/d18O right as well. And with each extra tracer, you get an orthogonal look at the underlying process. It also allows you access to the paleo-climate record, all of which is encoded in proxies whose physics can be coded more completely within the GCM than in any inverse model. All of this stuff is of course optional, so if you don't like it, you don't have to use it. A hierarchy of model functionality is obviously very useful.

    As to the code base, I think you enormously underestimate how long it takes existing people to adapt to new coding environments. Even if I could get a python (or C++ or Java) version of our model tomorrow, it would take years before the users/programmers would be as confident editing code as they are now. And in the meantime most new development would cease. The only way you could justify that would be if for years afterwards you could show that productivity was vastly improved.

    It's not enough to just show 'improvements' because we are already making improvements in productivity through incremental approaches - which indeed focus on increasing abstraction, but within the existing frameworks.

    ReplyDelete
  30. Gavin, your first point is very interesting. I hadn't thought of it that way. My first reaction is "sure, sometimes, but not always". I wonder how, for instance, a closed carbon cycle fits into that argument.

    On the second point, I think the emphasis is wrong. It may be hard for some people to adapt, but the problem is important enough that the general need for a solution trumps the comfort of all existing participants.

    Some people won't see any advantages to higher level descriptions, and so any change will be a pure cost for them. We can try to minimize that cost, but it won't have any accompanying benefits.

    I am much more interested in very quickly expanding the capacities of people who are interested in higher abstractions.

    I think there's an argument for a large increase in resources to do that.

    Meanwhile the question is whether the endeavor as currently constituted is now in a range of diminishing returns per unit effort.

    I'm not alone in suspecting that this might be the case. I might well be wrong; I'm sort of a fringe player after all. That may be to my advantage in some ways but it's certainly not so in all ways.

    So, Gavin, I really appreciate your patience with these challenges. I'm entirely willing to consider that I may be confused. I'd appreciate your answers to these questions specifically:

    1) Do you think return on investment in climate modeling is increasing, static, or declining? How would you evaluate this?

    2) Do you think that reliable regional projection of climate change is possible in principle? If so, what is needed to make this practicable?

    3) Are there circumstances where legibility of code is comparably important to performance, and if so, how should we support those use cases?

    ReplyDelete
  31. Gavin, I agree that moving to a new codebase, especially for scientist-programmers who don't enjoy coding for its own sake, is a huge, multi-year process. The problem is that I don't believe we really have a choice. Seen the new edition of Numerical Recipes? It's only in C++; in a forum post, one of the authors says that website hits for the Fortran parts of NR are less than 10% of the total. Fortran is rapidly being replaced by better tools everywhere except climate science. I think we're going to have to change one day, and the longer we wait, the harder it will be in the end.

    ReplyDelete
  32. Gavin, part of the approach that I've taken in the past, and, relatedly, that we're going to try to take here, will mean that climate modelers write very little code when they add a new model or portion of a model to a "code". I'm thinking line counts for new additions in the low double digits or less. So, in some sense, who cares what language it's in? Our users will hopefully be writing mostly things that look like mathematics, not code.

    Michael, science is the last bastion of procedural programming because scientists think that the lines they write to translate the mathematics describing the discretization of their underlying problem (i.e., forming matrices, updating vectors, etc.) are the most important lines in their programs. This couldn't be further from the truth. It's practically the entire rest of the infrastructure that's truly important. Of course, it's very hard to appreciate this until you've written a few abstraction layers, made a code that solves more than one basically fixed set of PDEs, etc.

    Furthermore, the funding agencies don't help since they generally do not support software development but instead support science that happens to have software developed for it. Nor do they require a cogent software engineering plan in any proposal that does happen to propose some software development, BTW. NSF is at least starting to require Open Source licensing in some of their solicitations. They should require that every project have a test plan, do code reviews at annual reviews of big projects, require software to be on SourceForge or the like, etc.

    (OK, that's some of the most arrogant stuff I've said in public on the Internet in awhile. I think I'm going quit before I dig the hole any deeper!)

    ReplyDelete
  33. To answer your questions...

    > 1) Do you think return on investment in climate modeling is increasing, static, or declining? How would you evaluate this?

    Increasing. And this is mostly because of two factors - i) the increasing range of problems and potential targets you can get to with an ESM, and ii) the accessibility of the IPCC AR4/CMIP3 database of model runs.

    In other areas - cloud or convective parameterisations for instance - progress is not so obvious. Those are the areas that might benefit the most from completely novel approaches - machine learning, super-parameterisations, etc.

    > 2) Do you think that reliable regional projection of climate change is possible in principle? If so, what is needed to make this practicable?

    The spread of regional projections even in an ensemble set from the same model is very large. Given that this is for the most part related to weather, it is very difficult to see how uncertainties will come down. The best way forward is not necessarily ever-increasing resolution, but more focus on validating the sensitivity of large-scale climate modes (ENSO/NAO/THC etc.) which have the biggest impact on regional climate in the real world. That needs paleo as well as better ocean models/air-sea fluxes etc.


    > 3) Are there circumstances where legibility of code is comparably important to performance, and if so, how should we support those use cases?

    Legibility of code is highly tied to maintainability and is much more important than performance in the long run. It takes us much longer to write and test code than it does to do production runs. But legibility is also a function of how fluent one is in a language.

    For instance, I'm a pretty experienced programmer, but was completely unable to debug some oldish python code that had been used in another software project recently. That isn't a problem I have with fortran code, despite the fact it is less 'legible'.

    I would be leaving a false impression if I didn't mention that there are clearly some parts of our code whose accumulated illegibility is a hindrance to further development. However, rewriting it in C++ would not help. Clear thinking is better than clearer code.

    Jordan, the fact that libraries (like NR) are increasingly written in C++ is irrelevant. Compilers can link to all sorts of libraries. However, within our model I think we use precisely one NR routine (tridiag). Low-level stuff like this is very stable and not part of what I would consider 'climate modelling' at all. The fun stuff is the physics, not the solvers.

    ReplyDelete
  34. I would argue it is computing which is stuck, not climate modeling. I take issue with the "Newer Is Better" fallacy. Yes, FORTRAN is being replaced with newer languages like C++ (and Java and Python and whatever) but those tools are not "better", if they are really even different. As far as C++ versus FORTRAN is concerned, they typically use the same back ends, same debuggers, same runtime libraries, and we've had tools to translate back and forth between the two for decades. Broaden the scope to also consider Python and C# or any of the OOP-ish usual suspects, but they all have the same imperative programming model, same paucity of built-in parallelism, same stovepipe legacy... it's all a cosmetic papering over of a 1950s first guess at how computers work.

    You can't even buy monoprocessor computers anymore -- that we even consider these languages on today's hardware shows how far from reality "software engineering" is.

    That's why I think computing is stuck, and is taking climate modeling (and innumerable other fields) down with it. Computer Science will change one funeral at a time.

    ReplyDelete
  35. Oh, there is stuff like Matlab which enables you to write and test stuff 10x faster, safer, and nicer, because you can achieve the same thing with one tenth the lines of code, compared to C++ for example... I have personal experience of this, from working with both simultaneously. It's quite different from, say, C. It's also easily readable, since ideas can be expressed more clearly at the higher level. The gist is not hidden in the details of having to initialize various helper variables, update counters or use yet another custom library.

    This discussion tends toward saying "C++ or JAVA is not much better than Fortran, so no language can be", when in reality there is a much wider spectrum of languages and paradigms.

    Forget C++ (or even JAVA) for research. You only use it when you know exactly what you're doing: for implementation. It's far too slow, inflexible and error-prone for research, experimentation and trying to develop models or algorithms. At least in the real business world. Maybe universities and NASA have endless hordes of researchers they can put to languish in code hell. :) You never know how long it will take to make a small code change in C++, because it works at such a low level, and it turns out you have to change stuff left and right too. It's very inflexible.

    Matlab is slow-running and high level, but it's easy, clean and effortless to read, write and update. And it comes with very good sensible consistent libraries and a nice dev environment where you can load, save, plot and analyze your data, algorithms and models extremely easily.

    But it costs and is closed source. Hence some other open alternative that would have similar good qualities would rock.

    ReplyDelete
  36. Everything mz says is quite true, but if that were all there was to the story, all I'd have to add would be Python, Python, Python.
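
    To translate mz's tenfold-fewer-lines point into Python terms (a sketch; numpy stands in for Matlab's arrays):

    ```python
    # High-level spelling on top, counter-and-loop spelling below;
    # both compute the same thing.
    import numpy

    a = numpy.random.rand(1000)
    b = numpy.random.rand(1000)

    c = a + 0.5 * b                 # one line, reads like the mathematics

    c2 = numpy.empty(1000)          # the same thing, C style
    for i in range(1000):
        c2[i] = a[i] + 0.5 * b[i]
    ```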

    In fact, Turing aside, I believe it is impossible to express the architecture of the NCAR coupled climate model in Matlab. This is for reasons relevant to the point made by my friend JM (who, I happen to know, is neither Mashey nor Manzi, for what that's worth), which is sort of the flip side to mine.

    JM wants the compiler to understand the system architecture. That's deeper magic than I am proposing, and I'm not even sure it's possible. I just want to hide it from the science guys and leave it as an endless series of one-offs for the numerical library guys.

    ReplyDelete
  37. (Michael always has the interesting threads going right when I'm most swamped.)

    There are two answers to Michael's initial question of "is climate modeling stuck?" Gavin's point is correct that for adding new processes, progress is going along as well as can be expected in our tight budget, uh, climate. The next version of CCSM will have aerosol indirect effects, a carbon cycle and other things, but will probably be at the same resolution.

    Resolution, which you need for regional prediction, is what Michael is concerned about, and there I would agree that climate modeling is stuck. But the problem is not code but data. I'd say it's 70% data and 30% computer hardware/software. When you go down to 10 km, the parameterizations used currently stop working, and the high-resolution (space and time) data to derive new ones isn't there. (If you go really high, things like a deep convection parameterization aren't needed.)

    Also, if you want a global model with regional accuracy, you need high-res data sets of all the relevant variables with global coverage. Those are also lacking. Finally you need an assimilated initial condition for the slow parts. Again, not enough data.

    That being said, the software problem is real. Modifiability is a scientific code requirement that doesn't exist in commercial, user-drives-a-mouse software. I've been asking for something like what Bill is saying: write/see equations, get working code, yet still be able to debug. I don't know how to build it but would love to use it.

    ReplyDelete
  38. Rob, in some senses, I have it. It just doesn't solve climate problems very well. It's single-fluid (or single-material, really) at the moment, fully-implicit, doesn't have moving boundaries (so water and upper-atmosphere gravity waves would be troublesome), and probably lacks a few other things. But it is an unstructured finite element code that solves parabolic and elliptic nonlinear PDEs with algebraic and partial differential constraint equations in 3D in parallel at reasonably large scales, and you write the input in, basically, LaTeX. I gave it to a visiting groundwater modeler here at one point and she managed to add salt transport and Darcy flow to it in a few weeks with little trouble. If she'd let me do it, it would have taken an afternoon at most.

    It needs a little preconditioner work for the Krylov solvers at large scale, and more element types; boundary conditions need to be brought into the LaTeX-like language (right now they're quite general, but written in ad hoc C++); and it could really use adaptive mesh refinement. It could be made to do explicit problems pretty easily as well--something my partner in crime and I have discussed doing more than once.

    Did I mention that it comes with a set of steak knives?

    ReplyDelete
  39. Michael, I like this line of thinking. The modeling community sometimes dives headfirst into model expansion without asking whether the increased complexity and capacity will actually solve any problems.

    One of the challenges - problems may be a better word - of the shift towards earth system modelling is characterizing biological processes for which global data for validation does not exist. Complicating this is the fact that the modelling is largely done by computer-based physical scientists who may lack a finer appreciation of the real-world variability in biological processes.

    Here's an example. The availability of nitrogen is one of the key constraints on the carbon cycle. So, naturally, as carbon cycle models evolve, they should include the nitrogen cycle. There's a lot of interesting work happening in that area. Sometimes, I worry that the stress is on including more processes and capabilities rather than on including things that will improve predictive ability and accuracy. With nitrogen, the problem is that the transformation processes - nitrification, denitrification, etc. - are extremely variable in time and space, and little large-scale data exists. I don't think we should boast about describing nitrification in a global model when we really have no proper way to test it. Before adding any new process to a model, you need to step back and ask whether the added value of describing that process is clearly greater than the uncertainty introduced. It may be that a less complicated characterization of, say, the nitrogen cycle would actually be "better".

    ReplyDelete
  40. Following up mostly on Simon's comment:

    I think to some extent the problem is the desire to build a massive all-inclusive ESM rather than a family of ESM tools.

    ESMF is an effort to address this which I think is technically misguided in several ways.

    These problems are hard. The thing about doing things that are hard is that trying is not the same as succeeding.

    The thing about the academic sector (this doesn't apply to medicine or engineering, for reasons that have something to do with money) is that the measure of success is very vague. It is essential to be able to declare success, but actually succeeding is not strongly incentivized.

    There are also some traditions that have emerged in the relevant scientific communities that are a bit counterproductive and lead to this complexity-for-its-own-sake blind alley. Since my career is in a position where I have much to gain and little to lose, maybe I should be the one questioning the imperial raiments.

    Existing groups seem to be saying that there is no model architecture problem and/or that ESMF has it well in hand. I believe this is incorrect or at best incomplete.

    I am not alone in this belief, though I may be more motivated than most to say so.

    Criticizing the field fairly is somewhat frowned upon in the context of all the unfair criticisms it gets. I've been reluctant to do so publicly, but I haven't really been able to gain momentum for the conversation otherwise.

    The achievements to date are real and substantial. I just think the benefits of the old methods are nearing their end and a new approach is needed.

  42. Michael:

    I’m obviously coming very late to this party.

    As you know, I’m the least expert person on the topic of climate science on this blog, but on the other hand, I’ve been part of and led large teams of exceptional software engineers and mathematicians in building, testing and delivering complex, analytical software. I’ve built a fairly large applied AI software company. That said, I have a huge amount of respect for the differences between commercial development and what you guys are doing, as well as for the amazing work that has already been done in this area.

    Here are three quick reactions from a non-expert and well-wishing “outsider”, stated in declarative form even though they should be treated as discussion-starters, for what they’re worth:

    1. Somebody’s got to bite the bullet and convert from F77/90 to Java or something with good object orientation. At the most simplistic level, you will increase the relevant talent pool from which you can recruit by something like an order of magnitude. I get Gavin’s point that the fixed costs of this root canal are almost unimaginable, so I assume most groups couldn’t do it for the foreseeable future, but somebody has to show that it can be done. I read through what looked to be some of the key modules in a NASA model, and I have nothing but sympathy for the guys who have to maintain, validate and (worst of all, of course) extend this code. I can’t imagine that groups aren’t hitting the quicksand of a missing architectural foundation by this point. I also get the huge premium on algorithm execution speed, but as long as the framework has good OO, you can embed C++, Fortran or whatever processes within it when it is really critical -- see the sketch after this list. (Although my instinct is that you’ll be surprised by all of the unexploited algorithm optimization opportunities you uncover if you can clean up the code.) I think all of the “ancillary” technologies, such as version control, bug tracking, etc., are less critical technology choices, and success in these areas is driven more by management practice.

    2. Michael’s general point about abstraction seems essential to me. This applies at many levels. Ideally, creating some kind of scripting language, so that climate scientists can interact with developers in a high-bandwidth way while still maintaining a partial division of labor, would be incredibly valuable. I’ve found that managing this interdisciplinary process to get the benefits of focus while avoiding the problems of “siloization” is central to success for a large-scale, deeply domain-driven and algorithmic development process. Easier said than done.

    3. The last is somewhat broader, and may be off-topic. I think that clarity of objective, with some harsh real-world feedback, is essential. As an illustrative example only, imagine that you took forecasting global temperature sensitivity as the goal. This would involve something like a structured program of 1 – N year predictions made for each model version release, with an escrow process (probably within your version control system) for the code, execution scripts and so on as of the date of the version release. A QA / testing-type team (i.e., not the developers or scientists) would then execute the model at each of these 1 – N years and measure forecast error. Again, this is a hypothetical example – I hear Gavin’s point that this is not the real core objective given to the modeling teams. But the models themselves are so sprawling that I think it’s hard to see how you make rapid progress, ferret out errors and so forth without such feedback.
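
    Here is the kind of embedding I mean in point 1, as a minimal sketch -- Python rather than Java purely for brevity, and every name here (fast_kernels especially) is hypothetical; a real kernel would be compiled separately, e.g. with f2py:

        import numpy as np

        try:
            # Hypothetical compiled Fortran kernel (e.g. built with numpy.f2py).
            from fast_kernels import advect
            HAVE_FORTRAN = True
        except ImportError:
            HAVE_FORTRAN = False

        def advect_reference(q, u, dx, dt):
            """Reference implementation: first-order upwind advection, periodic."""
            flux = u * np.where(u > 0.0, q, np.roll(q, -1))
            return q - (dt / dx) * (flux - np.roll(flux, 1))

        class TracerAdvection:
            """One physical process behind one interface; the engine is swappable."""
            def __init__(self, dx, dt):
                self.dx, self.dt = dx, dt

            def step(self, q, u):
                if HAVE_FORTRAN:
                    return advect(q, u, self.dx, self.dt)  # compiled inner loop
                return advect_reference(q, u, self.dx, self.dt)

        # The science-facing code never needs to know which engine ran.
        x = np.linspace(0.0, 1.0, 100)
        stepper = TracerAdvection(dx=0.01, dt=0.004)
        q = stepper.step(np.exp(-((x - 0.3) ** 2) / 0.01), u=np.full(100, 1.0))

    The point is that the scientist edits the readable version and the wrapper, while the compiled path is a drop-in replacement validated against the reference.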

    I hope this is useful, and once again want to emphasize that I’m just putting ideas out there for you, and have a huge amount of respect for what you guys are doing.

    Best,
    Jim Manzi

  43. Jim, counting all the variations, we are talking on the order of a million lines of F90. My CS collaborators don't quite get, yet, that most of this has almost nothing to do with PDEs.

    A direct translation is possible, though not really fun. On the other hand, the pieces are small and well-documented. A migration path translating at the algorithmic level should yield an unambiguous formal representation in a few tens of thousands of lines.

    Regarding testing, again we can do much better, though outsiders seem a bit confused about what it is we can expect to achieve.

    I am very interested in Gavin's claim that intra-model variance is large compared to inter-model variance.

    If that's clearly so, I would say that the climate modeling enterprise is nearing some intrinsic limits, and that would be an argument against the sort of effort I'm advocating. My intuition based on the literature is that this is incorrect. It's sort of awkward, but if I can argue against John McCarthy I suppose I can raise questions about Gavin's assertions too.

    I'm interested in formalizing Gavin's hypothesis and testing it.
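
    As a zeroth cut, that formalization could be as simple as a one-way variance decomposition over an ensemble of runs. A toy sketch (all names, shapes and numbers hypothetical):

        import numpy as np

        def variance_decomposition(runs):
            """runs[m, r]: a scalar diagnostic (say, transient warming) for
            model m, realization r. Returns (intra-model, inter-model) variance."""
            intra = runs.var(axis=1, ddof=1).mean()  # average spread within a model
            inter = runs.mean(axis=1).var(ddof=1)    # spread among the model means
            # (A careful version would correct 'inter' for sampling noise.)
            return intra, inter

        # Toy ensemble: 5 models x 8 perturbed-initial-condition realizations.
        rng = np.random.default_rng(0)
        runs = (2.0 + 0.4 * rng.standard_normal((5, 1))
                    + 0.3 * rng.standard_normal((5, 8)))
        intra, inter = variance_decomposition(runs)
        print(f"intra-model {intra:.3f} vs inter-model {inter:.3f}")

    If intra-model spread dominates, Gavin is right and architecture matters less than I think; if inter-model spread dominates, structural choices are doing real work.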

  44. Michael:

    I almost certainly would not think a direct translation would be the way to go. I'd expect a clean-sheet-of-paper build of a GCM, using OO and exploiting what has been learned, in order to get a reasonably flexible architecture.

    I think that getting leverage from testing always requires clarity of mission in order to drive clarity of winning vs. losing. Without that, it becomes mostly just a search for trivial errors and a general pain in the ass.
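
    By "clarity of winning vs. losing" I mean something as blunt as a skill score against a naive baseline. A cartoon (hypothetical numbers throughout):

        def forecast_skill(predicted, observed, baseline):
            """Mean-squared-error skill score: 1 is perfect; <= 0 means the
            escrowed model version lost to the naive baseline."""
            err = sum((p - o) ** 2 for p, o in zip(predicted, observed))
            base = sum((b - o) ** 2 for b, o in zip(baseline, observed))
            return 1.0 - err / base

        observed = [0.42, 0.45, 0.51, 0.48]    # verification-period anomalies
        predicted = [0.40, 0.47, 0.49, 0.50]   # from the escrowed code and scripts
        baseline = [0.42] * 4                  # persistence forecast
        print(f"skill = {forecast_skill(predicted, observed, baseline):+.2f}")

    Crude, but it turns "testing" from a hunt for trivial errors into a scoreboard.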

  45. Several quick items...

    - IMO Java isn't the way to go. Its major selling point is "platform independence", which simply isn't so once you get to the nuts and bolts of feeding complex data between different modules on different platforms; a Java application has to be tested on every platform it's going to run on before you can depend on it to work completely right. The good thing about it is that the very syntax enforces good coding practices, especially data hiding. C++ allows just as much OOP, but doesn't enforce it. You need a language or development environment specifically designed for the sort of applications the models represent.

    - Question(s), semi-rhetorical: Suppose there were a real-life "tipping point" in the greenhouse effect over the Andean Cordillera, where a 10-15 ppm increase in CO2 (at some point) could produce a switch from El Niño as an occasional event to El Niño as the norm (with occasional reversions to a sub-La Niña state like the current norm). Would the current models be able to discover it? If they could, what resolution would be necessary? If such a "tipping point" showed up in the current models, how likely would it be to be an artifact? How much would improving the resolution reduce the chance of such artifacts?

    - If the current general approach to climate modeling "is nearing some intrinsic limits", then perhaps a search for new approaches would be productive. IMO the proper answer to the suggestion that the current modeling approach is running out of steam is not to demand what should replace it: the situation where the current paradigm is unsatisfactory but no new paradigm has been settled on is one stage of a classic Kuhnian revolution.

  46. Not that many people use it anymore, but I would think Ada would be ideal for this. International standard, designed for government use, (relatively) easy to read, and strict enforcement of "proper" programming practices.

  47. Gavin said: "Even if I could get a python (or C++ or Java) version of our model tomorrow, it would take years before the users/programmers would be as confident editing code as they are now."

    Jesuz Cripes. No wonder your models are so terrible. You've employed programmers who take years to move from Fortran to a C-like language!

  48. I'm always blown away by comments like this: "Forget C++ (or even JAVA) for research. You only use it when you know exactly what you're doing; for implementation. It's far too slow, inflexible and error-prone to write it just to research, experiment and try to develop models or algorithms. At least in the real business world."

    The fact is, there is absolutely no reason that C or C++ cannot be as fast as Fortran for a given solution method. If somebody is having problems, it probably means they are a crappy C or C++ programmer to begin with.

    At its best, C++ facilitates object-oriented program designs that generally have much greater flexibility, reliability and maintainability than is possible with a procedural program design.

    If you treat C++ as a "better" procedural language, it should perform roughly the same as Fortran. Because it facilitates tighter design and a more compact memory model (the elements of an object are stored together, rather than in parallel arrays as with F77), for some applications the C/C++ code can be an order of magnitude faster in execution than a traditional procedural F77 implementation.
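
    To illustrate the layout point (a toy in NumPy, purely because the two layouts are easy to see there; the field names are hypothetical):

        import numpy as np

        n = 1000

        # F77 style: parallel arrays, the fields of one grid cell scattered.
        temp = np.zeros(n)
        salt = np.zeros(n)
        press = np.zeros(n)

        # Record style: the fields of each cell stored contiguously.
        cells = np.zeros(n, dtype=[("temp", "f8"), ("salt", "f8"), ("press", "f8")])

        # Updating every field of cell 42 touches one contiguous record here...
        cells[42] = (273.15, 35.0, 101325.0)
        # ...and three widely separated memory locations here.
        temp[42], salt[42], press[42] = 273.15, 35.0, 101325.0

    (Which layout wins depends on the access pattern, of course; whole-field sweeps favor the parallel arrays, so measure before committing.)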

    But the point to note here is that any program structure or solution method is available in any language (you can write object-oriented code in Fortran or even assembler). The issue is how well a given language facilitates that type of solution.

    And that ties back into the question of program maintainability. For problems where an object-oriented solution is preferred (most of the time it's better, IMHO), C++ will give you a much more compact program, as well as probably better reliability and maintainability.

  49. Michael:

    I’m also very late to this party. But, having been involved in several fairly complex software projects in both the commercial and scientific arenas over something like 35 years (and now retired), I’m afraid both you and Jim are correct. Somebody is eventually going to have to do the root canal. When a software project, especially one as complex as climate modeling (i.e., thousands of algorithms and millions of lines of code), has been under constant development for decades without re-architecture, re-design and re-write, it eventually takes on the characteristics of the fabled Pillsbury Dough Boy: “you poke it in one place and it’s gonna pop out somewhere else.” If it hasn’t already, it will reach the point where even the slightest modification or extension takes weeks, or even months, to implement and debug. You can bet on that.

    I think it interesting that the roaring economy and the internet bubble of the mid-to-late 90’s were both caused almost exclusively by businesses that used the Y2K bug as an excuse to justify total re-architecture and re-design of their core data processing applications. Prior to that time, they were stuck with hundreds or even thousands of applications that could no longer be modified, even slightly. These included everything from payroll to forecasting models. The really successful businesses had at least one systems engineer looking at the problems involved well in advance.

    I just don’t see anything like the Y2K bug on the horizon for the climate modelers. Perhaps when both the scientists and programmers start spending more time worrying about how to implement a new function than about what function to implement, it will be time to visit the dentist. However, it would sure be nice to get a head start before this happens. A really top-notch systems designer, one with the proper experience, might be able to come up with a design that could be staged and implemented in pieces as new modules are added and old modules rewritten. This is usually more of a dream than actuality, but I have seen it work successfully on occasion. And at least this way it wouldn’t hurt as badly as a complete shutdown for rewrite.

    On the subject of languages: with experienced software developers, the language is almost incidental. The system requirements (i.e., function, performance, modifiability, usability, reliability, maintainability and system integrity), the system architecture (i.e., program structure, user interfaces, data structures, etc.), the program design (i.e., module functions, module interfaces, etc.) and the coding standards (i.e., module structure, optimum size, format and documentation) must first be defined. The language selected should fall out of the system requirements, not out of whatever buzzword is currently flying around the software engineering departments of business or academia. I would imagine the maintainability requirements alone would lead to an object-oriented design, followed by selection of a good structured, object-oriented language. But until the requirements are defined, this would be just a guess.

    I also agree with Jim on his point (#2) about creating a scripting language or pseudo-code for communication between the climate scientists and the software engineers (it’s a shame Ken Iverson’s APL is no longer around). I have worked on projects where the pseudo-code was included in each module’s header as part of the module documentation. That sure helped simplify maintenance.
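
    For instance, something like this (a hypothetical module, just to show the shape of the practice):

        def relax_to_equilibrium(current, equilibrium, tau, dt):
            """Relax a prognostic field toward its equilibrium value.

            Pseudo-code (agreed with the scientists, kept in the header so
            maintenance starts from the intent rather than the implementation):

                for each grid cell:
                    tendency = (equilibrium - current) / tau
                    current  = current + dt * tendency
            """
            return current + dt * (equilibrium - current) / tau

    When the pseudo-code and the body drift apart, that is itself a bug report.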

    Anyway, I thought I would just throw out a few ideas I found helpful in my past working life. And I can’t thank you all enough for how much work has been put into this and how far the science has come in such a short time.

    best,
    Joe Crawford

  50. The equivalent in climate modeling to the stimulus provided by the Y2K problem will be the gathering evidence that the globe is cooling.

    Back to the drawing board. Learn a few new tricks.

    I think I've never heard so loud
    The quiet message in a cloud.
    =======================

  51. Err, no, but thanks for playing.

    It strikes me y'all didn't get the Y2K story quite straight either.

  52. Dr Tobis,

    You wrote on another blog of 'systematic explorations of model space'. Do you believe that a researcher's ability to explore different theories or parameters of global warming is constrained by the current code? For example, is it possible (possible as in 'reasonably possible and not overly expensive in terms of programmer time, money or runtime') to reduce the model's sensitivity and boost another warming parameter? I think in particular of Palle's work on falling albedo, my own area of strictly amateur interest. Can one turn down the CO2 and turn up albedo warming in current models? Would this task be easier in your proposed revision?

    Julian Flood

  53. I think in particular of Palle's work on falling albedo, my own area of strictly amateur interest.

    ref please?

    Can one turn down the CO2 and turn up albedo warming in current models?

    I think the answer depends first of all on what the albedo forcing is, but more to the point you're heading toward: it depends on which data you look at. Non-greenhouse forcing is inconsistent with observed stratospheric cooling, I believe.

    Would this task be easier in your proposed revision?

    It would certainly be easier in terms of effort to try.

    My view is that we should make it easier to try a lot of things, but also easier to test them against observations, thereby narrowing rather than broadening the spread of realistic cases.
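
    Schematically, the workflow I have in mind is no more exotic than the following (a sketch only; run_model is a stand-in, and every name and number here is hypothetical):

        import itertools

        def run_model(co2_scale, albedo_trend):
            """Stand-in for an actual model run; returns a fake diagnostic."""
            return 0.8 * co2_scale - 15.0 * albedo_trend

        def misfit(diagnostic, observed=0.6):
            """Distance from an observed diagnostic (hypothetical value)."""
            return abs(diagnostic - observed)

        # Systematic sweep over a small corner of model space, scored against obs.
        co2_scales = [0.5, 0.75, 1.0]          # scaling on greenhouse forcing
        albedo_trends = [0.0, -0.01, -0.02]    # imposed decadal albedo trend
        ranked = sorted((misfit(run_model(c, a)), c, a)
                        for c, a in itertools.product(co2_scales, albedo_trends))
        for score, c, a in ranked[:3]:
            print(f"co2_scale={c:4.2f} albedo_trend={a:+.3f} misfit={score:.3f}")

    The hard part is not the loop, of course; it's making run_model cheap enough, and the observational tests sharp enough, that the loop means something.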

  54. pertinent?

    http://scienceblogs.com/stoat/2008/09/abrupt_climate_change_is_a_pot.php

  55. Interesting thread. Wish I'd seen it long ago.

    Michael (and Gavin)... I know the fear of moving a million lines of code to a new "base". The fear is worse than the reality, if done well.

    I've led two teams through large scale code base conversions of this sort.

    I disagree with what some here have suggested: redoing the entire architecture along with the underlying language. That is a recipe for pain and likely failure, because you're moving both to an unfamiliar infrastructure and to an untested design.

    Better:

    A) Convert to the new infrastructure (language, etc.) in as simple and straightforward a way as possible. There are ways to accomplish this with great reliability and without spending inordinate resources.

    B) Familiarize the team with the new environment. Much will look familiar, because the basic architecture and structure have not changed. Take as long as needed here to become fully productive.

    C) Re-architect the system in pieces. Since the system is already working, this too can be done reliably. Sure, it takes time but the risk is greatly reduced, and productivity remains high.

    As I said, I've led this process for two million-line-class systems in my career. It works... you've likely made use of the result. :)

  56. Thanks MrPete, there was some additional followup here and in links therein.

    I'm interested in reviving this discussion.

