Tuesday, August 2, 2011

Computational Prediction Lecture

Well, I went to the Austin Forum talk as promised, "a presentation by J. Tinsley Oden on predictive computational science. In this talk, Oden traces the development of scientific thinking and philosophy that underlie predictivity", and about a third of it was tantalizingly close to being interesting. It really looked like it was leading up to some substantial insights, but then it sort of fizzled into a tired supercomputing pep talk.

First my favorite slide:

[image: slide with a quotation attributed to Eddington]

Yes. Although Oden allowed that this might have been slightly tongue-in-cheek on Eddington's part, it is really important to emphasize that purely empirical science is a bore; that the ability to collect numbers and plot them on graphs is a very long way from the pinnacle of science.

This was in the context of the computation enthusiast's claim that computation is a third pillar of science, joining observation and theory as a coequal. I think this is possible but still a long way off. Oden did talk frankly about some spectacular failures of modeling, though notably (if only implicitly) none of them were at the University of Texas.

Another refreshing point was his antidote to the grade-school model of the scientific method (and to the Popperian model, for which he had equally little use). He offered this view of What Science Is Like; I think it has a lot more merit than the gross oversimplifications most nonscientists are taught in grade school.

[image: slide diagramming Oden's view of how science actually proceeds]

Sorry, it's a bit blurry but I think you get the idea. (You can click on the images for a clearer view.)

Before indulging in twenty minutes of tedious cheerleading for TACC and ACES (our local supercomputing institutions) he offered the following four guideposts:

[image: slide listing the four guideposts]

Translation:
V & V = Verification and Validation = (respectively) a) is it the right system for the purpose and b) does the software actually model the system?
QofI: Quantity of Interest = The "answer", the quantity or quantities you want to predict
UQ = Uncertainty Quantification: estimated error statistics of the predicted quantity
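
To make UQ concrete for non-specialists, here is a minimal sketch of my own (nothing like this appeared in the talk): propagate made-up input uncertainties through a made-up toy model by Monte Carlo and report error statistics on the resulting QofI.

```python
# Toy illustration only: forward uncertainty propagation by Monte Carlo.
# The "model" (exponential decay) and the input distributions are invented
# for the sake of the example, not taken from the talk.
import numpy as np

rng = np.random.default_rng(0)

def model(k, c, t=10.0):
    """Stand-in physical model: exponential decay evaluated at time t."""
    return c * np.exp(-k * t)

# Uncertain inputs: each parameter is a probability distribution,
# not a single number.
n = 100_000
k = rng.normal(loc=0.10, scale=0.02, size=n)   # decay rate
c = rng.normal(loc=1.00, scale=0.05, size=n)   # initial amplitude

qoi = model(k, c)   # the Quantity of Interest, now a distribution
lo, hi = np.percentile(qoi, [2.5, 97.5])
print(f"QofI = {qoi.mean():.3f} +/- {qoi.std():.3f} "
      f"(95% interval: {lo:.3f} to {hi:.3f})")
```

Real predictive science replaces the invented pieces with calibrated distributions (which is where Bayes' theorem, below, comes in) and far more expensive models, but the bookkeeping is the same.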

This seemed to be a good place to launch into some substance but that's as far as he got.

Note this talk was not about climate models but about predictive models in general. He certainly didn't get into the differences between prediction types, where climate models present some unusual philosophical problems.

I left feeling that I had seen a very fine meal but had not actually tasted it.

To be fair, he actually did show Bayes' theorem and talked about it for a couple of minutes.
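
For anyone who wants the refresher, this is my gloss rather than his slide: in the model-calibration setting the theorem gives the updated (posterior) uncertainty in the model parameters θ after seeing data d,

```latex
% Bayes' theorem for model parameters \theta given observed data d:
%   posterior = likelihood x prior / evidence
\[
  \pi(\theta \mid d) = \frac{\pi(d \mid \theta)\,\pi(\theta)}{\pi(d)}
\]
```

i.e. posterior ∝ likelihood × prior, which is the machinery for turning observations into the parameter uncertainties that UQ then propagates forward.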

I really think there is some alternate universe where quantitative public talks are given to nonexperts who nevertheless can be expected to know fundamentals such as Bayes' theorem like their own phone number. Somewhere over the rainbow I suppose.

Update via rpenner at Stoat: Slides are at http://www.austinforum.org/presentations/oden.pdf

8 comments:

  1. "the ability to collect numbers and plot them on graphs is a very long way from the pinnacle of science."

    Eh oh, citing own thesis alert:

    "Koopman’s 1947 article, `Measurement without Theory’, is a scathing attack on what he sees as a work that is `unbendingly empiricist in outlook': collecting data and looking for trends with no thought given to underlying mechanism. (In the article, he is criticising work on business cycles.) Koopman compares the progress of economic analysis to that of studying planetary motion, from Tycho Brahe's exhaustive but theoryless collection of data, through Kepler's ideas about circular motion informing his empirical work, finally to Newton's formulation of `fundamental laws'. The discovery of `empirical regularities' he calls the `Kepler stage', and of `fundamental laws' the `Newton stage'. The one is distinguished from the other by being `at once more elementary and more general’."

    Tjalling C. Koopmans, “Measurement Without Theory,” The Review of Economics and Statistics 29, no. 3 (August 1947): p.161

  2. Did he mention at all whether they were going to participate in the PSAAP follow-on program? I'm guessing "of course", since it's a bucket of money on offer, but I'm just curious whether the national lab connection came up at all.

  3. There definitely was a PSAAP bullet somewhere, on an early slide, yes.

    You can count on this gang to know where the computation money is, reliably.

  4. John Mashey covers a lot of dirty details, and makes a case for climate models that convinces genuine skeptics.

    http://www.realclimate.org/index.php/archives/2008/09/simple-question-simple-answer-no/#comment-97878

    Unfortunately you will have to dig through the comments he highlights there, as I did, but you will be quite pleased with the quality of the exposition, as I was.

    If Mashey could be convinced to write a book, instead of making meta-comments on blogs, what a wonderful thing that would be.

  5. Mashey is out of touch; in fact, the establishment funding the work that this presentation touches tangentially is evidence against his "1. Speculation".

    What would he write a book on? Cloak and Dagger Wars on Science? He's said on this site he won't touch the linear algebra (I'm assuming that would extend to the PDEs we actually care about).

    Wake me up when he comes up with something more interesting than the skeptics who think gubment-funded scientists are an evil cabal.

  6. This will be amusing.

    "jstults", can you point to a single concrete assertion of Mr. Mashey that you can refute and demonstrate incorrect?

    Quoting "jstults":

    > '''...in fact, the establishment funding the work that this presentation touches tangentially is evidence against his "1. Speculation".'''

    The link given points to Mashey's attempt to categorize skeptics. Speaking of "evidence against" a personal categorization suggests confusion on your part. Like claiming "evidence against" how somebody lays out a sock drawer would suggest confusion on the part of the claimant.

  7. I think Mashey has more than a little bit of a point in his speculation, but I have found it to be more interesting than that. Everyone brings the expectations of their own discipline to climate science. Life science people tend to be the ones drawn into misleading discussions of statistical significance - they tend to be unfamiliar with the constraints of earth science. Fluid dynamicists tend to get hung up on predictability limits, but these are limits on predictions that climate scientists do not make. There's a common mistake among geologists who use spectroscopy that leads them to believe, incorrectly, that the greenhouse effect is already saturated. Economists simply don't have any idea of time scales and believe nothing is predictable beyond two or three years. Computational scientists are offended by fudge factors applied to narrow straits in ocean models, without having any alternative to propose. And so on.

    It's all very Dunning-Kruger, with every discipline carrying its own arrogance to our field, and it's yet another complication we face.

    A climate model has to have serviceable contact with a large number of disciplines. As Oden says, the right question is whether the model is suited to purpose, and that depends on the purpose. People familiar with other classes of question do in fact get confused about what it is we do and how.

  8. MT:

    > It's all very Dunning-Kruger...

    It is rational to *begin* with skepticism based on your understanding gained in a specific domain. It is only a cognitive failure if you refuse to take the second step.

    Happily, in the examples Mashey gives, his batting average is respectable for communicating that climate models didn't successfully predict the disruption of the last few decades by wild luck, and that there is a reasonable expectation the models can be the basis for rational action over the next few decades.

