"Our greatest responsibility is to be good ancestors."

-Jonas Salk

Friday, November 12, 2010

Empiricism

A very nice piece by Dr. M. H. P. Ambaum appears in Skeptical Science. It explains tersely why the whole approach to climate statistics (detection/attribution) of the naysayer squad is wrongheaded, but the reader needs a little familiarity with statistical thinking. (I wonder if J. Curry can make heads or tales of it.)

This ties into my generic dislike of what I would call empiricism in climate science. Actually, of course, without empirical evidence you are not doing science, but rather pure math (or else economics!). The trouble comes when the empiricism is combined with a hypothesis that climate is stationary, which is implicit in how many of their analyses work. It's essentially begging the question. More to follow.
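As a toy illustration of the Ambaum point (all numbers invented, nothing resembling a real analysis): the same data, and hence the same p-value, can leave very different probabilities on the "no change" hypothesis depending on what alternative and what prior knowledge you bring to it.

# A toy version of Ambaum's point: a p-value is P(data | H0), not P(H0 | data).
# Getting from one to the other needs an explicit alternative and a prior.
# Everything below is invented for illustration.
from scipy import stats

se = 0.01                 # assumed standard error of an estimated trend (K/yr)
trend_hat = 0.022         # the estimate itself: about 2.2 sigma from zero

p_value = 2 * stats.norm.sf(abs(trend_hat) / se)
print(f"two-sided p-value against 'no trend': {p_value:.3f}")

# Likelihood of that estimate under two simple hypotheses:
# H0: true trend = 0,  H1: true trend = 0.02 K/yr
L0 = stats.norm.pdf(trend_hat, loc=0.00, scale=se)
L1 = stats.norm.pdf(trend_hat, loc=0.02, scale=se)

for prior_H0 in (0.5, 0.9, 0.99):
    post_H0 = L0 * prior_H0 / (L0 * prior_H0 + L1 * (1 - prior_H0))
    print(f"P(H0) = {prior_H0:.2f}  ->  P(H0 | data) = {post_H0:.2f}")

The same nominally "significant" result leaves anything from a few percent to most of the probability on "no trend", depending on the prior. That is the information a bare p-value cannot carry.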

19 comments:

Lou Grinzo said...

"(or else economics!)"

Hey! I resemble that remark.

Andy S said...

MT wrote: "I wonder if J. Curry can make heads or tales of it"

Indeed!

Steve L said...

Hopefully the comments over there don't diverge too much into pirates, etc. I write 'etc' because there is some philosophy in the comments that I might not understand. It seems to me that some readers are using the post to argue against deductive reasoning, and I don't think that was Dr Ambaum's intention. The data are 'in the warmist camp' -- it would be a shame if most people predisposed to hearing what the data say (meaning not deafened by their ideology) distrusted formal quantification.

Michael Tobis said...

Whoa. I used to be able to spell better than that.

Let's just say it was deliberate. Yeah, that's the ticket...

Michael Tobis said...

Steve L, the point is, in the absence of knowledge, data tells you almost nothing.

The frequentist position substitutes a null hypothesis for knowledge. But unlike in clinical trials, in climate questions zero effect is not in any sense privileged; we don't have any basis, for instance, for assuming that temperature is not changing.
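A minimal sketch of what I mean, on purely synthetic data (nobody's actual analysis): the test against a zero-trend null asks whether a hypothesis nobody had grounds to privilege can be rejected, while simply estimating the trend and its uncertainty reports what the data actually constrain.

# Contrast "is the trend significantly different from zero?" with estimating
# the trend and its uncertainty. Synthetic data, illustrative numbers only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(1980, 2011)
temps = 0.012 * (years - years[0]) + rng.normal(0.0, 0.2, size=years.size)

res = stats.linregress(years, temps)

# Null-hypothesis framing: test slope == 0
print(f"p-value against a zero-trend null: {res.pvalue:.4f}")

# Estimation framing: slope with a 95% confidence interval
half_width = stats.t.ppf(0.975, df=years.size - 2) * res.stderr
print(f"trend estimate: {res.slope:.3f} +/- {half_width:.3f} K/yr")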

manuel moe g said...

Significance Tests in Climate Science - Maarten H. P. Ambaum

http://www.met.reading.ac.uk/~sws97mha/Publications/jclim_ambaum_rev2.pdf

along the same lines as:

Fetishizing p-Values

http://golem.ph.utexas.edu/category/2010/09/fetishizing_pvalues.html

The Cult of Statistical Significance

http://www.statlit.org/pdf/2009ZiliakMcCloskeyASA.pdf

where demonstration of strength of effect is what saves you from being fooled by any statistical anomaly. [from my possibly ignorant reading]
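A toy illustration of that effect-size point, with entirely made-up numbers: given a large enough sample, a practically negligible difference becomes overwhelmingly "significant", which is why the effect size has to be reported alongside the p-value.

# With a big enough sample, a difference of a hundredth of a standard
# deviation is wildly "significant". All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 1_000_000
a = rng.normal(0.00, 1.0, n)   # group A
b = rng.normal(0.01, 1.0, n)   # group B: true difference of 0.01 sigma

t, p = stats.ttest_ind(a, b)
effect = b.mean() - a.mean()   # in units of the (unit) standard deviation
print(f"p-value: {p:.2e}   effect size: {effect:.4f} standard deviations")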

Martin Vermeer said...

But Michael, the problem then isn't so much with empiricism as with "silly null" hypotheses -- and with compulsive hypothesis testing when you should be doing parameter estimation!

Silly nulls are a problem in other sciences too, such as biology and ecology. Google it.

Unknown said...

To add to the null hypothesis issue, here's one of several papers from the ecology lit that discusses problems with NHT:

Anderson, D. R., K. P. Burnham, and W. L. Thompson. 2000. Null hypothesis testing: Problems, prevalence, and an alternative. Journal of Wildlife Management 64(4): 912-923.

"This paper presents a review and critique of statistical null hypothesis testing in ecological studies in general, and wildlife studies in particular, and describes an alternative. Our review of Ecology and the journal of Wildlife Management found the use of null hypothesis testing to be pervasive. The estimated number of P-values appearing within articles of Ecology exceeded 8,000 in 1991 and has exceeded 3,000 in each year since 1984, whereas the estimated number of P-values in the Journal of Wildlife Management exceeded 8,000 in 1997 and has exceeded 3,000 in each year since 1991. We estimated that 47% (SE = 3.9%) of the P-values in the Journal of Wildlife;fe Management lacked estimates of means or effect sizes or even the sign of the difference in means or other parameters. We find that null hypothesis testing is uninformative when no estimates of means or effect size and their precision are given. Contrary to common dogma, tests of statistical null hypotheses have relatively little utility in science and are not a fundamental aspect of the scientific method. We recommend their use be reduced in favor of more informative approaches. Towards this objective, we describe a relatively new paradigm of data analysis based on Kullback-Leibler information. This paradigm is an extension of likelihood theory and, when used correctly, avoids many of the fundamental limitations and common misuses of null hypothesis testing. Information-theoretic methods focus on providing a strength of evidence for an a priori set of alternative hypotheses, rather than a statistical test of a null hypothesis. This paradigm allows the following types of evidence for the alternative hypotheses: the rank of each hypothesis, expressed as a model; an estimate of the formal likelihood of each model, given the data; a measure of precision that incorporates model selection uncertainty; and simple methods to allow the use of the set of alternative models in making formal inference. We provide an example of the information-theoretic approach using data on the effect of lead on survival in spectacled elder ducks (Somateria fischeri). Regardless of the analysis paradigm used, we strongly recommend inferences based on a priori considerations be clearly separated from those resulting from some form of data dredging."

Here's one that pushes back a bit:

Stephens, Philip A., Steven W. Buskirk, Gregory D. Hayward, and Carlos Martinez Del Rio. 2005. Information theory and hypothesis testing: a call for pluralism. Journal of Applied Ecology 42(1): 4-12.

Abstract here:
http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2664.2005.01002.x/abstract
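For concreteness, here is a toy version of the information-theoretic (AIC, i.e. Kullback-Leibler based) comparison the Anderson et al. abstract describes, on synthetic data with two invented candidate models; it is only a sketch of the mechanics, not any published analysis.

# Compare a "no trend" model with a "linear trend" model via AIC and Akaike
# weights. Data and models are synthetic, purely for illustration.
import numpy as np

rng = np.random.default_rng(2)
x = np.arange(40, dtype=float)
y = 0.03 * x + rng.normal(0.0, 0.3, size=x.size)

def aic_gaussian(rss, n, k):
    # AIC for least-squares fits with Gaussian errors: n*ln(RSS/n) + 2k
    return n * np.log(rss / n) + 2 * k

n = x.size
rss_const = np.sum((y - y.mean())**2)                 # model 1: constant mean (k=2: mean, sigma)
slope, intercept = np.polyfit(x, y, 1)
rss_trend = np.sum((y - (slope * x + intercept))**2)  # model 2: linear trend (k=3)

aics = np.array([aic_gaussian(rss_const, n, 2), aic_gaussian(rss_trend, n, 3)])
delta = aics - aics.min()
weights = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()
print("Akaike weights [constant, trend]:", np.round(weights, 3))

The output is a strength of evidence for each model rather than a reject/accept verdict on a null, which is the shift in framing the abstract argues for.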

Unknown said...

This comment has been removed by the author.

David B. Benson said...

Michael Tobis --- ... climate is stationary. Do you mean static or stationary in the statistical sense? The former is obviously wrong and I'm certainly uncertain what happens to the statistical stationarity assumption in the face of the 1/f noise present in almost all geophysical time series.

Steve L said...

Ide, thank you very much for the abstracts. Particularly in the second I found something addressing one of my misunderstandings quite directly (I thought an important Bayesian/frequentist boundary occurred at experimental versus observational studies). I was wondering why I was having a hard time understanding mt's response to me. I thought the point (Ambaum's point, at SkS) was applicable to clinical trials (his example is a lab experiment), so I may have been confused by that. Maybe it will help if I re-read all of this, focusing not on the method but on the question.

Martin Vermeer said...

lde, yep, that's a very nice one. Note that Anderson and Burnham wrote a textbook on information-theoretic methods like Akaike's (i.e., Kullback-Leibler based).

"Data dredging" is their expression for another dubious practice, trying out all sorts of crazy and less crazy model proposals until you find one that bites ;-)

Nick Palmer said...

I read the Skeptical Science post as claiming that around 75% of peer-reviewed climate papers use statistics wrongly, so I'm not sure how MT can say "It explains tersely why the whole approach to climate statistics (detection/attribution) of the naysayer squad is wrongheaded".

Unknown said...

Martin, yes, Burnham and Anderson's text was a well-thumbed source for my PhD work.

Another potential article of interest that bears on the hypothesis issues hails from experimental biology:

Glass, David J., and Ned Hall. 2008. A Brief History of the Hypothesis. Cell 134(3): 378-381.

It relates to experimentation, but it draws clear distinctions between hypothesis-driven and model-driven science, which are different frameworks that I suspect are often confused, even by scientists.

In my line of investigation, it is often awkward or impractical to frame the questions I am investigating as falsifiable hypotheses, as they are usually model-based investigations. My questions are along the lines of:

Where is an animal's habitat distributed in the landscape?

How might climate change shift this distribution?

These are not necessarily easy (or even interesting) to frame as hypotheses, and are not answered with true/false.

Despite this, I have had some reviewers tell me I have to frame my questions as hypotheses.

So, it seems that much scientific work, at least in ecology (and I believe this holds for climatology), is model-based, yet there is an expectation in many quarters that science must be hypothesis-driven. I believe that it is this expectation that leads to one of the commonly seen criticisms of AGW: "AGW is not falsifiable".

Here are the first few sentences of Glass and Hall, out of interest (I think it is behind a paywall, unfortunately):

"Scientists are commonly taught to frame their experiments with a “hypothesis”— an idea or postulate that must be phrased as a statement of fact, so that it can be subjected to falsification. The
hypothesis is constructed in advance of the experiment; it is therefore unproven in its original form. The very idea of “proof” of a hypothesis is problematic on philosophical grounds because the
hypothesis is established to be falsified, not verified. The second framework for experimental design involves building a model as an explanation for a data set.
A model is distinct from a hypothesis in that it is constructed after data are
derived."


Cheers, Lyndon

Michael Tobis said...

Thanks, all, for excellent insights and pointers to the literature in various fields.

David, for many purposes it is reasonably sound to treat the pre-industrial late Holocene climate as stationary. This presumption is rarely defended, but often made. It does seem that we have been wandering around a well-defined segment of phase space.

But this presumption, applied post-1850, leads inevitably to the conclusion that, wow, we are at an unusual global temperature peak, and a rapid cooling is bound to ensue. Versions of this fallacy go back at least to H. H. Lamb, and that is what I am aiming to talk about.
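A toy version of that fallacy, with invented numbers: generate a record that really is trend plus noise, then read it through stationary glasses. The stationary reading calls the last decade a freak excursion that ought to revert; the trend fit finds nothing unusual about it.

# Treating a trending record as a stationary one makes the recent warmth look
# like an extreme, soon-to-revert excursion. Synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1850, 2011)
temps = 0.007 * (years - 1850) + rng.normal(0.0, 0.15, size=years.size)

recent = temps[-10:].mean()

# Stationary reading: how far is the last decade from the long-term mean?
z_stationary = (recent - temps.mean()) / (temps.std(ddof=1) / np.sqrt(10))

# Trend reading: how far is the last decade from what the fitted trend predicts?
slope, intercept = np.polyfit(years, temps, 1)
resid = temps - (slope * years + intercept)
z_trend = resid[-10:].mean() / (resid.std(ddof=1) / np.sqrt(10))

print(f"last decade, z-score assuming a stationary climate: {z_stationary:.1f}")
print(f"last decade, z-score about the fitted trend: {z_trend:.1f}")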

Michael Tobis said...

Nick P., perhaps "explains tersely" is too strong. I've already made the case that "CAGW", as the opposition likes to summarize what we are saying, is not a hypothesis but an estimate.

"So attacks on climate change as if it were a "theory" make very little sense. Greenhouse gas accumulation is a fact. Radiative properties of greenhouse gases are factual. The climate is not going to stay the same. It can't stay the same. Staying the same would violate physics; specifically it would violate the law of energy conservation. Something has to change.

The simplest consequence is that the surface will warm up. That this is indeed most of what happens is validated pretty much in observations, in paleodata, in theory and in simulation. Further, all those lines of evidence converge pretty much about how much warming: about 2.5 C to 3C for each doubling of CO2. ... There's no single line of reasoning for this. There are multiple lines of evidence. ...

They want to know what it would take to pry me free of my "beliefs", but they are not beliefs, they are estimates."


Treating the idea that there is such a thing as "too much carbon" or "too rapid a rise of carbon" as a hypothesis may be formally correct, but it intrinsically conflates science and politics.

To be sure, I believe (and many of us believe) that if people actually had a good grasp of the science, they would probably agree that on the risk/benefit spectrum our behavior is far too risky. Therefore we feel some urgency in conveying that understanding. But this is not where the trouble starts.

The trouble starts when the physical hypothesis is framed with an ethical component, even a no-brainer of an ethical component. Suddenly we are cornered into proving the unprovable, whereas we walk in with a scary estimate of the reasonably estimatable.

Aaron said...

This post reminds me that JC uses the same logical fallacy that Mark Twain was poking fun at with those statistics in 'Life on the Mississippi' (published in 1883). The public may not understand statistics, but they do understand humor.

If you want to destroy a scientist, do not rant at them; laugh at them!

MT & JR et al are making JC a hero with legitimacy.

David B. Benson said...

Michael Tobis --- Thanks. Clear now.

Martin Vermeer said...

Thanks Lyndon, Michael. Perhaps relevant, a fun cite from Jaynes (p. 504), who cites Jeffreys:


"Jeffreys (1939, p. 321 notes that there has never been a time in the history of gravitational theory when an orthodox significance test, which takes no note of alternatives, would not have rejected Newton's law and left us with no law at all..."