This doesn't mean he's going to win me over to his point of view, of course.
Given that he has been so harsh on others, I think he ought to develop a thick skin. Still, please, let's take care to argue the arguments, avoid ad hominem, and avoid the whole CRU business, which is happily quite tangential here. Frankly, I think that leaves plenty of room for disagreement, since in this matter I disagree with everything Fuller says.
So, with regard to Anderegg, Prall et al., Tom Fuller writes:
Regardless of my opinion about the motives and eventual use of the de facto list that has been created (and time will tell, certainly), this is garbage science created by an amateur blogger and a grad student with Schneider's name tacked on top of it.

I am pressed for time; I need to prepare my laptop for SciPy tutorials (in Austin! Huzzah!) tomorrow. But I can address these briefly:
In reverse order:
The findings are incorrect. It incorrectly labels ACC experts as either CE or UE.
The analysis scheme is incorrect. It fails to account for confounding factors such as change of opinion over time, venue and approach for presenting petitions, comparative content of petitions, etc.
The data collection is incorrect. They have wrong names, wrong specializations, wrong counts of publications and wrong citation numbers.
The methodology is inappropriate. They searched with only one database, did not search in other languages, did not crosscheck their data.
Their hypothesis is flawed. There are many factors that could equally explain differences in publication and citation by UE and CE scientists, including publication bias, confirmation bias, fear of retribution or erosion of career potential, etc.
Spencer Weart said it best--the paper should not have been published in its present form. It does not survive the first casual reading.
And yet you've got it up there as if it's the Magna Carta.
1) The findings are incorrect. It incorrectly labels ACC experts as either CE or UE.
This is simply not true. The paper is clear that not all scientists are categorized by this method.
2, 3, 4) The analysis scheme is incorrect. It fails to account for confounding factors such as change of opinion over time, venue and approach for presenting petitions, comparative content of petitions, etc.
The data collection is incorrect. They have wrong names, wrong specializations, wrong counts of publications and wrong citation numbers.
The methodology is inappropriate. They searched with only one database, did not search in other languages, did not crosscheck their data.
The flaws these complaints point to introduce error, but they do not introduce any obvious bias. Therefore they cast only modest doubt on a very robust result.
5) Their hypothesis is flawed. There are many factors that could equally explain differences in publication and citation by UE and CE scientists, including publication bias, confirmation bias, fear of retribution or erosion of career potential, etc.
I don't see how this makes for a flawed "hypothesis"; indeed, these factors cannot affect the results. However, it would be foolish to say they don't affect the interpretation of the results. Note, though, that they can be argued both ways. Herd mentality is indeed a risk, but it is as applicable to systematic understatement of problems as to systematic overstatement of them.
6) Spencer Weart said it best--the paper should not have been published in its present form. It does not survive the first casual reading.
I disagree with Spencer Weart. Perhaps I'm biased because I've been aware of Jim's serious efforts to collect these data over the past couple of years, but it is what it is. Perhaps more resources can be obtained for a more thorough study. A groundbreaking result which has been done without any funding should not be expected to be flawless. More to the point, the editors of PNAS disagreed with him, and here we are.
7) And yet you've got it up there as if it's the Magna Carta.
Well, I'm not the one who needs the existence of a robust consensus proven. I think it's interesting for what it is, and doubly interesting for how many people rushed to criticize it without reading it carefully.
8) The "black list" yadda yadda...
Oh, give us a break. That's just silly.
Update: Here's the Stanford press release. Notable quote from Steve Schneider:
"It is sad that we even have to do this," said Schneider. "[Too much of] the media world has just folded up and fired its reporters with expertise in science."
Image: The Magna Carta, naturally.
74 comments:
I'm not surprised we disagree. However I believe you'll need to amplify your points before we can discuss them at any length.
Assuming that's what you want, I'll just start off by saying that in point one, it's clear that some of the people they claim are UE are in fact CE.
They didn't sense check their output. With so few names, there's no excuse.
It's the same thing that wrecked Oreskes, really.
"There are many factors that could equally explain differences in publication and citation"
And having thrown up your hands, you consider this paper dismissed. It's no wonder your side doesn't have a publication record.
C'mon, formulate some testable hypotheses and go about testing them!
V.
Mr. Diesel, I dismiss this paper because I have performed about 30 similar studies in the past 6 years for PESTLE, SWOT and competitive analyses and for industry studies. I'm doing one now, as a matter of fact.
They didn't do it right. That's all.
Tom, references please to these studies of yours?
V.
Weart writes here:
Although I am personally "convinced by the evidence" and am surprised at the number who are not, I have to admit that this paper should not have been published in the present form. I haven't read any other posts on this; the defects are obvious on a quick reading of the paper itself. Here's what I saw:
Many scientists might have been "unconvinced by the evidence" and yet chosen not to volunteer to sign a politicized statement that "strongly dissented" from the IPCC's conclusions -- which is the only criterion the authors of the paper had. What if they weakly dissented or are just, like many scientists, shy about taking a public stand? You don't have to invoke groupthink, fear of retribution or all that.
Does anybody else who has read the paper not so quickly see what's wrong with this critique?
Sorry Vic--ya gotta pay for 'em. And they don't come cheap.
Tom:
Does this mean that you have personally compiled 30 blacklists?
Wasn't not checking the output (and adding subjective bias) the point? All attribution is self-declared for accuracy.
It's very telling that so-called climate 'skeptics', after finding all sorts of supposed 'flaws' in every survey of climate scientists that's actually been done, can't get their act together and do a proper survey of climate scientists on their own.
If the climate 'skeptics' know so well just how a proper survey should be done, and if they have the resources to do it (which they surely do), why haven't they done their own survey already?
Instead, they have to rely on nonsense 'petitions' many of which are created partly by mining quotes from deceased people. I'm waiting for Tom Fuller to criticize that methodology any moment now...
-- frank
"...created by an amateur blogger and a grad student with Schneider's name tacked on top of it"
Whatever happened to the glories of citizen science? I thought submitting one's results to peer review was a good thing.
Vic, your request for references spurred my curiosity.
While serving as custom research director at Kable, now a division of the Guardian, I bid for and won a project to do research for the UK National Health Service.
I designed the research study and served as program manager. I actually ended up doing a fair bit of the research as well, to save money.
As prime contractor, I selected and worked with Colin Drummond of St. George's University, a rather noted scientist who served as technical specialist for the study. I learned a lot from him about the practical aspects of conducting relevant research that can be useful for science as an institution.
I call your attention to the following:
1. The utility of using a spectrum of research techniques to investigate core issues is far more robust than relying on a Google search.
2. A true literature search can be a valuable addition to a research study, as shown within. However, a broad citation search is of little value as a standalone product.
I see that some mention this type of trawling through Google as something universities frequently do during their hiring process. I'm sure some do. However, you might note that they still do such old-fashioned research in the form of job interviews and inspection of resumes.
http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_4122341
If I find other published examples I will bring them to your attention. However most of the work I do is proprietary in nature and I have various signed agreements with my clients about disclosing the nature of the research, the results and the identity of the clients.
"Does anybody else who has read the paper not so quickly see what's wrong with this critique"
Tom says that "many scientists" are shy to speak up. If that holds for CE and UE alike, he doesn't have an argument. He only has a point if UE are much shyer to speak up. Even then, since the article considered the top publishers, for the (implied) conclusion to be wrong, there have to be a bunch of UE scientists who have not signed any statements. Since these are both prominent and productive, it should be easy to point them out.
Tom, so who would be some that are missed? Note that you'll need a few dozen of them (just guessing here) to sway the numbers.
V.
I just want to give my usual blurb. There's no good reason to give the hit-trolling right-wing vanity sites xxx.examiner.com any attention. It won't reward anyone who does it except the Marc Moranos of the world. And some examiner.com-ers are more ignore-worthy than others.
The paper in question could, in theory, lead to blacklisting. And the paper, in theory, could deprive allegedly skeptical scientists of an unearned parity status with actively publishing scientists. The squealing is born of the second, justified with appeals to the first.
...And thanks to TF for the reminder that also in domains other than feminine virtue, cheap and easy are not strict synonyms. So VicD can rest easy with full wallet.
Hi all,
I am not sure the full URL appeared from my previous comment about a published study. Let's try this:
http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/@dh/@en/documents/digitalasset/dh_4122239.pdf
Deech, I have no problem with grad students doing science. I kind of think an awful lot of science wouldn't get done without them.
I do have a problem with anyone, regardless of status or accreditation, doing junk science. This is junk science done on the fly with a political motive.
Nicely put, Moe, although I would partly disagree with your opening point that the paper could lead to blacklisting. On a professional level it won't since the NSF and other scientists are already excruciatingly aware of who the problem children are. OTOH, it is intended to lead to what could be called a form of blacklisting by the media, ironically without directly providing a list.
Let's take this back to the original analogy:
Sen. McCarthy: "The purpose of this hearing is not to name names or cost anyone their livelihoods, but to convince the press that Communist views are so marginal that they ought not to be reported."
Somehow I think history would have been kinder to him if he'd taken that approach.
Re the feminine virtue thing, I sense you're headed for trouble. :)
What is the definition of "junk science"? Something where Tom Fuller doesn't like the result?
And where were you with Inhofe's actual blacklist? That was okay, then?
No, Michael, and I criticized Senator Inhofe. As Marc Morano is an aggregator, I tend not to go after him directly, although I have criticized him on other issues.
So that's the moral standard for Stephen Schneider? Setting the bar a bit low, don't you think? Also, refresh my memory about Inhofe's and Morano's scientific qualifications--I know those are mighty important to you...
Michael, the link to that report still doesn't show up completely. Can you help with it? It is:
http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/@dh/@en/documents/digitalasset/dh_4122239.pdf
Thanks in advance.
Oh, and Michael, I hope you do find the time to amplify your initial robust, but sketchy, defense of this paper.
Tom,
your URL is fine to cut and paste; I was able to read the report. (You can use HTML tags if you want it to be a link.)
What's not fine is that it has nothing to do with what we're talking about. You didn't do a literature search, and it had nothing to do with expertise evaluation. So you wrote a report. Attaboy.
"The utility of using a spectrum of research techniques to investigate core issues is far more robust than relying on a Google search."
The utility is more robust? Hm.
Anyway, what spectrum would you advocate? Personally I would first have tried the Science Citation Index, but Google Scholar probably has a better API. What is your specific problem with the techniques used?
V.
Mr. Diesel, I didn't do the literature search--undergrads did. It was extensive and comprehensive and should be a lesson to Schneider et al. I designed it, commissioned it and assisted Professor Drummond in the report writing.
As for strict bibliometric searches such as the one Schneider et al did, I have done many--but sadly they are proprietary.
You should certainly look at the methodology of the Alcohol Harm Reduction study to understand how science normally approaches this type of issue. It is very different from what PNAS published last week.
Mr. Diesel, an incomplete list of my problems with this study is in the post. As nobody seems willing to address them directly, perhaps you'll volunteer.
I already addressed each of Tom's complaints in the posting. Tom seems to want me to be more prolix, but the points are wrong enough that I think they can be demolished tersely. I have little to add in the absence of any retort.
Tom, I don't really know what study or studies you are undertaking. I suppose it's reassuring that you aren't trying to live on the Examiner income. I'm not sure you have explained their relevance.
What we'd like to know from you, I think, is twofold:
1) Given that Jim collected the data in the way that he did, and took care to avoid bias, why should the data, for whatever they are worth, not be published?
2) If you were to construct a funded study (as opposed to this unfunded effort) to investigate the claim: "There is a scientific consensus that anthropogenic forcing, notably including CO2, is very likely to cause enough warming to lead to significant alterations in the climate system over the next century", how would you go about it?
You have said, I believe, that you agree with the proposition or something like it. The question is not whether you agree; it is whether there is a substantial population of climate scientists who do not.
Tom,
I *am* addressing your questions. My problem is that you're being very vague. So answer my last post: 1/ What's wrong (be specific!) with the technique they used? 2/ What techniques (again, be specific!) would you use?
V.
Part 2
Separately, a literature search would be conducted without reference to authors, using keywords developed during the course of study and from suggestions given by respondents.
Again, abstract evaluation would be performed and coded to show varying levels of support for or opposition to the consensus.
At the end of this phase of the research, a preliminary report would be prepared and submitted to various professional bodies, along with the research proposal and description of methodology. A formal request for permission to contact the members of these associations would be made, and if granted, members would be invited to participate in an online survey.
The survey would be prepared in accordance with market research standards, and vetted by both proponents and opponents of the climate consensus for fairness and clarity. The boards of each professional association would be asked to approve the draft of the survey before it went into the field.
The objective of the survey would be to measure the strength and depth of the consensus on observed and future climate change among practitioners of relevant scientific disciplines. The survey would begin with a demographic and organographic profile of respondents to assist in later classification. The body of the survey would be a fairly standard presentation of the major points of climate change, but would allow for more than a yes or no response. Most questions would also include the opportunity to say I don't know, none of the above, other, or write in their own responses.
At the end of the questionnaire respondents would be asked if they were willing to participate in focus groups, either in person or conducted online in a chat room forum. A series of focus groups would be conducted within each of the segments identified by responses to the questionnaires. (What I mean is that, if as I suspect, a 'lukewarmer' contingent is evident in the responses, they would have a focus group dedicated to them, as well as groups for consensus holders and skeptics).
The purpose of the groups would be partially to sense check prior stages of the research. Respondents would be given reports of the research done to date and asked to evaluate the solidity of the work and their assessment of the accuracy of findings.
All of the data and findings would be published online, except for details that would allow the identity of participants to be revealed.
Michael,
Part 1
1) I see no evidence that he took care to avoid bias. At all. His choice of Google Scholar alone introduces sample bias. What did he cross-check against? Where is his search in other languages?
I have no problem with his publishing it. I have no problem with PNAS risking their reputation by accepting it. However, the ethics of social research require researchers to ensure the anonymity of participants. This is especially important in studies that could, in the opinion of respondents, pose a future threat to them based on the information collected about them. That was not done.
2) A funded study to examine the strength and depth of consensus views on climate change would obviously start with secondary research to identify prominent proponents and exponent views. This research would include published papers, media statements and even--OMG--posts and comments on weblogs.
A selection of these would be screened (using publication records and citations) for qualifications and asked to participate in in-depth interviews designed to ensure that they did indeed still hold views similar to what was published. The interviews would also be used as referral sources for further contacts, and permission would be sought to use respondent names in contacting further sources. Qualitative impressions of percentages for consensus would be sought.
But one key finding would be a verified list of published papers from respondents to use for keyword searches for a more thorough pub check later on. Another would be respondents' impressions of journal quality and bias. Another would be 'best of league opponents' or who they consider the leading lights of the other side of opinion.
As mentioned, a more thorough literature search with hands-on evaluation of abstracts of published papers would follow. Although some useful segmentation buckets would fall out of the results, one to examine from the beginning would be scientists who study how the climate works and scientists who study what the climate does--in other words, who would be contributing to WG1 (or its antithesis in the skeptic community) as opposed to who would be working on WG2 or 3.
I have seen no defense of the PNAS paper. I have just seen assertions that it is good.
I find this surprising, actually. I'm going to be offline for a couple of days. Perhaps that will give some here a chance to actually do some real work on this.
Tom, the phrase "junk science" is a loaded term, unless you are part of the Steven Milloy camp. There is at least one survey concluding that acceptance of the AGW "consensus" is highest among published climate scientists, so the results of this literature survey are not far-fetched.
This is an initial look, and others may come along with better or different methodology and replicate or find fault with this work. That's normal in the scientific method: the first word is not often the last.
Wrecked Oreskes? What is that supposed to mean?
"Wrecked Oreskes? What is that supposed to mean?"
The denialsphere beat it to pieces with a broken hockey stick stolen from CRU during the middle of the last ice age, which started in 1999.
Hey--I unexpectedly found a wireless connection out here.
By wrecked Oreskes, I am not referring to the short term use of her work to support your side of a political struggle. It will continue to serve. But in the longer term, her project fails because she didn't do the same thing Schneider et al didn't do--sense check her results. She should have looked at her 928 papers and said 'Hey, I know Christy, Lindzen, Singer and Spencer published in this time frame. Why didn't I pick that up?' As it happens, I found a dozen publications by those gentlemen in a ten-minute search. She should have, too.
The stuff that happened with Peiser and all afterwards was all played out as a nice set piece. But basically she got it wrong, and everybody's just being polite about it.
Nobody is contesting that your side of the fence has the numbers. I have never heard a skeptic say that they had as many adherents on the scientific point of view. Ever.
So I don't know why it's a big deal, really. Certainly not big enough to try and get away with crazy stuff like that.
"I have never heard a skeptic say that they had as many adherents on the scientific point of view. Ever.
"So I don't know why it's a big deal, really."
Agreed.
"Certainly not big enough to try and get away with crazy stuff like that."
I'm not sure what the crazy stuff is supposed to be...
Tom:
her project fails because she didn't do the same thing Schneider et al didn't do--sense check her results. She should have looked at her 928 papers and said 'Hey, I know Christy, Lindzen, Singer and Spencer published in this time frame. Why didn't I pick that up?' As it happens, I found a dozen publications by those gentlemen in a ten-minute search. She should have, too.
As an exercise, I've checked the ISI Web of Science, using the same method as Oreskes. Nothing that was published by Lindzen, Spencer, Christy et al. during that time frame contradicts the conclusion of Oreskes' study (the consensus on the anthropogenic cause of global warming).
Now, I may be mistaken given the relative brevity of my search, but that's rather unlikely. I'm willing to give you the benefit of the doubt here, though - what are the "dozen papers" in your list, and can you show which part of their abstracts explicitly rejected the IPCC consensus position?
The stuff that happened with Peiser and all afterwards was all played out as a nice set piece. But basically she got it wrong, and everybody's just being polite about it.
Your opinions appear to be contrary to the facts in this case.
Dirk, I'm on the road but I published the titles of the publications way back in January. I think it was the 10th. And I only searched on headline names.
From my observations, Fuller hasn't demonstrated much expertise with this type of analysis. In his "annual survey on global warming", he proudly wrote:
"In less than three days more than 3,000 of you have taken the time and made the effort to fill out this survey. By comparison, Pew's survey on climate change in October of this year gathered 1,500 answers."
Apparently he doesn't know what a random sample is, as no one with a basic training in statistics would compare an online open access poll (severe selection bias) favorably to one based on random sampling.
The PNAS study is a useful contribution to "scientific opinion on climate change" studies, as it takes a unique approach that required meticulous research. There are also statements from organizations (which broadly represent their members), surveys of scientists, surveys of the peer-reviewed literature, and the IPCC consensus, all of which give us roughly the same picture.
http://en.wikipedia.org/wiki/Scientific_opinion_on_climate_change
In contrast, contrarians have their open access online polls and petitions.
Prall's list is also informative on its own (in the absence of any conclusions in the PNAS analysis). I often read about contrarians wanting to know who all these "consensus" scientists are, as they have their Inhofe list with actual names. They think it's just a bunch of government bureaucrats who believe it. Well, Prall's list contains thousands of scientists and details their expertise, publications, and links to their web pages when available (all info in the public domain). This could also be a very good resource for media seeking expert opinion.
Tom:
Thanks for replying, despite your busy schedule. I presume it was this article?
It appears that several of the items on your list are generously categorized as "scientific studies."
I would be extremely hesitant to class letters & correspondence (your refs 4-8) in the same category as the other 928 studies. Ditto the Pat Michaels lecture at Lindenwood University in your 14th reference.
Your tenth reference does not even mention "climate change" or "global warming" in the document.
Your 12th and 13th references are irrelevant, as they would not be included in Oreskes' time frame.
In any case, I've read the abstracts - not the headlines in Scirus - for each of the "legitimate" papers that are also on the ISI WoS. Once again, none of them explicitly oppose the IPCC consensus as stated per the method of Oreskes' study.
Besides the fact that a (flippant?) 5 min search on a different search engine is no serious basis to critique Oreskes' results, there may be another issue you have not considered. There is an important distinction between these scientists' public statements and/or blog posts and their published research. Maybe you're conflating the two?
If Fuller (or anyone else) wants to support the assertion that the study's results are in serious doubt (that quibbles over Google Scholar citation counts and the like actually matter), it shouldn't be difficult to do so decisively. For example, of the 3000+ scientists in Prall's list, Fuller could come up with 300 (10%) that have been clearly mischaracterized. The rest is just hand-waving. So far only a few have been identified as remotely contentious. This was done with the Inhofe list:
Center for Inquiry Reveals that 80 Percent of ‘dissenting scientists’ in report haven’t published peer-reviewed climate research
Have at it, Mr. Fuller. This is a chance for you to back up the smack, for once.
"Apparently he doesn't know what a random sample is, as no one with a basic training in statistics would compare an online open access poll (severe selection bias) favorably to one based on random sampling."
Fuller has shown that he doesn't understand random sampling elsewhere, as he's claimed that the highly selective sample of stolen e-mails from CRU are as statistically valid as the population samples chosen by political pollsters.
So this is what passes for serious discussion of the PNAS paper here?
Michael Tobis wants to play the moral equivalency game, meaning he doesn't have to defend the PNAS paper.
Frank wants skeptics to do their own survey, meaning he doesn't need to defend the PNAS paper.
Marion Delgado just wants to remind you all of how low a human being I am, probably because he has not read the PNAS paper.
Steve Bloom wants to talk about McCarthy, so he doesn't have to defend the PNAS paper.
Deech wants to change the subject to Oreskes, so he doesn't have to defend the PNAS paper.
Dirk wants to defend Oreskes, so he doesn't have to defend the PNAS paper.
New York wants to talk about a survey I ran on Examiner, saying it was not a random sample, ignoring the fact that I said about three thousand times that it could not be considered a public opinion poll because it was not a random sample. But at least that gives him a reason not to defend the PNAS paper.
Dirk still wants to split hairs on the papers Oreskes missed, but doesn't want to defend the PNAS paper.
But finally--at last! New York wants to defend the PNAS paper. Oh--no he doesn't. He wants to ignore every criticism I have made and for me to do what he wants.
And then dhogaza gets to make up more stuff about me.
So far, you all are not impressing me very much at all.
Tom, good one!
If only you could think as well as you write you'd be a force to reckon with.
But no, sorry, you're the one with the criticisms, and so far they stand answered by my posting without rebuttal from you.
Nobody is saying this is a massively important paper, just that it's interesting enough to merit publication.
You are the one saying that it is not marginal; you are the one saying that it is fundamentally wrong. You are the one who needs to make a stronger case as to why it is not worthy of publication.
Tom,
I don't think your ideas about methodology are sound. You want to "identify prominent proponents and exponent views"
For this you would consider "even--OMG--posts and comments on weblogs."
1/ That may identify views, except that you need an objective way of selecting/describing them and assigning them to writers. The PNAS article had only two views, and these were unambiguously identified and assigned.
2/ How are you going to decide which blogs are "prominent"? Looking at peer-reviewed literature is one widely accepted measure. Your alternatives come up short.
Again I conclude that you're good at throwing up smoke, but utterly failing to do anything constructive.
V.
The type of defense I expected to see:
The authors considered the issue of sample bias that could be introduced by the use of a single database from a commercial source that has no academic supervision and has published no quality standards. The authors rejected these issues for the following reasons:
The authors considered the issue of regional bias that could have been introduced by conducting their search in English only. The authors rejected this issue for the following reasons:
The following controls were used to verify number of publications and citations attributed to individual scientists:
The following analysis was made of the various opinion surveys used to label scientists as either UE or CE:
Instead, what I have received here boils down to:
1. Fuller is a big ugly meanie.
2. I disagree with Spencer Weart so that is enough of a defense.
3. Side issues that have nothing to do with this should be explored at length.
Oh, and Michael, you say "Nobody is saying this is a massively important paper, just that it's interesting enough to merit publication."
But exactly how much ink and prominent top-of-fold exposure have you given it? Why is that? Don't say it's because of skeptical criticism--skeptics criticize many things you manage to ignore.
What other scientific publications have you featured prominently recently?
Fuller says:
"New York wants to talk about a survey I ran on Examiner, saying it was not a random sample, ignoring the fact that I said about three thousand times that it could not be considered a public opinion poll because it was not a random sample."
Great. Link to 2 or 3 quotes that indicate this. There's no indication of this within the post that includes your quote comparing your useless online survey to Pew's scientific poll.
"But finally--at last! New York wants to defend the PNAS paper. Oh--no he doesn't. He wants to ignore every criticism I have made and for me to do what he wants. "
The problems with your critique have already been covered. Like I said, if your critique held any water, you would be able to identify a few hundred examples of which scientists had been clearly mischaracterized. It's a test of whether the study's alleged flaws are consequential. For example, RPJ is ticked his father shows up on that list, primarily because he signed an old skeptics statement. Intuitive enough. Opinions can change over time. But how often? Is Pielke's view with regards to the core IPCC conclusions much different these days? How many others were added based solely on a 1992 petition? I would suspect such a flaw in the study would be inconsequential for the study's results. Prove me wrong.
Sorry NY, I asked you first. Defend the paper, don't create tasks for me.
Tom, it seems to me that if you can't back up your attack against the defenses already presented in the top level article, the conversation is over.
The point is not that the method is perfect, it is that it is unbiased and has reasonable fidelity. That's what proxy data is about.
You keep insisting it is imperfect. We agree.
We ask you to proceed from there to demonstrating that the imperfections are consequential.
You either can't do that or can't be bothered. That would seem to bring the discussion to a close, wouldn't it?
Yeah, I guess so. I raise a number of points where potential bias undermines the conclusions. Your response is 'so what?' You disagree with Weart--but don't say where, why or how.
That does conclude any conversation aimed at finding out something useful, certainly.
I thought this was supposed to be about science, as opposed to just being 'sciency.'
Error isn't bias.
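A toy simulation makes the distinction concrete. This is a sketch with invented numbers, nothing from the paper: apply unbiased miscounting to two groups whose true publication rates genuinely differ, and the gap between the groups survives the noise.

```python
import random

random.seed(0)

# Invented "true" citation counts: group A genuinely outpublishes group B.
# Toy numbers for illustration only -- not data from Anderegg et al.
group_a = [random.gauss(80, 20) for _ in range(500)]
group_b = [random.gauss(40, 20) for _ in range(500)]

def miscount(counts, sd=15):
    # An error-prone database: unbiased noise, applied identically
    # to both groups.
    return [c + random.gauss(0, sd) for c in counts]

mean = lambda xs: sum(xs) / len(xs)
gap = mean(miscount(group_a)) - mean(miscount(group_b))
print(round(gap, 1))  # stays close to the true gap of 40: error, but no bias
```

The noise widens the error bars, but it does not move one group relative to the other. For that you would need error that systematically favors one side, which is what Tom keeps asserting and not showing.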
I said this regarding Fuller:
"Fuller has shown that he doesn't understand random sampling elsewhere, as he's claimed that the highly selective sample of stolen e-mails from CRU are as statistically valid as the population samples chosen by political pollsters."
Fuller then says:
"And then dhogaza gets to make up more stuff about me."
My statement was based on this statement by Fuller:
"Your [Brian Angliss'] argument that insufficient data is available for analysis is simply innumerate, and is essentially refuted every time a poll of 1,000 people is extrapolated to correctly predict an election. If you were to assume a total of 50 million emails involving the subjects of the controversy you would only need a sample of 666 emails to be able to make statistically significant statements at a 99% level of confidence with a confidence level of +/- 5%."
However, the selection of e-mails in question was not random and, as Mosher and Fuller themselves state, was most likely extracted from the full set by keyword searches.
The reader may decide if my statement regarding Fuller's understanding of what constitutes a random sample which allows for proper statistical analysis is "made up" or not.
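For what it's worth, the arithmetic inside Fuller's quote is roughly right; it is the random-sampling premise that fails. Here is a quick check of the standard sample-size formula he appears to be invoking (my own sketch, with the usual worst-case assumption p = 0.5):

```python
import math

def needed_sample(population, z, margin, p=0.5):
    # Sample size for estimating a proportion, with the
    # finite-population correction; p = 0.5 is the worst case.
    n0 = z * z * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# 99% confidence (z ~= 2.576), +/- 5% margin, 50 million emails:
print(needed_sample(50_000_000, 2.576, 0.05))  # -> 664, near Fuller's 666
```

The formula answers "how big must a *random* sample be?" It says nothing about a set of e-mails selected by keyword search, which is the whole point.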
NewYork,
From http://www.examiner.com/examiner/x-9111-Environmental-Policy-Examiner~y2009m11d4-Global-warming-survey-results-Part-1
"For a variety of reasons this survey cannot be considered representative of either the population as a whole, the Internet population, or even those with an active interest in climate issues. This is why:
Where did you come here from?      Pct.    Resp.
Real Climate                        1.6%      46
Watt's Up With That                78.7%    2214
Breakthrough Institute              0.2%       5
Only In It For the Gold             0.3%       8
Climate Change Fraud                2.6%      74
Roger Pielke Jr.                   12.4%     349
Another part of Examiner.com        4.1%     116
Other (please specify)               --      484
answered question                           2814
skipped question                             486
I haven't counted the Other responses yet, but many, many came from Lucia's The Blackboard, Roger Pielke Sr., and surprisingly, Climate Progress.
So. Because so many people came from Watt's Up With That, later we'll build a nice profile of Anthony Watt's visitors. But we have other fish to fry."
From: http://www.examiner.com/examiner/x-9111-Environmental-Policy-Examiner~y2009m11d4-Global-warming-surveyPart-2-portrait-of-a-global-warming-skeptic
In a survey where most of the respondents fit one category, it makes sense to at least examine that category. As far as I know, there has been little work done trying to understand global warming skeptics. Perhaps we can contribute something to that understanding.
From: http://www.examiner.com/examiner/x-9111-Environmental-Policy-Examiner~y2009m11d5-Global-warming-survey-Part-3-More-about-skeptics
We're taking a good long look at global warming skeptics because so many of them took part in our survey. We'll compare them to other groups later in the analysis, but there's still a lot to say about skeptics that I don't think has been said before.
The presence of error doesn't mean there is no bias. Indeed the presence of error means the survey's authors should work twice as hard at assuring readers that the error did not in fact introduce or reinforce bias.
They did about as much work as you and your commenters here. The moral equivalent of zip.
But I guess that's okay, as long as you can use the headline for two weeks and remind people that the list is out there somewhere, and they sure don't want to get their names added to it.
Michael, the next time you complain about the state of communications about science, please remember this episode. You are a scientist trying to communicate with a journalist in front of an audience.
Were I in your position I would be embarrassed at the amount of time and attention you have spent on this, let alone the way it is concluding.
Tom:
You forgot that you brought Oreskes into this conversation? In the first post, no less. To pull the bait-and-switch here insults my intelligence. Don't whine.
I showed that your 5 min analysis on Oreskes was middling, to say the least. VicDiesel showed that your "report" was irrelevant to the current topic. Why are you avoiding the simple fact that you don't know what you are talking about on these topics, and when (politely, sans ad hom) called out on it, you avoid responding?
But I agree, we are talking about the PNAS paper. I have nothing to add as Michael's points are more than sufficient to counter your opinion, but I do have a simple and useful suggestion for you.
The Anderegg et al. paper adds on to the body of knowledge about scientific consensus on AGW. It is imperfect, but its results are robust. It is, however, by no means the final word. You are more than welcome -- using Prall's freely-available database for instance -- to author a paper and get it published in a respectable journal. I'm sure you can get Lindzen and/or either Pielke to sponsor the paper through the PNAS process, since people elsewhere claim that Schneider threw his weight behind this.
Talk is cheap, Tom. If the "bias" and "flaws" you claim are so evident, then you should have no problems translating this talk into action. None at all.
Tom, I have on occasion not let dhogaza's comments through as excessive. I say this as prefatory to not letting your comment to dhogaza through. It does not contribute to a polite exchange of ideas.
Unlike dhogaza's lying about me, which I guess contributes to a polite discussion.
Dirk, because nobody can answer basic questions about the validity of the methodology, the study does not contribute to the body of science, and trying to tease information out of wrongly collected and bad data can be left to someone else.
Someone recently said that journalists 'were the ball.' You all dropped it.
Tom:
nobody can answer basic questions about the validity of the methodology
You know, if Boykoff, the other reviewer and the editors of PNAS did not explicitly critique the method in their reviews, it could (gasp!) mean that the study's method is sound.
Or are you implying that the editorial board of PNAS doesn't know good social science...? And that you have the expertise to overrule their judgment, based on your previous attempt at refuting Oreskes'...?
Let's face it - no matter what anyone says to answer your question, the only answer acceptable to you is "this paper is crap." How clever, Tom. How clever.
the study does not contribute to the body of science and trying to tease information out of wrongly collected and bad data can be left to someone else
So do it. My challenge to you stands; stop the cheap opinions and write a paper refuting the robust conclusions of Anderegg et al. Search other databases and languages to back your opinion and send it to PNAS, Science or Nature. Whinging on blogs and writing open letters gets you nothing.
Lastly, as evident in the past 50+ replies in this post, you have an undesirable trait of talking past people which makes fruitful discussion all but impossible. I sincerely hope you do not share this attribute in real life. Given this predilection of yours here and elsewhere, and as a nod to Michael's request for courtesy and politeness in this blog post, I see no point in further discussion. Have a nice 4th of July weekend.
"Unlike dhogaza's lying about me, which I guess contributes to a polite discussion."
It's a direct cut-and-paste of one of your posts, Tom, people can read the full thread over at scholars and rogues if for some reason they believe I'm misrepresenting you.
Right here:
http://www.scholarsandrogues.com/2010/06/08/climate-scientists-still-besieged/#comments
Comment 51. Readers may decide whether or not I'm lying for themselves by reading that comment and, dare I say it, putting it in context by reading the rest of the thread (because I do understand that context is important).
Tom: "The presence of error doesn't mean there is no bias."
That's been your refrain throughout: something could be wrong. However, you have shown no mechanism through which (perceived) errors could introduce bias, nor have you shown any evidence of bias. (Give us the names of a bunch of highly published and highly cited UE scientists who did not sign statements?)
Your ideas about methodology ("blog posts") are laughable.
Now put up or shut up. Above are three issues you can substantively address. Go ahead.
Victor.
I tend to agree with Spencer Weart. As the title of mt's newer posting says, there are more than two camps. (And I think this is one of the points Schneider often makes.) But the paper gives the false impression that there are just two camps. The authors could first have introduced several typical attitudes of scientists and then placed the UE and CE groups in that context. If properly introduced, the content is informative.
dhogaza, I tried to comment before but MT wouldn't post it. I'll try again.
I hope I can be disciplined enough to make this my last comment to you, ever, so I hope you'll pay attention.
Your statement is false. The quote is cherry picked from a longer discussion.
I wrote to Brian Angliss that his argument didn't rise to the level of wrong because he was evaluating the corpus of emails as a quantity and the leaked emails as a percentage that didn't rise to the level of statistical significance, as one does with a random sample. Because his sample was not random, the calculations he was using were not right or even wrong. They were irrelevant. I also showed that had they been a random sample of the figure of emails he estimated, they would have been statistically significant.
You know this, because you have commented on my response to this criticism elsewhere.
You have lied about me before. You have called me a pimp and a lying sack of shit, and many other things besides.
So, you get to say whatever you want on weblogs where your political position agrees with the host. I get reprimanded for responding to you.
But I am hoping it won't happen again. I will try to do the same thing with your denmates at Deltoid, as I mentioned to some of them yesterday.
Dirk, my response to your comment is pretty much a mirror of your sentiments. Talk is cheap, and there has been no defence of the methodological and analytical errors evident in the paper, just calls for me to spend my time working with data I don't trust. Did you read the commenters' comments on this paper?
At any rate, I have reached the same conclusion as you, and wish you a happy 4th weekend as well.
Michael, I am writing to ask you to either post my recent reply to dhogaza, delete his comment which contains lies about me, or remove everything I've ever written from your site.
I understand you are busy and moderate irregularly, but you did find time for him to repeat his lie this morning. I hope you will make time to extend me the courtesy of responding to that lie.
Thank you
Fuller:
"For a variety of reasons this survey cannot be considered representative of either the population as a whole, the Internet population, or even those with an active interest in climate issues. "
"As far as I know, there has been little work done trying to understand global warming skeptics. Perhaps we can contribute something to that understanding."
Better...but the survey still is far from a random sample, and has a severe and fatal selection bias, even among those who clicked through from WUWT that day. Some of the survey questions were of poor quality as well (which is perhaps why mainly skeptical websites linked to your survey). I assume there was no clear mechanism to prevent multiple responses from the same person (different IPs). Based on this, and your admission above, it was silly to compare your survey to the Pew Research poll.
Are these flaws consequential for the goal of determining skeptical or WUWT viewer opinions? I would say the severe selection bias inherent in an open-access survey is fatal. Determining the true balance of opinion within WUWT viewers would be difficult, as it's difficult to determine who they are. A random sample of the population might work but you'd need a huge general sample in order to get a large enough WUWT sample size. The problem with the poorly-formed questions could be tested by asking a different set of questions and seeing if "skeptical" results differ.
To prove your hypothesis that the PNAS study's flaws are consequential, all you need to do is show a significant number of scientists (say 10%) that have been clearly mischaracterized. That shouldn't be excessively difficult to do... that is, unless you have no case.
It reminds me of the WUWT Surface Stations project. He's got photos and talking points, but no robust analysis or proof that the flaws in some weather stations bias the homogenized trend. Others have done the work for him using his own data and found his hypothesis to be bunk. Watts has red herrings, smoke, and mirrors designed to cast doubt.
Let's see, Tom Fuller put up some challenges. I can give some response, but it's something Tom Fuller easily could have done himself.
1. "The authors considered the issue of sample bias that could be introduced by the use of a single database from a commercial source that has no academic supervision and has published no quality standards. The authors rejected these issues for the following reasons:"
ANSWER: Various papers report comparisons between Google Scholar and other sources (mainly Web of Science). Here is one such paper:
http://www.harzing.com/pop_gs.htm
One may note that Web of Science is often found to miss citations (as I have observed myself, too). Google Scholar is much better in that respect, although it also includes more 'questionable' citations. Therefore, *any* choice of database would have met criticism. More damning for the UE's is that Energy & Environment is not in the Web of Science, but *is* covered by Google Scholar. Using WoS would thus have slashed the number of papers by several UE's significantly.
2. "The authors considered the issue of regional bias that could have been introduced by conducting their search in English only. The authors rejected this issue for the following reasons:"
ANSWER: The vast majority of climate science is very likely published in English journals. This diminishes any regional bias. In addition, there is no reason to assume that UE's would publish more often in non-English languages than CE's would.
3. "The following controls were used to verify number of publications and citations attributed to individual scientists:"
In essence this question has little impact, as there is no reason to assume that Google Scholar has a bias to CE's versus UE's. While the actual numbers may not be 100% accurate (see also answer to point 1), they are likely to be reasonably precise. A systematic over- or undercounting of citations and/or publications would likely affect ALL authors.
4. "The following analysis was made of the various opinion surveys used to label scientists as either UE or CE:"
RTFSI.
Slightly longer: Read the F-ing Supplementary Information.
These four answers took me about ten minutes. Worse even, I used a bit of "googling" (oops), something that every normal human being can do these days. That includes Tom Fuller. And since Tom Fuller apparently knows how to do these types of literature analyses properly, he should have known several of the answers himself.
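And if anyone actually wants the cross-check Fuller keeps demanding, it is mechanical. A sketch, assuming per-author citation counts have been exported from two databases to CSV (the file names and column names here are hypothetical; Google Scholar has no official API, so such an export would itself take some hand work):

```python
import csv
from scipy.stats import spearmanr

def load_counts(path):
    # Hypothetical export: one row per author, with columns
    # "author" and "citations".
    with open(path, newline="") as f:
        return {row["author"]: int(row["citations"]) for row in csv.DictReader(f)}

gs = load_counts("google_scholar.csv")   # assumed export
wos = load_counts("web_of_science.csv")  # assumed export

common = sorted(set(gs) & set(wos))
rho, p = spearmanr([gs[a] for a in common], [wos[a] for a in common])
print(f"Spearman rho = {rho:.2f} over {len(common)} authors (p = {p:.3g})")
```

A high rank correlation between the two sources would mean the choice of database cannot flip a comparison that, like Anderegg et al.'s, rests on relative standing rather than exact counts.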
Tom, presumably you have not taken it personally that I have been at a conference all day.
No, Michael, I understand the constraints on your time. Thank you for publishing my comments.
New York 1: "From my observations, Fuller hasn't demonstrated much expertise with this type of analysis. In his "annual survey on global warming", he proudly wrote:
"In less than three days more than 3,000 of you have taken the time and made the effort to fill out this survey. By comparison, Pew's survey on climate change in October of this year gathered 1,500 answers."
Apparently he doesn't know what a random sample is, as no one with a basic training in statistics would compare an online open access poll (severe selection bias) favorably to one based on random sampling."
Fuller: I didn't claim it was a random sample.
New York 2: "Great. Link to 2 or 3 quotes that indicate this. There's no indication of this within the post that includes your quote comparing your useless online survey to Pew's scientific poll."
Fuller: ""For a variety of reasons this survey cannot be considered representative of either the population as a whole, the Internet population, or even those with an active interest in climate issues."
New York 3: "Better...but the survey still is far from a random sample."
Yes, that's because it is not a random sample. Does it not occur to you that valid surveys can be conducted without a random sample?
Thanks for wasting my time.
Marco:
1. The standard method for dealing with datasets of uncertain provenance and/or reliability is duplication and control checks. Why didn't they use more than one database? Why didn't they look at what databases returned for scientists they could check with? In any event, where is their discussion of the decisions they made regarding this?
2. "Very likely?" Is that your opinion, their opinion or is it printed in a paper somewhere? Again, where is their discussion of methodological choices?
3. "There is no reason to assume Google Scholar has a bias...?" Again, is that Marco, the authors, a published paper somewhere? And again where's the discussion of the choices they made.
4. By 'Supplementary information' I presume you mean the Supporting Information listed along with the paper. If so, I did read it. I see a list of the petitions they used. I see the assertion that signing these petitions indicated that the scientist strongly dissented from the IPCC. I see no discussion of any homogeneity or heterogeneity amongst the various petitions. I see no description of how and when these petitions were presented, in what venues, using what methods. I see no discussion of how any of these potentially confounding factors could bias the sample of UE scientists (and the same is obviously true of CE scientists as well).
It's amateur hour stuff.
"Your statement is false. The quote is cherry picked from a longer discussion."
Well, I posted a link to your post, and suggested that context was important, and that readers go read that post and the context and make up their own mind.
I'll stand on that.
Ironic, though, that you complain about my cherry-picking a quotation, insisting that context will show that I'm lying, while over in that thread you insist that context is not needed when considering the small, cherry-picked set of e-mails that led to the false claims made in your climategate book.
Consistency is not your strong suit.
"Does it not occur to you that valid surveys can be conducted without a random sample?"
Surveys are conducted with the intent of accurately measuring opinion of a particular population. Few surveys are perfect (response rate is always an issue). Your open-access web survey fails to be remotely useful. I apply that to any online poll (CNN, ABC,...) as well, so don't feel bad. Then again, CNN isn't comparing their number of responses with that of a scientific poll conducted by Pew Research - one that likely measures public opinion within a few percentage points of accuracy.
...but back on topic, the time Fuller spends dodging the issues could be better spent coming up with the few hundred examples of scientists he feels have been mischaracterized.
Tom, you've moved, or are trying to move, the goalposts in your answer to me. You now claim the paper is flawed because it does not do things you believe are necessary. But you don't show they ARE necessary. Of course one can be unhappy that they do not list all their decisions and explain them in detail. But ultimately the issue is whether this changes anything. It means the methodology should have been described better; it does NOT mean the study is flawed.
And all answers are mine.
Regarding point 1: As I noted, there are plenty of studies with Google Scholar versus other databases. You can browse through those and then come back with your criticism.
Regarding point 2: this is a simple matter of experience in various fields of science. The number of non-English publications in the sciences is limited, for the simple reason that publishing in non-English venues reduces your audience to practically nil (with some exceptions).
Regarding point 3: this is what the various studies comparing Google Scholar and other databases show: the former invariably finds more citations than the latter do. And as I noted, several databases do not include smaller journals like Energy & Environment. They would likely introduce a bias. This is common sense, and something I have experienced with my own research (where WoS could not find two of my articles, despite their 25+ citation count, Google Scholar did).
Regarding point 4: all the petitions had to do was to question the IPCC conclusion. Who cares whether one said "yes, plenty of warming, but there is no need to do anything", whereas another said "the IPCC is a bunch of frauds".
Marco, sorry I can't stay and play today, but note the last thing you said. You (I'm sure unintentionally) showed the potential evil of this. It doesn't differentiate between, as you said,
" all the petitions had to do was to question the IPCC conclusion. Who cares whether one said "yes, plenty of warming, but there is no need to do anything", whereas another said "the IPCC is a bunch of frauds".
So now you have people on the same list with widely divergent beliefs. That list will be used to hurt some who are not skeptics, don't want to be called skeptics and won't get jobs or funding because they are on that list.
Tom, first of all they are called UE, as in "unconvinced experts". Which actually takes away some of the worst of the worst in several of the sources Anderegg et al used.
Second, please provide evidence that they "won't get jobs or funding because they are on that list". This paranoia is completely out of whack with reality. Lindzen still gets funding. Bob Carter seems to have no problems getting money. It's not like these two are known to be cheerleaders for the IPCC...
Note also that all of them *put themselves on a list*. If they don't want to be on a list that indicates they are 'skeptical' of the IPCC conclusions, DON'T SIGN UP FOR A LIST!