Sunday, May 31, 2009

some sniping at peer-reviewed AGW science

The discussion of AGW at Overcoming Bias refers to a Physical Review Letters paper by Verdes. The first link to the paper in the Overcoming Bias post doesn't work for me, and the second link costs money; but in the comments, commenter "g" gave a variant of the first link which works and is free.

The fit in the Verdes figure reproduced by Robin Hanson looks pretty good, probably about as good as can be expected given the quality of the experimental data. But it is not clear to me that it can remain so good outside Verdes' chosen interval.

At the high end: cutting off around 2001 in a paper submitted in 2006 seems peculiar, especially when linear extrapolation suggests that by 2005 the model might be diverging by more than it does in any of the years in which Verdes chose to test it. And today we have data up to 2008, so it might be interesting to calculate what the Verdes model infers the CO2 level to have been through 2008. I'd rather expect it to diverge further (from skimming his description of his model, and noting that global temperatures have remained well below the strong-AGW projected long-term trend).
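For concreteness, here is a minimal sketch (in Python, assuming numpy) of the kind of eyeball extrapolation I mean: fit a straight line to the model-minus-observed residuals near the end of the test window and extend it forward. The helper name and the residual values here are mine, invented purely for illustration, not digitized from the Verdes figure.

    import numpy as np

    def extrapolated_divergence(years, residuals, future_years, tail=5):
        # Fit a straight line to the last `tail` residuals (model minus
        # observed) and extend it forward, as a crude estimate of how
        # fast the fit may be drifting past the end of the test window.
        slope, intercept = np.polyfit(years[-tail:], residuals[-tail:], 1)
        return slope * np.asarray(future_years) + intercept

    # Placeholder inputs, invented purely for illustration; one would
    # substitute residuals digitized from the figure in the paper.
    years = np.arange(1992, 2002)
    residuals = np.linspace(0.0, 0.10, len(years))
    print(extrapolated_divergence(years, residuals, [2005, 2008]))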

At the low end: while it seems fairly natural to stop fitting sometime in the 19th century (since data quality falls fast in that period), it is not obvious that one should also stop testing the model sometime in the 19th century. There is a controversy about how much climate variability there has been in the last 1000 years, fanned by IPCC AR3 and An Inconvenient Truth promoting the famous low-variability "hockey stick" reconstruction. Since those heady days of settled science, the IPCC has backed off to the variability illustrated in Fig. 6.10 of the AR4 report. It's not obvious that before 1800, when anthropogenic CO2 forcing is negligible, the Verdes model can produce as much variability as AR4 estimates to have existed. (And various IPCC critics, like me, doubt that AR4 has gone far enough: I hope that Robin Hanson will put on his econophysicist hat and make a nice time machine for betting markets, so that we can make and cleanly settle bets about pre-1800 variability. :-)
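One crude way to pose that pre-1800 question quantitatively: summarize "variability" as the standard deviation about a linear trend, computed over the same window for both the model output and a reconstruction such as those behind AR4 Fig. 6.10. A minimal sketch (the function name and the stand-in series are mine, not from the paper):

    import numpy as np

    def detrended_std(years, temps):
        # Standard deviation about a linear trend: one crude scalar
        # summary of variability over a chosen window.
        trend = np.polyval(np.polyfit(years, temps, 1), years)
        return np.std(temps - trend)

    # Invented stand-in for a reconstruction over 1000-1799; one would
    # compare this number against the same statistic for model output
    # driven only by pre-1800 (non-anthropogenic) forcings.
    rng = np.random.default_rng(0)
    years = np.arange(1000, 1800)
    fake_temps = (0.1 * np.sin(2 * np.pi * years / 200.0)
                  + 0.05 * rng.standard_normal(len(years)))
    print(detrended_std(years, fake_temps))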

Also, setting aside concerns about the arbitrariness of the test window, is the fit good enough to justify the title "Global Warming Is Driven by Anthropogenic Emissions," and the concluding sentence of the abstract?

Here we show, using two independent driving force reconstruction techniques, that the combined effect of greenhouse gases and aerosol emissions has been the main external driver of global climate during the past decades.
That's a pretty strong statement. If we quantified the goodness of the fit over the three wiggles of the low-frequency signal within the window, would it be enough decibels of evidence to fairly paraphrase as "show ... has been the main external driver"? If I were an advocate seeking to make the most of this analysis, I might paraphrase it as "strongly support" the conclusion, rather than "show" the conclusion.
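By "decibels of evidence" I mean the Jaynes-style convention: 10 times the base-10 logarithm of a likelihood (or odds) ratio. A minimal sketch of the comparison I have in mind, assuming iid Gaussian residuals under each hypothesis and comparing maximized likelihoods only (which ignores priors and parameter counts, and so flatters the more flexible model); the function name is mine:

    import numpy as np

    def evidence_db(resid_h1, resid_h2):
        # 10 * log10 of the ratio of maximized Gaussian likelihoods
        # (the shared constant terms cancel), with the residual variance
        # estimated separately for each fit. Positive values favor
        # hypothesis 1. Assumes both residual arrays cover the same
        # n data points.
        n = len(resid_h1)
        ll1 = -0.5 * n * np.log(np.mean(np.square(resid_h1)))
        ll2 = -0.5 * n * np.log(np.mean(np.square(resid_h2)))
        return 10.0 * (ll1 - ll2) / np.log(10.0)

Feeding in the residuals from the Verdes fit and from some plausible rival driver would give a number one could argue about, instead of an eyeball impression.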

2 Comments:

Blogger Robin Hanson said...

The data and method are public. Why don't you replicate their work and check if your predictions about wider time windows are upheld?

4:41 PM  
Blogger William Newman said...

Robin Hanson: Because it isn't my job, at least in my opinion, and because the things I commented on made me less interested in the paper, not more.

I thought the comments I made were comparable to comments that PRL reviewers might have been expected to make before accepting the article for publication. Reviewers don't need to replicate the work before making comments of that sort, so why me? And when I was in grad school (for chemistry), no one seemed to feel obligated to replicate work before making comments of that sort about research papers (published or not).

I do sometimes work through the details of research papers (generally not in climatology but in other fields like randomized algorithms). But when I do so, it is almost always because I am positively impressed, not because I am negatively impressed. Seeing a model apparently diverging at an upper time limit which was chosen arbitrarily, with no discussion in the paper, doesn't motivate me to dig down to investigate how the clever thing was done. Instead, it motivates me to move on to other papers in my ongoing search for clever things.

I haven't completely lost interest in the paper, and if I'm horribly wrong about my quickie Mark I Eyeball analysis of the problem at the upper end of the range, so that the passage of time makes the model look better instead of worse, I'll be very interested to find out. But I doubt I need to replicate the calculation myself to find out, because in that case the original author will become increasingly motivated to brag about the continued good performance of his PRL model paper.

7:57 PM  
