Saturday, July 10, 2010

two different perspectives on an investigation

Is it noteworthy for an investigation not to look at the emails in question in context? And not to ask questions such as: "Prof Jones, did you delete any e-mails?" Some think so. Others think not. If only there were some commonly accepted standards by which we could judge one of them to be absurdly wrong...

Monday, June 28, 2010

"destitute, hopelessly stagnant proletariat" in 1971 South Korea

another 1970s time capsule, this time from Super-Economy:
The book hints at how crazy the ideological atmosphere was in 1971. As I wrote, Villy Bergström was a brilliant economist, and considered a centrist Social Democrat. Yet at one point he writes, favorably comparing North Korea with other nations: "[Classical] liberalism and capitalism in South Korea has led to fascism and an upper class in ruthless luxury, with a destitute, hopelessly stagnant proletariat. This has happened in South Korea, Taiwan, South Vietnam, Pakistan, South America and southern Italy."

I am quite impressed with the epic fail of choosing 2/4 of the Asian Tigers to illustrate "hopelessly stagnant proletariat."

(And incidentally, I don't think that that sharp disconnect from economic reality discredits the other remarks here. Most of the other remarks in the passage are not as sharp as "stagnant [vs. a reality of dramatic economic change]," leaving room for reasonable people to disagree about how correct they are, and I even agree that some of them are correct. I do disagree about how "classically liberal" these societies were. I also have a narrower disagreement with "fascist," not because it's overharsh, but because it's overspecific. I don't see how the 1971 snapshots can be classified with Hitler but not with Stalin, or with Mussolini but without 1900 Japan or 1900 Russia or 1920s Russia. Thus I'd prefer a term that refers less misleadingly and less specifically to the enemies of the Social Democrats, perhaps "tyranny" or "absolutism.")

Saturday, June 26, 2010

A MISTAKE TO MAKE ONLY ONCE

Or, how I learned that (AWHEN (AND (BLOG-COMMENTING-P) (PASTING-P) (STRING-UPCASE-P IT)) (NDOWNCASE IT)).

I cut capitalized page titles from an EconLog web form and an EconLog posting-delay-notification screen and pasted them into my remarks in two blog comments. OOPS!

I have spent a lot of time working with program comments and plain ASCII documentation files like this, using a convention where ALL CAPS indicates not yelling but quoting of fragments of computer programs, rather like italics can indicate not emphasis but quoting of title text like The Wealth of Nations. This seems to've created a blind spot in my proofreading skillz, since the yelling interpretation wasn't glaringly obvious to me. The yelling interpretation was certainly glaringly obvious to EconLib Ed., though: "If you can't wait so much as five or ten minutes before griping and screaming and yelling, you are pretty hair-trigger, eh?".

Sadly, perhaps EconLib Ed.'s assessment that I'm a ranting loon is uncontroversially true. More positively, though, perhaps I should take this as a sort of double hint from fate (first that this occurred in my comment about how it's technically easier to do something on my own blog and second that this occurred in a blog post advising "Get Your Own Blog") reminding me that even if I am a ranting loon I can still post here! (What could possibly go wrong?)

Meanwhile I should probably try to remember to proofread more carefully the next time that I need to refer to a web screen which lacks a URL-suitable global name, so that I am tempted to identify it by giving its title, and furthermore its title happens to be capitalized. Too bad the brain is too inexpressive to support OAOO implementation of this as (DEFMETHOD MAKE :AROUND ((M MISTAKE)) (UNLESS (ALREADY-MADE-P M) (CALL-NEXT-METHOD)))...
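(Since that joke is almost-legal CLOS already, here is a minimal runnable sketch of it in actual Common Lisp. Everything except DEFMETHOD, :AROUND, and CALL-NEXT-METHOD --- the MISTAKE class, ALREADY-MADE-P, the registry --- is hypothetical scaffolding invented for illustration.)

  ;; A hypothetical MISTAKE class plus a registry of mistakes already made.
  (defclass mistake ()
    ((name :initarg :name :reader mistake-name)))

  (defvar *mistakes-made* (make-hash-table :test #'equal))

  (defun already-made-p (m)
    (gethash (mistake-name m) *mistakes-made*))

  (defgeneric make (m))

  (defmethod make ((m mistake))
    ;; The primary method commits the mistake (here, just a lament)
    ;; and records it in the registry.
    (setf (gethash (mistake-name m) *mistakes-made*) t)
    (format t "~&OOPS: ~A~%" (mistake-name m)))

  (defmethod make :around ((m mistake))
    ;; The :AROUND method runs first, and CALL-NEXT-METHOD fires only
    ;; if this particular mistake has never been made before.
    (unless (already-made-p m)
      (call-next-method)))

Calling (MAKE (MAKE-INSTANCE 'MISTAKE :NAME "paste ALL CAPS into a blog comment")) prints OOPS the first time and silently declines to repeat the experience thereafter.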

On maintaining the appearance of balance

I'm surprised that both Tyler Cowen and Julian Sanchez wrote lengthy blog posts about the severity of Dave Weigel's problems with his Journolist emails, but didn't mention the coincidence that this is in the wake of the controversy over apparent maneuvering to try to protect Rep. Etheridge. (An "any video of a member [of Congress] acting strangely, no matter how grainy," forsooth!) When fire from the right catches you just as you're heeled over that far to the left, it tends to strike below the waterline.

UPDATE: also observed at Colby Cosh

Tuesday, May 18, 2010

On Hacker News on New Scientist on Living in Denial

(This is an extended version of a comment posted on a Hacker News article.)

(No, I don't remember the customary capitalization rules for different parts of speech in titles, why do you ask?)

From the article in question: "Vaccine denial: Umbrella term for a disparate movement claiming that certain vaccines either (1) do not work or (2) are harmful"

I have no particular sympathy with any anti-vaccine activism that I'm aware of. But I wonder how, other than by not being an important faction in the appropriate big political tent, "vaccine denial" ended up on this article's excrement list along with Holocaust deniers, while "nuclear power denial" and "genetic engineering denial" didn't.

My impression is that political opposition to nuclear power plants, to nuclear waste facilities, and to GM crops has had at least as much economic impact as political opposition to vaccines. Thus, it seems to me that they can't have been left off this list for being unimportant.

Perhaps the columnist thinks that the anti-nuke and anti-GM-crops political movements don't belong on the list because they have achieved their success primarily by honestly making valid technical points? Granting for the sake of argument that that is a tenable position, then why ignore them? Wouldn't the anti-nuke and anti-GM movements make useful examples to clarify his position by comparing and contrasting? Wouldn't describing what is healthy and good about the thinking of the anti-GM and anti-nuke coalitions help us understand better what exactly is so characteristically diseased and vile about anti-vaccine folk as to justify grouping them with Holocaust deniers?

Now, I vaguely remember that long ago, when I first encountered "politically correct" used in the usual modern sense, I laughed pretty hard. Cruelly. Since then I've used the term repeatedly, trying to criticize suitably dishonest tactics with the nasty reminder that they seem not only dishonest, but significantly parallel to dishonest tactics which were used to support murderous totalitarian regimes. So maybe I should recognize that being labelled a "denier" for arguing against the IPCC version of AGW (as opposed to, say, little-feedback "lukewarming") should be considered karmic justice. (Pretty crude justice, I think, since the parallel seems broken in various ways. E.g., the left regards Che t-shirts rather more fondly than the right regards Rommel t-shirts. If you want a recognizable parallel, it would work a lot better to make a nasty reference to how Confederate sympathies are widely tolerated. It's not that there are no real problems to be nasty about, just that being nasty about the particular problem of Nazi sympathies is delusional.)

Perhaps what's going on is a rhetorical declaration of political factional loyalty by politically correct use of a partisan barb, not an actual assessment that people are violating some objective standard of analysis and discourse. Perhaps then I should accept that, since I've confessed to politically-correct-for-libertarians[*] use of the barb "politically correct." However, if indeed that's what's going on, I think it would be nice if the New Scientist would be honest about it. If the tables were turned, I'd be pretty disgusted with a magazine article, either an openly partisan one or a nominally objective one, which published a lot of text based on a definition of "politically correct" in neutral terms, but mysteriously happened to choose only targets on the left when illustrating those terms, avoiding e.g. any school boards which have made embarrassingly right-wing curriculum or library choices, or various episodes of narrowly doctrinaire in-group infighting weirdness among libertarian groups, even when discussing parallel kinds of misbehavior on the left.

Sunday, May 16, 2010

On David J C Mackay on AGW

(Maybe this weblog is still dead. The current unnatural stirring of its <body> is basically an elaboration of a comment I left on http://bishophill.squarespace.com/blog/2010/5/16/david-mackay-at-oxford.html, about David Mackay's talk about Mackay's freely-downloadable book Sustainable Energy --- without the hot air. I am well aware that it is insufficiently edited, and if more than three people read it, maybe I will feel bad about that. Or maybe not. And I refuse to feel bad about Blogger's endearing HTML-plus-homebrew-randomness treatment of blank lines, of paragraph tags, and of the interaction between the two, which might anyway have changed in the last year since I never was able to find it documented anywhere, so if this looks weird, then yup, I forgot my workaround, but I do remember I was never able to make my workaround less than clunky, and part of the fun was that the Preview always had different pathology than the published version, so I stand innocent of whatever presentation hell I have sent my text into. And as for the logical hell it may seem to've issued from, I am vast, as vast as a swarm of subordinate clauses chewing on prepositional phrases and bleeding distracting observations, and sometimes losing the point, and indeed sometimes vaster than that. I contain vast sentences, and generally it takes time and care for me to shorten them, and it's a really nice day outside, and now the merest hundreds of lines of draft prose into this article I find it harder than before to care so much that someone is wrong on the Internet, and anyway this must be submitted as a blog post before I can submit a blog comment linking to it, and blog comments even more than blog posts should be timely, so it follows logically that my sentences contain multitudes, so deal.)

Mackay's other downloadable book, Information Theory, Inference, and Learning Algorithms, is impressive. I downloaded that book five years ago in order to study several sections of it. Hence my particular interest in seeing what Mackay had to say now about AGW.

Having just skimmed the first 15 pages or so of Sustainable Energy, I think Mackay is guilty of flaky preaching to the choir about the underlying AGW problem. He spends multiple pages in a lovingly detailed victory dance over the counterargument that CO2 concentration hasn't risen. He spends much less time on the counterarguments about the rather more important question of CO2 sensitivity --- disposing of them by saying it's complicated and then uncritically endorsing IPCCish results as a scientific consensus and thus a reasonable estimate of CO2 sensitivity. I don't see any nonpartisan way to justify that. As far as I can tell, disputes about the existence of a rise of CO2 level are completely marginal compared to disputes about temperature sensitivity. (CO2 level disputes seem to be a distant fourth behind at least three other disputes, about 1. temperature sensitivity, 2. temperature measurements, and 3. credibility of current-generation climate models.) Mackay has plenty of expertise in the fundamentals of statistical reasoning, and it would be nice if he'd write even a third as much about back-of-the-envelope cross-checks of his confidence in the IPCC temperature sensitivity as he wrote about such cross-checks of the CO2 level rise.

In particular, it'd be interesting to know how Mackay can justify appealing to a scientific consensus that circles the wagons around the original Mann hockey stick articles, and around the IPCC process which made those articles the flagship of sufficiently strong evidence and sufficiently sound analysis to justify punting previous ideas about large preindustrial climate fluctuations. Mackay has done a book's worth of research on quantitative sanity checks related to the AGW controversy, enough work to have published pages of cross-checks addressing a quaternary controversy. And before that, he published quite a good book on (more or less) the fundamentals of statistical inference. Thus, a lack of interest in quantitative sanity checks on the consensus at the center of the CO2 controversy seems out of place.

Perhaps Mackay subscribes to the view that the preindustrial temperature evidence is unimportant because the modellers have statistically sound demonstrations of sufficient ability to make quantitative predictions from first principles and recent measurements? Or that the statistical problems of the original hockey stick aren't important because later studies to defend its conclusions were done with fundamentally sound statistics, honestly accounting for what seems to have been a strong political temptation to, e.g., give rather heavier statistical weight to trees in a small dataset giving palatable results than to trees in larger datasets with less palatable results? I don't know how Mackay can justify deferring to the IPCC estimate of CO2 sensitivity without subscribing to one of those two possible views. However, I also don't know how he can easily subscribe to either. Conversely, it's sort of fascinating, but in a rather sad, creepy way, when he finesses this by dropping from previous pages of physicist-speaking-to-physicist analysis --- performing sanity checks on the fundamentals by numerate back-of-the-envelope/elevator-pitch analysis --- to breezy chatter about "complex, twitchy beasts" and "Bad Things."

What kinds of cross-checks am I dreaming of here? As cross-checks of arguments for smallness of preindustrial climate changes, I nominate five examples. 1. How statistically reasonable is it to handle what is in effect a multisensor fusion problem by giving zero weight to our scattered, incomplete hard temperature data (e.g., historical lake/river freezing dates in Europe and various of its colonies), calculating our result purely in terms of more indirect proxies (because of their compensating advantages, like longer time series)? 2. Roughly how sensitive might the post-Mann IPCC-camp temperature results be to outright cherry-picking and/or softer irregularities like giving heavier weight to a tree in an 8-tree dataset than to a tree in a larger dataset? 3. Given the level of local variation we observe in climate at naively-comparable sites today (e.g., comparable in altitude and latitude), roughly how often should we expect to see purely-local fluctuations of the size of the LIA/MWP observations? 4. How numerically reasonable was the Wegman network-analysis critique, and to what extent does it apply to the various generations of IPCC-favored analyses? 5. How information-theoretically reasonable is it to be pointedly uninterested in key details of at least the most heavily weighted proxies (e.g., a long-standing pattern of not publishing raw data and details about its collection, and of not energetically remeasuring and rechecking the proxies as the passage of time adds new tree rings or mudlayerwiggles or whatever; and the recently-controversial masterstroke of not graphing them either, as in the famous "hide the decline" trick)? Is the observed level of interest consistent with the hypothesis of a technical community honestly reaching a scientific consensus about a statistical reconstruction?

(I don't claim that each of those cross-checks would torpedo the IPCC position below the waterline. I do think that #1, #2, and #5 are serious criticisms. I also think that #2 and #5 are sufficiently common criticisms that addressing either or both, instead of "CO2 concentration is not rising," would look much less like beating a strawman. I have mixed feelings about #4; quick-and-crude quantification of fundamentally messy things like social relationships doesn't appeal to me, but on the other hand, claims of "consensus" and "independent studies" are equally crude quantifications of the same messy system. Thus, to the extent that it's worth addressing such a crude simplification of a messy system, Wegman's calculation seems like a natural enough way for a statistician to try to do it. And I'm quite curious about #3, and I don't know why I have never run across a reference to such a calculation. I'm unlikely to do such a calculation myself, and less likely to write it up, since I think it would take me at least two weeks to get sufficiently up to speed on the data sources to get a result I'd be unembarrassed to put on a webpage. But many dozens of people are already very familiar with the data, and many of them might be able to do it in a weekend, and many of them write at least dozens of pages a year on similar subjects. As for #2, the raw selection effect at least is easy to demonstrate; see the sketch below.)
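Here, then, is a toy Monte Carlo sketch of that selection effect, continuing the Common Lisp theme from the post above. It generates K "proxies" of pure red noise and reports how well the best of them correlates with a 30-point linear "instrumental" target over the calibration window. All names and numbers are invented for illustration; this demonstrates selection bias in the abstract, not anyone's actual reconstruction procedure.

  (defun random-walk (n)
    ;; Pure red noise: the running sum of uniform steps in [-0.5, 0.5).
    (let ((x 0.0) (out '()))
      (dotimes (i n (nreverse out))
        (incf x (- (random 1.0) 0.5))
        (push x out))))

  (defun mean (xs) (/ (reduce #'+ xs) (length xs)))

  (defun correlation (xs ys)
    (let* ((mx (mean xs)) (my (mean ys))
           (cov (mean (mapcar (lambda (x y) (* (- x mx) (- y my))) xs ys)))
           (sx (sqrt (mean (mapcar (lambda (x) (expt (- x mx) 2)) xs))))
           (sy (sqrt (mean (mapcar (lambda (y) (expt (- y my) 2)) ys)))))
      (/ cov (* sx sy))))

  (defun best-of-k (k n target)
    ;; The selection effect itself: of K noise proxies of length N, how
    ;; well does the best one correlate with TARGET over the last
    ;; (LENGTH TARGET) points?
    (loop repeat k
          maximize (correlation (last (random-walk n) (length target))
                                target)))

  ;; Compare no selection against picking the best of 8 or of 100:
  (let ((target (loop for i below 30 collect (float i))))
    (list (best-of-k 1 130 target)
          (best-of-k 8 130 target)
          (best-of-k 100 130 target)))

By construction every "proxy" here is noise, yet the best-of-100 correlation will reliably sit far above the unselected one; the open question in #2 is how much of that effect survives, diluted, in the real procedures.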

It's harder to dream up direct quantitative cross-checks for the validity of IPCC modeller consensus. None of the AGW forecast data I've ever heard seem to be friendly to such back-of-the-envelope checks on any reasonable timescale. Thus my nominations for sanity checks here are not calculations, but questions "why [is it hard to find such cross-checks]?" and "what [the hell are we thinking then]?" 1. If modellers have the situation under control, why are they unable to (or unmotivated to?) pick easily-measurable numbers where their models make clear, interesting near-term predictions? 2. If they're not doing this, what is the strongest line of argument that they are reasoning clearly about having the situation under control, as opposed to, say, peddling overfitted nonsense?

To elaborate on point #1 here, it's a common situation in modelling that there are economically-important questions that we care about (e.g., how often a new kind of telephone exchange will have to refuse/drop calls because of congestion) which are expensive to measure directly. (First build the expensive piece of equipment, then wire it up to a bunch of customers to use them as guinea pigs...) Of course ultimately you *do* test this kind of thing on the poor guinea pigs, whether you like it or not, but it's usually worth doing a lot of work before you get to that stage. If you have a model which purports to demonstrate a surprising result about behavior in full-scale customers-running-live mode, it's good to have evidence in support of that model before you actually test the thing on customers. (Note that "surprising" doesn't need to be terribly surprising, either: if you want to scale up a supermarket by 80% relative to the largest supermarket your chain has built so far, and claim that various size-dependent properties will be accurately (+/-15%, say) predicted by linear extrapolation from the sizes of existing stores, that might not be surprising in informal terms, but in this context it's surprising enough that you'd like some justification before you build it and let the guinea pigs in.) So if you have a model which is good enough to make "surprising" predictions in the expensive large, in my experience, it tends also to be good enough to make comparable predictions in the small.
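A toy numerical illustration of why even that +80% supermarket claim counts as "surprising": fit a least-squares line to in-range data generated from a mildly nonlinear rule, and compare the worst in-sample error against the error at the scaled-up size. The exponent 1.1 and all the numbers are invented for illustration.

  (defun mean (xs) (/ (reduce #'+ xs) (length xs)))

  (defun fit-line (xs ys)
    ;; Ordinary least squares; returns (slope intercept).
    (let* ((mx (mean xs)) (my (mean ys))
           (sxy (reduce #'+ (mapcar (lambda (x y) (* (- x mx) (- y my))) xs ys)))
           (sxx (reduce #'+ (mapcar (lambda (x) (expt (- x mx) 2)) xs)))
           (slope (/ sxy sxx)))
      (list slope (- my (* slope mx)))))

  ;; Suppose the size-dependent property really scales as size^1.1, and
  ;; our existing stores span sizes 1..10:
  (let* ((xs (loop for s from 1 to 10 collect (float s)))
         (ys (mapcar (lambda (s) (expt s 1.1)) xs))
         (fit (fit-line xs ys))
         (predict (lambda (x) (+ (* (first fit) x) (second fit)))))
    (list :worst-in-sample-error
          (loop for x in xs for y in ys
                maximize (abs (- (funcall predict x) y)))
          :linear-prediction-at-18 (funcall predict 18.0)
          :truth-at-18 (expt 18.0 1.1)))

Even with this barely-nonlinear rule, the error at size 18 comes out several times the worst in-sample error: the line hindcasts beautifully and still drifts once you leave the fitted range, which is the whole reason the claim needs justification before the guinea pigs arrive.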

In my experience in chemistry, a question which might matter economically is the value of a binding constant under some exotic conditions that will be very difficult (time-consuming, expensive...) to set up. If it's too difficult to set up unless you already know the model is correct, how then can your model prove its worth now, so that you know it's correct to do the expensive thing? The model doesn't prove its worth by stubbornly claiming that it really can calculate the one number that we ultimately care about, you damnable denier, but by successfully predicting surprisingly accurate results for dozens or hundreds of other numbers in related problems where measurement is much more practical. E.g., it might make a boatload of predictions about spectroscopic changes of the bound molecule in related conditions. In CO2 climate sensitivity questions I don't know enough to guess what should be simultaneously easy to predict, easy to measure, and surprisingly significantly different from naive extrapolation, but roughly the kind of thing I'd expect is Mackay writing "a famous early example was the Tarsasku model B 1997 "beach bunny" curve predicting the change in the power spectrum of coastal/inland nocturnal barometric fluctuations, which was clearly vindicated by 2002; Zer's microfoundations review article of 2003 gives 14 such predictions which met his 98% confidence level, and today we have approximately 300."

(Maybe that power spectrum example sounds unrealistically complicated? or unreasonably simpleminded? I don't mind simple predictions at all --- e.g., differential tropospheric warming? excellent in its simplicity. But my impression from other modelling is that in pursuit of results which are easy to predict, surprising, and easy to measure sufficiently accurately --- and from what I've seen of the troposphere controversy, tropospheric warming measurement accuracy is at best marginal --- one tends naturally to end up with esoteric predictions. Commonly, in fact, one ends up with very esoteric predictions, and I'd cheerfully accept predictions of the north/south hemisphere deviation of the El-Nino-corrected coastal/inland nocturnal barometric fluctuations, or results hairier than that, as long as they're precisely defined in terms of measurements which are routinely made to sufficient accuracy. Compared to the hairiness of tunneling down through all the layers of equipment and calculation to the actual physical reality of spectroscopic experiments (like 2D NMR, or various nonlinear superfast laser stuff), that seems almost tame. But I'm not impressed with hindcasting small noisy datasets with enormous computer programs, and I'm not impressed with any prediction which even today, after all the years which have passed since the science was settled, is hard to distinguish sharply from the supernaive rival hypothesis of "zero trend, not even no-feedback lukewarming, just the usual reddish noise drift" over the past decade, and which won't be clearly distinguishable from historical trend extrapolation for many years yet.)

And to elaborate on point #2, I'm venting, but fundamentally it's a serious question. And its seriousness doesn't depend on worries about left/right/enviro political subtexts, about professional clannishness, or about professional or financial incentives to reach particular kinds of policy conclusions. Without any of those incentives, modellers can very easily be spontaneously guilty of overfitted nonsense; it seems to be very human to fall in love with the modelling approach one has chosen, and to believe its results with more confidence than one should. Relatedly, there is a strong human tendency to resent being suspected of merely falling in love with an unrealistic model, and to react by pumping out results which demonstrate that the model really can predict surprising stuff accurately, even if the experimentalists (did we mention that when attending institutions where people who qualify are able to become theorists, they "chose" to go into experiment? need we say more?) haven't yet been able to measure relevant quantities to sufficient accuracy to confirm our accurate surprising prediction for the result that people are economically motivated to care about. The modellers are human; if they aren't pumping out those results, I judge that it's alarmingly likely to be because their models simply aren't valid enough to produce forecasts sharply distinguishable from weak models like linear extrapolation.

(It looks to me as though preindustrial temperature fluctuations were large compared to deviations from the post-1800 warming trend. I know no strong reason to believe that non-CO2-driven fluctuations have calmed down since 1800. Thus, I have independent reason to suspect that modellers are peddling overfitted nonsense. But even if I didn't have that reason, perhaps in some alternative universe where written history only began in 1815, I'd still consider the lack of focus on sharp specialized test predictions somewhere between "inexplicable" and "damning.")

(BRAAAAAIIINS!)

Wednesday, June 03, 2009

Happy upcoming Father's Day!

Several times over the past ten years or so my father has mentioned an old Horizon magazine article written by a British Labour MP. Sometimes he has paraphrased passages from memory, e.g.,

The point I am making is that although the doomwatchers say they are upset at the prospect of having to sacrifice freedom, their disregard of all serious argument against the need to do so, the indecent haste with which they embrace the authoritarian option, and the self-righteous passion with which they try to ram it down everyone's throat belie their words.
and sometimes wished he could check his memory of it. As is probably obvious from the wording of that quote, I have now successfully tracked down the article ("Getting Along With Doomsday" by Bryan Magee in the summer 1975 issue of Horizon magazine). Primarily I am smugly preparing to mail a photocopy as a more-thoughtful-than-usual-for-me Father's Day gift. Secondarily, though, I'll take the opportunity to remark about the article here.

The article is about general properties common to potential catastrophes which particularly alarm leftists, and common to the political agitation around them. It seems to me that the generalizations have held up very well over almost 35 years, so well that I'm surprised that no one has been motivated to reprint the article on the web. Instead the article seems to be completely invisible: my Google search for the quoted title string "getting along with doomsday" finds only two hits, both merely the tables of contents of entire issues of Horizon magazine. So I am apparently the first WWW author to recommend the article; go me!

Unable to hyperlink to it, I shall at least quote another paragraph, from the first page:

Having lived in the period when these views were popular, I am struck by several peculiarities about them. First, although most of them are logically unconnected, and some are mutually contradictory, they were all accepted and promoted by roughly the same people. And --- this may be merely a comment on the circles I happen to move in, but I do not think so --- their appeal seemed to be preponderantly to people of a certain left-wing persuasion. Many of my acquaintances moved from one to the next as each in its turn became fashionable. Some embraced two or more simultaneously. A few heroically muddled individuals tried to believe all of them at once.
Really someone should put this entire article properly on the web, blast it! It was hard to decide which of about ten paragraphs in a row (starting from this one) I wanted most to quote; and there are various rival paragraphs in other sections of the article too.

Besides securing the rights to post the original on the web --- well worth doing, I think, because part of the force of the article today is that it's a time capsule whose frozen analysis can be compared against developments since then --- it might be interesting if some thoughtful person wrote a generalized update. The generalization that I'd particularly like to see would be not only to critique the stereotypical leftist doomsaying as Magee does, but also to critique stereotypical vaguely-classical-liberal doomsaying. (Here by vaguely classical liberalism I mean a pretty big tent, including e.g. the authors of The Federalist). After all, we have a checkered record too. E.g., classical liberals have worried about standing armies making republican states hopelessly unstable, and about growth of state power being a one-way ratcheting garrote. Neither of those has been a really good predictor for the twentieth century. (The one-way ratchet isn't a terribly bad rule of thumb, but in the twentieth century as in the three centuries before it the exceptions were very important, and we're not doing very well at predicting the exceptions.)

Vaguely-classical-liberal folk have also done some doomsaying about absolute socialism causing absolute poverty. Such doomsaying seems now to be popularly discredited, considered to be overblown scare stories falsified by history. As far as I know, though, the most influential scare stories involving absolute poverty also involved truly absolute socialism, including things like absolutely eliminating money, and not merely Soviet-style 90% collectivization of agriculture, but absolute 100% collectivization of agriculture. Thus, it seems to me that the verdict of history is slightly unclear on the effects of such absolute socialism. The much clearer verdict is on how the ratchet tends to stop before you reach such absolute socialism, leaving grey market arrangements like private farming plots to play an important part in the economy.

The twentieth century does seem to have supported the vaguely-classical-liberal doomsaying about the dangers of strong states. States so strong that they wipe out most independent power centers have been extremely dangerous to their own subjects over the past 100 years; it's hard to use the twentieth century to support the proposition that there is any threat which justifies strengthening the state to the point that independent power centers start getting wiped out.

You could try to justify a very high level of state control by invoking the threat of conquest by an equally illiberal state. However, it's not so obvious that a supercentralized state is militarily stronger than a more liberal state. Nazi Germany is the obvious scary example to support the idea that supercentralized states can be ferociously strong in high intensity war. However, knowledgeable people seem to judge most of their relative effectiveness to have been due to getting tactics and doctrine right. The history of the last fifty years seems to support the idea that they're correct, and that the things the Wehrmacht got right are largely independent of fighting for a supercentralized state.

Incidentally, my wish for a more balanced update isn't intended as a criticism of Magee writing in 1975. Given his venue, his article was plenty long and ambitious. Also, the classical liberal bugaboos are mostly claims about economics and political science under different kinds of political systems. Today we have considerably more relevant and undisputed data on those subjects than we did in 1975: the passage of time has not only created much more economic data, but also unlocked previously hidden and disputed Soviet and Chinese economic data. So I'm only wishing for someone to take advantage of the analysis opportunities that we have in 2009, not complaining that Magee improperly left his analysis incomplete in 1975.

Finally, let me acknowledge that of course bad arguments or suspect motives do not suffice as a logical justification to deny an argued-for conclusion. Therefore, even if Magee correctly identified patterns of bad arguments or suspect motives, that doesn't suffice to disprove any conclusions. Sometimes, however, a pattern of bad arguments or suspect motives can suffice as a justification for becoming exasperated.

Sunday, May 31, 2009

some sniping at peer-reviewed AGW science

The discussion of AGW at Overcoming Bias refers to a Physical Review Letters paper by Verdes. The first link to the paper in the Overcoming Bias post doesn't work for me, and the second costs money, but in the comments, commenter "g" gave a variant of the first link which works and is free.

The fit in the Verdes figure reproduced by Robin Hanson looks pretty good, probably about as good as can be expected given the quality of the experimental data. But it is not clear to me that it can remain so good outside Verdes' chosen interval.

At the high end: cutting off around 2001 in a paper submitted in 2006 seems peculiar, especially when linear extrapolation suggests that by 2005 the model might be diverging by more than it does in any of the years when Verdes chose to test it. And today we have data up to 2008, and it might be interesting to calculate what the Verdes model infers the CO2 level to have been up to 2008. I'd rather expect it to diverge further (from skimming his description of his model, and noting that global temperatures have remained well below the strong-AGW projected long-term trend).

At the low end: while it seems fairly natural to stop fitting sometime in the 19th century (as data quality is falling fast in that period) it is not obvious that one should also stop testing the model sometime in the 19th century. There is a controversy about how much climate variability there has been in the last 1000 years, fanned by IPCC AR3 and An Inconvenient Truth promoting the famous low-variability "hockey stick" reconstruction. Since those heady days of settled science, IPCC has backed off to the variability illustrated in Fig. 6.10 of the AR4 report. It's not obvious that before 1800, when anthropogenic CO2 forcing is negligible, the Verdes model can produce as much variability as AR4 estimates to have existed. (And various IPCC critics, like me, doubt that AR4 has gone far enough: I hope that Robin Hanson will put his econophysicist hat on and make a nice time machine for betting markets, so that we can make and cleanly settle bets about pre-1800 variability.:-)

Also, setting aside concerns about the arbitrariness of the test window, is the fit good enough to justify the title "Global Warming Is Driven by Anthropogenic Emissions," and the concluding sentence of the abstract?

Here we show, using two independent driving force reconstruction techniques, that the combined effect of greenhouse gases and aerosol emissions has been the main external driver of global climate during the past decades.
That's a pretty strong statement. If we quantified the goodness of the fit over the three wiggles of the low-frequency signal within the window, would it be enough decibels of evidence to fairly paraphrase as "show ... has been the main external driver"? If I were an advocate seeking to make the most of this analysis, I might paraphrase it as "strongly support" the conclusion, rather than "show" the conclusion.
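For anyone who hasn't met the unit: "decibels of evidence" is Jaynes's way of putting a likelihood ratio on a log scale, 10 log10 of P(data|H1)/P(data|H2). A minimal sketch with made-up likelihoods, since actually computing the likelihood of the data under "main external driver" versus its rivals is of course the entire hard part:

  (defun evidence-db (p-data-given-h1 p-data-given-h2)
    ;; Jaynes-style evidence: 10 * log10 of the likelihood ratio.
    (* 10 (log (/ p-data-given-h1 p-data-given-h2) 10)))

  (evidence-db 0.90 0.009)  ; a 100:1 likelihood ratio => 20.0 dB

Whether three wiggles of low-frequency signal buy enough independent data points to make that number large is exactly the question above.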