towermonkey wrote: Oh, yer just asking for it now....Fire away SC!
The Liberals GOP Twin wrote: I'm waiting Science Chick. Fisk the above document for all of us. Do some of your Mr. Wizard for us.
Dude, I posted the link earlier and I'm not re-hashing this again - I addressed it all last time. http://www.pinecam.com/phpBB2/viewtopic ... readme+txt
From page 2 of that thread, Science Chic wrote:
I realized when I posted the release of the results of the first inquiry that I never responded to this. Here's RealClimate's explanation, with a more recent email cited that states quite clearly that the datasets were fixed (and I don't mean in an illegal way, but in a corrective manner).
Dr. Fill wrote: I guess I will have to have Ian Harris spell it out for you. The programmer who worked for 3 years (2006-2009) on the Climate Research Unit's legacy climate modeling software and legacy databases. Data that ALREADY existed.
“This still meant an awful lot of encounters with naughty Master stations, when really I suspect nobody else gives a hoot about. So with a somewhat cynical shrug, I added the nuclear option - to match every WMO possible, and turn the rest into new stations (er, CLIMAT excepted). In other words, what CRU usually do. It will allow bad databases to pass unnoticed, and good databases to become bad, but I really don't think people care enough to fix 'em, and it's the main reason the project is nearly a year late.”
Read the whole narrative about his three years of work with faulty data and modeling software.
Download Ian Harris' 314 pages of programmer's notes in PDF format
http://www.realclimate.org/index.php/ar ... k-context/
Update: Pulling out some of the common points being raised in the comments.
* HARRY_read_me.txt. This is a 4 year-long work log ( http://di2.nu/foia/1252090220.txt ) of Ian (Harry) Harris who was working to upgrade the documentation, metadata and databases associated with the legacy CRU TS 2.1 product, which is not the same as the HadCRUT data (see Mitchell and Jones, 2003 for details http://www3.interscience.wiley.com/jour ... 1&SRETRY=0 ). The CRU TS 3.0 is available now (via ClimateExplorer for instance), and so presumably the database problems got fixed. Anyone who has ever worked on constructing a database from dozens of individual, sometimes contradictory and inconsistently formatted datasets will share his evident frustration with how tedious that can be.
From this email: http://di2.nu/foia/1252090220.txt
On 3 Sep 2009, at 17:04, Tim Osborn wrote:
> Hi Harry and Phil,
>
> the mean level of the "updated-to-2008" CRU TS 3.0 now looks good,
> matching closely with the 1961-1990 means of the earlier CRU TS 3.0
> and CRU TS 2.1.
>
> Please see the attached PDF of country mean time series, comparing
> last-year's CRU TS 3.0 (black, up to 2005) with the most-recent CRU
> TS 3.0 (pink, up to 2008).
>
> Latest version matches last-year's version well for the most part, and
> where differences do occur I can't say that the new version is any
> worse than last-year's version (some may be better).
It's another non-issue.
Science Chic wrote: Now for the Ian Harris/computer code/data issue.
I've addressed this already on the 1st page of this thread.
http://www.realclimate.org/index.php/ar ... k-context/
Quote:
HARRY_read_me.txt. This is a 4 year-long work log of Ian (Harry) Harris who was working to upgrade the documentation, metadata and databases associated with the legacy CRU TS 2.1 product, which is not the same as the HadCRUT data (see Mitchell and Jones, 2003 for details). The CRU TS 3.0 is available now (via ClimateExplorer for instance), and so presumably the database problems got fixed. Anyone who has ever worked on constructing a database from dozens of individual, sometimes contradictory and inconsistently formatted datasets will share his evident frustration with how tedious that can be.
You are, again, making accusations without understanding the context of the data and the text.
As plainly expressed in both the first [see point #48] and second investigations, the data that CRU works with comes from primary observations recorded by National Meteorological Stations all over the world - data that is also used by GISS and NCDC (and groups in Japan and Russia). They each process that data independently with differing methodologies, and each group's results corroborate the others'. The point being: if the CRU were cooking its data, its results wouldn't look like everyone else's!
The second review stated specifically:
Quote:
we are satisfied that the CRU tree-ring work has been carried out with integrity, and that allegations of deliberate misrepresentation and unjustified selection of data are not valid.
Quote:
We have not exhaustively reviewed the external criticism of the dendroclimatological work, but it seems that some of these criticisms show a rather selective and uncharitable approach to information made available by CRU. They seem also to reflect a lack of awareness of the ongoing and dynamic nature of chronologies, and of the difficult circumstances under which university research is sometimes conducted. From our perspective it seems that the CRU sins were of omission rather than commission.
Now, yes, the Met Office has launched a re-examination of global temperature records, and it is expected to take 3 years (can we hold comments until it's finished, please?). As mtspike and I have noted before: http://www.guardian.co.uk/environment/2 ... ta-science
Quote:
"It is important to emphasise that we do not anticipate any substantial changes in the resulting global and continental-scale multi-decadal trends. This effort will ensure that the datasets are completely robust and that all methods are transparent," the document says.
http://webcache.googleusercontent.com/s ... =firefox-a
Quote:
The current surface temperature datasets were first put together in the 1980s to the best standards of dataset development at that time; they are independent analyses and give the same results, thereby corroborating each other. Consequently we have been considering how the datasets can be brought up to modern standards and made fit for the purpose of addressing 21st Century needs. We feel that it is timely to propose an international effort to reanalyze surface temperature data in collaboration with the World Meteorological Organization (WMO), which has the responsibility for global observing and monitoring systems for weather and climate.
Never once did any of these investigations call into question the validity of the data, as you kept repeating in your posts on this subject while ignoring the facts to the contrary as presented.
http://www.pinecam.com/phpBB2/viewtopic ... c&start=72
Why I Believe, and What It Would Take to Change My Mind
Dr. Fill wrote:
I think you need to go back and read ALL of the different documents and memos that I linked to. The Royal Society of Chemistry and the Institute of Physics are certainly questioning the validity of the data.
And why do you think the British Met Office is re-examining 160 years of temperature readings? Because they are concerned with the validity of the data.
And if you know anything about the legacy data and the legacy software that created it, you would know that there are a LOT of questions about the validity of the data.
Dr. Fill wrote:
None of the climate data from any of the three main sources are totally independent of each other. You should know the dependencies that all the data sets have with each other. These interconnections of data sharing go back to the 1980's.
Even if we were to take your proposition as fact, then, we still have a major problem with 1/3 of the data available.
If you haven't read the notes of the programmer who was tasked with maintaining the legacy data and legacy modeling software at the CRU, then you are certainly not even slightly aware of the problem with the CRU data.
Ian Harris was tasked with producing the CRU TS (time series) 3.0 data set. He sat down with the legacy data and the programs that created CRU TS 1.0 and upward, and he couldn't get any meaningful results using the existing data sets and existing programs. He worked from 2006 to Dec. 2009, and even then, after three years of work, he produced a data set with errors.
The data was collected independently of the CRU; other centers had access to the same data, and all the independent analyses agree with one another - so even if you "throw out 1/3 of the results", you still have 2 other centers saying that global warming is happening using the same raw data! And Ian Harris was working on the TS 2.1 and TS 3.0 data sets, which had nothing to do with the CRUTEM3 data! So where's the problem?
I may not be a computer programmer, but at least I'm addressing the true concerns, not getting sidetracked by computer codes for a different project that is likely being taken out of context as well. "Smoking gun"? This might interest you:
http://scienceblogs.com/deltoid/2009/12 ... g_code.php
Quote:
Today I'll look at Eric Raymond's alleged "siege cannon with the barrel still hot":
From the CRU code file osborn-tree6/briffasep98d.pro , used to prepare a graph purported to be of Northern Hemisphere temperatures and reconstructions.
;
; Apply a VERY ARTIFICAL correction for decline!!
;
yrloc=[1400,findgen(19)*5.+1904]
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
2.6,2.6,2.6]*0.75 ; fudge factor
if n_elements(yrloc) ne n_elements(valadj) then message,'Oooops!'
;
yearlyadj=interpol(valadj,yrloc,timey)
This, people, is blatant data-cooking, with no pretense otherwise. It flattens a period of warm temperatures in the 1930s -- see those negative coefficients? Then, later on, it applies a positive multiplier so you get a nice dramatic hockey stick at the end of the century. -[Raymond]
But what is the code directly following the fragment Raymond quotes? Look:
;
;filter_cru,5.,/nan,tsin=yyy+yearlyadj,tslow=tslow
;oplot,timey,tslow,thick=5,color=20
;
IDL uses a semi-colon to indicate a comment, so the only code to use yearlyadj has been commented out. Raymond must have known this since he is an Emacs user and Emacs colour codes the comments. This doesn't seem to be a smoking gun so much as a gun that hasn't been fired.
Raymond has made no attempt to find out if the graph was actually used anywhere. The file name was osborn-tree6/briffa_sep98_d.pro, so we should look for a paper with authors Briffa and Osborn published in 1998, and sure enough there's Briffa, Schweingruber, Jones, Osborn, Harris, Shiyatov, Vaganov and Grudd "Trees tell of past climates: but are they speaking less clearly today?" Phil. Trans. R. Soc. Lond. B 1998:
And figure 6 is basically the graph plotted by the code above and it does not include the "corrected MXD" data:
Hm. So others are looking for blatant misrepresentation of the raw data in the code, but not checking whether the "correction" was ever used in anything that was published, or whether any modification was improper or undocumented. And they are running with their incomplete analysis of said "investigation". Sounds familiar...
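For readers who don't use IDL: the mechanics of the quoted fragment are easy to reproduce. `interpol(valadj, yrloc, timey)` linearly interpolates the sparse adjustment values (defined at 1400 and at 5-year steps from 1904 to 1994) onto a yearly time axis. A minimal Python sketch of just that step, using the numbers from the quoted code - NumPy's `np.interp` does the same thing as IDL's `interpol` but with the argument order reversed, and the yearly axis `timey` is my assumption, since it isn't defined in the fragment:

```python
import numpy as np

# Anchor years, as in the quoted IDL: yrloc = [1400, findgen(19)*5 + 1904]
yrloc = np.concatenate(([1400], np.arange(19) * 5 + 1904))

# The "fudge factor" adjustment values at those anchor years
valadj = np.array([0., 0., 0., 0., 0., -0.1, -0.25, -0.3, 0., -0.1,
                   0.3, 0.8, 1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6]) * 0.75

# The IDL code's sanity check ("Oooops!")
assert len(yrloc) == len(valadj)

# IDL: yearlyadj = interpol(valadj, yrloc, timey)
timey = np.arange(1904, 1995)               # assumed yearly time axis
yearlyadj = np.interp(timey, yrloc, valadj)  # note reversed argument order

print(round(float(yearlyadj[-1]), 3))  # adjustment at 1994: 1.95
```

Note the negative values in the 1930s (down to -0.3 * 0.75 = -0.225) and the large positive values at the end, which is exactly the shape Raymond describes; the dispute is only about whether the result was ever used, not about what the code computes.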
Science Chic wrote: I've addressed this already on the 1st page of this thread. - http://www.realclimate.org/index.php/ar ... k-context/
HARRY_read_me.txt. This is a 4 year-long work log of Ian (Harry) Harris who was working to upgrade the documentation, metadata and databases associated with the legacy CRU TS 2.1 product, which is not the same as the HadCRUT data (see Mitchell and Jones, 2003 for details). The CRU TS 3.0 is available now (via ClimateExplorer for instance), and so presumably the database problems got fixed. Anyone who has ever worked on constructing a database from dozens of individual, sometimes contradictory and inconsistently formatted datasets will share his evident frustration with how tedious that can be.
Figure 1. Global temperature change during 1901–2002.
(a) Graph of global mean temperature departures relative to the mean during 1901–2002. (b) Graph of trend analysis. The trend over the entire time period (black line) increases at a rate of 0.075°C per decade (0.75°C per century), the trend during 1941–2002 (purple line) increases at a rate of 0.11°C per decade, the trend during 1951–2002 (green line) increases at a rate of 0.16°C per decade, the trend during 1961–2002 (orange line) increases at a rate of 0.22°C per decade, and the trend during 1971–2002 (red line) increases at a rate of 0.31°C per decade. Data for both graphs were calculated from the CRU TS 2.1 data set [10]. - doi:10.1371/journal.pone.0008320.g001
http://www.plosone.org/article/info:doi ... ne.0008320
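The per-decade rates in that figure caption come from ordinary least-squares linear fits over each time window, with the per-year slope scaled by ten. A minimal sketch of the calculation - the data here are synthetic, not the actual CRU TS 2.1 series, and `decadal_trend` is my own illustrative helper:

```python
import numpy as np

def decadal_trend(years, temps):
    """Least-squares slope of temperature vs. year, scaled to degC per decade."""
    slope_per_year = np.polyfit(years, temps, 1)[0]
    return slope_per_year * 10.0

# Synthetic illustration: a series warming at exactly 0.0075 degC per year,
# i.e. the caption's 1901-2002 rate of 0.075 degC per decade
years = np.arange(1901, 2003)
temps = 0.0075 * (years - 1901)

print(round(float(decadal_trend(years, temps)), 3))  # 0.075
```

Fitting the same series over a shorter, later window (as the caption does for 1941–2002, 1951–2002, etc.) is how the steeper recent rates are obtained from a single accelerating record.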
We present climate trends calculated from the 0.5-degree resolution CRU TS 2.1 dataset [10] for 1951 to 2002 to identify those areas that have experienced the greatest rates of temperature and precipitation change (Figure 2). We chose this time period because high levels of anthropogenic greenhouse gasses were emitted during this 52-year period, it spans a long enough period for major environmental and ecological responses to climate change to have occurred, and the climate data are more robust for this period than earlier in the century.
http://www.plosone.org/article/info:doi ... ne.0008320