Climate change - the POLITICAL debate. Vol 3

TOPIC CLOSED

Jasandjules

69,947 posts

230 months

Sunday 8th March 2015
zygalski said:
So which climate data should we be looking at for trends? The last 18 years or the last 1800?
Talk amongst yourselves & come back to me when you've got your story straight.
laugh
Perhaps if you re-read it, and comprehend it correctly, it will become clearer to you. I take it from the statement above that you lack knowledge of the history of the AGW liars and of the assertions that were made and then changed.

turbobloke

104,046 posts

261 months

Sunday 8th March 2015
zygalski said:
So which climate data should we be looking at for trends? The last 18 years or the last 1800?
There's no visible causal human signal in any trend in any data for any era, so the question you posed is pointless.

zygalski said:
Talk amongst yourselves & come back to me when you've got your story straight.
Come back to us when you've got any story at all.

Some hope...

silly

Scuffers

20,887 posts

275 months

Sunday 8th March 2015
zygalski said:
Jasandjules said:
zygalski said:
That's what I like about you climate tin-foilers. You're just so damn consistent. laugh

Edited by zygalski on Sunday 8th March 08:59
Yes, consistency is the hallmark of those telling the truth. Only liars need to keep changing the story, such as "snow is a thing of the past"... no wait, what we meant was there would be loads more snow, the models show this...

Or, the models will show the planet heating out of control. No wait, what we mean is there will be a 20-30 year period where there will be no statistical warming....
So which climate data should we be looking at for trends? The last 18 years or the last 1800?
Talk amongst yourselves & come back to me when you've got your story straight.
laugh
OK, explain how we were farming in Greenland 500+ years ago - it's now covered in ice.

Get real: climate change has been going on for millennia, yet the loonies think it's something new that's 100% caused by us.



LongQ

13,864 posts

234 months

Sunday 8th March 2015
zygalski said:
That's what I like about you climate tin-foilers. You're just so damn consistent. laugh
I think this one must have taken the full training course and passed with good grades, plus the extra marks for the application of gratuitous insults.

Still, at least he's putting his scientific evidence in the political thread. Given the sources (Mother Jones? SkepticalScience?), here would seem to be the right place, so long as it is not discussed as if it were science.


But what is a politically green convert doing in a "Car Enthusiasts" forum anyway?


Yesterday I was talking to a colleague who lives in Pennsylvania. He's had weeks of snow to keep him busy clearing his driveway - more than in a normal year, though where he is it could at least be measured in inches more often than feet. There may yet be more.

A couple of nights ago he watched a programme about Global Warming, presented as Climate Change, and took away the message that hurricanes and storms in general are becoming more dangerous on the basis that they are more intense. He conceded that the frequency seems to have reduced, and that measurements based on where people choose to live and on the cost of damage (unadjusted for inflation, etc.) may be questionable, but the "more intense is the big danger" message stuck with him.

Clearly the balance of reporting across all media over the next few months is going to be tilted towards influencing the impressionable and so far uninterested public, in order to build "support" for the Paris boondoggle. For that to happen in such a seemingly widely coordinated way almost certainly requires political support on a broad scale.

Political support usually emanates from a narrow band of society in which the majority of affiliates seem to be most interested in self-interest of one form or another. For such a group to promote a position based on selfless long-term benefit projections for future humans seems so unlikely that it is clear other matters must be behind it all.

I rather suspect that we are seeing the end of the Western Empire as it is currently known and understood, the climate change meme being an illustrative part of that.

Is it worth defending, or would it be more entertaining to bring it forward in order to see how things would develop in the perfect Green world? Get the pain over and done with as quickly as possible.

I've long had a bit of a soft spot for the benefits of nihilism to focus people's thinking.


Edited by LongQ on Sunday 8th March 21:03

Halb

53,012 posts

184 months

turbobloke

104,046 posts

261 months

Sunday 8th March 2015
Halb said:
Very interesting development, a bit over the top!

The trouble isn't so much with the ever-changing names given to the hoax, but with reporting such as this, taken from the link:

article said:
Gov. Scott, who won a second term in November, has repeatedly said he is not convinced that climate change is caused by human activity, despite scientific evidence to the contrary.
There's only scientific evidence to the contrary if scientific evidence is redefined to be non-scientific and not evidential.

Model outputs, bears, ice, floods, etc. - none of these constitute scientific evidence that mankind causes (insert name here), and with no causality to humans established there's no evidence in any trend in any temperature record over any time period.

The reporting should be about itself and how bad it is.

mybrainhurts

90,809 posts

256 months

Sunday 8th March 2015
Halb said:
Has ziggy emailed them yet? They're in for it, big time...hehe

don4l

10,058 posts

177 months

Sunday 8th March 2015
I'm wondering if our resident idiot has managed to work out what 97% of scientists were agreed about.

He seems to be confident that they agreed about something... or other. But, he doesn't seem to have a clue about what they did agree on. Also, he knows that he agrees with them, even though he doesn't know what they agree about. Furthermore, he is more intelligent than most of us because he agrees with them (even though he doesn't know what he is agreeing with).

Zyg, I am a caring, compassionate human being, so I will help you to find the answer to your question.

Try this website www.google.co.uk

Good luck with your research.


wc98

10,424 posts

141 months

Sunday 8th March 2015
don4l said:
I'm wondering if our resident idiot has managed to work out what 97% of scientists were agreed about.

He seems to be confident that they agreed about something... or other. But, he doesn't seem to have a clue about what they did agree on. Also, he knows that he agrees with them, even though he doesn't know what they agree about. Furthermore, he is more intelligent than most of us because he agrees with them (even though he doesn't know what he is agreeing with).

Zyg, I am a caring, compassionate human being, so I will help you to find the answer to your question.

Try this website www.google.co.uk

Good luck with your research.
smile

hidetheelephants

24,501 posts

194 months

Sunday 8th March 2015
hidetheelephants said:
zygalski said:
So the best way to develop renewables is what... not to invest in them?
"Appeasing simpletons". Interesting. So most seats of learning & the vast majority of scientists are included in this rather sweeping judgment of yours. I suppose the likes of the populist right wing UKIP voting PH intelligentsia are the ones with a firm grip on the truth!
What have renewables got to do with anything? You're coming from a position that assumes CO2 emissions are likely to cause significant problems in the near future, yet you want to tackle the problem in the least effective and most economically damaging way. Can you not see how ridiculous a position that is?

If the enthusiasm espoused by Labour in 1998 for saving the planet had been directed into a reasonably funded restart of the nuclear power programme, rather than wasting time and money on wind turbine and solar subsidies, the UK would be well on its way to meeting the emissions targets set, as well as having a robust energy supply, unlike the nonsense of STOR we have now. What we got instead was window dressing and energy insecurity.
Is the lack of a reply acquiescence or do you still disagree but can't muster an argument?

vournikas

11,720 posts

205 months

Sunday 8th March 2015
zygalski said:
Mr GrimNasty said:
Lest we forget.

AreOut said:
130 years is also a very short period in the history of this planet
That's what I like about you climate tin-foilers. You're just so damn consistent. laugh

Edited by zygalski on Sunday 8th March 08:59
I also prefer extended time scales; ice core data is a particular favourite of mine for demonstrating the natural historical variability of the global climate. But here's a sample paper that examines 1,500-year climate cycles.

zygalski

7,759 posts

146 months

Monday 9th March 2015
don4l said:
I'm wondering if our resident idiot has managed to work out what 97% of scientists were agreed about.

He seems to be confident that they agreed about something... or other. But, he doesn't seem to have a clue about what they did agree on. Also, he knows that he agrees with them, even though he doesn't know what they agree about. Furthermore, he is more intelligent than most of us because he agrees with them (even though he doesn't know what he is agreeing with).

Zyg, I am a caring, compassionate human being, so I will help you to find the answer to your question.

Try this website www.google.co.uk

Good luck with your research.
The main thing is that you keep on paying your environment taxes.
Keep your chin up, pal. laugh

don4l

10,058 posts

177 months

Monday 9th March 2015
zygalski said:
don4l said:
I'm wondering if our resident idiot has managed to work out what 97% of scientists were agreed about.

He seems to be confident that they agreed about something... or other. But, he doesn't seem to have a clue about what they did agree on. Also, he knows that he agrees with them, even though he doesn't know what they agree about. Furthermore, he is more intelligent than most of us because he agrees with them (even though he doesn't know what he is agreeing with).

Zyg, I am a caring, compassionate human being, so I will help you to find the answer to your question.

Try this website www.google.co.uk

Good luck with your research.
The main thing is that you keep on paying your environment taxes.
Keep your chin up, pal. laugh
Have you found out what you agree with yet?

The really hilarious thing is that if you answer that question, then you will suddenly realise what an idiot you are.

You see, I am part of the 97%.

You really should try using Google. It is quite easy to use, and it can be quite educational.





Mr GrimNasty

8,172 posts

171 months

Monday 9th March 2015
There should be a "do not engage" list for this thread: Flunker, Zitgalski, SexyTights (or whoever). They start off seemingly wanting a rational discussion, but it quickly becomes obvious they are just professional agitators, trying to provoke with childish digs or to drown/disrupt the thread with pages of impenetrable garbage. It's no accident they keep coming back. Attracting these people is a sign of the success and credibility of this thread.

4v6

1,098 posts

127 months

Monday 9th March 2015
Guam said:
They have no argument left save the precautionary principle and bluster.

The numbers don't add up and never have; those who can read the runes they cast with the models know this, and have done for many years.
How very, very cru....er, I mean true.

Dear old Harry said:
So, uhhhh.. what in tarnation is going on? Just how off-beam are these datasets?!!

Unbelievable – even here the conventions have not been followed. It’s botch after botch after botch.

22. Right, time to stop pussyfooting around the niceties of Tim’s labyrinthine software suites – let’s have a go at producing CRU TS 3.0! since failing to do that will be the definitive failure of the entire project..

Nearly 11,000 files! And about a dozen assorted ‘read me’ files addressing individual issues…

(yes, they all have different name formats, and yes, one does begin ‘_’!)

How handy – naming two different files with exactly the same name and relying on their location to differentiate! Aaarrgghh!!

If the latest precipitation database file contained a fatal data error… then surely it has been altered since Tim last used it to produce the precipitation grids? But if that’s the case, why is it dated so early?

So what’s going on? I don’t see how the ‘final’ precip file can have been produced from the ‘final’ precipitation database, even though the dates imply that. The obvious conclusion is that the precip file must have been produced before 23 Dec 2003, and then redated (to match others?) in Jan 04.

There is no way of knowing which Tim used to produce the current public files. The scripts differ internally but – you guessed it! – the descriptions at the start are identical. WHAT IS GOING ON?

So what is this mysterious variable ‘nf’ that isn’t being set? Well strangely, it’s in Mark N’s ‘rdbin.pro’. I say strangely because this is a generic prog that’s used all over the place! Nonetheless it does have what certainly looks like a bug…

Where is the documentation to explain all this?!

Bear in mind that there is no working synthetic method for cloud, because Mark New lost the coefficients file and never found it again (despite searching on tape archives at UEA) and never recreated it.

DON’T KNOW, UNDOCUMENTED. Wherever I look, there are data files, no info about what they are other than their names. And that’s useless..

So what the hell did Tim do?!! As I keep asking.

This is irritating as it means precip has only 9 fields and I can’t do a generic mapping from any cru format to cru ts.

Then.. like an idiot.. I had to test the data!

It’s halfway through April and I’m still working on it. This surely is the worst project I’ve ever attempted. Eeeek.

Oh bugger. What the HELL is going on?!

In fact, on examination the US database record is a poor copy of the main database one, it has more missing data and so forth. By 1870 they have diverged, so in this case it’s probably OK.. but what about the others?

Oh GOD if I could start this project again and actually argue the case for junking the inherited program suite!!

Oh Tim what have you done, man?

Just another thing I cannot understand, and another reason why this should all have been rewritten from scratch a year ago!

am I the first person to attempt to get the CRU databases in working order?!!

Oh bum. But, but.. how? I know we do muck around with the header and start/end years, but still..

In the upside-down world of Mark and Tim, the numbers of stations contributing to each cell during the gridding operation are calculated not in the IDL gridding program – oh, no! – but in anomdtb! ..well that was, erhhh.. ‘interesting’…So there is no guarantee that the station number files, which are produced *independently* by anomdtb, will reflect what actually happened!!

I am seriously worried that our flagship gridded data product is produced by Delaunay triangulation – apparently linear as well. As far as I can see, this renders the station counts totally meaningless. It also means that we cannot say exactly how the gridded data is arrived at from a statistical perspective – since we’re using an off-the-shelf product that isn’t documented sufficiently to say that. Why this wasn’t coded up in Fortran I don’t know – time pressures perhaps? Was too much effort expended on homogenisation, that there wasn’t enough time to write a gridding procedure? Of course, it’s too late for me to fix it too. Meh.

CRU NEEDS A DATA MANAGER.

Not only do both databases have unnecessary duplicates, introduced for external mapping purposes by the look of it, but the ‘main’ stations (2 and 4) have different station name & country. In fact one of the country names is illegal! Dealing with things like this cannot be automated as they’re the results of non-automatic decisions.

What a bloody mess.

Now looking at the dates.. something bad has happened, hasn’t it. COBAR AIRPORT AWS cannot start in 1962, it didn’t open until 1993! Looking at the data – the COBAR station 1962-2004 seems to be an exact copy of the COBAR AIRPORT AWS station 1962-2004. And wouldn’t you know it, the data for this station has missing data between 12/92 and 12/99 inclusive. So I reckon it’s the old FORREST AERO station (WMO 9464600, .au ID 11004), with the new Australian bulletin updates tacked on (hence starting in 2000) So.. do I split off the 2000-present data to a new station with the new number, or accept that whoever joined them (Dave?) looked into it and decided it would be OK? The BOM website says they’re 800m apart.

Hope that’s right..

All 115 refs now matched in the TMin database. Confidence in the fidelity of the Australian station in the database drastically reduced. Likelihood of invalid merging of Australian stations high. Let’s go..

getting seriously fed up with the state of the Australian data. so many new stations have been introduced, so many false references.. so many changes that aren’t documented. Every time a cloud forms I’m presented with a bewildering selection of similar-sounding sites, some with references, some with WMO codes, and some with both. And if I look up the station metadata with one of the local references, chances are the WMO code will be wrong (another station will have it) and the lat/lon will be wrong too.

I am very sorry to report that the rest of the databases seem to be in nearly as poor a state as Australia was. There are hundreds if not thousands of pairs of dummy stations, one with no WMO and one with, usually overlapping and with the same station name and very similar coordinates. I know it could be old and new stations, but why such large overlaps if that’s the case? Aarrggghhh! There truly is no end in sight.

I honestly have no idea what to do here. and there are countless others of equal bafflingness.

I suspected a couple of stations were being counted twice, so using ‘comm’ I looked for identical headers. Unfortunately there weren’t any!! So I have invented two stations, hmm.

I have to admit, I still don’t understand secondary parameter generation. I’ve read the papers, and the miniscule amount of ‘Read Me’ documentation, and it just doesn’t make sense.

As I was examining the vap database, I noticed there was a ‘wet’ database. Could I not use that to assist with rd0 generation? well.. it’s not documented, but then, none of the process is so I might as well bluff my way into it!

Units seem to vary: <DO YOU SEE? THERE’S THAT OH-SO FAMILIAR BLOCK OF MISSING CODES IN THE LATE 80S, THEN THE DATA PICKS UP AGAIN. BUT LOOK AT THE CORRELATIONS ON THE RIGHT, ALL GOOD AFTER THE BREAK, DECIDEDLY DODGY BEFORE IT. THESE ARE TWO DIFFERENT STATIONS, AREN’T THEY? AAAARRRGGGHHHHHHH!!!!!

Quite honestly I don’t have time – but it just shows the state our data holdings have drifted into. Who added those two series together? When? Why? Untraceable, except anecdotally.

But I am beginning to wish I could just blindly merge based on WMO code.. the trouble is that then I’m continuing the approach that created these broken databases.

Here, the expected 1990-2003 period is MISSING – so the correlations aren’t so hot! Yet the WMO codes and station names /locations are identical (or close). What the hell is supposed to happen here? Oh yeah – there is no ‘supposed’, I can make it up. So I have :-)

You can’t imagine what this has cost me – to actually allow the operator to assign false WMO codes!! But what else is there in such situations? Especially when dealing with a ‘Master’ database of dubious provenance (which, er, they all are and always will be).

False codes will be obtained by multiplying the legitimate code (5 digits) by 100, then adding 1 at a time until a number is found with no matches in the database. THIS IS NOT PERFECT but as there is no central repository for WMO codes – especially made-up ones – we’ll have to chance duplicating one that’s present in one of the other databases. In any case, anyone comparing WMO codes between databases – something I’ve studiously avoided doing except for tmin/tmax where I had to – will be treating the false codes with suspicion anyway. Hopefully.

This still meant an awful lot of encounters with naughty Master stations, when really I suspect nobody else gives a hoot about. So with a somewhat cynical shrug, I added the nuclear option – to match every WMO possible, and turn the rest into new stations (er, CLIMAT excepted). In other words, what CRU usually do. It will allow bad databases to pass unnoticed, and good databases to become bad, but I really don’t think people care enough to fix ‘em, and it’s the main reason the project is nearly a year late.

this was a guess! We’ll see how the results look Right, erm.. off I jolly well go!

The trouble is, we won’t be able to produce reliable station count files this way. Or can we use the same strategy,producing station counts from the wet database route, and filling in ‘gaps’ with the precip station counts? Err.

…It looks as though the calculation I’m using for percentage anomalies is, not to put too fine a point on it, cobblers.

So, good news – but only in the sense that I’ve found the error. Bad news in that it’s a further confirmation that my abilities are short of what’s required here.

…unusual behaviour of CRU TS 2.10 Vapour Pressure data was observed, I discovered that some of the Wet Days and Vapour Pressure datasets had been swapped!!

Ah – and I was really hoping this time that it would just WORK. But of course not – nothing works first time in this project.

Oh, GOD. What is going on? Are we data sparse and just looking at the climatology? How can a synthetic dataset derived from tmp and dtr produce the same statistics as a 'real' dataset derived from observations?

I DON’T UNDERSTAND!!!!!

Oh, sod it. It’ll do. I don’t think I can justify spending any longer on a dataset, the previous version of which was completely wrong (misnamed) and nobody noticed for five years.

"Bear in mind that there is no working synthetic method for cloud, because Mark New lost the coefficients file and never found it again (despite searching on tape archives at UEA) and never recreated it. This hasn't mattered too much, because the synthetic cloud grids had not been discarded for 1901-95, and after 1995 sunshine data is used instead of cloud data anyway." As for converting sun hours to cloud cover.. we only appear to have interactive, file-by-file programs. Aaaand – another head-banging shocker! The program sh2cld_tdm.for, which describes itself thusly:

program sunh2cld
c converts sun hours monthly time series to cloud percent (n/N)

Does NO SUCH THING!!! Instead it creates SUN percentages! This is clear from the variable names and user interactions.

So.. if I add the sunh -> sun% process from sh2cld_tdm.for into Hsp2cldp_m.for, I should end up with a sun hours to cloud percent convertor. Possibly.

It also assisted greatly in understanding what was wrong – Tim was in fact calculating Cloud Percent, despite calling it Sun Percent!! Just awful.

… So to CLOUD. For over a year, rumours have been circulating that money had been found to pay somebody for a month to recreate Mark New’s coefficients. But it never quite gelled. Now, at last, someone’s producing them! Unfortunately.. it’s me.

The idea is to derive the coefficients (for the regressing of cloud against DTR) using the published 2.10 data. We'll use 5-degree blocks and years 1951-2002, then produce coefficients for each 5-degree latitude band and month. Finally, we'll interpolate to get half-degree coefficients. Apparently.

So, erm.. now we need to create our synthetic cloud from DTR. Except that’s the thing we CAN’T do because pro cal_cld_gts_tdm.pro needs those bloody coefficients (a.25.7190, etc) that went AWOL.

Hunting for CDDs I found a potential problem with binary DTR (used in the construction of Frost Days, Vapour Pressure, and (eventually) Cloud. It looks as though there was a mistyping when the 2.5-degree binaries were constructed:

Another problem. Apparently I should have derived TMN and TMX from DTR and TMP, as that’s what v2.10 did and that’s what people expect. I disagree with publishing datasets that are simple arithmetic derivations of other datasets published at the same time, when the real data could be published instead.. but no.

I then look in the 1995 anomaly files…This whole process is too convoluted and created myriad problems of this kind. I really think we should change it.

I was going to do further backtracing, but it’s been revealed that the same issues were in 2.1 – meaning that I didn’t add the duff data. The suggested way forward is to not use any observations after 1989, but to allow synthetics to take over. I’m not keen on this approach as it’s likely (imo) to introduce visible jumps at 1990, since we’re effectively introducing a change of data source just after calculating the normals. My compromise is to try it – but to also try a straight derivation from half-degree synthetics.

So actually, this was saving with a gridsize of 5 degrees! Disquietingly, this isn't borne out by the file sizes, but we'll gloss over that.

Station counts should be straightforward to derive from the anomaly files (.txt), as output by anomdtb.f90. This, however, will only work for Primary parameters, since Secondaries are driven from synthetic data as well. Further, the synthetic element in this is usually at 2.5 degrees, so a direct relationship with half-degree coverage will be hard to establish.

So, we can have a proper result, but only by including a load of garbage!

OK, got cloud working, have to generate it now.. but distracted by starting on the mythical ‘Update’ program.

Of course, one of the problems is that you need a latitude value to perform the conversion – so the CLIMAT bulletins lose the value if they can’t be matched in the WMO list! Not much I can do about that, and let’s face it those stations are going to end up as ‘new’ stations with no possibility of a 61-90 normal.

So the new cloud databases I’ve just produced should be, if not identical, very similar? Oh, dear. There is a passing similarity, though this seems to break down in Winter. I don’t have time to do detailed comparisons, of course, so we’ll just run with the new one.

The procedure last time – that is, when I was trying to re-produce TS 2.10, we have no idea what the procedure was for its initial production!

So after gridding we could add these.. except that after gridding we’ll have incorporated the DTR_derived synthetic cloud, which is of course based on the 1961-1990 normals as it’s derived from DTR!! Arrrrggghh.

So.. {sigh}.. another problem. Well we can’t change the updates side, that has to use 1995-2002 normals. But maybe we’ll have to adjust the station anomalies, prior to gridding? I don’t see an alternative.

The question is, IS THIS ANY GOOD? Well, we currently have published cloud data to 2002. So we can make comparisons between 1996 and 2002. Oh, my. I am sure I’ve written plenty of comparison routines, but as to their location or name..ah…The results were less than ideal, though they could have been much worse. Essentially, North America is totally different…

The deduction so far is that the DTR-derived CLD is waaay off. The DTR looks OK, well OK in the sense that it doesn't have prominent bands! So it's either the factors and offsets from the regression, or the way they've been applied in dtr2cld.

Well, dtr2cld is not the world's most complicated program. Whereas cloudreg is, and I immediately found a mistake! Scanning forward to 1951 was done with a loop that, for completely unfathomable reasons, didn't include months! So we read 50 grids instead of 600!!! That may have had something to do with it. I also noticed, as I was correcting THAT, that I reopened the DTR and CLD data files when I should have been opening the bloody station files!! I can only assume that I was being interrupted continually when I was writing this thing. Running with those bits fixed improved matters somewhat, though now there's a problem in that one 5-degree band (10S to 5S) has no stations! This will be due to low station counts in that region, plus removal of duplicate values.

Had a think. Phil advised averaging the bands either side to fill the gap, but yuk! And also the band to the North (ie, 5S to equator) is noticeably lower (extreme, even). So after some investigation I found that, well, here’s the email:

<MAIL QUOTE>Phil, I’ve looked at why we’re getting low counts for valid cloud cells in certain 5-degree latitude bands.

The filtering algorithm omits any cell values where the station count is zero, for either CLD or DTR. In general, it’s the CLD counts that are zero and losing us the data. However, in many cases, the cloud value in that cell on that month is not equal to the climatology. And there is plenty of DTR data. So I’m wondering how accurate the station counts are for secondary variables, given that they have to reflect observed and synthetic inputs. Here’s a brief example: (all values are x10)

          CLD                       DTR
   val     stn    anom       val     stn    anom
553.00    0.00  -10.00    134.00   20.00   -1.00
558.00    0.00  -17.00    139.00   20.00    2.00
565.00    0.00  -23.00    137.00   20.00    5.00
581.00    0.00  -32.00    139.00   16.00    8.00
587.00    0.00  -38.00    137.00   16.00    9.00
567.00    0.00  -46.00    127.00   15.00    6.00
564.00    0.00  -49.00    120.00   14.00    3.00
552.00    0.00  -48.00    111.00   12.00    0.00
543.00    0.00  -45.00    105.00   12.00   -1.00
535.00    0.00  -40.00     99.00   10.00   -1.00

So, I’m proposing to filter on only the DTR counts, on the assumption that PRE was probably available if DTR was, so synthesis of CLD was likely to have happened, just not shown in the station counts which are probably ‘conservative’?<END MAIL QUOTE> I didn’t get an email back but he did verbally consent. So away we go!

Running with a DTR-station-only screening gives us lots of station values, even with duplicate filtering turned back on. Niiice. It’s still not exactly smooth, but it might be enough to ‘fix’ the synthetic cloud.

Differences with the climatology, or with the 2.10 release, are patchy and generally below 30%. Of course it would be nice if the differences with the 2.10 release were negligible, since our regression coefficients were based on 2.10 DTR and CLD.. though of course the sun hours component is an unknown there, as is the fact that 2.10 used PRE as well as DTR for the synthetics. Anyway it gets the thumbs-up. The strategy will be to just produce it for 2003-2006.06, to tie in with the rest of the 3.00 release. So I just need to.. argh. I don't have any way to create NetCDF files 1901-2006 without the .glo.abs files to work from! I'd have to specially code a version that swallowed the existing 1901-2002 then added ours. Meh.

I really thought I was cracking this project. But every time, it ends up worse than before.

I really do hate this whole project.

No time to finish and test the fortran gridder, which will doubtless sink to some depth and never be seen again, we’ll carry on with this mediocre approach.

It’s not going to be easy to find 14 missing stations, is it? Since the anomalies aren’t exactly the same.

Should I be worried about 14 lost series? Less than 2%. Actually, I noticed something interesting.. look at the anomalies. The anomdtb ones aren’t *rounded* to 1dp, they’re *truncated*! So, er – wrong!

So let’s say, anomalies are done. Hurrah. Onwards, plenty more to do!

NO IDEA why, so saying they affect particular 0.5-degree cells is harder than it should be. So we’ll just gloss over that entirely ;0)

Just went back to check on synthetic production. Apparently – I have no memory of this at all – we’re not doing observed rain days! It’s all synthetic from 1990 onwards. Probably the worst story is temperature, particularly for MCDW. Over 1000 new stations! Highly unlikely. I am tempted to blame the different lat/lon scale, but for now it will have to rest.

Oh, my giddy aunt. What a crap crap system.

Also went through the parameters one by one and fixed (hopefully) their scaling factors at each stage. What a minefield!

- I was able to look at the first problem (Guatemala in Autumn 1995 has a massive spike) and find that a station in Mexico has a temperature of 78 degrees in November 1995! This gave a local anomaly of 53.23 (which would have been ‘lost’ amongst the rest of Mexico as Tim just did country averages) and an anomaly in Guatemala of 24.08 (which gave us the spike)…

Oh, ****. It’s the bloody WMO codes again. **** these bloody non-standard, ambiguous, illogical systems. Amateur hour again.

This whole project is SUCH A MESS.

I am seriously close to giving up, again. The history of this is so complex that I can't get far enough into it before my head hurts and I have to stop. Each parameter has such a tortuous history of manual and semi-automated interventions that I simply cannot just go back to early versions and run the update prog. I could be throwing away all kinds of corrections – to lat/lons, to WMOs (yes!), and more.

You see how messy it gets when you actually examine the problem? What we really need, and I don’t think it’ll happen of course, is a set of metrics (by latitude band perhaps) so that we have a broad measure of the acceptable minimum value count for a given month and location. Even better, a confidence figure that allowed the actual standard deviation comparison to be made with a looseness proportional to the sample size.

All that’s beyond me – statistically and in terms of time. I’m going to have to say "30".. it’s pretty good apart from DJF. For the one station I’ve looked at.

OH F*ck THIS. … I’m hitting yet another problem that’s based on the hopeless state of our databases. There is no uniform data integrity, it’s just a catalogue of issues that continues to grow as they’re found.
The state of the art that is climate "science". biggrin
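
For anyone skimming past that wall of text, two of Harry's gripes are concrete enough to sketch: the false-WMO-code scheme (multiply the legitimate 5-digit code by 100, then add 1 at a time until an unused number turns up) and the truncate-instead-of-round behaviour he infers in anomdtb. A minimal Python sketch, purely illustrative - CRU's actual code was Fortran/IDL, and every name below is invented, not theirs:

def invent_false_wmo(real_code, existing_codes):
    # Harry's scheme: multiply the legitimate 5-digit WMO code by 100,
    # then add 1 at a time until a number with no match is found.
    # As he notes, there is no central repository of codes, so a
    # collision with a code in some other database is still possible.
    candidate = real_code * 100
    while True:
        candidate += 1
        if candidate not in existing_codes:
            return candidate

# e.g. invent_false_wmo(37720, {3772001, 3772002}) returns 3772003.

def truncate_1dp(x):
    # What Harry infers anomdtb does: truncate to 1 decimal place
    # (int() drops the fraction, towards zero) instead of rounding.
    return int(x * 10) / 10.0

# round(-0.16, 1) gives -0.2, but truncate_1dp(-0.16) gives -0.1: the
# two methods can differ in the last retained digit, which is why his
# "missing" stations fail to match on their anomaly values.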

Andy Zarse

10,868 posts

248 months

Monday 9th March 2015
4v6 said:
The state of the art that is climate "science". biggrin
That article is so badly written and unintelligible that it might easily have been written by the folk who concoct the datasets.

turbobloke

104,046 posts

261 months

Monday 9th March 2015
4v6 said:
The state of the art that is climate "science". biggrin
It's a while since I read that, so thanks for another laugh at junkscience.

But...but...never mind all that as 20XX was definitely, indisputably, the hottest scorchio year evah.

still no established causality anyway

jurbie

2,345 posts

202 months

Monday 9th March 2015
The Lib Dems' plans for saving us from carbon doom.

http://www.msn.com/en-gb/news/other/lib-dems-ponde...

Highlights include:

10 mph speed limit.
A ban on all high-emission cars by 2040.
A ban on all coal-fired power stations by 2050 unless they are fitted with yet-to-be-invented carbon capture and storage.

A Liberal Democrat spokesperson said:
"It's no exaggeration to say our future, and that of our children, depends upon us tackling climate change head on.

"The more we delay, the more we are at threat from this ticking time bomb.

"This is why if in Government again we will set about making Britain zero carbon, securing home grown renewable energy while cutting our reliance on fossil fuels from abroad.

"Our proposals are a bold ambition to end Britain's adverse impact on climate change completely and these two policies form part of this."
Trying to out-loon the Greens?


4v6

1,098 posts

127 months

Monday 9th March 2015
I propose carbon capture gags for all Lib Dems and greenies, and taxable methane-emission nappies for 'em..... we'd make a mint! smile

turbobloke

104,046 posts

261 months

Monday 9th March 2015
BBC climate change stance is in brazen defiance of the law, argues Christopher Booker.

Booker said:
The fact is that they know they have a legal obligation to be impartial. They know that they are breaking the law. But they also know they can get away with it, because no one in authority will ever call them to account for doing so.
TOPIC CLOSED