Posted by jlwile on April 10, 2013

The Economist recently ran a story highlighting the fact that the Global Climate Model (GCM) predictions upon which most of the fear of global warming is based are not doing well when compared to measured surface temperatures over the past few years. I found the story to be surprisingly balanced and full of good insights. I strongly recommend that you read it if you can find the time, because it gives you a great idea of how little we know about climate science. I don’t want to rehash the article, but I do want to add some thoughts of my own.
If you look at the graph above (which is from the article), you will see the GCM predictions most recently cited by global warming advocates. The dark cyan areas represent what the GCMs predict with 75% certainty, and the lighter areas represent what they predict with 95% certainty. As you can see, the measured surface temperatures (given by the dark line) have not behaved as predicted for the past several years. In fact, they have already strayed outside the 75%-certainty predictions and are poised to stray outside the 95%-certainty predictions. This, of course, is discussed in the article. What is not discussed is that the graph is rather misleading.
If you look at the graph from 1950 to the present day, you will see remarkable agreement between the GCM “predictions” and the measured data. However, prior to 2002, none of those “predictions” are actual predictions. They are a retrospective fit to already-known data. You see, the GCMs are so oversimplified that they contain all sorts of “fudge factors.” Those fudge factors are varied to produce as much agreement as possible between the GCMs and the known data. Since the GCM predictions shown in that graph were produced for the IPCC report issued in 2007, they represent work done after the IPCC report issued in 2001. As a result, the data that appear on the graph prior to 2002 were all known when work started on the 2007 report. This means that the agreement between the “predictions” and the data prior to 2002 says nothing about the predictive ability of the GCMs. It tells you only how well the fudge factors could be varied to agree with the known data.
Is that legitimate? Of course it is! When you don’t understand a process well enough to model it properly, fudge factors are the best you can do. My point is simply that from the point at which the GCM predictions become real, honest predictions, they have failed miserably at describing the data trend. Notice that the real predictions show a constant increase from 2002 on. However, no such increase can be seen in the data. What does this tell us? It tells us what award-winning physicist and mathematician Dr. Freeman Dyson has already said about climatologists:
Their computer models are full of fudge factors…The models are extremely oversimplified…They don’t represent the clouds in detail at all. They simply use a fudge factor to represent the clouds.
In other words, the models are really bad.
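The difference between fitting known data and genuinely predicting can be illustrated with a toy curve fit. This is a deliberately made-up example (invented numbers, not real climate data or an actual GCM): a model tuned to match a known period can look perfect in hindsight yet miss badly once the underlying trend changes.

```python
# Toy illustration with invented numbers (not real climate data):
# a model fit to known data can match it exactly, yet fail once
# it must predict data it has never seen.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# "Known" period: the anomaly rises steadily (the data the tuning sees).
known_years = list(range(1980, 2002))
known_temps = [0.02 * (y - 1980) for y in known_years]

# "Future" period: the trend flattens out -- something the fit never saw.
future_years = list(range(2002, 2013))
future_temps = [0.44] * len(future_years)   # anomaly holds steady

a, b = fit_line(known_years, known_temps)

def err(years, temps):
    """Worst absolute disagreement between the fitted line and the data."""
    return max(abs(a + b * y - t) for y, t in zip(years, temps))

print(f"worst in-sample error:     {err(known_years, known_temps):.3f}")
print(f"worst out-of-sample error: {err(future_years, future_temps):.3f}")
```

The fit to the known period is essentially exact, while the extrapolation drifts steadily away from the flat "future" data. The same asymmetry is why agreement over the tuned 1950–2001 span tells us little about how the GCMs will do going forward.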
If we are to make reasonable policy decisions regarding things like global warming, we need to know the limits of the models upon which people are basing their predictions. Based on graphs such as the one given above, it is clear that the models have very little (if any) real predictive power. Of course, that could change. As we understand more about climate, perhaps the models will get better. However, at least according to two climatologists, even though the current models are better, they might lead to more uncertainty.
I want to make two other quick points about the article in The Economist. The temperatures shown in the graph, and the temperatures discussed in the article, are exclusively surface temperatures. I find this to be a problem on at least two levels. First, surface temperatures are heavily biased towards temperatures above land, because that’s where most of the thermometers are. However, land covers only about 30% of the planet. Thus, the data being used are biased towards a small fraction of the planet. Second, carbon-dioxide-induced global warming is an atmospheric phenomenon. Thus, we should look at the temperature of the atmosphere, not the earth’s surface. If we use satellites to look at the temperature of the middle of the troposphere (the part of the atmosphere that contains earth’s weather), we find no warming trend since 1979. If we take those temperature measurements (which are also not biased towards the land) as the best probe of global temperature, the models are even worse!
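The land-bias concern can be sketched with a short calculation. The anomaly values and the 90% station fraction below are invented for illustration; only the ~30% land-coverage figure comes from the discussion above.

```python
# Toy weighting illustration (invented anomalies, not real measurements):
# land covers roughly 30% of the globe, so a thermometer network
# dominated by land stations over-weights land unless the average
# is weighted by surface area.

land_anomaly  = 0.9   # hypothetical land warming, degrees C
ocean_anomaly = 0.3   # hypothetical ocean warming, degrees C

land_fraction = 0.3   # land is ~30% of Earth's surface

# Naive average over stations, 90% of which sit on land:
station_mean = 0.9 * land_anomaly + 0.1 * ocean_anomaly

# Area-weighted global mean:
global_mean = land_fraction * land_anomaly + (1 - land_fraction) * ocean_anomaly

print(f"station-weighted mean: {station_mean:.2f} C")
print(f"area-weighted mean:    {global_mean:.2f} C")
```

With these made-up numbers, the station-weighted mean comes out well above the area-weighted one, simply because the thermometers cluster where the (hypothetically) larger anomaly is.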
The other point I want to make concerns a statement in the article. When discussing the possible reasons why the models aren’t doing very well, the author writes:
Or, as an increasing body of research is suggesting, it may be that the climate is responding to higher concentrations of carbon dioxide in ways that had not been properly understood before.
I personally think this is the most important reason the data and GCMs don’t agree with one another. The earth is an incredibly well-designed system that has all sorts of negative feedback mechanisms that allow it to remain a haven for life despite the changes that come along (see here, here, here, here, and here for just a few examples). While this does not give us permission to ruin the wonderful creation God has given us, it should cause us to pause and think about all the consequences associated with the actions we are told to take to mitigate this supposed climate disaster.
If we listen to the dire predictions made by those who ignore the negative feedback mechanisms that clearly exist in earth’s design, we are likely to kill a lot of people for no good reason.