When you read about global warming, aka “climate change,” you often hear about climate models that tell us the world will reach dangerously high temperatures if people don’t sharply reduce their use of carbon-dioxide-emitting energy sources. However, these models are built on our current understanding of climatology, which is incomplete at best. As a result, there is a lot of uncertainty in their forecasts. In particular, they seem to overstate the warming that has actually occurred so far.
Why is that? The simple answer is that we don’t understand climate science very well, and as a result, it is hard to predict what effects human activity will have on future climate. Scientists, however, need a more detailed answer. What exactly is wrong with our understanding of climate science? Christopher Monckton, Third Viscount Monckton of Brenchley, thinks he has found one reason. Whether or not he is correct, his assertion illustrates how little we know about forecasting climate.
Now, of course, Viscount Monckton is not a climate scientist. He has a master’s in classics and a diploma in journalism studies. He served as a Special Advisor to Prime Minister Margaret Thatcher and is a well-known skeptic of the narrative that global warming is a serious problem that has been caused by human activity. Nevertheless, he has studied climate science extensively and thinks he has found a “startling” mathematical error that is common to all climate models. He is currently trying to get a paper that makes his case published in the peer-reviewed literature, but as the article to which I linked shows, the reviewers have serious objections to its main thesis.
Viscount Monckton essentially says that climate models are overstating warming because they are not taking climate feedbacks into account properly. When the average temperature of the earth increases, it affects many processes that occur on the earth, and some of those processes, in turn, affect the climate. For example, as temperatures increase, some soil that has been frozen for many years starts to thaw, releasing more greenhouse gases. That, in turn, will cause even more warming. This is an example of a positive climate feedback – a response to increasing temperature that will further increase temperature. Please note that while the idea of thawing soil further warming the planet is the conventional wisdom, actual experiments demonstrate the opposite.
Climate models, of course, have to take such feedbacks into account, and Viscount Monckton is saying that they are doing it incorrectly. Climate models right now judge the strength of the feedbacks based on the change in global temperature. If the earth’s temperature rises by 1 degree, then the feedback should be calculated based on that and that alone. Viscount Monckton says that this isn’t proper. In other applications where feedbacks are important, the effect of the feedback is based on the actual value of what is changing. In climate models, then, you have to calculate what the feedbacks are already doing at the current temperature, and then see how they change at the new temperature.
But wait a minute, isn’t that doing the same thing as basing the feedbacks on the change in temperature? Not according to Viscount Monckton. He says that when you base feedbacks on the change in temperature, you are ignoring the current state of the feedbacks. As a result, you are amplifying the effect that a changing temperature has on them. What you need to do is think about the current state of the feedbacks based on the current temperature, and then you have to see what change occurs for any new temperature. That results in much weaker effects from the feedbacks.
I have no idea whether or not Viscount Monckton is correct. He has been shown to be wrong before (claiming the title “Lord” when he is not a member of the House of Lords, for example), and his rhetoric is often over the top (saying he will lock up the “bogus scientists” that have caused the global warming scare, for example). Thus, he could very well be wrong about this.
Here’s the more important issue that this controversy brings to light: According to him, taking the feedbacks into account the way he thinks they should be taken into account produces a warming of 1.17 degrees Celsius when the amount of carbon dioxide is twice its pre-industrial value. This is roughly one-third of the current IPCC prediction of 3.0 degrees Celsius. So the way you take into account the effect of climate feedbacks produces nearly a factor of three change in the prediction!
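To see just how sensitive that headline number is to the feedbacks, consider the standard linear-feedback amplification formula, in which the no-feedback warming ΔT₀ is amplified to ΔT = ΔT₀ / (1 − f), where f is the feedback fraction. Here is a minimal sketch in Python; the specific numbers are illustrative choices on my part, not values taken from any climate model or from Monckton’s paper:

```python
# Illustrative sketch of linear feedback amplification (numbers are
# illustrative assumptions, not from any climate model or Monckton's paper).

def equilibrium_warming(dT0, f):
    """Amplify the no-feedback warming dT0 by the feedback fraction f:
    dT = dT0 / (1 - f)."""
    return dT0 / (1.0 - f)

# Rough no-feedback warming for doubled CO2, in degrees Celsius (assumed).
dT0 = 1.05

# A feedback fraction near 0.65 reproduces an IPCC-style ~3.0 C result:
print(round(equilibrium_warming(dT0, 0.65), 2))  # prints 3.0

# A much smaller feedback fraction, near 0.10, lands close to the
# 1.17 C figure attributed to Monckton:
print(round(equilibrium_warming(dT0, 0.10), 2))  # prints 1.17
```

The point of the sketch is simply that modest changes in the assumed feedback fraction swing the final prediction by roughly a factor of three, which is exactly the magnitude of disagreement described above.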
Now let’s suppose Viscount Monckton is wrong and current climate models are taking the feedbacks into account properly. Still, do you really think we understand climate well enough to take into account all of the possible feedbacks? Even for the feedbacks we currently recognize, are we really modeling them properly? After all, as I pointed out above, experiments indicate that the effect of thawing soil is opposite that of the conventional wisdom. How many other feedbacks actually act opposite of the conventional wisdom?
If feedbacks are that important, I think we have some indication of why global climate models are overstating the current warming we see. It’s probably because they don’t include all the possible feedbacks and/or don’t understand the feedbacks they are trying to model.
8 thoughts on “Another Illustration of How Little We Know About Climate Forecasting”
According to Dutch scientists, we’ve grossly underestimated sea level rise due to a different form of feedback – sea floor sinking. All that new water weight is supposedly compressing the ocean floor. Whether or not this is true I don’t know… I did find it kind of funny.
It’s funny they think that they can measure the change so precisely (2.5 mm).
Wow, I didn’t even catch that. Hilarious!
Sea floor sinking is not new science. Back in the 1980s Phillips Petroleum noticed their enormous oil platform group off the coast of Norway was subsiding. The term (subsidence) was used to indicate the sea floor depressing as oil was pumped out from beneath it. This began to be accurately measured in the 1990s using GPS. At the time, they used the “new” publicly available signal from US GPS satellites to measure as accurately as possible how much the platforms were sinking into the ocean (since the platforms are built onto the sea floor, when the floor sinks, the platforms sink with it). Phillips needed tremendous accuracy to determine how much to raise the platforms up above sea level again since they were all sinking in elevation. If memory serves, their accuracy back then was in the single digit centimeter range. I’m sure others have since improved upon this with the improvement of technology.
But this study didn’t use satellites. It used a mathematical model. Also, subsidence due to oil being pulled out of the earth is completely different from what this study is claiming.
Dr. Wile, I admit I have not read the scientific publication, only the story at the geographical.co.uk link. I must be misunderstanding this part:
“The existing numbers were based on satellite data which measured surface level relative to the centre of the Earth, assuming the ocean floor as basically a fixed constant depth. Now this assumption has been proved to be wrong, the significance being that even more water from melting ice has reentered the oceans than was thought, thanks to climate change and human activity.”
Anyway, I don’t agree with their second sentence. And the data isn’t shown to back up the claim of the first sentence. That’s why I referenced the Phillips work, since that was apolitical, but was instead purely business oriented. They didn’t want to lose their platforms to waves, and didn’t want to pay too much to raise the platform up more than necessary. Thus the importance of being both accurate AND correct about the subsidence.
I’d follow it up with the thought/hypothesis that since shallow ocean floors can compress under the weight of the water, it seems reasonable that deep ocean floors could also slowly sink.
I’m interested in seeing the actual data (often hard to find when it’s coaxed and massaged in published works) since GPS data on oceans seems difficult to manage mathematically due to tides, waves, etc. If you happen to see it please share it, or if I come across it, I’ll follow up with it here.
I would suggest that you read the paper, then. You will see there are no measurements. The quote you give starts out discussing the sea level rise estimates before the authors of the paper addressed the issue. That’s what “existing numbers” refers to. The satellites measured only the sea level based on the distance from the center of the earth. The quote says the authors “proved” (a word that no scientist should ever use) those existing numbers wrong. How did they “prove” it? With a mathematical model. As they say in the paper:
As you can see, then, this is a mathematical model. Using data on mass loss from Greenland and Antarctica, they estimate the new weight of the oceans, and then use a mathematical model to estimate how much the ocean floor has sunk as a result.
It is very possible that the deep ocean floors could sink due to increased weight. That’s not the issue. The issue is that they think their mathematical model is accurate enough to determine that sinking on the millimeter scale.
Having dealt with all sorts of computer models over nearly 4 decades, I would, as a matter of policy, disregard the results of most, if not all, climate models in use today until those models have been fully vetted by as many individuals as possible who have no part in the debate. The subject matter experts (in this case climate scientists) are more than likely to have got their models wrong in various ways, both obvious and subtle.
The basic problem that arises from all such models is the assumption of correctness when the results obtained match what the subject matter experts think should be correct for the test cases they use. I have seen far too many models that appear to give the correct answers and when you actually analyse the code, the results are not based on any sound algorithms or data.
None of these models will have been built up in any sort of audited logical manner. Neither will there be detailed discussion in the documentation as to why specific things are done in specific ways or why the various magic numbers have been chosen. Even the basic assumptions underlying the models will rarely be documented in detail.
Each of the above will contribute to the problems within the models, irrespective of any veracity that Christopher Monckton’s hypothesis may have.
It is interesting to note that there are various well-known systems that are used by and trusted by significant communities that were built and maintained by groups of very intelligent people in which we don’t actually have a fully audited and fully documented review of those code bases. It is actually unknown if the systems are correct. It is often just assumed that they are correct, until someone finds a “bug”.