A Significantly Better Way to Address the Possibility of Global Warming

An illustration of carbon dioxide molecules (left) becoming ethylene molecules (right) by passing through an electrified grid in water. (Credit: Meenesh Singh, one of the study’s authors.)

If you have been reading this blog for a while, you know that I am very skeptical of the idea that global warming (aka climate change) is a serious problem. There are many scientific reasons for this, most notably the fact that we still have no real understanding of how increased carbon dioxide in the atmosphere affects global temperature. However, we do know that the amount of carbon dioxide in the air is increasing, and we also know that human activity is causing that increase. As a result, it is reasonable to try to do something about that, as long as we don’t kill people or a disproportionate number of animals in the process. The problem, of course, is that most of the proposed solutions for reducing humanity’s carbon dioxide emissions do both (see here, here, here, and here).

Rather than trying to replace efficient and inexpensive sources of energy with our current unreliable, expensive, and deadly “green” sources, we should work on developing technologies that reduce carbon dioxide emissions in a way that doesn’t affect energy price and availability. One example of this better approach can be found in a recently-published study that reports on a new method which can be used to convert carbon dioxide into ethylene, a chemical that is used in a wide range of industrial processes.

The most common method currently in use for producing ethylene is called “steam cracking,” and it is a high-temperature process that is estimated to produce 260 million tons of carbon dioxide each year. However, the process described in the study can be done at much lower temperatures. It involves passing an electric current through a chemical reaction vessel that contains carbon dioxide and a water-based solution. Hydrogen atoms from the water end up combining with carbon atoms from the carbon dioxide to produce ethylene. While such methods have been explored in the past, this method is significantly more efficient. In addition, if the electric current powering the cell is produced by solar or other renewable energy sources, the process actually uses up carbon dioxide. In other words, this process turns an activity that adds carbon dioxide to the atmosphere into an activity that pulls carbon dioxide from the atmosphere.
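For chemistry-minded readers, the overall conversion can be sketched. The study's actual electrochemical details are more involved, but a commonly cited net equation for turning carbon dioxide and water into ethylene is 2 CO2 + 2 H2O → C2H4 + 3 O2 (that exact stoichiometry is my assumption, since the paper's equations aren't reproduced here). A quick atom count confirms the equation is balanced:

```python
# Sketch: verify the atom balance of a plausible overall reaction for the
# CO2-to-ethylene process described above:
#   2 CO2 + 2 H2O -> C2H4 + 3 O2
from collections import Counter

def atoms(molecule, coeff):
    """Multiply a molecule's atom counts by its stoichiometric coefficient."""
    return Counter({element: n * coeff for element, n in molecule.items()})

CO2 = {"C": 1, "O": 2}
H2O = {"H": 2, "O": 1}
C2H4 = {"C": 2, "H": 4}
O2 = {"O": 2}

reactants = atoms(CO2, 2) + atoms(H2O, 2)
products = atoms(C2H4, 1) + atoms(O2, 3)

print(reactants == products)  # True: the same atoms appear on both sides
```

If that net equation is right, every molecule of ethylene produced consumes two molecules of carbon dioxide, which is why powering the cell with a carbon-free energy source makes the process a net consumer of carbon dioxide.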

But wait a minute. If this process requires renewable energy sources to pull carbon dioxide out of the air, aren’t we just falling back on the old “green” technologies that cause so many problems in the first place? Not really. Two of the biggest problems with current “green” technologies are that they make energy more expensive and often cannot keep up with our energy demands. This process actually reduces the cost of making ethylene significantly, because it doesn’t require nearly as much heat as steam cracking. Thus, the higher expense of “green” electricity is offset by the lower cost of the process as a whole. Also, we don’t need to be making ethylene constantly. We can make lots of it when the “green” sources are producing energy, and we can wait when they aren’t. As a result, we are not harmed by their limitations.

In addition, scaling up this technology to make it useful for industrial purposes will probably lead to ideas of how similar processes can be used in other industrial applications. In my view, this kind of technology should be the focus of any proposed solutions for reducing humanity’s carbon footprint.

An Excellent Debate on Climate Change

The Soho forum recently held an Oxford-style debate on climate change between two experts on the matter. The resolution being debated was:

Climate science compels us to make large and rapid reductions in greenhouse gas emissions.

Arguing for the affirmative was Dr. Andrew Dessler, professor of geoscience at Texas A&M and director of the Texas Center for Climate Studies. Arguing for the negative was Dr. Steven Koonin, a professor of civil and urban engineering at NYU’s Tandon School of Engineering, who served as undersecretary for science at the Department of Energy during the Obama administration. You can watch it here:

You can watch it on YouTube by clicking here.

The debate was excellent, with both debaters showing why they are considered experts on the issue. However, Dr. Koonin was the clear winner, because the debate audience was asked to respond to the resolution both before and after the debate. Before the debate, 24.69% of those in attendance agreed with the resolution, while 48.77% disagreed. The rest were undecided. Thus, the audience was probably better educated on the issue than most of the public (and most politicians), since a plurality had the more scientifically-defensible reaction to the resolution. However, by the end, the “yes” votes had decreased by 5.56 percentage points, while the “no” votes had increased by 24.69 percentage points.
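As a sanity check on those numbers, here is the arithmetic, assuming (as the Soho Forum typically reports its results) that the reported changes are percentage-point swings rather than published post-debate tallies:

```python
# Pre-debate audience responses to the resolution (percent), as reported.
pre = {"yes": 24.69, "no": 48.77}
pre["undecided"] = 100 - pre["yes"] - pre["no"]  # the rest were undecided

# Reported percentage-point swings from before to after the debate.
change = {"yes": -5.56, "no": +24.69}

# Inferred post-debate splits (computed here, not taken from a published tally).
post = {k: round(pre[k] + change[k], 2) for k in ("yes", "no")}
post["undecided"] = round(100 - post["yes"] - post["no"], 2)

print(post)  # {'yes': 19.13, 'no': 73.46, 'undecided': 7.41}

# By the Soho Forum's convention, the side that gains the most points wins.
winner = max(change, key=change.get)
print(winner)  # no
```

In other words, by the end of the debate, nearly three-quarters of the audience rejected the resolution.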

Now, of course, you have to be the judge of why Dr. Koonin won the debate. For me, it was because his statements were backed up by data. I hope there are more debates like this one, because the better educated the public is on the science related to climate change, the more likely it is that the world will enact sane measures to address it.

Another Failed Global Warming Prediction

A portion of the Great Barrier Reef (click for credit)

Climate change alarmists (aka global warming alarmists) are constantly making dire predictions, and over time, those predictions end up being falsified*. This doesn’t stop them from making even more dire predictions, however, since the media rarely point out how wrong they have been in the past. Usually, those dire predictions are found in the popular media, but since climate science has been infected with craziness, such nonsense even makes it into the scientific literature.

Consider, for example, this 2012 study, which was published in the prestigious Proceedings of the National Academy of Sciences of the United States. The authors claim that their study shows coral cover in the Great Barrier Reef (GBR) had been declining over the previous 27 years, and then they state:

Without significant changes to the rates of disturbance and coral growth, coral cover in the central and southern regions of the GBR is likely to decline to 5–10% by 2022. The future of the GBR therefore depends on decisive action. Although world governments continue to debate the need to cap greenhouse gas emissions, reducing the local and regional pressures is one way to strengthen the natural resilience of ecosystems.

So governments must act decisively to cut greenhouse gas emissions, or the GBR is going to dwindle to a fraction of its original glory. Well, no decisive action was taken, and it’s now 2022, so we might as well see how accurate the prediction was.

Probably the most reliable source on the state of the GBR is the Australian Institute of Marine Science. Every year, it does an extensive survey of the GBR and publishes its results. The survey splits the GBR into three regions (Northern, Central, and Southern). I showed the results from last year’s survey about a year ago. Here are the results of this year’s survey:

Notice that for the Northern and Central regions of the GBR, coral cover is at an all-time high, and more than THREE TIMES the value predicted in the prestigious scientific journal. While coral cover in the Southern region isn’t at an all-time high, it is still more than three times the amount that was predicted.

Obviously, then, we can add this to the ever-growing list of falsified predictions made by those who care more about scaremongering than good science.

*NOTE: By “falsified,” I do not mean “altered in order to mislead,” which is the first definition. I am using it in the scientific sense, which means “shown to be false.” Scientific theories must make predictions. If those predictions are verified, the theory gains credibility. If they are falsified, the theory loses credibility.

Despite What You Have Read, Wildfires are NOT on the Rise

One of the more common myths associated with “climate change” (aka global warming) is the idea that wildfires are on the rise. Consider, for example, the article entitled, “How do we know wildfires are made worse by climate change?” It states:

In 2019 Australia was burning. In 2020 Siberia and California were burning. These are only the more recent examples of dramatic extreme wildfire. People are asking if forest fires are linked to climate change.

The experts tell us that yes, the growing extent and increasing severity of wildfires are indeed driven by climate change.

There is only one problem with such nonsense: wildfires are most definitely NOT increasing in severity. How do we know this? The European Space Agency has been using satellites to track global wildfires. Initially, its data sets went back only to the year 2000, and those data sets show that over the past 20 years, there has been a slight decrease in the amount of land burned by wildfires.

However, they recently used historical satellite images to increase the length of time covered by their data sets. We can now trace the amount of land burned by wildfires all the way back to 1982. Thus, we have nearly 40 years’ worth of data that cover the entire world. What do the data show? The graph is at the top of the article. As you can see, there is no discernible trend. The amount of land burned by wildfires changes from year to year, but the average remains remarkably constant.

Like much of what you read about “climate change,” then, the idea that wildfires are becoming more severe or more frequent is just patently false. Unfortunately, no amount of data will stop this false claim from being promulgated, because most people writing about this issue care more about their agenda than they do about science.

Monarch Butterflies Provide a Lesson About Climate Change Hysteria

A cluster of monarch butterflies in Santa Cruz, CA (click for credit)

North American monarch butterflies perform an incredible migration from their summer breeding grounds to a place where they spend the winter. Most of the monarchs that are born east of the Rocky Mountains travel to Mexico, while those born west of the Rocky Mountains fly to the California Coast. Last year, the headlines about those that travel to California were dire:

Probably the most beloved and recognized butterfly in the United States, the western monarch is essentially on the brink of extinction, said Katy Prudic, co-author of a new report from the University of Arizona that found that the monarchs, along with about 450 butterfly species in the Western United States, have decreased overall in population by 1.6% per year in the past four decades.

Like much of what you read in science news, that’s just not true. The Xerces Society for Invertebrate Conservation does detailed surveys of western monarch populations, and here are their results for the “Thanksgiving Count,” which is done over three weeks around Thanksgiving:

Looking at the graph, you can see that Katy Prudic is simply wrong. In fact, the western monarch population has both decreased and increased over the past 24 years. Of course, you can also see why the headlines were dire in 2021. The monarch population is barely visible on the graph in 2020. It was the lowest seen in the entire period. So clearly, western monarchs were on the verge of extinction. However, last year’s count shows a population that is the eighth largest in that same time period!

And honestly, anyone who studies these amazing migrations should not be surprised. After all, the population of the monarchs that migrate to Mexico experienced a similar dramatic drop in population back in 2014, but they also recovered from it.
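It is also worth putting the quoted figure of a 1.6% annual decline in perspective. Even taken at face value and compounded over the stated four decades (the 40-year span is my reading of “the past four decades”), it works out to roughly a 48% total drop, which is serious, but a long way from extinction:

```python
# Compound a 1.6% annual decline over 40 years and see what fraction
# of the original population would remain.
annual_decline = 0.016
years = 40

remaining = (1 - annual_decline) ** years
print(f"{remaining:.1%} of the population remains")  # 52.5% of the population remains
print(f"{1 - remaining:.1%} cumulative decline")     # 47.5% cumulative decline
```

A population cut roughly in half over 40 years is certainly worth watching, but it is a far cry from “essentially on the brink of extinction.”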

What does this have to do with climate change, aka global warming? First, there have been the predictable attempts to cite it as a cause for the monarchs’ woes. More importantly, however, it shows how dangerous it is to use a trend in nature to make dire predictions.

In both populations, the number of monarchs increased and decreased, but the overall trend for more than 20 years was downward. Then, there was a sudden change, and the decline stopped. This is the way nature works. There is a lot of variation, and tenuous trends in those variations usually don’t mean much.

Now don’t get me wrong. There is a lot of value in collecting data like population counts and global temperatures, and it is important to be concerned about apparent trends. A good scientist, however, should understand that apparent trends are not necessarily actual trends. As a result, good scientists should not make definitive predictions based on them.

Slaughtering Species in the Name of “Green Energy”

A wind farm in California at sunset (click for credit)

Nine years ago, I wrote about a study that indicated wind turbines in the U.S. kill more than half a million birds and more than 800,000 bats each year. Five years later, I wrote about another study that indicated wind farms act as apex predators in the ecosystem where they are built. While very little of this important information makes its way into the popular press, it has informed those who actually care about the environment. As a result, more studies have been done, and these studies indicate something rather surprising about two of the “green energy” solutions that have been promoted to “save our planet.”

So far, the most chilling study was published in Royal Society Open Science. The authors of the study collected feathers of dead birds found at selected wind farms and solar energy facilities in California. They found that of the many species killed by these “green energy” sites, 23 are considered priority bird species, which means their long-term survival is threatened. The list is quite diverse, including the American white pelican, the willow flycatcher, the bank swallow, and the burrowing owl.

Now, just because a species is threatened, that doesn’t mean a few extra deaths are going to be a problem. After all, these facilities are in specific places, and bird populations can cover wide geographical regions. A few extra deaths in some regions can be compensated for by greater reproduction in other regions. Thus, the authors used models to estimate the impact that deaths from wind farms and solar facilities will have on the overall populations. They conclude:

This study shows that many of the bird species killed at renewable energy facilities are vulnerable to population or subpopulation-level effects from potential increases in fatalities from these and other anthropogenic mortality sources. About half (48%) of the species we considered were vulnerable, and they spanned a diverse suite of taxonomic groups of conservation concern that are resident to or that pass through California.

In other words, the study indicates that 11 priority bird species that live in or pass through California are now more at risk because of “green energy” sites in that state. Now, of course, this conclusion is model-dependent, and the models might be wrong. However, at minimum, this study confirmed that at least 23 threatened bird species are being slaughtered at wind farms and solar facilities. That should cause people who actually care about the natural world at least some concern. Unfortunately, even though this study was published a month ago, I haven’t seen a single report about it in the popular press.
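The figure of 11 species appears to follow from straightforward arithmetic on the study’s numbers (this is my inference, not a calculation spelled out in the paper): “about half (48%)” of the 23 priority species works out to 11.

```python
# "About half (48%)" of the 23 priority species identified at the sites.
priority_species = 23
vulnerable_fraction = 0.48

print(round(priority_species * vulnerable_fraction))  # 11
```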

As an aside, it’s pretty obvious how wind farms kill birds, but how do solar energy facilities do it? The short answer is that we don’t know. What we do know is that birds tend to crash into solar panels. Perhaps they interpret the shiny surface of the solar panels as a body of water where they can land. Perhaps they interpret it as more sky. Whatever the reason, we know that birds are dying at solar farms. Preliminary research indicates that in the U.S., somewhere between 37,800 and 138,600 birds die each year as a result of crashing into solar panels.

So what’s the take-home message from this study? It’s rather simple:

The “green energy” solutions touted by politicians and the press are not necessarily better for the environment.

The fact is that “green energy” processes are mostly new, so their long-term effects on the environment are mostly unknown. Ten years ago, no one would have thought that wind farms and solar facilities might be threatening the long-term survival of certain bird species. We now know otherwise. And don’t forget the bats. The effect that wind farms have on their populations hasn’t been studied in nearly as much detail, even though more bats are slaughtered by wind farms than birds!

Those who are pushing “green energy” might actually be pushing environmentally-hostile energy without even knowing it. That’s what happens when those who call themselves “environmentalists” ignore science and simply follow the politicians and the press.

The Great Barrier Reef Is Doing Well, Despite What the Alarmists Have Said

A portion of the Great Barrier Reef (click for credit)

Over the years, the headlines have been dire. Just two years ago, USA Today claimed:

Great Barrier Reef can’t heal after global warming damage

Other news outlets ran similar headlines, because a study showed that the number of new corals being produced was significantly lower than normal.

The problem, of course, is that such studies have no context whatsoever. Yes, the Great Barrier Reef went through some tough years. Lots of corals experienced bleaching, a process by which they expel the algae that live inside them. Since the algae provide a significant amount of food to the corals, bleaching can be deadly. However, it’s a process we still don’t understand well, and there is some evidence that it is actually a mechanism designed to help the coral adapt to changing conditions. In addition, bleaching events have been happening for more than 300 years, and evidence suggests they happened more frequently in the late 1800s, the mid 1700s, and the late 1600s than they do today.

Given these facts, it’s not surprising that the doom and gloom headlines about the Great Barrier Reef have been shown to be completely wrong. The Australian Government, in collaboration with the Australian Institute of Marine Science, has been monitoring the health of the reef since 1985. They have collected an enormous amount of data, but I just want to focus on Figures 3, 4, and 5, which show the amount of coral cover in the Northern, Central, and Southern Great Barrier Reef.

The blue lines represent the percent of the area that is covered in coral, and the light-blue shading represents the uncertainty in that value. Notice that in each part of the Great Barrier Reef, there are times when the coral coverage is high, and there are times when it is low. Sometimes, the coral coverage is very low. Nevertheless, right now, coverage is nearly as high as it has ever been. Contrary to what USA Today claimed just two short years ago, then, the Great Barrier Reef has recovered from its low point over the past few decades.

Does that mean there’s nothing wrong with the Great Barrier Reef? Of course not! With less than 40 years of direct study, we really don’t know what the overall status of the reef should be, so we have no idea whether what we are seeing now is good, bad, or indifferent. My point is simply that it’s too easy for scientists and the media to look at a short-term trend (like 2010-2016 in the Northern Great Barrier Reef) and draw irrational conclusions. In the end, that not only hurts the cause of science, but it also makes it nearly impossible for us to figure out how best to care for the creation God has given us.

How Preconceptions Influence Science

Two different scientific reconstructions of the average energy the earth has received from the sun since 1800. (graphs edited from one of the papers being discussed, original shown below)

In its Climate Change 2014 Synthesis Report Summary for Policymakers, the Intergovernmental Panel on Climate Change (IPCC) stated:

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. (p. 5, emphasis theirs)

In other words, more than half of the warming that has been observed since the mid 1900s has been caused by human activity. How did they arrive at that conclusion? The scientists involved attempted to determine the natural variation in global temperature that would have occurred without human influence, and they found that it accounted for less than half of the observed warming. Thus, human activities must be responsible for more than half.

The problem, of course, is how do you determine how much warming would have occurred without human activity? The way the IPCC did it was to look at the natural variation that has occurred in the factors that are known to influence the temperature of the planet. One of the biggest factors is how much energy the earth is getting from the sun, which is often called the total solar irradiance (TSI). Well, we have been measuring the TSI with satellites since 1979, and while each satellite comes up with a slightly different value (the reason for that is unknown), all of them agree on how it has varied since they began their measurements.

However, in order to properly understand the long-term effect of TSI, we need to go farther back in time than 1979. As a result, observations that should be affected by TSI are used to estimate what it was prior to 1979. These are called “proxies,” and their use is common among scientists who attempt to reconstruct various aspects of the past. Unfortunately, it’s not clear what the best proxies are for TSI, so the details of how it has changed over time vary substantially from one study to another. That’s what I am attempting to illustrate in the figure above. It shows two different graphs of how TSI has changed since the year 1800. Notice that the one on the left (from this study) shows TSI has been quite variable, while the one on the right (from this study) shows it hasn’t varied significantly over the same time period. Both of these studies were published in the peer-reviewed literature, and both use accepted methods for reconstructing TSI from proxies. The difference, of course, is caused by the different preconceptions in each group of scientists. Whose preconceptions are correct? We have no idea.

To demonstrate just how much variation occurs due to these preconceptions, here is a figure from a very interesting study that I somehow missed when it first came out (in 2015). It shows eight different reconstructions of TSI from eight different peer-reviewed studies:

I used the top two graphs to make the illustration that appears at the very top of the post. As you can see, the eight reconstructions are arranged so that the ones which show a high variability in TSI are on the left, and the ones which show a low variability are on the right. What about the “CMIP5” label that shows up on the low-variability graphs? It indicates that those were the graphs used in the IPCC’s Climate Change 2014 Synthesis Report, which I quoted above.

Think about what that means. The IPCC specifically chose from the scientific literature the TSI reconstructions that indicate there has been little variation since 1800. Under that choice, natural variation in TSI cannot explain much of the variation we see in global temperature. However, if they had used one of the reconstructions on the left, their conclusion would have been much different. In fact, the authors of the study from which those eight graphs were taken showed that if you used the top left reconstruction, you could explain the vast majority of the variation we see in the earth’s temperature. Thus, had the IPCC chosen that reconstruction, their conclusion about the effect of human activities on global warming would have been radically different.

Hopefully, you can see what I am driving at. All eight of the reconstructions above are legitimate, peer-reviewed reconstructions of TSI. If you choose the ones on the right, you reach one conclusion about the extent to which human activities have affected global temperature. If you choose the ones on the left, you come to a completely different conclusion. How do you choose? You choose the ones you think are best. In other words, you choose the ones that fit your preconceptions.
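The statistical point can be demonstrated with a toy example. The sketch below uses entirely synthetic data (none of it is real climate data): a fake “temperature” series is generated from a high-variability driver, and a linear fit on that driver then explains most of the variance, while a nearly flat, low-variability driver explains almost none. Which “reconstruction” you feed the model determines how much of the signal is left over to attribute to something else:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # synthetic yearly samples
t = np.linspace(0, 1, n)

# Two hypothetical reconstructions of the same driver: one with large
# swings, one nearly flat (mimicking the high- vs. low-variability TSI curves).
driver_high = np.sin(8 * np.pi * t)
driver_low = 0.02 * np.sin(3 * np.pi * t)

# Suppose the observed series really does follow the high-variability driver.
temperature = 0.8 * driver_high + rng.normal(0, 0.2, n)

def r_squared(x, y):
    """Fraction of the variance in y explained by a linear fit on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1 - residuals.var() / y.var()

print(round(r_squared(driver_high, temperature), 2))  # close to 1
print(round(r_squared(driver_low, temperature), 2))   # close to 0
```

Both drivers are equally “legitimate” inputs to the fit; only the analyst’s choice between them changes how much of the variation gets attributed elsewhere.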

Unfortunately, this inconvenient fact is left out of most discussions of climate change. As a result, most people state what the “science” says, when they are utterly ignorant of how much that “science” depends on the preconceptions of the scientists who produced it.

My Opinion on Harvard’s Initial Sun-Dimming Experiments

The device Harvard plans to use to study the effect of small particles in the upper atmosphere (Click for source)

A reader sent me this article and asked for my comments on it. For some reason, the experiment that is discussed therein escaped my attention, but I read a more detailed discussion of it and found it to be quite intriguing. Essentially, a team of Harvard scientists wants to know if they can block some of the sun’s light so as to counter the effects of global warming, aka climate change. They want to do it by releasing a fine powder of calcium carbonate (chalk) high in the atmosphere. The tiny particles will eventually fall to the ground, but while they are in the air, they will reflect some of the sun’s light so that it never reaches the surface of the earth. That way, it doesn’t have a chance to participate in the greenhouse effect.

If you have been reading this blog for long, you know that I am very skeptical of the idea that global warming (aka climate change) is a serious problem. We don’t know what portion of it is caused by human activities, and we have no idea how much danger it poses. Unfortunately, politics has poisoned the science surrounding it. As one of the most accomplished climate scientists of our time says, much of what passes as climate science these days is shoddy specifically because of the influence of politics.

Nevertheless, it is certainly possible that global warming (aka climate change) might have serious long-term consequences. As a result, we should look for ways to mitigate the effects, if we eventually find that there will be some. We know the result of cutting carbon dioxide emissions with current technology: people will die. That’s because reducing emissions with the technology we have now makes energy more expensive, and the more expensive energy is, the more people die. (see here and here). Thus, we should examine other methods that might mitigate global warming with a lower body count. That’s what this Harvard experiment is all about.

Are there risks associated with it? Initially, no, because the first experiment, planned for June of this year, would only test the hardware (pictured above). It wouldn’t actually release any powder. If that goes well, the scientists plan a small-scale test that would release no more than 2 kilograms (4.4 pounds) of the powder. That’s not enough calcium carbonate to produce any negative effects. However, it will allow the scientists to study how the calcium carbonate behaves and whether or not the computer simulations of its behavior are correct.

Of course, the issue is what happens next. In order to change the earth’s greenhouse effect in any significant way, there would have to be a lot more calcium carbonate released, and it would have to be released on a semi-regular basis. That could definitely produce serious, long-term consequences. Nevertheless, there is no way to realistically know what those consequences might be unless the initial tests are performed. Thus, this experiment seems reasonable, at least in the initial stages that have been proposed. It will simply be a way of judging the safety and efficacy of the process. The results won’t be definitive, but they can at least guide the scientists in their future plans.

Here’s the bottom line: We know that all currently-planned attempts to slow global warming (aka climate change) will result in people dying. This new approach might result in that as well, but we don’t know. If experiments like the ones planned by Harvard proceed, at least we can have some data that will allow us to see if this method has a lower body count than the currently-proposed methods. It seems to me that’s something worth learning.

An Interesting Interview with One of the Sane Voices in Climate Research

Dr. Judith Curry, a climate scientist who is actually committed to the science. (click for credit)

Dr. Judith Curry holds an earned Ph.D. in geophysical sciences from The University of Chicago. For the last 14 years of her career, she was a professor at the Georgia Institute of Technology. For the majority of that time, she was the chairperson of the School of Earth and Atmospheric Sciences. She has authored 196 peer-reviewed scientific papers and has two books to her credit. By any objective measure, she is a giant in the field of climate science. Because she is actually interested in understanding how climate works, she was officially branded a heretic by the High Priests of Science. Seven years later, she resigned her professorship at the Georgia Institute of Technology because she could no longer figure out, “…what to say to students and postdocs regarding how to navigate the CRAZINESS in the field of climate science.”

Because I respect her knowledge, intellect, and commitment to science, I read her blog. On Saturday, she posted the transcript from an interview she did for a podcast. I am not familiar with the podcast, and I prefer to read rather than listen. In reading the transcript, I found nothing new related to her views on climate change, but I was fascinated by her historical analysis of the field of climate science. While I encourage you to read the entire transcript, I will highlight what really struck me.

When asked about how climate scientists viewed climate change when she was getting her degrees (the 1970s and 1980s), she said that aside from a few “very rambunctious people,” climate change was not a big issue with scientists. When the IPCC formed in the late 1980s, she said that most climate scientists didn’t want to get involved with it:

They said, this is just a whole political thing. This is not what we do. We seek to understand all the processes and climate dynamics, we don’t want to go there. And that was really a pretty strong attitude, through, I would say the mid nineties, say 1995. We had the UN Framework Convention on Climate Change at that point, they’re trying to get a big treaty going. And so defenders of the IPCC started pushing the idea that anybody who doubts us or challenges us, they are in the pay of big oil. After that, it became much more difficult to really challenge all that. And certainly by the turn of the century, anybody who was questioning the hockey stick or any of these other things were slammed as deniers and ostracized. And then after Climategate in 2010, the consensus enforcers became very militant. So it’s a combination of politics, and some mediocre scientists trying to protect their careers. And, they saw this whole thing as a way for career advancement, and it gives them a seat at the big table and political power. All this reinforces pretty shoddy science and overconfidence in their expert judgment, which comprises the IPCC assessment reports.

I found this interesting because as an outsider looking in, I have to agree with her assessment that the IPCC has reinforced “shoddy science.” I don’t know even 5% of what Dr. Curry knows about climate, and I know precisely 0% of what she knows about the internal dynamics of her field. However, after reading each IPCC report (from the 2001 synthesis report on), I was amazed at the shoddiness of the science and the overconfidence they had in their conclusions.

Consider, for example, their view of how humans have impacted the earth’s climate. In 2001, they said that human-emitted greenhouse gases are “likely” responsible for more than half of the earth’s temperature increase since 1951. By 2007, climate scientists had shown that the models used in 2001 were wrong, and they had also found new variables related to climate which were poorly understood. Nevertheless, in their 2007 report, the IPCC said that human-emitted greenhouse gases are “very likely” responsible. Over the next six years, climate scientists continued to show that the models used by the IPCC were wrong and continued to find more uncertainties in our understanding of climate. But over that same period, the IPCC decided that human-emitted greenhouse gases are “extremely likely” responsible.

In real science, when uncertainties grow, the conclusions become more and more tentative. In climate science, the reverse seems to be the case. More uncertainties seem to lead to more confidence in the conclusions. That’s pretty much the definition of shoddy science.