The more we learn about the universe, the more we see that it is a product of design. Indeed, for quite some time now, many scientists have recognized that the universe is finely-tuned for life. Many parameters govern how things happen in the universe, and each of them is just what it needs to be for life to flourish. An electron, for example, is precisely as negative as the proton is positive, despite the fact that they are very, very different particles. If the charges were off by as little as one billionth of one percent, the resulting electrical imbalance in molecules would make even very small objects too unstable to form.1 The most obvious explanation for such fine-tuning is that the universe has been designed for life.
Now, of course, if you don’t want to believe that the universe is a product of design, you can offer any number of desperate alternatives. Perhaps we are just very fortunate. After all, if the universe weren’t designed for life, we wouldn’t be here to study it, so the very fact that we can discover these relationships tells us that the universe just happened to evolve into one that appears to be finely-tuned for life. You could also suggest that there are a ridiculously large number of universes out there. Most of them don’t have life, because they don’t have the proper parameters. However, if there are many, many universes, there’s a high likelihood that at least one will have all the right parameters, making it appear to be finely-tuned for life. You could also argue that there are actually a lot of combinations of parameters that might work for life; we just don’t know them. In that case, the universe’s apparent fine-tuning is an illusion.
When you read or listen to the news, you are faced with dire predictions about global warming, aka “climate change.” For example, ABC News states:
Global warming will be twice as severe as previous estimates indicate, according to a new study published this month in the Journal of Climate, a publication of the American Meteorological Society. The research, conducted by the Massachusetts Institute of Technology (MIT), predicts a 90% probability that worldwide surface temperatures will rise more than 9 degrees (F) by 2100, compared to a previous 2003 MIT study that forecast a rise of just over 4 degrees.
Of course, a 9 degree Fahrenheit increase in global temperature will produce catastrophic results. How did the researchers come to the startling conclusion that there is a 90% chance it will happen? They used a computer model that attempts to simulate global climate under different scenarios. The problem, of course, is that the prediction is only as good as the model.
On average the models warm the global atmosphere at a rate three times that of the real world. Using the scientific method we would conclude that the models do not accurately represent at least some of the important processes that impact the climate because they were unable to “predict” what has occurred. In other words, these models failed at the simple test of telling us “what” has already happened, and thus would not be in a position to give us a confident answer to “what” may happen in the future and “why.”
Why are the models so bad? Because we don’t really understand climate science well enough to model it. A recent paper by Professor of Atmospheric Science Da Yang and his graduate student, Seth Seidel, provides a crystal clear example of what I mean. The paper’s abstract begins this way:
Moist air is lighter than dry air at the same temperature, pressure, and volume because the molecular weight of water is less than that of dry air. We call this the vapor buoyancy effect. Although this effect is well documented, its impact on Earth’s climate has been overlooked.
Because of its lower molecular weight, water vapor is less dense than air at the same temperature, so air with a lot of water vapor floats in dry air at the same temperature and pressure. As the paper says, this is well documented. However, no one thought to see how that might affect the earth’s climate. Well, these two scientists decided to do just that, and based on their calculations, it actually cools the atmosphere.
Here’s a simplified explanation for why: In the tropics, we find regions of wet air and regions of dry air. At the same elevation, the regions must have roughly the same density. Otherwise, the less dense region would rise. Thus, if I have a stable region of wet air next to a stable region of dry air, the dry air must be warmer, so that it has the same density as the wet air. At any given elevation, then, the dry regions will be the warmer regions. Well, water vapor is a potent greenhouse gas, so wet air doesn’t allow as much energy to escape from the earth as dry air does. Since the dry air is warmer, there is more energy in it. That means the energy is more concentrated in the air that allows more of it to escape. This, of course, results in the earth getting rid of more energy, which causes it to cool.
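The density argument above can be checked with the ideal gas law. The sketch below uses standard constants and an assumed 3% water-vapor mole fraction (an illustrative value for humid tropical air, not a figure from the paper) to show that moist air is less dense than dry air at the same temperature, and to estimate how much warmer a dry region must be to match the moist region’s density.

```python
# Ideal-gas densities of dry vs moist air at the same pressure.
R = 8.314        # J/(mol K), universal gas constant
P = 101325.0     # Pa, surface pressure
M_DRY = 0.02897  # kg/mol, mean molar mass of dry air
M_H2O = 0.01802  # kg/mol, molar mass of water

def density(temp_k, vapor_fraction):
    """Density of air containing the given mole fraction of water vapor."""
    m_mix = (1 - vapor_fraction) * M_DRY + vapor_fraction * M_H2O
    return P * m_mix / (R * temp_k)

t = 300.0                    # K, an illustrative tropical temperature
rho_dry = density(t, 0.0)
rho_wet = density(t, 0.03)   # assumed ~3% water vapor by moles
print(rho_wet < rho_dry)     # True: moist air is lighter

# Temperature the dry region must reach to match the moist region's density:
t_dry_match = t * rho_dry / rho_wet
print(t_dry_match)           # ~303 K: the dry region sits a few degrees warmer
```

Under these assumed numbers the dry region ends up roughly 3 degrees warmer than the neighboring wet region, which is the temperature contrast the cooling argument relies on.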
Now here’s the interesting part: the authors’ calculations show that this effect becomes magnified the higher the ocean temperature. In other words, if the tropical oceans warm up, this effect will end up producing even more cooling. This is an example of a negative feedback system, where a change produces an effect that resists the change. The earth’s climate is full of negative feedback systems (see here, here, here, and here, for example). This is exactly what you expect for a well-designed system, and the earth is a very well-designed system.
In their paper, the authors state that the climate models from which dire warnings are generated have the ability to simulate this effect, but they don’t. They suggest that climate models should be adjusted to take the effect into account. Of course, I agree. Whenever we learn more about climate dynamics, the models should be updated. However, my point is much more basic: This is a well-documented, well-understood aspect of the atmosphere, but until now, no one thought to examine its effect on the earth’s climate. When it finally was investigated, it was found to produce negative feedback, which makes earth’s climate more resistant to change. If this well-documented, well-understood aspect of the earth’s atmosphere has not been properly taken into account in the climate models that are forecasting doom and gloom, how in the world can we put any faith in them?
I am putting the finishing touches on my 7th/8th grade book Science in the Atomic Age (which should be available for purchase in June), and I wanted to post another excerpt from the book. The excerpt I posted previously comes from a section about the brain. This one comes from an earlier chapter, where I discuss plants.
By the time the students reach this point in the course, they know that producers are organisms which make their own food (usually through photosynthesis), and consumers must eat other organisms for food. They also know how to interpret chemical equations and the specific chemical equation for photosynthesis. In addition, I have just shown them the chemical equation for the process by which consumers burn their food for energy and have pointed out that it is the opposite of the chemical equation for photosynthesis. Here is the discussion that follows:
In other words, producers like plants use water and carbon dioxide to make glucose and oxygen, and consumers then use that glucose and oxygen to make carbon dioxide and water. So producers are feeding us, and we take what the producers make and then produce the chemicals they need to make what we need! In this sense, at least, consumers are the opposites of producers.
This is a real testimony to God’s power and ingenuity. He not only created the producers to feed the consumers, He also designed the consumers so that when they use what the producers made, they give the producers what is needed so that the producers can make more food. Now, of course, the sun plays its role, too. It provides the energy the producers need in order to do photosynthesis in the first place.
This is all summed up in the illustration above. The sun shines light on the earth. Producers absorb that light in the chloroplasts of their cells and use it, along with carbon dioxide and water, to make glucose and oxygen. Consumers then take that glucose and oxygen and use them to make energy for themselves. This ends up making carbon dioxide and water, which can be used by the chloroplasts in the producers (along with more energy from the sun) to make more glucose and oxygen. As a result, the only constant input needed is energy from the sun. Everything else just keeps getting recycled between producers and consumers!
This Balance Is Even More Amazing
The balance between producers and consumers, as illustrated in the drawing above, is amazing. However, we need to be aware that it is often oversimplified. I have heard many educators say, “Plants make food and oxygen, while animals use food and oxygen.” That is true, but it is oversimplified. Plants do make food and oxygen. It happens when they are doing photosynthesis. However, they also use food and oxygen.
Does that statement surprise you? It might, but if you think about it, the statement makes a lot of sense. After all, why are plants doing photosynthesis? Because they need to make food for themselves, right? Well, what does the plant do with that food? It burns that food for energy, according to the equation I showed you earlier. What does that equation say? It says oxygen and C6H12O6 are reactants. That means they are used up. So plants not only use carbon dioxide and water to make glucose and oxygen, but when it is time for them to burn their food, they must use glucose and oxygen to make carbon dioxide and water.
Now wait a minute. If plants end up using the glucose and oxygen they make through photosynthesis, how are we able to use it? Because of this important fact: Plants make a lot more food and oxygen than they ever need. If plants only made the food that they need, they would end up using it and all the oxygen they made, and there would be nothing for consumers to eat or breathe. However, plants have been designed to make much more food than they will ever need. That means they also make more oxygen than they will ever use. That way, there is food and oxygen for consumers.
This is a very, very important design feature that many people don’t appreciate. In order for us (and most consumers) to survive, it’s not enough that producers like plants exist. They must not only exist, but they must do a lot more work than just keeping themselves alive. They must overproduce food and oxygen so that there is plenty for the consumers. Thus, the proper way to describe the balance between plants and animals is, “Plants make food and oxygen, but they also use it. However, they make more food and oxygen than they need, so that animals can use the rest.”
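The chemical bookkeeping behind the producer/consumer balance in the excerpt above is easy to verify with a quick atom count of the photosynthesis equation (respiration is simply its reverse). This is a minimal sketch; the dictionaries just encode the familiar chemical formulas:

```python
# Check that photosynthesis is balanced: 6 CO2 + 6 H2O -> C6H12O6 + 6 O2.
# Respiration runs the same equation in reverse, so the atoms simply cycle.
from collections import Counter

def atoms(terms):
    """Total atom counts over (coefficient, {element: count}) pairs."""
    total = Counter()
    for coeff, comp in terms:
        for element, n in comp.items():
            total[element] += coeff * n
    return total

co2 = {"C": 1, "O": 2}
h2o = {"H": 2, "O": 1}
glucose = {"C": 6, "H": 12, "O": 6}
o2 = {"O": 2}

reactants = atoms([(6, co2), (6, h2o)])
products = atoms([(1, glucose), (6, o2)])
print(reactants == products)  # True: every atom on one side appears on the other
```

Since every atom that goes into photosynthesis comes back out of respiration, the only input that is continually consumed is sunlight, just as the excerpt says.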
I have been working on my new book, Science in the Atomic Age, which (Lord willing) will be published this summer. In the section where I cover the nervous system, I compare a mouse brain and a human brain to computers. It’s rather fascinating. Below, you will find a slightly-edited excerpt from that discussion. Please note that the students have already learned that neurons are cells found in nervous tissue and that the integumentary system is the system of organs that makes your skin:
The brain has three major divisions: the cerebrum (suh ree’ brum), the cerebellum (sehr’ uh bell’ uhm), and the brain stem. The cerebrum is in charge of most of the really complicated things that the brain does. For example, it receives signals from your eyes and interprets them so that you can see. It receives signals from your ears and interprets them so you can hear. It receives signals from all the nervous tissue in your integumentary system so that you can figure out what you are touching as well as things like whether you are too warm, too cold, or comfortable. It also helps you learn, and it stores your memories. All this takes a lot of work, so it requires a lot of neurons.
How many neurons? The average adult cerebrum contains about 20 billion neurons. That number doesn’t mean very much by itself, so by comparison, the average adult mouse cerebrum contains about 2.5 million neurons. So the human cerebrum contains about 8,000 times as many neurons as a mouse’s cerebrum. Of course, a mouse is much smaller than a person. By weight, a person is about 3,000 times as heavy as a mouse. At least part of the difference between a mouse’s cerebrum and a person’s cerebrum is due to that. But people are much more intelligent than mice, and the number of neurons in the cerebrum must also be related to that.
Bill Nye says and writes a lot of ignorant things (see here, here, here, here, and here, for example). While it is hard to choose the most ignorant statement he has ever made, this one has to be in the top five:
Denial of evolution is unique to the United States.
I have already shown how that statement is 100% false, and anyone who even casually investigates the issue would know that it is false. However, as the links above show, investigation is definitely not one of Nye’s strong suits! I was reminded of his incredibly ignorant statement when I read this article, from the journal Science.
Like Bill Nye, the author of the article doesn’t seem to understand how to investigate an issue. Nevertheless, the article has some interesting content. It seems that the federal government in Brazil has appointed Dr. Benedito Guimarães Aguiar Neto to head an agency called CAPES, which oversees Brazil’s graduate study programs. This is noteworthy, because Dr. Neto was instrumental in forming an Intelligent Design Research Center at Mackenzie Presbyterian University in Brazil. Of course, this infuriates the Scientific Inquisition, because Intelligent Design has been officially declared as heresy by the High Priests of Science. To have someone who believes in heresy positioned in a powerful educational office is unthinkable! As the article tells us, one Brazilian biologist has said:
It is completely illogical to place someone who has promoted actions contrary to scientific consensus in a position to manage programs that are essentially of scientific training.
Of course, that very statement is incredibly anti-science, because almost all of the great scientific advancements in history come from the very act of questioning the scientific consensus. I would think that every institution of higher education should have many high-level officials who challenge the scientific consensus.
As I said, the author of the article doesn’t seem to be able to investigate an issue, since he calls Dr. Neto a “creationist.” I realize that the term is very broad, but there is no indication that Dr. Neto is a creationist. In fact, all he has stated is that Intelligent Design should be introduced in Brazil’s basic educational curriculum. I suspect that he is an advocate of Intelligent Design for that reason, but that doesn’t make him a creationist. Dr. David Berlinski is an advocate of Intelligent Design, and he doesn’t even believe in God. However, if you are a lazy writer, it is easier to falsely label a person than it is to actually investigate what that person believes.
In any event, I can’t help but see this as a step in the right direction. The progress of science depends on questioning the scientific consensus. Whether or not it was intentional, Brazil’s government decided to appoint someone who is skeptical of the consensus in a position of influence when it comes to science education. Not only does this further demonstrate that Bill Nye’s statement is breathtakingly ignorant, but it also gives us more indication that the biological sciences are slowly emerging from the quagmire of NeoDarwinism and getting ready to truly advance.
When a scientist refuses to see the design that is so obvious in nature, it can lead to all sorts of incorrect conclusions. Consider, for example, transposable elements in DNA. Often called “transposons,” they jump around in an organism’s genome. In other words, they are in different places in different cells of the same organism. Those who have their naturalist blinders on initially thought that they were useless – part of the “junk DNA” that represents all the evolutionary “flotsam and jetsam” that has accumulated over hundreds of millions of years. Dr. Leslie Pray, writing in Nature Education, puts it this way:
Transposable elements (TEs), also known as “jumping genes” or transposons, are sequences of DNA that move (or jump) from one location in the genome to another. Maize geneticist Barbara McClintock discovered TEs in the 1940s, and for decades thereafter, most scientists dismissed transposons as useless or “junk” DNA. McClintock, however, was among the first researchers to suggest that these mysterious mobile elements of the genome might play some kind of regulatory role, determining which genes are turned on and when this activation takes place.
The High Priests of Science continue to assure us that there is no debate when it comes to the validity of evolution as an explanation for the history of life. As the National Academy of Sciences says:
…there is no debate within the scientific community over whether evolution occurred, and there is no evidence that evolution has not occurred. Some of the details of how evolution occurs are still being investigated. But scientists continue to debate only the particular mechanisms that result in evolution, not the overall accuracy of evolution as the explanation of life’s history.
The problem, of course, is that such dogmatic statements are not consistent with the data that is supposed to guide scientific inquiry. When people honestly evaluate such data, many see how wrong the High Priests of Science are. Nearly two years ago, for example, I wrote about a world-renowned paleontologist who put up a display in his museum showing how there was no controversy about evolution. The problem, of course, is that he had never investigated all the data. When he got up the courage to actually read books written by scientists who point out the many flaws in evolutionary thinking, he ended up being convinced by the data and defected away from Darwinism. This cost him his job, but at least his scientific integrity remained intact.
Now there is another addition to the list of high-profile academics who had the courage to investigate all the data. His name is Dr. David Gelernter, and he is a professor of computer science at Yale University. In May of this year, he wrote a very interesting article for The Claremont Institute. I encourage you to read the article in its entirety, but I cannot help but add a bit of “color commentary.”
Thirty-five years ago, Dr. Theodore P. Snow wrote a book entitled Essentials of the Dynamic Universe. On page 434 of the 1984 edition, he summed up the obvious consequence of the idea that earth was formed as a result of natural processes without any need for Divine intervention:
We believe that the earth and the other planets are a natural by-product of the formation of the sun, and we have evidence that some of the essential ingredients for life were present on the earth from the time it formed. Similar conditions must have been met countless times in the history of the universe, and will occur countless more times in the future.
In other words, there is nothing special about the earth; it is one of many planets that harbor life. The more we learn about the universe, the more we should realize just how mediocre the earth is.
Since Dr. Snow penned those words, almost 4,000 exoplanets (planets outside our solar system) have been discovered. How many of them are similar to earth? The most reasonable answer, based on what we know right now, is zero. Why? Well, let’s consider one and only one factor: whether or not the planet is in the habitable zone of its star. That’s the distance from the star which allows the planet to get enough energy to stay warm enough to support life as we know it.
Out of nearly 4,000 exoplanets, how many are within the habitable zone? With the recent discovery of a planet charmingly known as “GJ 357 d,” the number of planets that might possibly qualify is 53. If we are conservative in our estimate, the number drops to 19, but let’s be as optimistic as possible. Out of nearly 4,000 exoplanets, only 53 might possibly be in the habitable zone.
What do I mean when I say a planet “might possibly be in the habitable zone”? Well, there are a few factors that influence a planet’s temperature, and the distance from its star is only one of those factors. Another important issue is the planet’s atmosphere. With the right mix and right amount of greenhouse gases, a planet that is a bit far from its star could be in the habitable zone, because even though it gets only a little energy from its star, its atmosphere holds onto that energy really well. In fact, that’s why GJ 357 d might possibly be in the habitable zone. It gets about as much energy from its star as Mars does from the sun, but it is massive enough to hold on to a pretty thick atmosphere. It’s possible that the atmosphere could make up for its distance from its star, so astronomers say it is possibly at the “outer edge” of the star’s habitable zone.
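The trade-off between stellar flux and greenhouse warming can be made concrete with the standard radiative-balance formula for a planet’s no-atmosphere equilibrium temperature. Earth’s flux value is well established; the Mars-level flux below is a rough stand-in for what GJ 357 d receives, since the paragraph above says the two are comparable, and the albedo is an assumed Earth-like value.

```python
# No-atmosphere equilibrium temperature: T = (S * (1 - A) / (4 * sigma))^(1/4),
# where S is the stellar flux at the planet and A is its albedo. The gap
# between this and the actual surface temperature is what greenhouse gases
# must supply.
SIGMA = 5.67e-8  # W/(m^2 K^4), Stefan-Boltzmann constant

def equilibrium_temp(flux_w_m2, albedo=0.3):
    return (flux_w_m2 * (1 - albedo) / (4 * SIGMA)) ** 0.25

print(equilibrium_temp(1361))  # Earth: ~255 K; the greenhouse effect adds ~33 K
print(equilibrium_temp(590))   # Mars-level flux: ~207 K, far below freezing
```

Under these assumptions, a planet getting Mars-level flux starts roughly 50 degrees colder than Earth does, so it needs a much stronger greenhouse effect than Earth’s to reach liquid-water temperatures. That is why such a planet only counts as being at the “outer edge” of the habitable zone.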
Now think about that for a moment. If we consider only one factor necessary for a planet to sustain life (being in the habitable zone of a star), just over 1% might possibly have it. Of course, there are lots of other factors necessary for life as we know it. A life-sustaining planet must also have an abundance of water, the right mixture of non-greenhouse gases in its atmosphere, the right mix of chemicals in its crust to provide nutrition to organisms, a shield from both ultraviolet rays and cosmic rays that come from the star around which it orbits, a reasonable speed of rotation around its axis, etc., etc. The earth has all these things, but a survey of nearly 4,000 exoplanets shows that just over 1% might have even one of those things. What’s the chance that one of those planets has everything else it needs to support life? The most reasonable answer based on what we know is zero.
Despite what naturalists expect (and most still want to believe), it is clear that the earth is a very, very special planet. One might be so bold as to say that it is the Privileged Planet.
The more we learn about creation, the more it surprises us. While it is true in all areas of science, it seems particularly true in genetics. When I was at university, I was taught as definitive fact that each gene in my DNA determined the makeup of one protein in my body. We now know that is false. I was also taught as definitive fact that the only way a parent can transmit a trait to its offspring is through the sequence of nucleotide bases in DNA. As a result, if a new trait appears in a population, it must be due to a change in the species’ DNA sequence. We now know that is false. For example, I was taught as definitive fact in university that cave fish are blind because of mutations to their DNA. We now know that is false, at least for one species of blind cave fish.
So we now know that there are ways to inherit traits that go beyond the DNA sequence that you inherit from both parents. For example, we know that if you train mice to fear a certain smell, the next generation can inherit that fear. It’s not that the parents train the fear into their offspring (the offspring were raised separately from their trained parent). They actually inherited the fear. How in the world can a parent pass on a fear of something to its offspring? That’s what the field of epigenetics (which literally means “on top of genetics”) wants to find out.
We know that it has something to do with how an organism regulates the activity of its genes. An organism can alter chemical aspects of the DNA that are not related to its actual sequence, and that alteration can decrease the use of a gene, increase the use of a gene, turn a gene off so that it is not used at all, or turn a gene on so that it will start being used. For example, most people are not born lactose intolerant. After all, they drink their mother’s milk or a milk-based formula. Milk digestion requires the enzyme called “lactase,” which is coded for by a gene. While everyone has that gene turned on at birth, in some people, it gets turned off later on, causing lactose intolerance. Nothing has changed in the person’s DNA sequence – the gene is still there and has not been broken. However, that gene has been turned off by epigenetic mechanisms. It is thought that this process is responsible for epigenetic inheritance. To some extent, we must be able to inherit the “off” and “on” status of our parents’ genes.
If you have been reading this blog for a while, you probably know that I am very skeptical of climate models that predict the consequences of rising carbon dioxide levels in the atmosphere. Initially, this was due to my own experience with large-scale computer models. In my early scientific research, I both wrote and used them, so I know how much their results are affected by the assumptions programmed into them. As time has gone on, my skepticism has increased, since it has been demonstrated over and over again that the climate models do not line up with the most relevant data.
There is a lot of dead, decaying matter on the floors of the tropical forests of the world. As that dead matter decomposes, it releases carbon dioxide into the atmosphere. Well, decomposition is driven by chemical reactions, and chemical reactions speed up with increasing temperature. So, as the world warms, what should happen to the rate of carbon dioxide produced by decomposition? It should increase, right? That will release more carbon dioxide into the air, which will accelerate warming. This is an example of a positive feedback mechanism. In such a mechanism, a change promotes a process that amplifies the change. This particular positive feedback mechanism is programmed into the climate models that are being used to predict the consequences of increased carbon dioxide in the atmosphere.
While that assumption makes perfect sense, the real world often works differently from our simple assumptions. That’s one reason Stephanie Roe decided to test it. She went to Puerto Rico’s El Yunque National Forest, where the US Forest Service set up infrared heaters in different parts of the forest. Those heaters were programmed to keep their surroundings 4 degrees Celsius warmer than the rest of the forest. Those parts of the forest, then, should behave like the tropical forests will behave if the earth warms by an average of 4 degrees. In addition, there were parts of the forest where identical, non-working heaters were placed. They served as control areas – they stayed at the normal temperature of the forest, but they had the physical structures of the heaters present. Roe introduced various kinds of dead matter (both native and non-native) to the forest in both the warmed sections and the control sections. She then collected samples later to test the rate of decomposition in each.
What did she find? She found that the result was precisely the opposite of what is programmed into the climate models. The warmed areas of the forest had slower rates of decomposition than the control areas. Why? According to her research, it is because the warmer parts of the forest were drier. The process of decomposition is strongly accelerated by moisture, so the loss of moisture slowed the decomposition down more than the higher temperature sped it up. Thus, according to her research, increased temperatures should reduce the amount of carbon dioxide produced by decomposition. This, of course, is an example of a negative feedback mechanism: a change promotes a process that resists the change. Once again, such mechanisms are the hallmark of designed systems, so it is not surprising that this one exists here on earth.
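The difference between the two feedback signs discussed in this post can be sketched with a toy relaxation model. The coefficients are made up and carry no climate meaning; only the sign of the feedback term matters, and the model simply shows that the same forcing produces a larger response under positive feedback than under negative feedback.

```python
# Toy model (made-up coefficients): a temperature anomaly t relaxes toward a
# forced value, while a feedback term either amplifies the change (positive
# feedback) or resists it (negative feedback). The equilibrium anomaly is
# forcing / (1 - feedback).
def run(feedback, steps=400, forcing=1.0):
    t = 0.0  # temperature anomaly
    for _ in range(steps):
        t += 0.1 * (forcing + feedback * t - t)  # small relaxation step
    return t

print(run(feedback=+0.5))  # ~2.0: positive feedback amplifies the response
print(run(feedback=-0.5))  # ~0.67: negative feedback damps it
```

Same forcing, opposite feedback signs, a threefold difference in outcome: that is why whether a mechanism like forest-floor decomposition is a positive or a negative feedback matters so much to a model’s predictions.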
The more we learn about climate, the less confidence I have in the predictions of the climate change doomsayers.