On March 11, 2011, the most powerful earthquake known to have hit Japan struck near the east coast of Honshu. The earthquake generated a tsunami that reached a height of more than 130 feet. Just last month, the Japanese National Police Agency reported at least 15,870 people dead, another 6,114 injured, and 2,814 still missing as a result.1 Obviously, it was a disaster of truly stunning proportions.
One of the many things that happened as a consequence of the disaster is that some of the reactors at the Fukushima Daiichi Nuclear Power Plant went into meltdown, and radioactive substances leaked into the ocean and were released into the air. People within a 12-mile radius of the power plant were evacuated so that they would not be exposed to too much radiation. As a result of the meltdown, there is increasing political pressure for Japan to end its reliance on nuclear power. According to the Christian Science Monitor, Prime Minister Yoshihiko Noda’s party has recommended that Japan phase out all nuclear power by the year 2030.
Back when the nuclear disaster was in the news, I commented on it (here and here). Since then, I have been following the scientific literature to see what those who have been monitoring the situation are saying regarding its long-term effects. Recently, a study and some commentary on the study were published in the journal Energy & Environmental Science, and they are surprising, to say the least.
Mutualistic symbiosis, the process by which organisms of different species interact so that all of them benefit, is a very common phenomenon in creation (see here, here, here, here, and here for a few examples). A recent study in the Proceedings of the National Academy of Sciences, USA highlights a very interesting case of mutualistic symbiosis that not only has some important implications for farmers, but also relates to the creation/evolution controversy.
The study examined insecticide resistance in bean bugs (Riptortus pedestris) and similar insects. The authors considered one of the most popular insecticides used by farmers across the world, fenitrothion. It has been known for some time that certain insects, such as the bean bugs in the study, can develop resistance to that insecticide. This is a problem, since bean bugs damage not only bean crops but also some fruit crops.1 The authors were interested in what causes this insecticide resistance. As they state in the introduction to their paper:
Mechanisms underlying the insecticide resistance may involve alteration of drug target sites, up-regulation of degrading enzymes, and enhancement of drug excretion, which are generally attributable to mutational changes in the pest insect genomes.
In other words, when an insect develops resistance to an insecticide, it is generally assumed that there was a change in the DNA of the insect. A mutation might have damaged the site where the insecticide is supposed to bind; the activity of a gene involved in destroying unwanted chemicals might have been enhanced so that the insect breaks down the insecticide; or the activity of a gene involved in getting rid of waste might have been enhanced so that the insect simply excretes the insecticide.
The authors show that for the specific case of fenitrothion resistance in bean bugs and similar insects, none of these mechanisms play a role.
Not long ago, Dr. Jerry Alan Fodor (a professor of philosophy) and Dr. Massimo Piattelli-Palmarini (a professor of cognitive science) wrote a book entitled What Darwin Got Wrong. I haven’t read the book, but what I have read about it indicates that the authors strongly believe Darwin was right when it comes to the idea that all species descended from a common ancestor. However, they strongly disagree with the mechanism that Darwin proposed (and most Neo-Darwinists accept) for the process by which that happened. While most modern evolutionists contend that mutation acted on by natural selection is the main process by which species adapt and change, the authors argue that it is only one of many considerations. In fact, they go a step further and claim that there is no scientific reason to elevate natural selection above these other processes when it comes to their relative importance.
Since I have not read the book, I cannot comment on the validity of their arguments. However, I ran across a quote from the introduction that makes me want to read the entire book. They call their introduction “Terms of Engagement.” After laying out the terms and outlining the contents of the book, the authors write:
So much for a prospectus. We close these prefatory comments with a brief homily: we’ve been told by more than one of our colleagues that, even if Darwin was substantially wrong to claim that natural selection is the mechanism of evolution, nonetheless we shouldn’t say so. Not, anyhow, in public. To do that is, however inadvertently, to align oneself with the Forces of Darkness, whose goal it is to bring Science into disrepute. Well, we don’t agree. We think the way to discomfort the Forces of Darkness is to follow the arguments wherever they may lead, spreading such light as one can in the course of doing so. What makes the Forces of Darkness dark is that they aren’t willing to do that. What makes science scientific is that it is.
[Jerry Fodor and Massimo Piattelli-Palmarini, What Darwin Got Wrong, Farrar, Straus and Giroux, First American Edition 2010, p. xx]
I couldn’t agree more. When those who call themselves scientists want to shut off debate on an issue, they are exposing themselves for what they are: rabidly anti-science. Science is all about following the evidence, regardless of where that evidence might lead. It is unfortunate that some (if not many) in the scientific community attempt to silence those who are simply trying to follow the evidence.
While there is some disagreement on the subject, most medical scientists would agree that autism rates are on the rise in the U.S. and in many other parts of the world. What’s the reason for this increase? As with most medical issues, there are probably several. Some have suggested that the increase in autism can be linked to childhood vaccination, but the data argue strongly against that idea. Most likely, a combination of genetic and environmental factors plays a role in the increase.
For quite some time now, there has been strong evidence that the age of the father has a significant effect on the chance of his child having autism.1 There is also evidence that the mother’s age plays a role, but its effect is much smaller.2 These studies, however, demonstrate only a correlation between parental age and autism; they do not show that increased parental age plays a direct role in causing it. A recent study published in the journal Nature has changed that. It seems to provide a direct link between the age of the father and autism in the child.
The authors of the study examined the entire genomes of 78 parent-offspring trios (mother, father, and child) to directly determine what mutations the child received from the father’s sperm cell and what mutations the child received from the mother’s egg cell. Because they were specifically interested in the cause of neurological disorders, they used a large number of trios that contained a child with either autism or schizophrenia. In the end, 44 of the children had autism spectrum disorder, and 21 were schizophrenic. In addition, the genomes of 1,859 other people were sequenced to serve as a population comparison.
The authors focused on the de novo mutations in the children. These are mutations that exist in the child but in neither parent. Thus, they must have arisen when the father’s sperm or the mother’s egg was formed. Such mutations occur every time sperm and egg cells are produced, and the authors wanted to know which parent (if either) was more responsible for them. The results were surprising, to say the least!
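To make that logic concrete, here is a minimal Python sketch of how de novo mutations can be identified from a trio: any variant present in the child but absent from both parents is a candidate. This is only an illustration, not the study’s actual pipeline; the variant representation and the example data are invented.

```python
# Minimal sketch: a de novo candidate is a variant seen in the child
# but in neither parent. Variants here are hypothetical illustrations,
# represented as (chromosome, position, base) tuples.

def de_novo_candidates(child, father, mother):
    """Return the child's variants that neither parent carries."""
    return child - (father | mother)

father = {("chr1", 1050, "A"), ("chr2", 2200, "G")}
mother = {("chr1", 1050, "A"), ("chr3", 3300, "T")}
child  = {("chr1", 1050, "A"), ("chr2", 2200, "G"), ("chr7", 7700, "C")}

print(de_novo_candidates(child, father, mother))
# {('chr7', 7700, 'C')} -- a candidate de novo mutation
```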
As I mentioned in two previous posts (here and here), the coordinated release of scientific papers from the ENCODE project has produced an enormous amount of amazing data about the human genome and how the body’s cells use the information stored there. While the majority of commentary regarding these data has focused on the fact that human cells use more than 80% of the DNA found in them, I think some of the most interesting scientific results have gotten very little attention. They are contained in a paper published in the journal Genome Biology, and they relate to the pseudogenes found in human DNA.
For those who are not aware, a pseudogene is a DNA sequence that looks a lot like a gene but, because of some details in the sequence, cannot be used to make a protein. Remember, a gene’s job is to provide a “recipe” the cell can use to make a protein. Well, a pseudogene looks a lot like a recipe for a protein, but it cannot be used that way. Think of your favorite recipe in a cookbook. If you use it a lot, it probably has stains on it because it has been open while you are cooking. Imagine what would happen if the recipe got so stained that certain important instructions were rendered unreadable. Someone who has never seen the recipe before might recognize that it is a recipe, but because certain important instructions are unreadable, he will never be able to use it to make the dish. That’s what a pseudogene is like. It looks like a recipe for a protein, but certain important parts have been damaged, so it cannot be used properly anymore. As a result, the cell cannot use the recipe to make a protein.
Pseudogenes have been promoted by evolutionists as completely functionless and as evidence against the idea that the human genome is the result of design. Here is how Dr. Kenneth R. Miller put it back in 1994:1
From a design point of view, pseudogenes are indeed mistakes. So why are they there? Intelligent design cannot explain the presence of a nonfunctional pseudogene, unless it is willing to allow that the designer made serious errors, wasting millions of bases of DNA on a blueprint full of junk and scribbles. Evolution, however, can explain them easily. Pseudogenes are nothing more than chance experiments in gene duplication that have failed, and they persist in the genome as evolutionary remnants…
Obviously, Dr. Miller didn’t understand intelligent design or creationism when he wrote that, as they can both explain nonfunctional pseudogenes. Before I discuss that, however, I need to point out that since 1994, functions have been found for certain pseudogenes. As far as I can tell, the first definitive evidence for function in a pseudogene came in 2003, when Shinji Hirotsune and colleagues found that a specific pseudogene was involved in regulating the functional gene that it resembles.2 Since then, functions for several other pseudogenes have been found. In fact, a recent paper in RNA Biology suggests that the use of pseudogenes as regulatory agents is “widespread.”3
Even though functions have been found for many pseudogenes, the question remains: Are most pseudogenes functional, or are most of them non-functional? Well, based on the ENCODE results, we might have the answer. While the ENCODE results indicate that the vast majority of the genome is functional, they also indicate that the vast majority of pseudogenes are, in fact, non-functional.
As I posted previously, a huge leap in our understanding of human genetics recently occurred thanks to the massive results of the ENCODE project. In short, the data produced by this project show that at least 80.4% of the human genome (almost certainly more) has at least one biochemical function. As the journal Science declared:1
This week, 30 research papers, including six in Nature and additional papers published by Science, sound the death knell for the idea that our DNA is mostly littered with useless bases.
Not only have the results of ENCODE destroyed the idea that the human genome is mostly junk, they have prompted some to suggest that we must now rethink the definition of the term “gene.” Why? Let’s start with the current definition. Right now, a gene is defined as a section of DNA that tells the cell how to make a specific protein. In plants, animals, and people, genes are composed of exons and introns. In order for the cell to use a gene, the gene is copied into a molecule called RNA, and that copy is called the RNA transcript. Before the protein is made, the RNA transcript is edited so that the copies of the introns are removed. As a result, when it comes to making a protein, the cell uses only the exons in the gene.
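To picture how a transcript is edited, here is a simplified Python sketch in which the copies of the introns are cut out and the copies of the exons are joined to form the final message. The sequence and the exon coordinates are invented for illustration; in the cell, this editing is done by molecular machinery, not by a program.

```python
# Simplified sketch of RNA splicing: keep the exon regions of the
# transcript and drop the introns. Sequence and coordinates are made up.

def splice(transcript, exons):
    """Join the exon regions (start, end) and discard everything else."""
    return "".join(transcript[start:end] for start, end in exons)

#            exon 1     intron     exon 2     intron     exon 3
pre_mrna = "AUGGCU" + "GUAAGU" + "CCAGGA" + "GUACAG" + "UAA"
exons = [(0, 6), (12, 18), (24, 27)]

print(splice(pre_mrna, exons))  # AUGGCUCCAGGAUAA
```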
By today’s definition, genes make up only about 3% of the human genome. The problem is that the ENCODE project has shown that a minimum of 74.7% of the human genome produces RNA transcripts!2 Now the process of making an RNA transcript, called “transcription,” takes a lot of energy and requires a lot of cellular resources. It is absurd to think that the cell would invest energy and resources to read sections of DNA that don’t have a function.
In addition, the data in reference (2) demonstrate that many RNA transcripts go to specific regions in the cell, indicating that they are performing a specific function. Since there is so much DNA that does not fit the definition of “gene” but seems to be performing functions in the cell, scientists probably need to redefine what a gene is. Alternatively, scientists could come up with another term that applies to the sections of DNA which make an RNA transcript but don’t end up producing a protein.
There is another reason that prompts some to reconsider the concept of a gene: alternative splicing. The ENCODE data show that this is significantly more important than most scientists ever imagined.
In 2001, the initial sequence of the human genome was published.1 Not only did it represent a triumph in biochemical research, but it also allowed us to examine human genetics in a way that had never been possible before. For the first time, we had a complete “map” of all the DNA in the nucleus of a human cell. Unfortunately, while the map was reasonably complete, scientists’ understanding of that map was not. Even though scientists had a really good idea of what was in human DNA, they didn’t have a good idea of how human cells actually used that material.
In fact, many scientists thought that most of the content of DNA is not really used at all. Indeed, when the project to sequence the human genome was first getting started, there were those who thought it would be senseless to sequence all the DNA in a human being. After all, it was clear to them that most of a person’s DNA is useless. In 1989, for example, New Scientist ran an article about what it called “the project to map the human genome.” That article brought up the views of Dr. Sydney Brenner. As the director of the Molecular Genetics Unit of Britain’s Medical Research Council, he was considered an expert on human genetics. The article states:2
He argues that it is necessary to sequence only 2 percent of the human genome: the part that contains coded information. The rest of the human genome, Brenner maintains, is junk. (emphasis mine)
This surprising view was probably the dominant one among scientists during the 1980s and 1990s. Indeed, the article presents the idea that the rest of the human genome might be worth sequencing as the position of only “some scientists.”
Now why would scientists think that most of the human genome is junk? Because of evolutionary reasoning. As Dr. Susumu Ohno (the scientist who coined the term “junk DNA”) said about one set of DNA segments:3
Our view is that they are the remains of nature’s experiments which failed. The earth is strewn with fossil remains of extinct species; is it a wonder that our genome too is filled with the remains of extinct genes?
Indeed, evolutionists have for quite some time presented the concept of “junk DNA” as evidence for evolution and against creation. In his book, Inside the Human Genome: A Case for Non-Intelligent Design, Dr. John C. Avise says:4
…the vast majority of human DNA exists not as functional gene regions of any sort but, instead, consists of various classes of repetitive DNA sequences, including the decomposing corpses of deceased structural genes…To the best of current knowledge, many if not most of these repetitive elements contribute not one iota to a person’s well-being. They are well-documented, however, to contribute to many health disorders.
His point, of course, is that you would expect a genome full of junk in an evolutionary framework, but you would not expect it if the genome had been designed by a Creator. I couldn’t agree more. If evolution produced the genome, you would expect it to contain a whole lot of junk. If the genome had been designed by a loving, powerful Creator, however, it would not. Well…scientists have made a giant leap forward in understanding the human genome, and they have found that the evolutionary expectation is utterly wrong, and the creationist expectation has (once again) been confirmed by the data.
The leap began back in 2003, when scientists started a project called the Encyclopedia of DNA Elements (ENCODE).5 Their goal was to use the sequence of the human genome as a map so that they could discover and define the functional elements of human DNA. Back in 2007, they published their preliminary report, based on only 1% of the human genome. In that report, they found that the vast majority of the portion of the genome they studied was used by the cell.6 Now they have published a much more complete analysis, and the results are very surprising, at least to evolutionists!
It’s popular these days to claim that science and Christianity are incompatible. Of course, no one who spends any amount of time learning the history of science can be fooled by such a claim, because the history of science makes it very clear that modern science is a product of Christianity. Specifically, because early Christians understood that the world was created by a single God who is a Lawgiver, it made sense to them that the universe should run according to specific laws and that those laws should be the same everywhere in the universe. In addition, because they believed that human beings had been made in the image of God, they thought it was possible to understand those laws. That’s what prompted the revolution that produced science as we know it today.
For example, Morris Kline discusses Sir Isaac Newton in his book, Mathematics: The Loss of Certainty. He explains why Newton believed that the same laws which govern motion on the surface of the earth should also govern motion in the heavens:1
The thought that all the phenomena of motion should follow from one set of principles might seem grandiose and inordinate, but it occurred very naturally to the religious mathematicians of the 17th century. God had designed the universe, and it was to be expected that all phenomena of nature would follow one master plan. One mind designing a universe would almost surely have employed one set of basic principles to govern related phenomena.
Morris Kline was a mathematician, but I recently ran across a scientist who says essentially the same thing.
One of the truly remarkable things about creation is how one substance can be used in nature to do all sorts of different jobs. Take ribonucleic acid (RNA), for example. Scientists have known for quite some time that it is an integral part of how the cell makes proteins. A particular kind of RNA, called messenger RNA, copies a protein recipe contained in DNA and takes that copy to a protein-making factory called a ribosome. Once the recipe is at the ribosome, two other kinds of RNA, transfer RNA and ribosomal RNA, interact with the messenger RNA to build the protein in a step-by-step manner.
Because RNA is such an important part of how the cell builds proteins, some scientists speculated that this was its only job. In 1993, however, Victor Ambros, Rosalind Lee, and Rhonda Feinbaum found another job for RNA. Short strands of RNA, now called microRNAs, sometimes regulate how much of a particular protein is made in the cell.1 Since then, other forms of RNA have also been shown to regulate the amount of protein produced in a cell. In addition, scientists have found that some types of RNA perform functions that aren’t even directly related to the production of proteins. For example, some types of RNA serve as “molecular guides,” taking proteins where they need to be in the cell, while others serve as “molecular adhesives,” holding certain proteins to other RNA molecules or to DNA.
Now even though the last two jobs I mentioned are not directly related to protein production, they still involve proteins. So is it safe to say that while RNA performs several functions in the cell, all of them are related to proteins in some way? I might have answered “yes” if a student had asked me that question just a few weeks ago. However, a new paper in Nature Medicine has found a function for some microRNAs that has nothing to do with proteins. Some microRNAs serve as radiation detectors.2
Everyone has heard of DNA, but many don’t appreciate its marvelous design. It stores all the information an organism needs to make proteins, regulate how they are made, and control how they are used. It does this by coding biological information in sequences of four nucleotide bases: adenine (A), thymine (T), guanine (G), and cytosine (C). The nucleotide bases link to one another in order to hold DNA’s familiar double-helix structure together. A can only link to T, and C can only link to G. As a result, the two linking nucleotide bases are often called a base pair. DNA’s ingenious design allows it to store information in these base pairs more efficiently than any piece of human technology that has ever been devised.
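To see how rigid the pairing rule is, here is a tiny Python sketch that derives one strand of DNA from the other. It is just an illustration of the A-T and C-G rule described above, with an example sequence I made up.

```python
# The base-pairing rule: A pairs only with T, and C pairs only with G,
# so one strand completely determines the other.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the base that pairs with each base in the given strand."""
    return "".join(PAIR[base] for base in strand)

print(complement("ATCGGC"))  # TAGCCG
```

(In practice, the two strands of the double helix run in opposite directions, so the paired strand is usually read in reverse; the sketch shows only the pairing rule itself.)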
What you might not realize is that pretty much any information can be stored in DNA. While the information necessary for life involves the production, use, and regulation of proteins, DNA is such a wonderfully-designed storage system that it can efficiently store almost any kind of data. A scientist recently demonstrated this by storing his own book (which contained words, illustrations, and a JavaScript program) in the form of DNA.1
The way he and his colleagues did this was very clever. They took the digital version of their book, which was 5.27 megabits of 1’s and 0’s, and used it as a template for producing strands of DNA. Every time there was a “1” in the digital version of the book, they added a guanine (G) or a thymine (T) to the DNA strand. Every time the digital version of the book had a “0,” they added an adenine (A) or a cytosine (C). Now unfortunately, human technology cannot come close to matching the incredible design of even the simplest living organism. As a result, while living organisms can produce DNA that is billions of base pairs long, human technology cannot. It can produce only short strands of DNA.2 So while a single-celled organism could have produced one strand of DNA that contained the entire book (and then some), the scientists had to use 54,898 small strands of DNA to store the entire book.
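To make the scheme concrete, here is a rough Python sketch of the one-bit-per-base encoding described above. It is only an illustration under stated assumptions: the choice between the two possible bases for each bit (random here) and the strand length are mine, and the actual work also stored addressing information on each strand so the book could be reassembled in order.

```python
# Sketch of the one-bit-per-base scheme described above: each "1" becomes
# G or T, each "0" becomes A or C. The random base choice and the strand
# length are illustrative assumptions, not the paper's exact parameters.
import random

ONES, ZEROS = "GT", "AC"

def encode_bits(bits, strand_length=8):
    """Convert a bit string to DNA, then split it into short strands."""
    dna = "".join(random.choice(ONES if b == "1" else ZEROS) for b in bits)
    return [dna[i:i + strand_length] for i in range(0, len(dna), strand_length)]

def decode_strands(strands):
    """Recover the bit string: G or T reads as 1, A or C reads as 0."""
    return "".join("1" if base in ONES else "0" for base in "".join(strands))

bits = "0110100001101001"        # the ASCII bits for "hi"
strands = encode_bits(bits)
assert decode_strands(strands) == bits
print(strands)                   # e.g., ['AGTCGACA', 'CTTAGCAG']
```

One nice property of the scheme is that decoding works no matter which of the two possible bases was chosen for each bit: any G or T reads back as a 1, and any A or C reads back as a 0.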