I have always said that evolution requires the presence of a large amount of junk DNA. Not only does it make sense that the “trial and error” nature of random mutations acted on by natural selection would produce a large amount of garbled nonsense in the genome, but evolution simulations like Avida also require a very large percentage of the virtual genome to be junk in order to get any evolution at all. A few months ago, I discussed a piece by Salvador Cordova that seemed to make the case even more strongly. However, it was based on the work of Dr. Dan Graur, which I had not read. Thus, I couldn’t evaluate it in a detailed fashion. That has changed.
Just recently, a paper by Dr. Graur was published in Genome Biology and Evolution. In it, he makes his argument in a detailed, mathematical way. Having read his paper, I can now see why he made the following statement:
If ENCODE is right, then evolution is wrong.
If you don’t recognize the word “ENCODE,” it refers to a huge scientific initiative that is designed to determine what portions of the human genome are actually used by the various cell types that exist throughout the human lifespan. Their landmark publications in 2012 came to the conclusion that at least 80% of the human genome is functional. Dr. Graur says that if their conclusion is right, then there is no possible way we could have been the product of naturalistic evolutionary processes. When I read his argument as discussed by Salvador Cordova, I was a bit skeptical. However, now that I have read his paper, I am inclined to agree.
Graur’s argument hinges on the idea that natural selection has to “weed out” harmful mutations that are bound to occur from time to time. If harmful mutations are allowed to accumulate in the genome, the results can be devastating to a population. As a result, natural selection not only tends to allow beneficial mutations to survive and be passed on to future generations, it also does not allow harmful mutations to survive and be passed on. Dr. John Sanford and Dr. Robert Carter have already shown that for at least one virus, natural selection is pretty terrible at the latter, but for now, let’s assume it works as well as evolutionists claim that it does.
Dr. Graur makes the obvious point that the larger the percentage of functional DNA in a genome, the more likely it is for a random mutation to be harmful. So for a given mutation rate, the higher the percentage of functional DNA, the larger the number of harmful mutations. The larger the number of harmful mutations, the less likely it is to have a child who doesn’t carry at least one harmful mutation. As a result, more children have to be born so that some of them have no harmful mutations. Those children are the ones natural selection will preserve.
He then constructs a mathematical model to determine how many children each set of parents must have, given a specific percentage of functional DNA in the human genome. He uses a range of mutation rates from the literature, and he makes reasonable assumptions (from an evolutionary point of view) regarding natural selection. He concludes that given reasonable assumptions, the functional percentage of human DNA can be at most 10-15%. Even given the most unrealistic assumptions, it cannot be more than 25%.
He then goes on to evaluate the ENCODE conclusion that at least 80% of the genome is functional. According to his model:
By using the lower bound for the deleterious mutation rate…For 80% of the human genome to be functional, each couple in the world would have to beget on average 15 children and all but two would have to die or fail to reproduce. If we use the upper bound for the deleterious mutation rate…the number of children that each couple would have to have to maintain a constant population size would exceed the number of stars in the visible universe by ten orders of magnitude.
Obviously, then, if ENCODE is right, evolution is wrong.
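To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch of this kind of genetic-load calculation. It is not Graur’s actual model (his paper uses his own equations and parameter ranges); it simply assumes a standard multiplicative-fitness relation (mean fitness of roughly e^(-U), where U is the expected number of new deleterious mutations per offspring), a diploid genome of roughly 6.4 billion nucleotides, and a per-site deleterious mutation rate near his lower bound. Under those assumptions, it lands close to the “15 children per couple” figure in the quote above.

```python
import math

# A rough sketch of a genetic-load calculation, NOT Graur's actual model.
# Assumptions (mine, for illustration): fitness is multiplicative across
# sites, the diploid genome has ~6.4e9 nucleotides, and the population
# size stays constant, so each couple must leave two surviving offspring.

DIPLOID_SITES = 6.4e9  # assumed diploid genome size (nucleotides)

def children_per_couple(deleterious_rate_per_site, functional_fraction):
    """Births a couple needs, on average, to leave two surviving children.

    U is the expected number of new deleterious mutations per offspring.
    Mean fitness under a multiplicative model is about exp(-U), so a
    couple needs roughly 2 * exp(U) births for two children to survive
    and reproduce.
    """
    U = deleterious_rate_per_site * functional_fraction * DIPLOID_SITES
    return 2.0 * math.exp(U)

# Lower-bound deleterious rate (~4e-10 per site per generation, the figure
# discussed in the comments below) with an 80% functional genome:
print(round(children_per_couple(4e-10, 0.80), 1))  # ~15.5 children
# Same rate with a 10% functional genome:
print(round(children_per_couple(4e-10, 0.10), 1))  # ~2.6 children
```

Because the required number of births grows exponentially with U, plugging in the upper-bound mutation rate is what produces the astronomically large figures in the quote.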
Now I have to add a couple of caveats here. First, his model is heavily dependent on the mutation rate assumption, and one can make a lot of arguments to suggest that it was lower in the past. Thus, using current measurements of mutation rates might not be the best thing to do. In addition, he defines “function” very specifically. He says that a segment of DNA is functional if there is at least one possible mutation in that segment that would result in some harm to the organism. While I find it unlikely, I suppose it’s possible that some functional sections of DNA are completely immune to the effects of mutation.
However, what I can say is this: It seems that given a few assumptions that are commonly made in evolutionary circles (measured mutation rates are reasonably indicative of past mutation rates, any truly functional region of DNA can be harmed by at least one kind of mutation, mutations are truly random, the human genome has been around for at least a couple of hundred thousand years, etc.), then if the ENCODE conclusion is right, evolution cannot possibly work.
So in the end, it seems to me that this provides an excellent test between the naturalistic evolutionary view and the view that we have a Creator. If the naturalistic evolutionary view is correct, we should eventually find out that ENCODE is wrong, and only a tiny percentage of the genome is functional. If ENCODE is right, then the naturalistic evolutionary view is wrong and we have a Creator. I honestly think that the validity of the ENCODE conclusion will be confirmed within the next 20 years, so this is really an exciting time to be a creationist!
I do want to end with one more caveat. I seriously doubt that any hard-core evolutionist (including Dr. Graur) will give up his or her fervently-held belief in a naturalistic origin myth once it is confirmed that ENCODE is correct. Most likely, they will do what evolutionists have done time and time again. When an evolutionary prediction is falsified (see here and here), they will “explain around” the data in order to preserve their faith in naturalistic processes. Nevertheless, as more creationist predictions turn out to be correct (like these), more truth-seeking scientists will realize that we have a Creator.
Dr. Wile, have you read the new book, “Naturalism and its Alternatives in Scientific Methodologies,” by Jonathan Bartlett & Eric Holloway (http://www.blythinstitute.org/site/sections/66)? Sal Cordova has a chapter in it. The book addresses the issue of how to do operational science in a supernaturalist framework. From the website:
“Many volumes have addressed the question of whether or not naturalism is a required part of scientific methodology. However, few, if any, go any further into the many concerns that arise from a rejection of naturalism. If methodological naturalism is rejected, what replaces it? If science is not naturalistic, what defines science? If naturalism is rejected, what is gained and what is lost? How does the practice of science change? What new avenues would be available, and how would they be investigated?”
I would be very interested in your thoughts on the book. As a geologist, I’ve struggled with the problem of doing forensic geology within a supernaturalist, as opposed to a purely empirical, framework. I’m just on the first chapter, but it’s been very eye-opening to see the impact that different Enlightenment philosophies have had upon modern science.
I did review that book. I then dedicated another post specifically to Cordova’s chapter.
> mutations are truly random,
Do you have any familiarity with, or thoughts on, the paradigm of those like James Shapiro / Perry Marshall who believe evolution happened but that mutations are *not* random – and that most DNA is not junk – and that there are actually more advanced tools within cells that are responsible for evolution? Without much qualification for evaluating competing claims, I find it fascinating simply for the fact that even *within* an evolutionary framework there seems to be a strong case that *still* points to a creator (theological difficulties notwithstanding).
I do think the data speak strongly that mutations aren’t random. For example, Lenski’s E. coli that developed the ability to digest citrate did so because of a couple of mutations. We now know that this process can be reproduced over and over again. There is no way the process could be repeated with that much success if the mutations were random. Thus, we know that there are at least some adaptive mutations – mutations that are pre-programmed into the genome for the purpose of adaptation. I personally don’t think that can “save” a theistic evolution scenario, however. After all, random mutations also develop, and since natural selection seems terrible at weeding them out, they will pile up in any genome that is present for millions of years.
Hey Dr. Wile. Love your blog but I think there’s a bit more to this.
First, it’s useful to distinguish two different definitions of function. The first counts any nucleotide within a functional DNA element.
But not all nucleotides within functional elements have to have a specific sequence; some can be spacers. So there’s a subset of these that is sequence-specific functional. This is our second definition of functional.
ENCODE 2012 found that 80% of the human genome is made of functional elements, and at least 20% is sequence-specific functional: “a minimum 20% (17% from protein binding and 2.9% protein coding gene exons) of the genome participates in these specific functions, with the likely figure significantly higher.”[1]
This 20% has been contested because proteins also weakly bind to sequences of random DNA. But across many organisms (including humans) we see a global avoidance of weak binding: “Using in vitro measurements of binding affinities for a large collection of DNA binding proteins, in multiple species, we detect a significant global avoidance of weak binding sites in genomes.”[2]
This is significant because: “Most DNA binding proteins recognize degenerate patterns; i.e., they can bind strongly to tens or hundreds of different possible words and weakly to thousands or more.”[2]
Therefore I don’t think it’s reasonable to contest that at least 20% of human DNA is sequence-specific functional.
So back to Graur. The amount of sequence-specific DNA is what we care about for determining whether there are too many harmful mutations (and thus whether evolution fails). Graur is arguing that 25% of DNA is made of functional elements, but only 1% of DNA is sequence specific:
“the fraction of the genome that can be functional cannot exceed 25%. If the fraction of deleterious mutations out of all mutations in functional regions is even slightly higher than 4%, then the fraction of the genome that can be functional becomes much lower.”[3]
4% of 25% is 1%. Larry Moran confirms I am not misreading this: “A mutation load of about one deleterious mutation per generation is the limit that a population can tolerate. Graur assumes 0.99.”[4]
So we have at least 20 times more sequence-specific DNA than Graur says evolution can account for.
However, because of recombination (which Graur does not account for), I think we can likely tolerate more than 1 harmful mutation per generation. Suppose over many generations harmful mutations accumulate to the point where an average mom and dad each carry 100,000 harmful mutations. If this couple has kids, each kid will get 20+ new harmful mutations of its own. But per the binomial distribution, about 25% of their kids will inherit 99,900 harmful mutations or fewer, plus the 20+ new ones. Thus they end up with fewer harmful mutations than their parents. Problem solved, right?
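As a quick sanity check of that binomial claim (under a deliberately simplified model I’m assuming here: each of the 100,000 parental mutations is heterozygous and is transmitted independently with probability 1/2, ignoring linkage), the fraction can be computed directly:

```python
import math

# Check of the binomial claim above under a simplified model: each parent
# carries 100,000 heterozygous harmful mutations, and each mutation is
# transmitted independently with probability 1/2 (linkage is ignored).
# The child's inherited count is then Binomial(200000, 0.5), which is
# well approximated by a normal distribution (mean 100,000, sd ~224).

PARENTAL_MUTATIONS = 100_000            # harmful mutations per parent
n, p = 2 * PARENTAL_MUTATIONS, 0.5      # total transmission "coin flips"
mean, sd = n * p, math.sqrt(n * p * (1 - p))

def prob_inherits_at_most(k):
    """P(child inherits <= k harmful mutations), normal approximation."""
    z = (k + 0.5 - mean) / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(round(prob_inherits_at_most(99_900), 2))  # ~0.33 under this model
```

Under these assumptions the fraction comes out nearer a third than a quarter, and the exact number depends on the model, but either way a sizable share of offspring inherit noticeably fewer harmful mutations than their parents carry, before the 20+ new ones are added.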
Not quite. In this simplistic “mutation count” model all harmful mutations have the same negative effect. But in the real world, some mutations are ten thousand times more harmful than others. Natural selection works against those with the worst mutations rather than those with the most mutations. It’s blind to the slightly harmful mutations because the really bad mutations outweigh their effect by orders of magnitude.
John Sanford’s Mendel’s Accountant simulation program takes both recombination and the distribution of harmful mutations into account, as well as many other parameters such as linkage. When simulating “a [harmful] mutation rate of 10, almost half of all deleterious mutations were retained, with a nearly constant accumulation rate of 4.5 mutations per individual per generation”[5]
Since 10 – 4.5 = 5.5 mutations are being removed each generation, 5.5 might be close to the actual limit, not 1 harmful mutation per generation, although it would be interesting to run Mendel with a lower mutation rate to find the actual limit.
[1] http://www.nature.com/nature/journal/v489/n7414/full/nature11247.html?WT.ec_id=NATURE-20120906
[2] https://journals.aps.org/prx/pdf/10.1103/PhysRevX.6.041009
[3] https://academic.oup.com/gbe/article-pdf/doi/10.1093/gbe/evx121/18475112/evx121.pdf
[4] http://sandwalk.blogspot.com/2017/07/revisiting-genetic-load-argument-with.html
[5] http://www.worldscientific.com/doi/pdf/10.1142/9789814508728_0010
Thanks for your insightful comments, John. I am sure that there is a lot more to it than this simple model, but I think the strength of the model is how it can be off by quite a bit and still be instructive. So, for example, his lower limit for deleterious mutations is 4×10^-10. Let’s suppose we can tolerate 5.5 times as many mutations as he assumes, as you suggest. That reduces his lower estimate for deleterious mutation rate to 7×10^-11. With that estimate, F = 1.4 when the genome is 80% functional, meaning each set of parents needs to have 2.8 children. That’s more reasonable, but now evolutionists must rely on very low mutation rates throughout the course of human history, which is tough. If the mutation rate is even twice that high, for example, F = 2.0. That means each set of parents must have 4 children. If it is three times as high, F = 2.8, and each set of parents must have 5.6 children.
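For what it’s worth, these figures follow from the same rough relation sketched in the post above (my assumed multiplicative-fitness model with a ~6.4-billion-site diploid genome, not Graur’s exact equations):

```python
import math

# Same rough relation as the sketch in the post above (an assumption for
# illustration, not Graur's exact equations): births per couple needed to
# maintain the population is roughly 2 * exp(rate * fraction * 6.4e9).
for rate in (7e-11, 1.4e-10, 2.1e-10):
    print(rate, round(2 * math.exp(rate * 0.80 * 6.4e9), 1))
# ~2.9, ~4.1, and ~5.9 children per couple, respectively; close to the
# rounded figures above, with small differences due to rounding F = 1.4.
```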
As you say, it would be best if there were a more detailed model, but even this model makes it clear that evolution isn’t consistent with a genome that is mostly functional.
Very exciting time for sure. All I ask is that naturalistic models answer some very basic how-to questions without using conjecture, imagination, and story-telling. I think they owe it to me before I wager my everlasting soul. I think entirely natural methods would develop a pattern that could answer these questions, but that is why you have evolutionists with their own models to explain rates and stabilities in the fossil record, etc. I think this report is a serious problem and will be even more of one given time. I’m actually really surprised something like that would even get published, but I guess publication bias swings both ways. Great post, and way to go prepping us for the evolutionist comeback. – Glory to God
Maintenance. All ordered systems require it. Sometimes begrudgingly. If junk DNA exists, it is the dust and cobwebs of an unkempt house. Too much accumulation and the system begins to fail. It cannot be the bricks and mortar of the house… it is the disrepair.
Hello, I’m not a scientist but have studied this for decades. I’m sure in your blog somewhere you deal with this question, but please indulge me.
It seems to me that the VERY important mutation for evolution would be a “beneficial mutation,” defined by my limited vocabulary as a mutation that produces MORE than the sum of what its previous DNA or genome had in it. Simply put: how can a one-celled amoeba with no DNA or material for a skeleton mutate into something with a skeleton? The mutation can multiply and divide, but it cannot “add” or “introduce” a totally new “base” chemical. It can combine chemicals and produce some mutations different from what the “parent” was, but it seems to me that it would be scientifically impossible for it to “create” a completely new base one. On a much larger and simpler level: sheep can mutate into a sheep with 5 legs or two heads, but the mutations would never include a sheep with wings. Canines can mutate into larger and smaller canines but would never be able to mutate and produce anything equestrian or feline. It is just not in the DNA. It seems that the entire discussion on this thread deals with the survival possibilities of already established life forms or species subject to mutations. It does not seem to deal with the core question: can mutations “CREATE” new species that have no similar DNA? Am I right?
Thanks for your comment, Lennie. I have dealt with this on my blog (see here, here, and here, for example). I agree with you that it is hard to understand how random mutation can introduce novel information into a genome so that it can be acted on by natural selection. However, evolutionists think it can. Consider, for example, a gene being duplicated. This happens a lot. Now…the organism doesn’t need that duplicate gene – it already has one (the one that was copied). As a result, this new gene is free to mutate like crazy, because the organism doesn’t need it. This means it has “millions of years” to “sample” all sorts of possible combinations. By sheer coincidence, it might happen onto a novel protein. When that occurs, the organism is more survivable, and this new gene ends up getting fixed into the population.
Of course, the question is, “Could this actually happen?” Lenski’s lab has been following bacteria for more than 50,000 generations, and they thought they showed this happening in a line of bacteria that developed the ability to digest citrate in the presence of oxygen (they already can do it in the absence of oxygen). However, as discussed in one of the links above, it was shown that this was an adaptive mutation that is programmed into the genome. So far, then, I have seen no evidence that random mutations can produce novel genetic information. However, evolutionists have fervent faith that it can happen, especially given their magic wand of “millions of years.”
Great response and supporting articles. Keep up the good work!
Hey Dr. Wile. I enjoy your blog; it’s a nice objective break from the doctrinaire tone of the mainstream sources.
I have a question on a topic unrelated to this post. It involves feathered dinosaurs, particularly this article. http://www.icr.org/article/new-doubts-about-dinosaur-feathers
What does this new find say about your previous post on the amber fossil with an attached feather? More precisely, does it point to long-tailed (25 vertebrae) birds, or to feathered dinosaurs/flightless birds?
Also, in one paper regarding this, they claim that the vertebrae actually found in the amber fossil are similar to dinosaur vertebrae. I wonder if that’s not just an appeal to the homology argument for evolution. What I mean is that a similar anatomical structure does not support ancestry; it is merely one interpretation.
I hope you can reply. Your replies, and your willingness to defend and correct your posts (if convinced there’s more to what you’re reporting), are my favorite thing about your blog. Best wishes, Dr. Wile!
Hi Ben,
I am glad that you enjoy my blog. I am not sure the new find is relevant to the old one. Yes, the two fossils were probably formed at roughly the same time, but one had the remains of a tail and some feathers, while the other one is most of a juvenile bird. At best, that means whatever owned the tail was preserved in the same location that a juvenile bird was preserved. I agree that similar anatomical structure doesn’t support ancestry, since evolutionists are often forced to “explain around” strikingly similar anatomies that are in divergent lineages, such as the amazing similarities between the vertebrate eye and the cephalopod eye.
Changes in organisms are based on epigenetic mechanisms or loss of biological information.
Textbooks say that evolution is an inevitable natural process that has no direction. According to the theory of evolution, small random changes are naturally selected in a population for better survival fitness. Does this theory have anything to do with observed science? Absolutely not. Let’s look at the most significant reasons why the theory of evolution is a pseudoscientific theory.
1. The human genome is rapidly deteriorating. There are 203,885 disease-causing genomic mutations in the human genome at the population level, and the number is rapidly getting higher. About 10% of people are living with a genetic disease. One in five ‘healthy’ adults may carry disease-related genetic mutations. Scientists are in a hurry to develop gene-editing technologies, such as CRISPR. The human Y chromosome is rapidly losing genetic material. The number of SNPs is correlated with a loss of genes. For example, the Icelanders have lost 1,171 genes. There are over 20 million SNPs in their genome. Modern science is not aware of beneficial random mutations.
2. Rapid genetic degradation is a scientific fact in other mammals too. A good example is the Canidae lineage: the more adaptation and variation, the more genetic degradation organisms experience. The red fox has 1,500 fewer protein-coding genes than the bat-eared fox or a dog. The lifespan of the red fox is only 2-4 years in the wild, and it’s susceptible to several diseases. The decreasing number of chromosomes within mammals is associated with faulty and lost genes.
Bat-eared fox: 72 chromosomes, lifespan 13-15 years
Gray fox: 66 chromosomes, lifespan 6-8 years
Fennec fox: 64 chromosomes, lifespan 8-10 years
Bengal fox: 60 chromosomes, lifespan 6-8 years
Kit fox: 50 chromosomes, lifespan 5.5 years
Tibetan sand fox: 36 chromosomes, lifespan 6-10 years
Red fox: 34 chromosomes, lifespan 2-4 years
3. Bacteria are also rapidly losing genetic material despite their ability to transfer genes between each other. Gene loss results in robust parasitic or pathogenic bacteria or highly specialized harmful bacteria. This same phenomenon can be observed all over nature. Loss of biological information leads to negative consequences within organisms.
4. Changes in organisms are based on epigenetic mechanisms directed by diet type, climate, stress, toxicants, etc. Traits are not determined by gene sequences. Genes are not drivers or controllers. Alterations in epigenetic information patterns typically result in genetic errors. Aberrant methylation patterns are the most significant reason for genetic degradation. We can slow down the genetic degradation, but we can’t stop it from happening. It’s an inevitable fact that there is only one direction in nature: a dead end. Evolution has never been observed because there are no mechanisms that could increase biological information, leading to an increase in functional or structural complexity. That’s why creation and design. Don’t get lost.
Thanks for your comment, Tomi. While I agree that genomes are degrading, I am not sure there is a simple relationship between chromosome number and lifespan. For example, the crab-eating fox (Cerdocyon thous) has 74 chromosomes. Thus, you would expect it to have a longer lifespan than any of the ones you listed. However, its maximum lifespan is 11 years, and that’s in captivity. The arctic fox (Vulpes lagopus) lives 3-6 years in the wild and up to 16 years in captivity. However, it has a chromosome count of 50. Neither the wild lifespan nor the captivity lifespan fits in the pattern you present. Indeed, the wide variation in lifespan between the wild and captivity indicates that there are a lot of factors besides chromosome count at play when it comes to lifespan.
I also don’t understand what you mean by “Changes in organisms are based on epigenetic mechanisms directed by diet type, climate, stress, toxicants, etc. Traits are not determined by gene sequences. Genes are not drivers or controllers.” It’s quite true that epigenetics can produce changes, but we have hardly begun to probe the extent of those changes. Also, we can definitely connect some traits to gene sequences. Sickle-cell anemia is a trait, for example, and it can be directly linked to a specific variation in the sequence of the gene that codes for hemoglobin. I certainly agree that there is a lot more to a person than his or her genes, but there are traits in people that can be linked to specific gene sequences. This is even more the case for less-complex organisms, like the fruit fly. Eye color, body color, and many other traits have been directly linked to the sequence of specific genes in the fruit fly.