Time to Redefine the Concept of a Gene?

The basic unit of heredity (the gene) has been defined as a stretch of DNA that codes for a protein. In plants, animals, and people, genes are made of introns and exons. The ENCODE results suggest this definition might need to be changed. (Click for credit)

As I posted previously, a huge leap in our understanding of human genetics recently occurred, thanks to the massive amount of data produced by the ENCODE project. In short, those data show that at least 80.4% of the human genome (and almost certainly more) has at least one biochemical function. As the journal Science declared:1

This week, 30 research papers, including six in Nature and additional papers published by Science, sound the death knell for the idea that our DNA is mostly littered with useless bases.

Not only have the results of ENCODE destroyed the idea that the human genome is mostly junk, they have also prompted some to suggest that we must now rethink the definition of the term “gene.” Why? Let’s start with the current definition. Right now, a gene is defined as a section of DNA that tells the cell how to make a specific protein. In plants, animals, and people, genes are composed of exons and introns. In order for the cell to use a gene, the gene is copied into a molecule called RNA, and that copy is called the RNA transcript. Before the protein is made, the RNA transcript is edited so that the copies of the introns are removed. As a result, when it comes to making a protein, the cell uses only the exons in the gene.
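
To make the splicing step concrete, here is a minimal sketch in Python. The sequence and the exon boundaries are invented purely for illustration; real genes are far longer and far more intricate:

```python
# Toy illustration of splicing. The RNA transcript is a copy of the whole
# gene (exons plus introns), but the copies of the introns are cut out
# before the cell uses the message to build a protein.
# The sequence and coordinates below are invented for illustration only.

transcript = "AUGGCU" + "GUAAGUUUUCAG" + "GGCUAA"   # exon 1 + intron + exon 2
exons = [(0, 6), (18, 24)]                          # (start, end) of each exon copy

def splice(rna, exon_coords):
    """Return the mature mRNA: the exon copies joined together, introns removed."""
    return "".join(rna[start:end] for start, end in exon_coords)

print(splice(transcript, exons))   # AUGGCUGGCUAA -- only the exon copies remain
```

Alternative splicing, which comes up below, simply means the cell can choose different combinations of exons from the same transcript, so one gene can yield more than one mature mRNA.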

By today’s definition, genes make up only about 3% of the human genome. The problem is that the ENCODE project has shown that a minimum of 74.7% of the human genome produces RNA transcripts!2 Now the process of making an RNA transcript, called “transcription,” takes a lot of energy and requires a lot of cellular resources. It is absurd to think that the cell would invest energy and resources to read sections of DNA that don’t have a function.

In addition, the data in reference (2) demonstrate that many RNA transcripts go to specific regions in the cell, indicating that they are performing a specific function. Since there is so much DNA that does not fit the definition of “gene” but seems to be performing functions in the cell, scientists probably need to redefine what a gene is. Alternatively, scientists could come up with another term that applies to the sections of DNA which make an RNA transcript but don’t end up producing a protein.

There is another reason that prompts some to reconsider the concept of a gene: alternative splicing. The ENCODE data show that this is significantly more important than most scientists ever imagined.

Continue reading “Time to Redefine the Concept of a Gene?”

Surprising? Only to Evolutionists!

The human genome is the sum of all the DNA contained in the nucleus of a human cell.
(Click for credit)
In 2001, the initial sequence of the human genome was published.1 Not only did it represent a triumph in biochemical research, it also allowed us to examine human genetics in a way that had never been possible before. For the first time, we had a complete “map” of all the DNA in the nucleus of a human cell. Unfortunately, while the map was reasonably complete, scientists’ understanding of that map was not. Scientists had a pretty good idea of what was in human DNA, but they didn’t have a good idea of how human cells actually used that material.

In fact, many scientists thought that most of the DNA in the genome is not really used at all. Indeed, when the project to sequence the human genome was first getting started, there were those who thought it would be senseless to sequence all the DNA in a human being. After all, it was clear to them that most of a person’s DNA is useless. In 1989, for example, New Scientist ran an article about what it called “the project to map the human genome.” That article brought up the views of Dr. Sydney Brenner, who, as the director of the Molecular Genetics Unit of Britain’s Medical Research Council, was considered an expert on human genetics. The article states:2

He argues that it is necessary to sequence only 2 percent of the human genome: the part that contains coded information. The rest of the human genome, Brenner maintains, is junk. (emphasis mine)

This was probably the dominant view among scientists during the 1980s and 1990s. Indeed, the article presents the idea that the rest of the human genome might be worth sequencing as the position of only “some scientists.”

Now why would scientists think that most of the human genome is junk? Because of evolutionary reasoning. As Dr. Susumu Ohno (the scientist who coined the term “junk DNA”) said about one set of DNA segments:3

Our view is that they are the remains of nature’s experiments which failed. The earth is strewn with fossil remains of extinct species; is it a wonder that our genome too is filled with the remains of extinct genes?

Indeed, evolutionists have for quite some time presented the concept of “junk DNA” as evidence for evolution and against creation. In his book, Inside the Human Genome: A Case for Non-Intelligent Design, Dr. John C. Avise says:4

…the vast majority of human DNA exists not as functional gene regions of any sort but, instead, consists of various classes of repetitive DNA sequences, including the decomposing corpses of deceased structural genes…To the best of current knowledge, many if not most of these repetitive elements contribute not one iota to a person’s well-being. They are well-documented, however, to contribute to many health disorders.

His point, of course, is that you would expect a genome full of junk in an evolutionary framework, but you would not expect it if the genome had been designed by a Creator. I couldn’t agree more. If evolution produced the genome, you would expect it to contain a whole lot of junk. If the genome had been designed by a loving, powerful Creator, however, it would not. Well…scientists have made a giant leap forward in understanding the human genome, and they have found that the evolutionary expectation is utterly wrong, and the creationist expectation has (once again) been confirmed by the data.

The leap began back in 2003, when scientists started a project called the Encyclopedia of DNA Elements (ENCODE).5 Their goal was to use the sequence of the human genome as a map so that they could discover and define the functional elements of human DNA. Back in 2007, they published their preliminary report, based on only 1% of the human genome. In that report, they found that the vast majority of the portion of the genome they studied was used by the cell.6 Now they have published a much more complete analysis, and the results are very surprising, at least to evolutionists!

Continue reading “Surprising? Only to Evolutionists!”

Another Scientist Who Gives Credit Where It Is Due

It’s popular these days to claim that science and Christianity are incompatible. Of course, no one who spends any amount of time learning the history of science can be fooled by such a claim, because the history of science makes it very clear that modern science is a product of Christianity. Specifically, because early Christians understood that the world was created by a single God who is a Lawgiver, it made sense to them that the universe should run according to specific laws, and that those laws should be the same everywhere in the universe. In addition, because they believed that people had been made in the image of God, they thought it was possible to understand those laws. That’s what prompted the revolution that produced science as we know it today.

For example, Morris Kline discusses Sir Isaac Newton in his book, Mathematics: The Loss of Certainty. He explains why Newton believed that the same laws which govern motion on the surface of the earth should also govern motion in the heavens:1

The thought that all the phenomena of motion should follow from one set of principles might seem grandiose and inordinate, but it occurred very naturally to the religious mathematicians of the 17th century. God had designed the universe, and it was to be expected that all phenomena of nature would follow one master plan. One mind designing a universe would almost surely have employed one set of basic principles to govern related phenomena.

Morris Kline was a mathematician, but I recently ran across a scientist who says essentially the same thing.

Continue reading “Another Scientist Who Gives Credit Where It Is Due”

It’s Amazing What RNA Can Do!

The inflammation of a sunburn is triggered by microRNAs released from damaged cells. (Click for credit)
One of the truly remarkable things about creation is how one substance can be used in nature to do all sorts of different jobs. Take ribonucleic acid, commonly referred to as RNA, for example. Scientists have known for quite some time that it is an integral part of how the cell makes proteins. A particular kind of RNA, called messenger RNA, copies a protein recipe contained in DNA, and it takes that copy to a protein-making factory called a ribosome. Once the recipe is at the ribosome, two other kinds of RNA, transfer RNA and ribosomal RNA, interact with the messenger RNA to build the protein in a step-by-step manner.
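
To illustrate the last step of that process, here is a minimal Python sketch of translation. It uses only four entries from the standard genetic code, just enough to translate a toy message; a real ribosome, of course, handles all 64 codons and a great deal more:

```python
# Minimal sketch of translation: the ribosome reads the messenger RNA three
# bases (one codon) at a time, and each codon specifies one amino acid.
# Only a few entries of the standard genetic code are listed here.

CODON_TABLE = {
    "AUG": "Met",   # start codon (methionine)
    "GCU": "Ala",   # alanine
    "GGC": "Gly",   # glycine
    "UAA": "STOP",  # stop codon
}

def translate(mrna):
    """Read the mRNA codon by codon and build the chain of amino acids."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return "-".join(protein)

print(translate("AUGGCUGGCUAA"))   # Met-Ala-Gly
```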

Because RNA is such an important part of how the cell builds proteins, some scientists speculated that this was its only job. In 1993, however, Victor Ambros, Rosalind Lee, and Rhonda Feinbaum found another job for RNA. Short strands of RNA, which are now called microRNAs, sometimes regulate how much of a particular protein is made in the cell.1 Since then, other forms of RNA have also been shown to regulate the amount of protein produced in a cell. In addition, scientists have found that some types of RNA perform functions that aren’t even directly related to the production of proteins. For example, some types of RNA serve as “molecular guides,” taking proteins where they need to be in the cell, while other types of RNA serve as “molecular adhesives,” holding certain proteins to other RNA molecules or to DNA.

Now even though the last two jobs I mentioned are not directly related to protein production, they still involve proteins. So is it safe to say that while RNA performs several functions in the cell, all of them are related to proteins in some way? I might have answered “yes” if a student had asked me that question just a few weeks ago. However, a new paper in Nature Medicine has found a function for some microRNAs that has nothing to do with proteins. Some microRNAs serve as radiation detectors.2

Continue reading “It’s Amazing What RNA Can Do!”

Move Over, Kindle. This Scientist Stored His Book on DNA!

DNA stores information more efficiently than any human technology. (montage of art from Kevin Spear and the public domain)
Everyone has heard of DNA, but many don’t appreciate its marvelous design. It stores all the information an organism needs to make proteins, regulate how they are made, and control how they are used. It does this by coding biological information in sequences of four nucleotide bases: adenine (A), thymine (T), guanine (G), and cytosine (C). The nucleotide bases link to one another in order to hold DNA’s familiar double-helix structure together. A can only link to T, and C can only link to G. As a result, the two linking nucleotide bases are often called a base pair. DNA’s ingenious design allows it to store information in these base pairs more efficiently than any piece of human technology that has ever been devised.
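
If you like to see rules spelled out, here is a tiny Python sketch of the pairing rule (illustrative only; it ignores the fact that the partner strand also runs in the opposite direction):

```python
# The pairing rule that holds the double helix together:
# A always pairs with T, and C always pairs with G.
# Given one strand, the sequence of the other strand is completely determined.

PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def paired_bases(strand):
    """Return the bases that pair with the given strand, position by position."""
    return "".join(PAIR[base] for base in strand)

print(paired_bases("ATCG"))   # TAGC
```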

What you might not realize is that pretty much any information can be stored in DNA. While the information necessary for life involves the production, use, and regulation of proteins, DNA is such a wonderfully-designed storage system that it can efficiently store almost any kind of data. A scientist recently demonstrated this by storing his own book (which contained words, illustrations, and JavaScript code) in the form of DNA.1

The way he and his colleagues did this was very clever. They took the digital version of their book, which was 5.27 megabits of 1’s and 0’s, and used it as a template for producing strands of DNA. Every time there was a “1” in the digital version of the book, they added a guanine (G) or a thymine (T) to the DNA strand. Every time the digital version of the book had a “0,” they added an adenine (A) or a cytosine (C). Now unfortunately, human technology cannot come close to matching the incredible design of even the simplest living organism. As a result, while living organisms can produce DNA that is billions of base pairs long, human technology cannot. It can produce only short strands of DNA.2 So while a single-celled organism could have produced one strand of DNA that contained the entire book (and then some), the scientists had to use 54,898 small strands of DNA to store the entire book.
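
Here is a minimal Python sketch of the encoding idea described above: every “1” becomes a G or a T, every “0” becomes an A or a C, and the result is chopped into short fragments because only short strands can be synthesized. The fragment length and the way the sketch chooses between the two possible bases (simply alternating, which helps avoid long runs of one letter) are my own illustrative assumptions, not details taken from the actual study:

```python
# Sketch of the bit-to-base encoding described above (illustrative only).
# 1 -> G or T, 0 -> A or C. Having two choices per bit lets an encoder
# avoid long runs of a single base; here we simply alternate the choices.

ONE_BASES = ("G", "T")
ZERO_BASES = ("A", "C")

def bits_to_dna(bits, fragment_length=100):
    """Encode a string of '0'/'1' characters as a list of short DNA fragments."""
    bases = []
    for i, bit in enumerate(bits):
        choices = ONE_BASES if bit == "1" else ZERO_BASES
        bases.append(choices[i % 2])          # alternate between the two options
    sequence = "".join(bases)
    return [sequence[i:i + fragment_length]
            for i in range(0, len(sequence), fragment_length)]

print(bits_to_dna("1011000110", fragment_length=5))   # ['GCGTA', 'CATGC']
```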

Continue reading “Move Over, Kindle. This Scientist Stored His Book on DNA!”

There Seems To Be No Question About It: The Sun Affects Some Radioactive Half-Lives

NOTE: Long after this article was posted, new experimental data were published indicating that the effect is not real.

Almost three years ago, I wrote about how I had changed my mind on radioactive half-lives. Throughout my scientific education (from high school through graduate school), I had it pounded into my head that radioactive half-lives are constant. There is so much energy involved in radioactive decay that there is just no way to change the fundamental rate at which a given radioactive isotope decays without taking extreme measures that don’t generally occur in nature. This was considered a scientific fact, and to question it was just not reasonable.

Over the years, however, more and more evidence has been piling up indicating that this scientific “fact” is simply not true. Some of the most surprising evidence has come from Brookhaven National Laboratory (BNL) and a German lab known as the Physikalisch-Technische Bundesanstalt (PTB). The group at BNL had been studying the radioactive decay of silicon-32, and they noticed that the half-life of the decay periodically increased and decreased based on the time of year. The half-life was shortest in the winter and longest in the summer. The variations were very small, but they were measurable. The PTB group was studying the decay of radium-226, and they noticed the exact same behavior. In the end, both groups concluded that the half-lives of these two isotopes were changing slightly in direct correlation with the minor variation in the distance between the earth and the sun. Thus, they concluded that the sun was affecting the rate of decay in those two isotopes.1
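
To give a feel for what both groups were measuring, here is a small Python sketch of how a tiny seasonal modulation of the decay constant would show up in the expected count rate. All of the numbers in it (the half-life, the 0.1% modulation, and the timing) are made up for illustration; they are not the values reported by BNL or PTB:

```python
import math

# Illustrative sketch: if the decay constant varied slightly over the year,
# the expected count rate would rise and fall with it. Every number below
# (half-life, 0.1% amplitude, phase) is invented for illustration.

HALF_LIFE_YEARS = 150.0                     # illustrative half-life
LAMBDA_0 = math.log(2) / HALF_LIFE_YEARS    # baseline decay constant (per year)
AMPLITUDE = 0.001                           # assumed 0.1% seasonal modulation

def decay_constant(t_years):
    """Decay constant with a small annual wobble (t = 0 when earth is closest to the sun)."""
    return LAMBDA_0 * (1.0 + AMPLITUDE * math.cos(2.0 * math.pi * t_years))

def expected_count_rate(n_atoms, t_years):
    """Expected number of decays per year from n_atoms at time t."""
    return n_atoms * decay_constant(t_years)

n = 1.0e20
print(expected_count_rate(n, 0.0))   # early January (closest to the sun): slightly higher
print(expected_count_rate(n, 0.5))   # half a year later: slightly lower
```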

This conclusion was bolstered by a fortunate coincidence in which the BNL group was measuring the radioactive decay of manganese-54 before, during, and after the solar flare that occurred on December 13, 2006. They noticed that the half-life of that isotope’s radioactive decay increased more than a day before the solar flare occurred. In addition, the behavior repeated itself on December 17, when another solar flare occurred.2 Based on these two papers, it seemed obvious that the sun was exerting some influence over the half-lives of at least some radioactive isotopes.

Of course, others tried to replicate these results, and they weren’t always successful. A group at the University of California, Berkeley analyzed their data for several different radioactive isotopes but saw no correlation between the isotopes’ half-lives and the seasons.3 However, a reanalysis of the same data seemed to show some variation correlated with the distance between the earth and the sun, although it was much weaker than what was seen by BNL and PTB. The authors of the reanalysis suggested that perhaps the influence of the sun is different for different isotopes. Since different isotopes have different half-lives, it makes sense that they would respond differently to an outside influence such as the sun.4

Well, some new data have come to light, and as far as I can tell, they confirm that at least for some radioactive isotopes, the sun is affecting the value of their half-lives.

Continue reading “There Seems To Be No Question About It: The Sun Affects Some Radioactive Half-Lives”

Stone-Age Animation

When you flip this thaumatrope back and forth, it looks like the flowers are in the vase. (public domain image)
It’s sad to see how evolutionary thinking causes so many misconceptions in the realm of science. For example, evolutionary thinking has produced the idea that “stone age” people were primitive and barbaric. Of course, as is the case with most evolution-inspired ideas, this one doesn’t stand up in light of the evidence. The more research is done, the more we learn that “stone age” people had an advanced culture all their own.1 A recent finding that I just read about in Science News adds more evidence that there was nothing “primitive” about ancient people.

The article starts out like this:2

By about 30,000 years ago, Europeans were using cartoonlike techniques to give the impression that lions and other wild beasts were charging across cave walls, two French investigators find. Artists created graphic stories in caves and illusions of moving animals on rotating bone disks…

While it’s very interesting that ancient artists were painting scenes that produced the impression of motion, the thing that really caught my eye was the part about the rotating bone disks. The article has three pictures that show how one of them worked (you can see them here), and when I saw those pictures, I immediately recognized it as a thaumatrope. However, according to everything I have read, the thaumatrope was invented in 1825. For example, here is how Ray Zone puts it in his book, Stereoscopic Cinema and the Origins of 3-D Film, 1838-1952:3

The fundamental principle behind the movies is persistence of vision, when a visual impression remains briefly in the brain after it has been withdrawn. This principle was demonstrated in 1825 with an optical toy called the “Thaumatrope,” invented by Dr. John Ayrton Paris.

Obviously, Mr. Zone is off by a few years!

Continue reading “Stone-Age Animation”

Animal Magnetism

Brown trout like this one return to the stream in which they hatched in order to spawn. (Click for credit)
Many species of fish, such as the brown trout pictured on the left, hatch in streams and then travel away from those streams in order to mature. However, when it is time to reproduce, they end up navigating back to the same stream in which they hatched so they can spawn there. How do they accomplish this? How do they know where they are and which way to swim in order to get back to that special stream? Based on behavioral studies, scientists have thought that these fish are able to sense the earth’s magnetic field and use it as an aid in their navigation. However, the specific source of this magnetic field sense has been elusive…until now.

A recent study has shed a lot of light on this magnetic sense, at least for trout (and presumably other similar fish, like salmon). The authors of the study set out to determine what gives the trout their magnetic sense, and they developed a rather ingenious method to aid them in their search. First, they took tissue samples from the trout’s nasal passages, because previous studies indicated that there was magnetite (a mineral that reacts strongly to magnetic fields) in those tissues.1 Then, they put cells from the tissues under a microscope and exposed the cells to a rotating magnetic field. In response, some of the cells rotated with the field.2 You can actually see a video of this happening here! Just click on the links to download the movies.

This is a very simple, very sensitive method for finding the cells responsible for the trout’s magnetic sense. As you can see from the video, the cells that are sensitive to the rotating magnetic field are smaller than the other cells in the tissue. Also, the authors found that only 1 in 10,000 cells in the nasal tissue has a magnetic sense. No wonder these cells hadn’t been found until now! Of course, as the authors studied the cells more closely, they found evidence of thoughtful design.

Continue reading “Animal Magnetism”

Another Example of Three-Way Mutualism. Is This Just the Tip of the Iceberg?

A white-spotted pufferfish in a seagrass bed (click for credit)

Over two years ago, I wrote about an interesting three-way mutualistic relationship between a virus, a fungus, and a plant. Less than a year later, I wrote about how people are actually walking ecosystems, participating in a huge number of mutualistic relationships with many different species of bacteria. Last night, while reading the scientific literature, I ran across another example of a three-way mutualistic relationship, and it is just as fascinating!

This three-way relationship starts with seagrasses. Coral reefs are the “stars” of the marine world, but seagrass communities can be considered its “workhorses.” While they cover only 0.2% of the ocean floor, they produce more biomass than the entire Amazonian rainforest!1 Why are they such workhorses? Because they form a wide variety of marine ecosystems that serve as nurseries for many developing fishes and as homes for a wide variety of sea creatures, including turtles, manatees, shrimp, clams, and sea stars. Because of their amazing ability to support such ecosystems, seagrasses have been studied by marine biologists for some time. However, there has always been a nagging mystery associated with them.

The roots of seagrasses trap sediments, which form a rich mud that is often several feet deep. The mud is rich because it contains all manner of decaying organic matter, and that organic matter decays because bacteria decompose it. One of the byproducts of this bacterial decomposition is sulfide, and if that sulfide were allowed to build up to high concentrations, it would end up harming the seagrasses themselves. However, it never does, and no one had proposed a satisfactory explanation for why.

Certainly, the seagrasses transport oxygen to the mud through their roots, and that oxygen can turn the sulfide into sulfate, which is harmless to the seagrasses. However, detailed studies show that the sulfide produced by the resident bacteria accumulates far faster than it can be removed by the oxygen that is added to the mud through the seagrasses’ roots, especially during warm seasons.2 Thus, there must be some other way that sulfide is being removed from the mud.
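
For reference, the chemistry the roots make possible can be written in a simplified, textbook form (with the sulfide shown as hydrogen sulfide); this is the general oxidation reaction, not an equation taken from the study itself:

$$\mathrm{H_2S} + 2\,\mathrm{O_2} \;\longrightarrow\; \mathrm{SO_4^{2-}} + 2\,\mathrm{H^+}$$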

Marine biologists had no idea what this other way was…until now.

Continue reading “Another Example of Three-Way Mutualism. Is This Just the Tip of the Iceberg?”

Human Body Hair is Useless, Right? WRONG!

Many evolutionists think that body hair in humans is useless. The data say otherwise. (Click for credit)
One of the many reasons scientists are rejecting the hypothesis of evolution (see here and here, for example) is that many of its predictions have been falsified (see here, here, here, and here for even more examples). The more we learn about the world around us, the more clear it is that the predictions of the evolutionary hypothesis just don’t work. This is probably most apparent when it comes to “vestigial organs,” biological structures that are supposed to serve no real purpose; they are simply leftover vestiges of the evolutionary process. As Darwin himself said, they are like the silent letters of a word. They don’t serve a purpose in the word, but they do tell us about the word’s origin.

I have written about vestigial structures many times before (here, here, here, here, here, here, and here) because they are so popular among evolutionists. However, as the data clearly show, the evolutionists are simply wrong about them, and the more research that is done, the more clear it becomes. The latest example is human body hair. This has always been a favorite among evolutionists. Here are two evolutionary descriptions of human body hair. The first comes from a book specifically designed to help the struggling evolutionist in his attempt to convince people that his hypothesis has scientific merit.1

Humans, like all other organisms, are living museums, full of useless parts that are remnants of and lessons about our evolutionary histories (Chapter 6). Humans have more than 100 non-molecular vestigial structures. For example, our body hair has no known function.

The second comes from a textbook:2

Body hair is another functionless human trait. It seems to be an evolutionary relic of the fur that kept our distant ancestors warm (and that still warms our closest evolutionary relatives, the great apes).

As is the case with most evolutionary ideas, serious scientific research has shown that such statements are simply wrong.

Continue reading “Human Body Hair is Useless, Right? WRONG!”