The bones that make up the skeletons of animals and people are a marvel of engineering. As one materials scientist put it:1
…bone properties are a list of apparent contradictions, strong but not brittle, rigid but flexible, light-weight but solid enough to support tissues, mechanically strong but porous, stable but capable of remodeling, etc.
More than three years ago, I posted an article about research that helps to explain why bones are so strong. The calcium mineral that makes up a significant fraction of the bone, hydroxyapatite, is arranged in crystals that are only about three billionths of a meter long. If the crystals were much longer than that, the strength of the resulting bone tissue would be significantly lower. What restricts the size of the crystals? According to the previous research, the tiny crystals are surrounded by molecules of citrate. It was thought that the citrate latches onto the outside of the crystal, stopping it from growing.
Some very interesting new research from the University of Cambridge and University College London indicates that this is, indeed, what happens. However, it also indicates that citrate does much more than simply restrict the size of the crystals. It also helps to produce a cushion that allows bones to flex rather than break when they are under stress.
In the 1880s, an Italian scientist named Angelo Mosso built a balance that tried to measure the net flow of blood in the body. A man was put on the balance and asked to clear his mind. The balance was then adjusted so that it stayed horizontal. The man was then asked to read something, and invariably, the balance tilted towards the head, indicating that the head end had become heavier. According to Mosso, when the man read a newspaper, the balance would tilt a bit, but when he read a page from a mathematics manual, it would tilt more. One man was asked to read a letter from an angry creditor, and that tipped the balance more than anything else!
These results led Mosso to conclude that when the brain is actively working, it gets more blood from the circulatory system. The more it has to work (to process difficult information or strong emotions), the more blood it gets. When I originally read about Mosso’s work years ago, it reminded me of Dr. Duncan MacDougall’s experiments in which he tried to weigh the soul. If you have never heard of Dr. MacDougall’s work, he tried to measure the change in weight of six terminally ill patients at the moment they died. He then did the same procedure on dogs. He claimed that while the people lost weight when they died, the dogs did not. As a result, he claimed to have demonstrated that the human soul has weight.
Of course, there are all sorts of problems with Dr. MacDougall’s work, and when I read about Mosso’s work, I rashly put it in the same category. While I am more than willing to believe that the brain needs more nutrients when it is hard at work, I have a hard time believing that its blood flow patterns would be changed dramatically enough to be measured by a balance. Fortunately, other scientists weren’t so rash. Dr. David T. Field and Laura A. Inman decided to replicate Mosso’s experiments, and the results surprised me.
Pacific salmon are fascinating to study, because their lifecycle is so interesting. They hatch in freshwater streams, at which point they are called alevin. Although they have hatched, they still have a yolk sac upon which they feed. Once they have absorbed the yolk sac, they are called fry, and they begin feeding on the plankton in the stream. They eventually mature into parr, which are also called fingerlings. After about 12-18 months in freshwater, they move to the brackish waters of estuaries, ecosystems where freshwater rivers meet the ocean. At this point, they are usually called smolts. After a few months, they venture out into the ocean, where they will spend several years growing.
The amazing part, of course, is that after spending several years in the ocean, they return to the same freshwater stream where they hatched to spawn another generation. From a scientific point of view, one of the most important questions you can ask about this lifecycle is, “After spending years in the ocean, how do the salmon know the way back to the freshwater stream in which they hatched?” It makes sense that while they are fry and parr, they get a good sense of the mix of chemicals that make up their “home stream,” but they obviously can’t follow that trail of chemicals from the ocean! So how do they get from the ocean to the correct estuary so that they can get back to the stream in which they hatched?
About a year ago, I discussed a study that gave a partial answer to that question. It showed that sockeye salmon use the earth’s magnetic field as a “map” that leads them to the proper estuary. The study suggested the salmon had other means of navigation at their disposal, but the magnetic field was a very important tool in the fish’s repertoire. How do the salmon acquire this map? In the previous study, it was suggested that the map is imprinted in the salmon’s brain as it is traveling from the estuary to the open ocean.
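To give you a feel for how a magnetic “map” could work even in principle, here is a toy sketch I put together. It is purely my own illustration (the field and every number in it are invented), it is not the researchers’ model, and it sets aside the question of how the fish acquires its target values, which is exactly what the follow-up study addresses. The idea is simply that if the fish holds onto the field values associated with home, it can keep moving in whatever direction shrinks the difference between what it currently senses and what it remembers.

```python
# Toy illustration of navigating by a magnetic "map" (entirely my own sketch
# with an invented field; not the researchers' model). The fish holds target
# field values for its home estuary (however it acquires them) and keeps
# stepping in whatever direction shrinks the mismatch with those values.

def field(x, y):
    """Made-up field: intensity and inclination vary smoothly with position."""
    intensity = 50.0 + 0.5 * y      # pretend intensity increases to the north
    inclination = 60.0 + 0.3 * x    # pretend inclination increases to the east
    return intensity, inclination

def mismatch(pos, target):
    intensity, inclination = field(*pos)
    return (intensity - target[0]) ** 2 + (inclination - target[1]) ** 2

home = (2.0, 5.0)                   # hypothetical home estuary
target = field(*home)               # field values associated with home

pos = (40.0, -30.0)                 # starting point somewhere in the ocean
for _ in range(200):
    x, y = pos
    # Consider staying put or taking a small step in one of four directions,
    # and keep whichever option leaves the smallest mismatch with the target.
    options = [(x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    pos = min(options, key=lambda p: mismatch(p, target))

print("final position:", pos, " home estuary:", home)
```

Because the invented field varies smoothly from place to place, the mismatch keeps shrinking as the “fish” closes in on home, which is all a map really needs to do.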
Well, the same research team has done a follow-up study, and they have decided that this suggestion is probably not correct. Instead, the real story is more complex and much more interesting!
Many people know that bacteria have developed resistance to commonly used antibiotics. Indeed, it is a big problem in medicine, and it has caused many health-care providers to call for doctors to prescribe antibiotics only when they are necessary. The Centers for Disease Control calls this “antibiotic stewardship” and thinks it will improve medical care throughout the country.1 I have written about antibiotic resistance before (see here and here), because some evolutionists try to cite it in support of the idea that novel, useful genes can be produced by evolutionary processes. Of course, the more we have studied the phenomenon, the more we have seen that this is just not the case.
There are essentially two ways that a bacterium develops resistance to an antibiotic. One way is to have a mutation that confers the resistance. For example, a bacterium can become resistant to streptomycin if a mutation causes a defect in the bacterium’s protein-making factory, which is called the ribosome. That defect keeps streptomycin from binding to the ribosome, which makes streptomycin ineffective against the bacterium. However, it also makes the ribosome significantly less efficient at its job.2 So in the end, rather than producing something novel (like a new gene that fights the antibiotic), the mutation just degrades a gene that already existed. While this is good for a bacterium exposed to streptomycin, it doesn’t provide any evidence that novel, useful genes can be produced by evolutionary processes.
There is, however, a second way that a bacterium can develop resistance to an antibiotic: It can get genes that fight the antibiotic from another bacterium. Bacteria hold many genes on tiny, circular pieces of DNA called plasmids. Two bacteria can come together in a process called conjugation, in which one passes a copy of a plasmid to the other, allowing bacteria to “swap” DNA. If a bacterium has a gene (or a set of genes) that allows it to resist an antibiotic, it can pass those genes to others in the population, ensuring their survival.
Of course, the natural question one must ask is, “Where did those antibiotic-resistance genes come from in the first place?” Many evolutionists want you to believe that evolution produced those genes in response to the development of antibiotics. After all, antibiotics didn’t exist until 1941, when penicillin was tested in animals and then people. Why would antibiotic-resistance genes exist before the antibiotics?
When I scuba dive, I love finding tubeworms like the one pictured above. As adults, these worms build tubes made out of calcium carbonate to house their delicate bodies. They feed by extending feathery appendages called radioles, which catch nutrients that are floating in the water. On the left side of the picture above, you see a tubeworm with its radioles extended. However, if you scare a tubeworm (I do so by flicking my fingers at it), the worm will pull its radioles back into its tube for protection. At that point, you see only the opening of the tube, which is shown on the right side of the picture above.
An adult tubeworm spends its life attached to a hard surface, such as a piece of coral, a rock, or even the hull of a ship. However, when a tubeworm egg hatches, the larva that emerges is free-swimming and looks nothing like the adult. In order to mature, it must find a surface to which it can attach itself. It has long been known that tubeworm larvae tend to attach themselves to surfaces that contain specific bacteria, but no one understood how the larvae know where the bacteria are.
Nicholas J. Shikuma and his colleagues have done a study that helps us understand this amazing process. They concentrated on a specific species of tubeworm, Hydroides elegans, which is a common nuisance because it tends to stick to the hulls of ships (that’s not the species pictured above). They already knew that these tubeworms tend to settle where a specific bacterium, Pseudoalteromonas luteoviolacea, is found. As a result, they studied the bacterium in detail, and they found something rather incredible.
Approximately a year ago, I wrote about the bacteria in human breast milk. While that may sound like a bad thing, it is actually a very good thing. Over the years, scientists have begun to realize just how important the bacteria that live in and on our bodies are (see here, here, here, here, and here), and the bacteria in breast milk allow an infant to be populated with these beneficial microbes as early as possible. Not surprisingly, as scientists have continued to study breast milk, they have been amazed at just how much of it is devoted to establishing a good relationship between these bacteria and the infant who is consuming the milk.
For example, research over the years has shown that human breast milk contains chemicals called oligosaccharides. These molecules, such as the one pictured above, contain a small number (usually 3-9) of simple sugars strung together. Because oligosaccharides are composed of sugars, you might think they are there to feed the baby who is consuming the milk, but that’s not correct. The baby doesn’t have the enzymes necessary to digest them. So what are they there for? According to a review article in Science News:1
These oligosaccharides serve as sustenance for an elite class of microbes known to promote a healthy gut, while less desirable bacteria lack the machinery needed to digest them.
In the end, then, breast milk doesn’t just give a baby the bacteria he or she needs. It also includes nutrition that can be used only by those bacteria, so as to encourage them to stay with the baby! Indeed, this was recently demonstrated in a study in which the authors spiked either infant formula or bottled breast milk with two strains of beneficial bacteria. After observing the premature babies who received the concoctions for several weeks, they found that the ones who had been fed bacteria-spiked formula did not have nearly as many of the beneficial microbes in their intestines as those who had been fed bacteria-spiked breast milk.2
Since the realization that DNA is the molecule that passes traits from parents to offspring, it has been thought that the only way to inherit a trait is through the genes. If offspring have traits that are similar to their parents, it is because they inherited similar genes. If they have a trait that is different from their parents, it’s because the genes are different. Over the past decade, however, that view has been moderated to some extent. There seems to be something other than genes at play when it comes to inheritance. The study of heritable traits that do not involve the genes themselves is called epigenetics, and it is a fascinating field of study.
While there have been a lot of studies trying to figure out whether traits really can be inherited through epigenetics, many of them have been inconclusive or have suffered from experimental design flaws. However, I recently ran across a study that I think produces the most convincing argument yet that at least some new traits can be passed from parents to offspring (and beyond) without any change in the genes themselves.
In the experiment, the authors started by exposing male mice to acetophenone, a chemical that has a fruity smell. When the mice were exposed to the chemical, they were also given a mild electrical shock. As a result, the mice began to associate the shock with the smell. After a while, the mice would shudder when they smelled acetophenone, even if they weren’t given a shock. The authors then bred those males with females who had never been exposed to acetophenone. During the entire time the offspring from this mating were raised, neither the offspring nor the parents were exposed to acetophenone. Once the offspring matured, they were then exposed to acetophenone, and they shuddered, even though no shock was given to them. When those offspring were bred, their offspring also exhibited the same behavior. Offspring bred from males who had not been conditioned with acetophenone and shock did not shudder when exposed to the chemical.1
Now, of course, there are several possible explanations for these results, and had the authors stopped there, the paper would not be nearly as convincing as it is. However, the authors did several follow-up experiments that seemed to rule out any explanation other than inheritance.
Naturalistic evolutionists are forced to look at the world very simply. After all, they think there is no plan or design in nature. Instead, they believe that random events filtered by natural selection are responsible for all the marvels we see today. Because of this unscientific way of thinking, they tend to look for simple processes to explain amazingly complex interactions in nature. Cellular communication is a perfect example of how this simplistic way of looking at things can produce serious errors.
In order for the different cells of an organism to be able to work together, they must communicate with one another. One of the most well-studied versions of cellular communication is called endocrine communication, and the insulin-producing cells in the islets of the pancreas (illustrated above) provide an example of how it works. These cells produce insulin, which is then released into the bloodstream. When cells in the liver, skeletal muscles, and fat tissues are exposed to this chemical, they absorb glucose (a simple sugar) from the blood. By controlling the release of insulin from the pancreatic islets, then, the body can control how much glucose is in the blood.
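Just to make the feedback idea concrete, here is a minimal toy simulation I sketched. Every number and rate constant in it is invented for illustration; it is not a physiological model. Insulin secretion rises when blood glucose sits above a setpoint, and the circulating insulin in turn drives glucose uptake, pulling the level back down.

```python
# Toy sketch of endocrine-style feedback (all numbers invented, not physiology):
# the "islets" secrete insulin in proportion to how far glucose sits above a
# setpoint, and target tissues clear glucose in proportion to circulating insulin.

def simulate(hours=6.0, dt=0.05):
    glucose = 180.0     # mg/dL, elevated after a meal (illustrative value)
    insulin = 0.0       # arbitrary units
    setpoint = 90.0     # rough fasting level (illustrative value)
    steps = round(hours / dt)
    print_every = round(1.0 / dt)
    for step in range(steps + 1):
        if step % print_every == 0:
            print(f"t = {step * dt:4.1f} h   glucose = {glucose:6.1f}   insulin = {insulin:5.2f}")
        # Pancreatic islets: secrete insulin when glucose exceeds the setpoint.
        secretion = 0.05 * max(glucose - setpoint, 0.0)
        # Insulin enters the bloodstream but is also cleared over time.
        insulin += (secretion - 0.5 * insulin) * dt
        # Liver, muscle, and fat: take up glucose in proportion to insulin.
        glucose -= 2.0 * insulin * dt

simulate()
```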
Now, of course, this is a great design for cellular communication that needs to affect a wide array of cells in many different places. It makes the release of the chemicals easy to control but their effect long-ranging. As a result, when the body needs widespread communication in different cells, endocrine communication is used. However, there are often times when cells need to communicate with other cells that are nearby. This is called paracrine communication, and biologists have taught (as fact) for many, many years that paracrine communication happens in essentially the same way as endocrine communication. For example, one of the volumes of the Handbook of Cell Signaling says:1
Paracrine interactions induce signaling activities that occur from cell to cell within a given tissue or organ, rather than through the general circulation. This takes place as locally produced hormones or other small signaling molecules exit their cell of origin, and then, by diffusion or local circulation, act only regionally on other cells of a different type within that tissue. (emphasis mine)
In other words, a cell releases some signaling chemicals, and those chemicals simply have to find their way to their targets via diffusion or some other local means of movement. Of course, such a signaling scheme is rather inefficient for communication with nearby cells, and new research indicates that it’s not the way paracrine communication is done.
As I have written previously, several lines of scientific evidence point to the fact that even while they are in the womb, babies are fully human. Far from being a “mass of flesh” that hasn’t reached the status of personhood, a baby in the womb has all the genetic characteristics of a human being as well as some of the social and mental characteristics of a human being. Three new studies demonstrate that they also have some communication characteristics of a human being.
In one study, for example, 12 pregnant women played a CD loudly five times each week during the last trimester of their pregnancy. It contained excerpts from several different melodies, and there was talking in between the excerpts. However, the important melody on the CD was “Twinkle, Twinkle, Little Star,” which was repeated 3 times. The babies developing in these mothers’ wombs heard this melody 138 to 192 times before they were born. The mothers then destroyed the CD once their child was born, so that there was no chance the baby could hear the contents of the CD afterwards.
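As a rough check on that exposure figure (this is just my own back-of-the-envelope reading of the description above, not a calculation from the paper): five playings a week, with the melody appearing three times per playing, works out to about fifteen hearings a week, so 138 to 192 total hearings would correspond to roughly nine to thirteen weeks of the third trimester.

```python
# Back-of-the-envelope check (my own reading of the exposure description,
# not the paper's arithmetic): 5 playings per week x 3 repetitions per
# playing = 15 hearings per week.
hearings_per_week = 5 * 3
for total in (138, 192):
    print(total, "hearings is roughly", round(total / hearings_per_week, 1), "weeks of exposure")
```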
Shortly after birth and again at the ripe old age of 4 months, the babies were played a modified version of the “Twinkle, Twinkle, Little Star” melody nine times. In this modified version, 12.5% of the notes from the original melody were randomly changed to a single note – “B.” While the modified melody was playing, an EEG recorded the electrical activity in each baby’s brain. The researchers also chose 12 babies whose mothers had not been given the CD and did the same thing to them. The babies who had heard the CD in the womb had significantly higher electrical brain activity when the modified notes were played, indicating that these notes were unfamiliar to them. For the babies whose mothers had not been given the CD, the electrical activity in the brain was the same during both the original notes and the changed notes.1 This gives strong evidence that babies can learn the music they hear while they are in the womb.
The picture you see above is an iconic image in science. Does it look a bit odd to you? That’s probably because it’s usually rotated 90 degrees when it is shown in most resources. After all, it is a picture of the earth rising over the horizon of the moon. Shouldn’t the moon’s surface be at the bottom of the photo, with the earth at the top? It should be if it were taken from the surface of the moon, but it wasn’t. It was taken from a spacecraft that was orbiting the moon. The photographer was in the spacecraft, so he didn’t see it from the same perspective as he would have had he been standing on the moon.
While I have seen this photograph many times and have even put it in a textbook, I got to appreciate it in a whole new way thanks to a team at NASA. By correlating the pictures of the moon’s surface that an automatic camera aboard the spacecraft was taking during its 1968 orbit with data from the modern Lunar Reconnaissance Orbiter, they were able to determine exactly when the picture was taken and where the spacecraft was at the time. They then made an animation in which those events were synchronized with the audio recorded during the December 24, 1968 orbit. The result (shown below) is an exciting re-creation of how this iconic image was captured.
As you watch the video, note how it demonstrates that this iconic photo is not the result of careful planning. Instead, the spacecraft just happened to be making a maneuver at the right time, and the astronauts quickly understood what an amazing photo-op they had. It’s especially exciting when the astronauts are afraid they missed taking a color version of the picture because they couldn’t find the color film quickly enough!