NOTE: Long after this article was published, new experimental data came out indicating that the effect is not real.
One of the foundational assumptions of the various radioactive dating techniques that attempt to measure the age of things is that the half-life of a radioactive isotope does not change significantly over the time period being measured. Even though we have been measuring half-lives for only about 100 years, those who want to believe that the earth is billions of years old are forced to assume that over those billions of years, the half-lives of various radioactive isotopes have not changed significantly. As I have pointed out before, this is a terrible extrapolation, and a careful scientist should avoid using it unless there are very good reasons to believe it is justified. As more and more data come in, it becomes more and more clear that there are very good reasons to believe it is not justified.
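To see how directly a radiometric age depends on the assumed half-life, here is a minimal sketch of the standard closed-system age equation, t = (1/λ)·ln(1 + D/P), with illustrative numbers (they are not taken from any particular dating study, and the branching complications of real potassium-argon dating are ignored):

```python
import math

def radiometric_age(daughter_parent_ratio, half_life_years):
    """Standard closed-system age equation: t = (1/lambda) * ln(1 + D/P)."""
    decay_constant = math.log(2) / half_life_years
    return math.log(1 + daughter_parent_ratio) / decay_constant

# Illustrative: a sample with a daughter-to-parent ratio of 0.1, dated with
# potassium-40's accepted half-life of about 1.25 billion years.
print(radiometric_age(0.1, 1.25e9))        # ~172 million years
# Halve the assumed half-life and the computed age halves with it:
print(radiometric_age(0.1, 1.25e9 / 2))    # ~86 million years
```

The computed age is directly proportional to the assumed half-life, which is why the constancy assumption carries so much weight.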
I previously discussed data that indicate radioactive half-lives are not constant, but over the past year and a half, some new information has come out that lends more strength to the claim. As I discussed previously, two independent labs noticed that the decay rates of certain isotopes were influenced by the distance between the earth and the sun. They produced a paper in 2008 reporting on their findings: the rates at which these isotopes decayed varied in step with the changing distance between the earth and the sun.1 Many in the scientific community blamed this on experimental errors, such as environmental changes or problems with the detectors that were monitoring the isotopes. Studies published over the past year and a half, however, seem to have ruled out these possibilities and have lent even more credence to the idea that the sun influences radioactive decay rates.
The scientists have ruled out factors such as temperature, atmospheric pressure, and humidity as causing the observed effects,2 and they have also shown it is not the result of changes in background radiation.3 As a result, it looks like the original data are not the result of experimental error. The detected variation in radioactive decay is, most likely, a real effect.
In addition to this, the same investigators have shown that the rate of radioactive decay of manganese-54 decreased during the solar flare of December 13, 2006.4 Indeed, as the solar flare caused a monitoring detector to register high levels of protons and X-rays coming from the sun, the detectors monitoring the radioactive decay of manganese-54 showed a distinct reduction in rate. In addition, another “blip” of increased protons and X-rays coming from the sun four days later coincided with another decrease in the rate of manganese-54 decay.
So there seems to be a real interaction between the sun and the rate at which certain radioactive isotopes decay. Since these are relatively new results, not a lot of extra investigation has been done, so we don’t know how widespread this interaction is. It could be between the sun and all radioactive isotopes, or it could be between the sun and just a few radioactive isotopes.
Now don’t get all excited. These results are measurable, but they are small. If you are looking for something that shows “billions of years” worth of radioactive decay could occur in a few thousand years, these results will not please you. The observed effect is simply too small to affect radioactive dating techniques in any significant way. There is evidence that what appears to be “hundreds of millions of years” of radioactive decay can occur very quickly, but as you might expect, that evidence is indirect. The experiments I am discussing here are direct evidence that at least some radioactive half-lives can change at least somewhat.
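To put a number on “too small,” here is a quick sketch that assumes a 0.1%-amplitude annual wobble in the decay constant (the order of magnitude reported for the seasonal effect); the decay constant itself is an arbitrary illustration:

```python
import math

lam0 = 1.0e-4        # nominal decay constant in 1/yr (illustrative)
amplitude = 0.001    # assumed 0.1% seasonal wobble

def decayed_fraction(years, steps_per_year=12):
    """Numerically integrate the time-varying decay constant."""
    total, dt = 0.0, 1.0 / steps_per_year
    for i in range(int(years * steps_per_year)):
        t = i * dt
        total += lam0 * (1 + amplitude * math.sin(2 * math.pi * t)) * dt
    return 1 - math.exp(-total)

print(decayed_fraction(10_000))        # with the wobble
print(1 - math.exp(-lam0 * 10_000))    # with a constant rate
```

The two results agree essentially exactly, because a periodic wobble cancels over whole cycles; only a sustained shift in the decay rate could move a computed age.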
While these studies don’t directly affect the validity of radioactive dating techniques, they nevertheless tell us something very important: we clearly do not understand the process of radioactive decay as well as some would have you believe. Until the original study was published, no one would have believed that the earth-sun distance had anything to do with the rate of radioactive decay here on earth. Even after the study was published, many refused to believe it, blaming environmental issues or detector problems. Now that the follow-up studies have produced even more evidence for the reality of the effect, nuclear scientists still have no real idea of what could possibly be causing it. Sure, there are some guesses being bandied about, but none of them make sense given what we know about radioactive decay.
So what’s my point? It’s quite simple. We have only been studying radioactivity for about 100 years. In order to believe radioactive dating techniques that tell us the earth is billions of years old, you have to assume that radioactive half-lives have stayed constant for billions of years. “Don’t worry,” old-earth scientists have assured us, “we KNOW that radioactive decay rates cannot fluctuate based on any natural processes, so there is no reason to doubt the results of these radioactive dating techniques.”
Well…now we KNOW that at least one natural process (something having to do with the sun) does affect radioactive decay rates. What we don’t know is the extent or possible magnitude of the process. We also don’t know what other surprises lie in store for us when it comes to radioactive decay. The extrapolation involved in radioactive dating techniques can only be justified if we KNOW that radioactive half-lives cannot change significantly as a result of natural processes. Since these data make it clear that we DON’T KNOW the extent to which natural processes can cause changes to radioactive half-lives, it is obvious that the extrapolation is simply not justified.
REFERENCES
1. Jere H. Jenkins, et al., “Evidence for Correlations between Nuclear Decay Rates and Earth-Sun Distance,” arXiv preprint 0808.3283v1 [astro-ph], August 25, 2008.
2. D. Javorsek II, et al., “Power spectrum analyses of nuclear decay rates,” Astroparticle Physics, 34:173-178, 2010.
3. Jere H. Jenkins, et al., “Analysis of environmental influences in nuclear half-life measurements exhibiting time-dependent decay rates,” Nuclear Instruments and Methods in Physics Research A, 620:332-342, 2010.
4. Jere H. Jenkins and Ephraim Fischbach, “Perturbation of nuclear decay rates during the solar flare of 2006 December 13,” Astroparticle Physics, 31:407-411, 2009.
Interesting article, thanks for pointing it out.
Have any experiments ever been carried out to measure the half-lives of radioactive isotopes in space?
Jacob, that is a GREAT question. There have never been any DIRECT measurements of the radioactive decay of isotopes in space, but there has been an INDIRECT measurement, and it saw no effect. The Cassini spacecraft is powered by a Radioisotope Thermoelectric Generator that runs on the decay of plutonium-238. Peter Cooper analyzed the power output of the generator while Cassini traveled from Venus to Mars. You would expect that if the decay of plutonium-238 varied significantly over the trip, the power output of the generator would vary as well. He saw no variation except what you would expect from the decay of the isotope. Those results, of course, seem to contradict the results that are described in the post.
Of course, it is not clear how sensitive this technique is, since it is indirect. Thus, it could be that variations occurred, but the technique was not good enough to detect them. Also, since you are looking at the power generated by a device that is POWERED by radioactive decay, you wonder if the technical details of the device might “smooth out” any variations, much like a television is not affected by fluctuations in its power source. Thus, direct measurements are in order, whenever they can be arranged.
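For a sense of scale, here is a rough sketch of how big a power wiggle a 0.1% annual modulation of the decay constant would produce in such a generator. The half-life of plutonium-238 (about 87.7 years) is standard; the beginning-of-life power figure is a placeholder:

```python
import math

half_life = 87.7                  # years, plutonium-238
lam0 = math.log(2) / half_life    # nominal decay constant, 1/yr
p0 = 880.0                        # beginning-of-life thermal power in watts (placeholder)

def rtg_power(t_years, modulation=0.0):
    # Instantaneous power tracks the activity lambda(t) * N(t); for a small
    # periodic modulation, N(t) stays essentially the unmodulated exponential.
    lam = lam0 * (1 + modulation * math.sin(2 * math.pi * t_years))
    return p0 * (lam / lam0) * math.exp(-lam0 * t_years)

for t in (0.25, 0.75):   # peaks of the assumed annual modulation
    base, wiggled = rtg_power(t), rtg_power(t, modulation=0.001)
    print(f"t = {t} yr: fractional power shift = {(wiggled - base) / base:+.1e}")
```

The signal being sought is only about 0.1% of the generator’s output, so thermal inertia or telemetry resolution could plausibly hide it.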
How significant is this? At best it casts doubt on the integrity of the method, but it won’t cast enough doubt to move from 4.5 billion years to 6-20 thousand years.
The zircon research worked with the assumption that most radiation used for dating was released during the creation process while God was pinging particles about like snooker balls; the failing of science there would be in ignoring the possibility of the supernatural.
Josiah, it is significant because we truly don’t know the extent of this. Based on CURRENT measurements, the effects are small. However, based on CURRENT measurements, we would have thought there would be no effect at all. Until we understand this phenomenon more, we have no idea what its effect will be on radiometric dating. Even if we find its effect to be minor, it at least tells us that our understanding of radioactivity is incomplete. Thus, it would be foolhardy to make such an enormous extrapolation given the fact that we don’t completely understand the process.
The zircon research really doesn’t make an assumption about how or when the accelerated radioactive decay occurred. It merely shows that the only way we can understand the amount of helium in zircons (with our current level of knowledge) is to assume that a large acceleration in radioactive decay happened a few thousand years ago.
Surely “the only way” is a bit strong; one hypothesis consistent with the data doesn’t rule out the possibility of any other valid theories.
Other than that I take your point. In realizing that distance from the sun has this (comparatively minor) effect, true scientists should be forced to acknowledge that other outside factors (e.g. size of the universe) might similarly affect the radioactive constant of materials, effects which might knock orders of magnitude off radiodating figures.
Josiah, no other hypothesis that is consistent with what we know has been proposed to explain the helium data. So right now, it is the only way to understand the data. There could be another way to explain them, but I haven’t seen it yet.
Hi Dr. Jay,
Hope everything’s going well. As I said before, great blog! I have benefited greatly from the information outlined here. Thanks for all the great stuff!
I have one question about extrapolating data. You mentioned in your post “One Reason I am Skeptical of an Old Earth” (and confirmed in this post) that one must “only extrapolate the data when the range over which the extrapolation occurs is SMALL compared to the range over which the data are measured.” I understand how this principle would call radiometric dating into question, since the constancy of decay rates is extrapolated beyond this range. But wouldn’t this SAME principle also discredit some of the arguments you use in support of a young earth, such as helium in the atmosphere and sodium in the ocean, since we have only been measuring those processes for a few hundred years at most?
Thanks for your help!
Hi Dan. Thanks for your excellent comment! You are absolutely right. We must be skeptical of ALL extrapolations that go well beyond the range of the data – that includes young-earth extrapolations. However, there are three things to consider:
1. The more evidence you have for the consistency of the measured quantities, the more you can believe the extrapolation.
2. The longer the measured quantities have been studied, the more you can believe the extrapolation.
3. This is the big one: The smaller the ratio of extrapolation to measurement, the more you can believe the extrapolation.
For some young-earth arguments (like helium in the atmosphere), the consistency of the measured quantities is easier to believe, because helium doesn’t react with anything unless the situation is exotic (near absolute zero, for example). Thus, it is unlikely we are missing a means by which helium leaves the atmosphere. For other young-earth arguments (like the amount of sodium in the ocean), the quantities under investigation have been measured far longer than radioactive half-lives. For all young-earth arguments, the amount of extrapolation is much smaller (thousands of years), so the risk of extrapolation is less.
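To make the third point concrete, here is the arithmetic of the two extrapolation ratios in question (round numbers):

```python
measured_years = 100        # roughly how long radioactive decay has been studied
old_earth_span = 4.5e9      # years over which half-lives are assumed constant
young_earth_span = 6_000    # a typical young-earth timescale

print(old_earth_span / measured_years)      # a 45,000,000-fold extrapolation
print(young_earth_span / measured_years)    # a 60-fold extrapolation
```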
Now please understand what I am saying here. These facts just make the young-earth extrapolations easier to believe. That doesn’t mean they are correct. It just means I am less skeptical of them.
I like your first point the best, especially in light of the evidence that radioactive decay isn’t constant.
Your second point is fine and has some merit, but surely quantity and quality of research are more significant than simply time. For example, CERN’s findings with its particle accelerators, highly trained physicists, &c are probably very trustworthy (though their interpretation is open to debate). That’s not something I’d want to say for a few centuries of dabbling alchemists with nothing more complicated to use than flasks and flames in the dark ages.
Your third point must surely be somewhat fallacious when the entire debate hinges on time.
Josiah, you are quite right about the second point. Quality must factor in as well. However, the third point is not fallacious. The fact is that the scientific method favors short extrapolations over long ones. As a result, if you want to make a long extrapolation, your evidence to justify it must be stronger than the evidence needed to justify a shorter extrapolation. This is obvious, since the finite length of time that we have to measure anything means science is better at short-term predictions than it is at long-term predictions.
That short extrapolations are favoured over long ones should be self-evident. The problem I see with that argument in this case, however, is that it can be used to discredit effectively any evidence for the old-earth position. Specific evidence can be used to demonstrate that a result is untrustworthy, such as helium in zircons for radiometric dating. But surely the mere fact that 4 billion years is too long a span to work with can’t in itself be considered counterevidence to any claim showing that the earth is on the order of 4 billion years old.
No, Josiah. Remember what I say in my original post:
So…if there were overwhelming evidence for the constancy of radioactive half-lives, the extrapolation could be made. However, as this current post shows, there is no such evidence. In fact, there is evidence against it.
So…trying to show that the earth is 4 billion years old SHOULD be hard – it requires a ridiculous amount of extrapolation. However, if you can overwhelm me with evidence to support the extrapolation, then I could believe it.
Regarding the sodium argument for a young earth, evolutionists have claimed that the study done by Austin and Humphreys underestimates the output rates by not accounting for diatomaceous earth and other removal processes of sodium. Is this a valid criticism, and do you think that it would significantly change the upper limit for the age of the oceans?
Good question, Dan. The problem with using processes like diatomaceous earth as a way of removing salt from the ocean is that the deposits occur only in certain epochs. For example, in Morton’s non-peer-reviewed letter that is linked from the talkorigins website, he discusses diatomaceous deposits in the Miocene epoch. That might help get rid of sodium in the ocean for the 18 million years or so during that epoch, but that’s it. You would have to find such deposits scattered throughout the geological record in order to come up with sodium balance that way. Thus, diatomaceous deposits might push the upper limit of the age of the seas to 80 or 100 million years (instead of the 62 million that comes from Austin and Humphreys), but that is still orders of magnitude less than the age that evolutionists want you to believe.
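For readers who want to see where a figure like 62 million years comes from, here is a sketch of the underlying residence-time arithmetic, T_max = M / (input − output). The sodium mass follows from standard seawater chemistry; the input and output rates below are placeholders chosen to reproduce the published figure, not Austin and Humphreys’ actual tabulated values:

```python
sodium_in_ocean = 1.5e19   # kg: ~1.08% sodium by mass in ~1.4e21 kg of seawater
input_rate = 4.6e11        # kg/yr flowing in (placeholder)
output_rate = 2.2e11       # kg/yr removed (placeholder)

print(sodium_in_ocean / (input_rate - output_rate))   # ~6.3e7, i.e. ~62 million years

# Adding an extra removal process (e.g., episodic diatomaceous deposits)
# raises the ceiling, but only modestly:
extra_output = 0.6e11      # kg/yr (placeholder)
print(sodium_in_ocean / (input_rate - output_rate - extra_output))   # ~83 million years
```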
Note that Humphreys has replied to Morton’s contention about finding a way out of the sea salt dilemma:
http://creation.com/salty-seas-evidence-for-a-young-earth#albite
What about the argument that sodium is picked and trusted because it supports the YEC model while other ions give very different results and are discarded? E.g., aluminium ions would place it on the order of centuries, such that Columbus sailed the ocean blue to find the new world before either the ocean or the old world were created.
The reason sodium is picked is because it is a (reasonably) abundant ion in the oceans, and it is fairly easy to trace inputs and outputs. The aluminum ion has a concentration on the order of a few parts per BILLION. Thus, even tiny errors in measurement will lead to HUGE errors in determining the age of the ocean. Sodium ion concentration is on the order of 1%, so small errors will not affect the conclusions much. The chloride ion concentration is larger (almost 2%), but there are MANY sources of input and output for chloride ions, so it is harder to trace. Sodium has the largest concentration next to that, and since it has fewer inputs and outputs, it is the most reasonable ion to study.
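The sensitivity point can be put in numbers: the same absolute measurement uncertainty is a vastly larger relative error at parts-per-billion concentrations than at percent-level ones. The uncertainty value below is an arbitrary illustration:

```python
sodium_ppb = 1.08e7        # sodium: ~1.08% of seawater by mass
aluminium_ppb = 2.0        # aluminium: a few parts per billion (illustrative)
uncertainty_ppb = 0.5      # the same absolute uncertainty applied to both

print(uncertainty_ppb / sodium_ppb)       # ~5e-8: a negligible relative error
print(uncertainty_ppb / aluminium_ppb)    # 0.25: a 25% error in anything derived
```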
Very useful to know. Thanks.
Thanks for that explanation, Dr. Wile. The source that talkorigins uses mentions some other removal mechanisms that Austin and Humphreys supposedly underestimated. Even without considering the diatomaceous earth, some of these other processes (if valid) would seem to make a significant difference in the output. The revised calculation for the “alteration of basalt by hydrothermal vents” would be most significant, as there would be 14 × 10^10 kg/yr of sodium removed, instead of 0.4 × 10^10 kg/yr, as Austin and Humphreys stated. Also, an additional 3.6 × 10^10 kg/yr of sodium is supposedly removed through “collective small outputs.” How significant a role do you think these processes play in the equation for the maximum age of the ocean?
Dan, the “collective small outputs” number is nothing but wishful thinking. Essentially, there have been no successful measurements for those outputs. To me, that means they are too small to be measured, so that means they are VERY close to zero. To Morton, it could mean ANYTHING, so he wants to put some numbers in there.
So essentially, he puts fake numbers in for each process. Well of course, if you put fake numbers in, you can get things to balance. I prefer to follow the data. If it can’t be measured, then you have to take it to be zero until it is measured.
The “alteration of basalt by hydrothermal vents” is the issue covered in the link I gave previously. Morton thinks that albite formation (which takes up sodium) removes it from the ocean. What he doesn’t consider is that the albite decomposes once it moves from the vent (or the vent stops spewing), and when it does, it releases the same amount of sodium it sequestered. Thus, there is no net sodium loss due to albite.
That makes sense, thanks. I did read the link you provided, but I wasn’t careful enough to make the connection between the albite and the basalt alteration by hydrothermal vents, especially since Morton separated them into two different categories, and said that albite formation contributes 0 kg/yr. My bad for missing that!
Oops, I see that my comment may have sounded sarcastic. Just to clarify, no sarcasm was intended!
Dan, I certainly didn’t see it as sarcastic.