Maybe the Sun Doesn’t Affect Radioactive Decay Rates

NOTE: Even more data have been published indicating that the sun does not affect radioactive decay rates.

In previous articles (see here, here, and here), I discussed some very interesting results that were coming from different labs. These results indicated that the half-lives of some radioactive isotopes vary with the seasons. They seemed to imply that the sun was somehow affecting the rate of radioactive decay here on earth. This made the results controversial, because there is nothing in known physics that could cause such an effect.

One criticism of the studies was that weather could be the real issue. Even though labs are climate-controlled, no such control is perfect. Humidity, pressure, and (to a lesser extent) temperature can all vary in a nuclear physics lab, so perhaps the variations seen were the result of how the changing weather was affecting the detectors. However, the authors used several techniques to take changing weather into account, and all those techniques indicated that it couldn’t explain the variations they saw. The authors were (and probably still are) convinced that they were seeing something real. I was as well. In fact, one of my posts was entitled, “There Seems To Be No Question About It: The Sun Affects Some Radioactive Half-Lives.”

Well, it looks like there is some question about it. Two scientists from Germany decided to measure the rate of radioactive decay of the same isotope (Chlorine-36) that was used in some of the previously mentioned studies. However, they decided to use a different experimental technique. The studies that showed variation in the rate of radioactive decay used a Geiger-Müller detector (often called a “Geiger counter”) to measure the radioactive decay. The two scientists who authored this study used a superior system based on liquid scintillation detectors. The authors contend (and I agree) that the response of such detectors is much easier to control than the response of Geiger-Müller detectors, so their results are more reliable. They also used a particular technique, called the triple-to-double coincidence ratio, that reduces the “noise” caused by background radiation. It is one of the standard techniques employed when doing detailed measurements of radioactive decay.
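To give you a feel for why the triple-to-double coincidence ratio is so useful, here is a rough sketch of the idea in Python. This is my own illustration, not the authors’ actual analysis: it assumes an idealized counter with three identical photomultiplier tubes, each of which registers a given decay independently with the same probability, and it ignores the beta spectrum, quenching, and dead time that a real analysis has to model. The point is simply that the ratio of triple to double coincidences lets you estimate the detection efficiency from the data themselves, rather than trusting a calibration that might drift with conditions in the lab.

```python
# Illustrative sketch only: a highly simplified version of the
# triple-to-double coincidence ratio (TDCR) idea. It assumes three
# identical photomultiplier tubes that each register a given decay
# independently with the same probability p, and it ignores the beta
# spectrum, quenching, and dead time that a real analysis must handle.

def tdcr_activity(double_counts, triple_counts, live_time_s):
    """Estimate source activity (decays per second) from coincidence counts.

    double_counts: events registered by at least two of the three tubes
    triple_counts: events registered by all three tubes
    live_time_s:   measurement live time in seconds
    """
    # In this idealized model, the measured ratio K = N_triple / N_double
    # equals p**3 / (3*p**2 - 2*p**3) = p / (3 - 2*p), which can be
    # inverted to recover the per-tube detection probability p.
    k = triple_counts / double_counts
    p = 3.0 * k / (1.0 + 2.0 * k)

    # Efficiency of the "at least two tubes" (double-coincidence) channel.
    eps_double = 3.0 * p**2 - 2.0 * p**3

    # Correct the observed double-coincidence rate for that efficiency.
    return (double_counts / live_time_s) / eps_double


# Example: 9.0e5 double and 7.2e5 triple coincidences in 1000 s of live time.
print(f"Estimated activity: {tdcr_activity(9.0e5, 7.2e5, 1000.0):.1f} Bq")
```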

What did they see with their more reliable system? They saw some small variations (much smaller than those seen in the previous studies), but unlike the variations in the other studies, these were not correlated with the season. They seemed to be completely random. This makes it look like the previous measurements that saw fluctuations correlated with the seasons were wrong. As the authors state:1

The work presented here, clearly proves that variations in the instrument readings in ³⁶Cl measurements are due to the experimental setup rather than due to a change in the ³⁶Cl decay rates.

Now, I think the authors are overstating their results. Most scientists understand that science cannot prove anything. However, their paper does make a very strong case that the previous studies weren’t seeing real fluctuations. Instead, they were seeing some artificial effect that was a result of their measurement technique.
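To see concretely what “correlated with the season” means here, consider how such data are typically examined: you take the deviations of the measured count rate from the expected exponential-decay curve and ask whether they contain a statistically significant component with a one-year period. The sketch below is my own illustration of that idea, not the procedure used in either set of papers; the data it fits are simulated, and the one-year period and the rough significance estimate are assumptions on my part.

```python
# Illustrative sketch: test simulated count-rate residuals for an annual
# (one-year period) modulation using a linear least-squares fit.
# This is not the analysis from either set of papers; the data are fake.
import numpy as np

rng = np.random.default_rng(0)

# Five years of weekly "residuals": fractional deviations of the measured
# count rate from the expected exponential-decay curve (pure noise here).
t = np.arange(0.0, 5 * 365.25, 7.0)
residuals = rng.normal(0.0, 5e-4, size=t.size)
# To mimic the disputed effect, uncomment to add a seasonal term:
# residuals += 1.5e-3 * np.cos(2 * np.pi * (t - 30.0) / 365.25)

# Fit offset + a*cos(w*t) + b*sin(w*t), where w corresponds to one year.
w = 2 * np.pi / 365.25
design = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
coeffs, *_ = np.linalg.lstsq(design, residuals, rcond=None)
amplitude = np.hypot(coeffs[1], coeffs[2])

# Rough one-sigma uncertainty on the amplitude from the fit's scatter.
scatter = np.std(residuals - design @ coeffs)
amp_err = scatter * np.sqrt(2.0 / t.size)

print(f"Fitted annual amplitude: {amplitude:.2e} (rough error {amp_err:.2e})")
if amplitude < 3 * amp_err:
    print("No statistically significant seasonal modulation at this level.")
else:
    print("Apparent seasonal modulation -- worth a closer look.")
```

The real analyses are far more careful than this, of course, but the underlying question is the same: is there a statistically significant one-year component in the data, or just noise?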

Is this the final word? I doubt it. I suppose the authors of the previous Chlorine-36 studies will try to replicate this experiment and see if it has some flaw that would cause it to miss periodic variations. That’s certainly possible. Nuclear physics (and nuclear chemistry) experiments are very tricky, and there are a lot of unforeseen factors that can affect them. However, there are two things we can say for sure:

1. Either the previous studies have come to the wrong conclusion, or this study has come to the wrong conclusion. Hopefully, more studies will be done to figure out which is which.

2. My previous post entitled, “There Seems To Be No Question About It: The Sun Affects Some Radioactive Half-Lives” has too strong a title. There is definitely a question now, and it is a big one!

REFERENCE

1. Karsten Kossert and Ole J. Nähle, “Long-term measurements of ³⁶Cl to investigate potential solar influence on the decay rate,” Astroparticle Physics 55:33–36, 2014.

6 thoughts on “Maybe the Sun Doesn’t Affect Radioactive Decay Rates”

  1. This reminds me of seasonal dark matter signals: in the thermal dark matter paradigm, the dark matter particles couple with nuclei in a large region of the available parameter space. Further, there’s supposed to be a lot more dark matter than normal matter, and astrophysicists approximate the dark matter as a gravitationally interacting gas. So if you imagine there are dark matter flows on the galactic scale, it’s possible that, since the earth orbits the sun, for half the year the earth moves with the dark matter, and for the other half it moves against the dark matter. And that would mean a variable collision rate in dark matter detectors across the year. I think some experiments claimed to have seen this a few years ago, but the observation was disputed not too long afterwards.

    Of course, a lot of this clearly draws from modern cosmology: “thermal dark matter” arises because dark matter is produced at some stage after the big bang, and then the universe cools enough that the dark matter interaction decouples from the normal matter. You also have to assume dark matter exists in the first place; I find dark matter bothersome philosophically, but I am willing to believe the gravitational analyses of galactic orbital velocity curves and motion in galactic clusters (plus this isn’t my expertise). But anyway: if there is dark matter, and it interacts with nuclei, perhaps moving through a dense “chunk” of dark matter would change nuclear decay rates. This way it’s not the sun itself but the earth’s orbit around the sun that’s producing the seasonal change. Not that this says anything about the experiments above; it just provides a theoretical mechanism for changing decay rates.

    Here’s a review paper about dark matter and seasonal variations: http://arxiv.org/abs/1209.3339
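    To put some rough numbers on that mechanism (these are only approximate, and not tied to any particular experiment): the sun moves through the galactic halo at something like 230 km/s, the earth orbits the sun at about 30 km/s, and roughly half of that orbital velocity projects onto the sun’s direction of motion, so the earth’s speed relative to the halo should rise and fall over the year, peaking around early June. A toy sketch:

    ```python
    # Toy model (rough, assumed numbers): the earth's speed through the
    # galactic dark matter halo over a year.
    import numpy as np

    V_SUN = 230.0      # km/s, sun relative to the halo (assumed)
    V_ORBIT = 30.0     # km/s, earth around the sun
    PROJECTION = 0.49  # cos of the ~60-degree tilt of the orbit (assumed)
    PEAK_DAY = 152     # day of year near June 2 (assumed)

    def earth_halo_speed(day_of_year):
        """Earth's speed relative to the halo, in km/s (toy model)."""
        phase = 2 * np.pi * (day_of_year - PEAK_DAY) / 365.25
        return V_SUN + V_ORBIT * PROJECTION * np.cos(phase)

    speeds = earth_halo_speed(np.arange(365))
    print(f"Max {speeds.max():.0f} km/s, min {speeds.min():.0f} km/s")
    # If the collision rate in a detector scales even roughly with this
    # speed, the event rate rises and falls with a one-year period,
    # peaking near June -- the kind of seasonal signal described above.
    ```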

    1. That’s incredibly interesting, Jake. I hadn’t thought of that, but it makes sense. It sort of reminds me of the famous Michelson-Morley experiment that looked for ether. It seems to me that actually seeing annual variations would produce a lot more evidence for dark matter.

      I agree that both dark matter and dark energy are difficult philosophically, the former less than the latter. To me, the more philosophically disturbing thing is when you put them together. In order to have enough dark matter and dark energy to make the Big Bang work, you need to assume that all we can detect makes up only about 4% of the universe. The other 96% is stuff we haven’t detected, at least not yet. While this isn’t beyond the realm of possibility, it seems, at best, a stretch.

    1. Thank you, Kevin. While you and I disagree on some things, we both agree that we should be searching for the truth. As such, we need to look at all the data, not just the data that support our positions.

  2. I greatly appreciate the way you work, Dr. Wile. I have seen a different YEC org (that shall remain nameless) abandon untenable arguments without ever acknowledging that it had done so. For example, that nameless org used to argue that astronomical red shift could be explained by cosmic dust. The claim was that the correlation between increasing distance (measured in millions of light-years) and the red shift was the product of greater amounts of dust over the distance, rather than greater relative speed with respect to Earth. And therefore, they concluded, the Big Bang cosmological model was not true and the universe is not 13.8 billion years old.

    However, that argument is based on a fundamental confusion between scattering (dust scatters blue light more strongly and lets the red pass through, reddening what we see) and red shift, which is detected as a shift in the emission lines themselves. The organization no longer makes this claim, but it never acknowledged the previous error.
