In case you haven’t read two of my previous posts (here and here), I am doing something I haven’t done for nearly 20 years – teaching a university-level chemistry course. The class has been going on since the last week of August, but starting this past Friday, the topic has been nuclear chemistry, which is the specialty in which I got my Ph.D. Obviously, then, it is near and dear to my heart. We are probably spending too much time on the subject, but I just can’t help it. We will be getting back to “normal” chemistry (which concentrates on electrons) soon enough. For now, I want the students to see the wonders of the nucleus!
Of course, the most reasonable subject with which to begin a discussion of nuclear chemistry is radiation. So I taught the students about the various modes of radioactive decay, why radioactive decay happens, etc. Then I tried to make the point that radiation is everywhere, and that’s okay, since our bodies are designed to deal with low doses of it. I then showed them a Geiger counter and a radioactive source. The source was labeled with the warning symbols you see above. Not surprisingly, when I put the source up to the Geiger counter, the students heard lots of clicking, because the source was emitting gamma rays.
Then I surprised them a bit. I put an old orange ceramic plate up to the Geiger counter, and it started clicking a lot more than it did with the source I had just used. That’s because the pretty orange color was made using uranium oxide, which is radioactive. It emits alpha particles, beta particles, and gamma rays. People ate off those plates for many years before it was determined that they shouldn’t be made anymore. I did the same thing with an old wristwatch. Once again, the Geiger counter went nuts, because the watch’s hands and numbers had been painted with a mixture of radium and zinc sulfide to make them glow. The radium also emits alpha particles, beta particles, and gamma rays. I then assured them that modern luminous paints aren’t radioactive.
The reason I am writing this blog entry, however, is because of a question one student asked me.
Once I had finished discussing the various modes of radioactive decay, I started talking about how quickly decay occurs. This brought me to the concept of half-life, which is the time it takes for half of a sample of radioactive atoms to decay. I used my typical line:
The half-life of carbon-14, for example, is 5,715 years.* So if I hold 100 grams of carbon-14 in my hand, after 5,715 years, only 50 grams will be left, because half will have decayed away. If I wait another 5,715 years, half of that will decay, so I will be left with only 25 grams.
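That repeated halving is just the exponential-decay law, N(t) = N0 · (1/2)^(t/t½). Here is a minimal Python sketch of the carbon-14 arithmetic (the function name is mine, and the 5,715-year value is just the one used in class):

```python
def mass_remaining(initial_grams: float, years: float, half_life: float = 5715.0) -> float:
    """Mass of a radioactive sample left after `years`, for a given half-life."""
    return initial_grams * 0.5 ** (years / half_life)

print(mass_remaining(100, 5715))   # 50.0 grams after one half-life
print(mass_remaining(100, 11430))  # 25.0 grams after two half-lives
```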
At that point a student stopped me and asked a very important question:
But how do we know the half-life is 5,715 years? We haven’t been measuring it for that long.
I told him that was an excellent question. I said there is an equation for radioactive decay as a function of time. If I measure carbon-14 decay over just a few years, I can vary the half-life in that equation until the equation’s results fit the data. It is then assumed that the half-life that fits the data must be the half-life of carbon-14. He then asked one more question:
Isn’t that kind of a crazy extrapolation?
In my mind, I wondered if he had been reading my blog. After all, a long time ago, I posted an article pointing out the extrapolations used in radioactive dating and how they are probably not justifiable.
I told the student that it is a pretty wild extrapolation, and it’s not clear that it’s justifiable. However, we have to start somewhere, so that’s what we do. Of course, the longer the half-life, the less likely it is that the extrapolation is justifiable, so I told him he was right to be skeptical of any really long half-lives, and he should be even more skeptical of the ones that are millions or billions of years long. I also told him that right now, those half-lives do a great job of explaining how the activity of these isotopes changes. Thus, they are good “constants” to use, even though it’s not clear that they have really been constant over such a long period of time.
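To make the fitting procedure concrete, here is a hedged sketch of how a half-life can be extracted from a short measurement window. The activity numbers are invented for illustration, and a real experiment would fit many data points, not two:

```python
import math

def estimate_half_life(activity_start: float, activity_end: float, dt_years: float) -> float:
    """Solve A_end = A_start * exp(-ln(2) * dt / t_half) for t_half."""
    return dt_years * math.log(2) / math.log(activity_start / activity_end)

# Hypothetical: a counter sees activity fall from 1000.0 to 999.394 (arbitrary
# units) over 5 years; the fitted half-life comes out near 5,715 years.
print(estimate_half_life(1000.0, 999.394, 5.0))
```

Notice how sensitive the result is: changing the second reading from 999.394 to 999.3 (a shift of less than 0.01%) moves the estimated half-life by several hundred years, which is exactly why extrapolating a few years of data across millennia deserves scrutiny.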
The reason I am writing about this exchange is that it gave me hope. It’s nice to know that there are at least some students out there who are willing to challenge the conventional wisdom and think for themselves. Science needs leaders like that. Perhaps this student will become one.
*The half-life is actually 5,730 +/- 40 years. The book uses 5,715 years, however, so that’s what I use in class.
Okay, I tried, but I can’t resist.
“Yes, it is a crazy extrapolation. Now, there is some strong evidence from nuclear physics to suggest that radioactive decay rates shouldn’t be able to change, but science IS changing all the time, so that evidence by itself isn’t enough to justify treating that assumption as absolute.
“Simply assuming that short-term measurements accurately reflect long-term decay rates isn’t enough. We need to be able to connect this to something else, something we can physically measure. Thankfully, we can do exactly that. For example, we know from history that Richard III died in 1485, and we discovered his remains in 2012. If we measure the carbon-14 in his bones and get a modern-carbon percentage corresponding reasonably closely to 520-530 years of decay at that half-life, then we can be fairly confident that our half-life is accurate. We can do this with many more artifacts of known age from many different points in history, and each time we can gain more confidence that our half-life has remained steady.
“Once we’ve gained confidence using artifacts of known age, we can apply it to artifacts of unknown age, or even to other processes entirely. For example, if dated coral growth rates remain steady during known history, we can extend this past known history into prehistoric times. By comparing many different age records like this and seeing where they line up, we can get a better idea of which things were constant over time and which things changed over time.”
Thanks for your comment, David, but there are two fatal flaws in your argument:
1. Your example of Richard III is cherry-picked. In fact, the vast majority of artifacts of known age do not give the proper results. If you take artifacts from the 1700s and correct them for the 300+ years of decay that have taken place, they end up giving you an answer that is about 15% higher than modern carbon content. When you choose artifacts from the 1150s and correct for the decay that has taken place since then, you get an answer that is more than 20% lower than the modern carbon content. This is why carbon dating must be done with a calibration curve. Now, of course, the most likely explanation for this is not a change in carbon-14’s half-life. Instead, it is most likely the result of a change in the amount of carbon-14 being produced in the atmosphere. Nevertheless, what you said only works for a very small number of cherry-picked artifacts.
2. Even if the Richard III example were more common, it still is a wild extrapolation. After all, Richard III died about 530 years ago. That’s less than 10% of carbon-14’s measured half-life. So even if your example were not cherry-picked, it would be taking 530 years of data and extrapolating them over 5,000+ years. That’s still pretty wild.
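For what it’s worth, the arithmetic behind that point is easy to check; a quick sketch using the book’s 5,715-year value:

```python
def fraction_remaining(years: float, half_life: float = 5715.0) -> float:
    """Fraction of a carbon-14 sample left after the given number of years."""
    return 0.5 ** (years / half_life)

print(fraction_remaining(530))   # ~0.938: only about 6% of the C-14 has decayed
print(530 / 5715)                # ~0.093: well under a tenth of one half-life
```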
Then, of course, there is the problem of carbon-14 being in places standard geology says it shouldn’t be, like dinosaur fossils and other fossils (as well as diamonds) that are supposedly more than a million years old (see here and here).
As far as using coral growths to help calibrate the carbon-14 system, that doesn’t work, either. Coral growth rates are incredibly variable and depend on many environmental conditions, including depth, water temperature, sediment levels in the water, and more.
Just to note that the half-life of carbon-14 (5,730 +/- 40 years) is about the same as the time that has passed since creation. According to the Hebrew calendar, the current year is 5774, although it is not the same as the Western calendar.
Science can get complicated, that’s for sure. Calibration adjustment, contamination, and a host of other factors all need to be accounted for in the use of radiocarbon dating. The important thing, I think, is that we don’t simply stop at “Yeah, that’s a crazy extrapolation, and it’s complicated, so let’s not worry about it any more.” That’s not how science makes progress.
If there was one thing the Exploring Creation series taught me, it was to keep investigating, keep searching, keep trying to understand. Ask questions that prompt inquiry, not ones that discourage it.
Questions like these: Do artifacts of the same known age give comparable uncalibrated radiocarbon ages? Let’s try building a calibration curve using one set of known-age artifacts and see if it fits a separate, independent set of known-age artifacts. Does experimental error change disproportionately as we go further back in time, or does the variation average out? If the extrapolation is invalid, where? And can we find out why? If we come up with calibration curves from known-age artifacts, can they be matched to the growth layers in coral so we don’t have to rely on pure assumption? Are the radiocarbon levels in geologically ancient sources close to the detection threshold of radiocarbon, or do any of them match artifacts of known age? What trends can we find and what conclusions can we draw?
And, perhaps most importantly: Have other scientists already asked these questions? If so, can we reproduce their results? Can we find ways to improve on their results?
Never stop asking questions.
David, I am not sure where you got the impression that I was suggesting “Yeah, that’s a crazy extrapolation, and it’s complicated, so let’s not worry about it any more.” First, I said nothing about it being complicated (even though it is), and I most certainly didn’t even imply that we should not worry about it anymore.
I agree that we should never stop asking questions, including the question, “Is such a wild extrapolation justifiable?” The more we learn about nuclear processes, the more the answer to that question seems to be, “No.”
Off of the topic of carbon-14 dating, but probably related to the topic of Nuclear Chemistry –
Dr. Wile, on your advice given to me in this forum a few months ago I have purchased and am reading “The Quantum World: Quantum Physics for Everyone”, by Kenneth Ford. Thank you for that recommendation.
The book has been helpful to me and I am learning a lot. I needed a book that was a basic entry-level text. However, I do have a little more mathematics background (Engineering degree) than what that book assumes, it seems.
For example, I was left dissatisfied by the discussion about how Max Planck discovered that radiation is emitted in quantum lumps of energy. There is a graph of frequency vs. intensity, Planck creates a formula that fits that curve, and then “after a few weeks of the most strenuous labor of my life”, out pops E = hf and the theory that radiant energy is emitted in quantum lumps. Huh?
How does one get from that intensity vs. frequency curve to quantum lumps of energy? Can you point me to something that might explain that process a little better? I’m not afraid of an integral sign or two. 🙂
Thanks for your contributions to my continuing education. I’m trying to provide a good example of life-long learning to my children and to the students of my co-op homeschool Chemistry class, whom I am teaching using your “Exploring Creation With Chemistry: 2nd Edition” textbook.
David, perhaps this discussion will help. In essence, Planck had to assume something in order to derive the function that fit the intensity versus frequency curve. He had to assume that a vibrating charge was limited in what energies it could release. He had to restrict the allowed energies to certain multiples of a constant, which we now call “Planck’s constant.” Of course, those multiples of Planck’s constant represent quanta of energy.
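For readers who want the gist without chasing the link, here is a compressed sketch of the standard textbook route (h is Planck’s constant, k is Boltzmann’s constant, T is temperature); it is an outline, not Planck’s full derivation:

```latex
% Restrict each oscillator of frequency f to discrete energies:
\[
  E_n = n h f, \qquad n = 0, 1, 2, \ldots
\]
% Averaging over these levels with Boltzmann weights e^{-E_n/kT}
% (a geometric series) gives the mean oscillator energy:
\[
  \langle E \rangle
    = \frac{\sum_{n=0}^{\infty} n h f \, e^{-n h f / kT}}
           {\sum_{n=0}^{\infty} e^{-n h f / kT}}
    = \frac{h f}{e^{h f / kT} - 1}.
\]
% Unlike the classical equipartition value kT, this vanishes at high
% frequency, avoiding the ultraviolet catastrophe. Multiplying by the
% density of modes gives Planck's radiation law:
\[
  u(f, T) = \frac{8 \pi f^{2}}{c^{3}} \cdot \frac{h f}{e^{h f / kT} - 1}.
\]
```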
It has been some time since I read that book, so I don’t know whether Ford covers this, but Planck’s idea did not stay a mere mathematical trick for long. In 1905, Einstein showed that the photoelectric effect could only be explained by assuming that light itself is not a continuous wave. Instead, it is made of discrete packets (quanta) called “photons.” In other words, Einstein took Planck’s assumption about the allowed energies of vibrating charges and argued that the radiation they emit is genuinely quantized as well.
Einstein’s Nobel Prize actually came from his explanation of the photoelectric effect, not from his later work in relativity.
So the assumption Planck made in order to fit the curve – that energy comes in discrete quanta – turned out to describe how nature really works, and it became the foundation of quantum theory.
Thank you for that link, Dr. Wile, that was exactly what I wanted! “The Derivation of the Planck Formula”. I shall work on digesting it.
Glad I could be of help, David.
As I recall, the half-life theory is consistent (mathematically) with the idea that each and every radioactive decay is a random event with a fixed probability. If so, then I guess the question is whether or not that probability can ever change over time. Why would it be able to change, since it would seem to be inherent in the element?
John, you are correct that radioactive decay is consistent with the idea that every event is based on a probability. We cannot know when a given nucleus will decay, but we can tell you how many nuclei in a large sample will decay within a set amount of time. However, the idea that the probability of decay is “inherent in the element” is not really correct. We know that the half-life can change, given the right circumstances. For example, Bosch and his colleagues found that the half-life of beta decay can be decreased by a factor of a billion if you strip all the electrons away from the beta-emitter. There is now strong evidence that beta decay half-lives change by a small amount because of the changing distance between the earth and the sun. There is also evidence that in the right environment, alpha decay half-lives can be decreased by more than a factor of 10^14. Obviously, then, the probability of decay is not “inherent in the element.” It depends on other factors, at least some of which we do not understand.
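John’s point about fixed per-atom probabilities can be illustrated with a small simulation (the per-year probability here is chosen to mimic carbon-14, and all names are mine): when every atom decays independently with the same fixed chance per year, a well-defined half-life emerges on its own.

```python
import math
import random

def simulate_half_life(n_atoms: int = 100_000, p_per_year: float = 1.21e-4,
                       seed: int = 0) -> int:
    """Each atom decays independently with probability p_per_year each year.
    Draw every atom's decay year from the matching geometric distribution and
    return the median -- the year by which half of the sample has decayed."""
    random.seed(seed)
    log_survive = math.log(1.0 - p_per_year)
    lifetimes = sorted(
        math.floor(math.log(1.0 - random.random()) / log_survive) + 1
        for _ in range(n_atoms)
    )
    return lifetimes[n_atoms // 2]

# With p = ln(2)/5730 per year, the simulated half-life lands near 5,730 years.
print(simulate_half_life())
```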
Have you ever addressed the problem of short-lived radionuclides? I would be interested in a nuclear chemist’s perspective.
Thanks for the question, John. I have not directly addressed that issue. However, I was dragged, kicking and screaming, to the conclusion that radioactive half-lives have not been constant over time (see here, here, and here, for example). If that’s true (and I can’t say definitively that it is – the evidence just points in that direction), the absence of short-lived radioactive isotopes that don’t have a replenishing mechanism isn’t a valid reason to think the earth is ancient.