Dr. Peter Boghossian and the Decline of Higher Education

Dr. Peter Boghossian, who has demonstrated the insanity that exists in some academic circles (click for credit)
Dr. Peter Boghossian is an interesting individual. He holds a doctorate in education (Ed.D.) and specializes in making philosophy more accessible to the average person. I first encountered him when I read his book, A Manual for Creating Atheists. I was not impressed. There are atheists who impress me (see here, here, and here, for example), but based on his book, Dr. Boghossian was not one of them. In short, the book is full of misconceptions about the nature of faith, doesn’t engage well with the arguments of Christian philosophers, and exaggerates the strength of atheist arguments.

Nevertheless, I found myself being impressed by him a few years later, when he and two colleagues decided to demonstrate the insanity that exists in some academic disciplines. They did this by following a methodology to write “scholarly papers” that promoted nonsense, but it was nonsense which was perfectly welcome in certain fields of study. As they state:

Our paper-writing methodology always followed a specific pattern: it started with an idea that spoke to our epistemological or ethical concerns with the field and then sought to bend the existing scholarship to support it. The goal was always to use what the existing literature offered to get some little bit of lunacy or depravity to be acceptable at the highest levels of intellectual respectability within the field. Therefore, each paper began with something absurd or deeply unethical (or both) that we wanted to forward or conclude. We then made the existing peer-reviewed literature do our bidding in the attempt to get published in the academic canon.

Not surprisingly, their “scholarly papers” were accepted for publication in the prestigious academic journals of the disciplines they lampooned, and until the authors themselves came forward, no one in those fields thought there was anything remotely wrong with their work!

Well, it turns out that Dr. Boghossian is back in the news, and I am once again impressed by his actions. This time, he thinks an entire institution, Portland State University (where he had been teaching), has gone off the deep end. He wrote an open letter to the Provost of the university, stating that he had to resign, because the university

…has transformed a bastion of free inquiry into a Social Justice factory whose only inputs were race, gender, and victimhood and whose only outputs were grievance and division.

Students at Portland State are not being taught to think. Rather, they are being trained to mimic the moral certainty of ideologues. Faculty and administrators have abdicated the university’s truth-seeking mission and instead drive intolerance of divergent beliefs and opinions. This has created a culture of offense where students are now afraid to speak openly and honestly.

I strongly encourage you to read the entire letter, as it describes the insanity that is currently infecting many institutions that were, at one time, centers of higher learning. Now, they are craven organizations that have chosen to protect their students’ feelings at the price of their students’ education.

Unfortunately, I think this can be said of a great many “universities” found in the United States. Instead of being institutions in which open inquiry is encouraged, they have become indoctrination centers where many views are considered completely off-limits and free inquiry has been sacrificed at the altar of people’s feelings. If you are thinking of sending your child to a university, please spend some time exploring its views on academic freedom. To give you an idea of what that means, this article has a list of “do’s” and “don’ts” when it comes to evaluating whether an institution is a university or a Social Justice Warrior Indoctrination Center.

It is unfortunate that parents must worry about this kind of nonsense today, but it is not unexpected. When a society ceases to believe in God, it will believe in virtually anything, including the ravings of anti-science, anti-reason lunatics.

The Great Barrier Reef Is Doing Well, Despite What the Alarmists Have Said

A portion of the Great Barrier Reef (click for credit)

Over the years, the headlines have been dire. Just two years ago, USA Today claimed:

Great Barrier Reef can’t heal after global warming damage

Similar news outlets ran similar headlines, because a study showed that the number of new corals being produced was significantly lower than normal.

The problem, of course, is that such studies have no context whatsoever. Yes, the Great Barrier Reef went through some tough years. Lots of corals experienced bleaching, a process by which they expel the algae that live inside them. Since the algae provide a significant amount of food to the corals, bleaching can be deadly. However, it’s a process we still don’t understand well, and there is some evidence that it is actually a mechanism designed to help the coral adapt to changing conditions. In addition, bleaching events have been happening for more than 300 years, and evidence suggests they happened more frequently in the late 1800s, the mid 1700s, and the late 1600s than they do today.

Given these facts, it’s not surprising that the doom and gloom headlines about the Great Barrier Reef have been shown to be completely wrong. The Australian Government, in collaboration with the Australian Institute of Marine Science, has been monitoring the health of the reef since 1985. They have collected an enormous amount of data, but I just want to focus on Figures 3, 4, and 5, which show the amount of coral cover in the Northern, Central, and Southern Great Barrier Reef.

The blue lines represent the percent of the area that is covered in coral, and the light-blue shading represents the uncertainty in that value. Notice that in each part of the Great Barrier Reef, there are times when the coral coverage is high, and there are times when it is low. Sometimes, the coral coverage is very low. Nevertheless, right now, coverage is nearly as high as it has ever been. Contrary to what USA Today claimed just two short years ago, then, the Great Barrier Reef has recovered from its low point over the past few decades.

Does that mean there’s nothing wrong with the Great Barrier Reef? Of course not! With less than 40 years of direct study, we really don’t know what the overall status of the reef should be, so we have no idea whether what we are seeing now is good, bad, or indifferent. My point is simply that it’s too easy for scientists and the media to look at a short-term trend (like 2010-2016 in the Northern Great Barrier Reef) and draw irrational conclusions. In the end, that not only hurts the cause of science, but it also makes it nearly impossible for us to figure out how best to care for the creation God has given us.

COVID-19 Vaccine Seems to Protect Against Variants Better Than Previous Infection

Transmission electron micrograph of SARS-CoV-2 virus particles, isolated from a patient (click for credit)
Since I wrote about the Pfizer vaccine being shown to be effective against COVID-19, people have asked me whether those who were previously infected should also get the vaccine. While I tell everyone to talk with a physician who is familiar with their medical history, as a scientist, I didn’t think vaccination after infection was beneficial. Immunity from infection is certainly sufficient for the vast majority of pathogenic viruses, so it made sense that the same would be true for the virus that causes COVID-19. In addition, two studies (here and here) indicated that immunity from infection was slightly more effective than immunity from the vaccine. However, a new study has just been published that has changed my mind on the issue.

The study looked at 246 patients who had been infected with COVID-19 in 2020 and then had a laboratory-confirmed reinfection in May or June of this year. It then matched them with 492 people who were roughly the same gender, roughly the same age, had been infected with COVID-19 at roughly the same time in 2020, and had not been reinfected. It compared the vaccination status of the 246 participants and the 492 controls. It found that those who had been infected but had not been vaccinated were 2.34 times (that number is poorly known, see below) more likely to be reinfected with COVID-19 than those who had been infected and were fully vaccinated (either two doses of an mRNA vaccine or one dose of Johnson & Johnson). Thus, the study indicates that vaccination after infection provides more protection than infection alone.

Why does this study contradict the two previous studies? Most likely, it’s because of the variants that have appeared. The earlier studies looked at reinfections that occurred before the variants had become prevalent in the U.S. The reinfections in this new study were more recent, so they probably contained more variants than the previous studies. Indeed, both of the previous studies list variants as an issue that they don’t address well. This interpretation is supported by another study that indicates vaccine-produced antibodies are significantly more effective against variants of the virus than those produced from infection.

Now I do have to include a couple of important caveats. First, this is a pretty small study; as a result, the statistical error is large. The study says people who were infected and not fully vaccinated were 2.34 times more likely to get reinfected. However, when the small size of the study is taken into account, the 95% confidence interval for that number stretches from 1.58 to 3.47. That’s a pretty wide range, indicating that the risk is not well known. However, despite the wide range, the conclusion that the vaccine offers more protection to those who had been previously infected is reasonable.
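The reported range is not symmetric around 2.34 because confidence intervals for ratios like this are computed on a logarithmic scale. Here is a minimal sketch of that arithmetic, using only the numbers quoted above (the back-calculated standard error is my own illustration, not a figure from the study):

```python
import math

# Point estimate and 95% confidence interval reported for the risk ratio
estimate, lower, upper = 2.34, 1.58, 3.47

# On a log scale, the interval is symmetric about the estimate
below = math.log(estimate) - math.log(lower)
above = math.log(upper) - math.log(estimate)
print(round(below, 2), round(above, 2))  # both ≈ 0.39

# Consistent with the standard formula exp(log-estimate ± 1.96 * SE)
se = (math.log(upper) - math.log(lower)) / (2 * 1.96)
print(round(se, 2))  # ≈ 0.2
```

That log-scale symmetry is why the upper end of the interval (3.47) sits farther from 2.34 than the lower end (1.58) does.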

The study lists five limitations, each of which needs to be considered as additional caveats. However, I will highlight only one of them:

…persons who have been vaccinated are possibly less likely to get tested. Therefore, the association of reinfection and lack of vaccination might be overestimated.

I agree with that. I know many people (my wife included) who got tested for COVID-19 before being vaccinated, but have not been tested since being vaccinated. Since the study is small, it is possible that this issue could radically change the result. Thus, more studies need to be done.

Based on the information that exists right now, however, I do think that those who were infected with COVID-19 would probably be better protected if they got vaccinated. We don’t know that for sure, but the data are certainly leaning in that direction. Of course, no one should get any medical treatment (vaccine or otherwise) without getting the advice of a physician who is familiar with his or her medical history. Everything you put in your body (including food) comes with a risk, and your medical history helps a physician determine that risk.

New Archaeological Find Supports the Biblical Account of Gideon

Dutch painter Pieter Aertsen’s imagining of Gideon tearing down the altar to Baal
In the sixth chapter of the book of Judges, we read that the children of Israel were being oppressed by the Midianites. A young man named Gideon was harvesting wheat and trying to hide from the Midianites when an angel of the Lord visited him and told him that he could save the children of Israel. As a first step, Gideon had to tear down the altar to Baal that his fellow Israelites had constructed. Gideon did as the angel commanded, and when the people heard what he had done, they wanted to execute him. His father (Joash) stopped the execution with some simple logic:

But Joash said to all who stood against him, “Will you contend for Baal, or will you save him? Whoever will contend for him shall be put to death by morning. If he is a god, let him contend for himself, since someone has torn down his altar!” Therefore on that day he named Gideon Jerubbaal, that is to say, “Let Baal contend against him,” because he had torn down his altar.

Gideon went on to lead a small army of untrained men against the Midianites, saving the children of Israel from them. The Gideons International, an organization of which I am a member, takes its name from this inspiring Biblical account.

Recently, archaeologists have uncovered evidence that supports one detail of this account. While digging in a small site 3 km west of the large archaeological site/Israeli National Park called Tel Lachish, they uncovered pottery fragments that are thought to have been made in the late twelfth or early eleventh century BC, which is the same time period in which Gideon’s story happened.

While the pottery is far from complete, the recovered pieces contain a painted inscription that reads, “Yrb‘l,” which means Jerubba‘al, the name Joash gave his son. In discussing their find in the context of other relevant finds, the archaeologists say:

The chronological correlation between the biblical tradition and ancient Judean inscriptions indicates that the biblical text preserves authentic Judean onomastic traditions.

In other words, this find and others indicate that the names mentioned in the Bible are not fictional. They are real names that were used in the relevant geographic and historical settings.

How Preconceptions Influence Science

Two different scientific reconstructions of the average energy the earth has received from the sun since 1800. (graphs edited from one of the papers being discussed, original shown below)

In its Climate Change 2014 Synthesis Report Summary for Policymakers, the Intergovernmental Panel on Climate Change (IPCC) stated:

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. (p. 5, emphasis theirs)

In other words, more than half of the warming that has been observed since the mid 1900s has been caused by human activity. How did they arrive at that conclusion? The scientists involved attempted to determine the natural variation in global temperature that would have occurred without human influence, and they found that it accounted for less than half of the observed warming. Thus, human activities are responsible for more than half.

The problem, of course, is how do you determine how much warming would have occurred without human activity? The way the IPCC did it was to look at the natural variation that has occurred in the factors that are known to influence the temperature of the planet. One of the biggest factors is how much energy the earth is getting from the sun, which is often called the total solar irradiance (TSI). Well, we have been measuring the TSI with satellites since 1979, and while each satellite comes up with a slightly different value (the reason for that is unknown), all of them agree on how it has varied since they began their measurements.

However, in order to properly understand the long-term effect of TSI, we need to go farther back in time than 1979. As a result, observations that should be affected by TSI are used to estimate what it was prior to 1979. These are called “proxies,” and their use is common among scientists who attempt to reconstruct various aspects of the past. Unfortunately, it’s not clear what the best proxies are for TSI, so the details of how it has changed over time vary substantially from one study to another. That’s what I am attempting to illustrate in the figure above. It shows two different graphs for how TSI has changed since the year 1800. Notice the one on the left (from this study) shows that TSI has been quite variable, while the one on the right (from this study) shows it hasn’t varied significantly over the same time period. Both of these studies were published in the peer-reviewed literature, and both use accepted methods for reconstructing TSI from proxies. The difference, of course, is caused by the different preconceptions in each group of scientists. Whose preconceptions are correct? We have no idea.

To demonstrate just how much variation occurs due to these preconceptions, here is a figure from a very interesting study that I somehow missed when it first came out (in 2015). It shows eight different reconstructions of TSI from eight different peer-reviewed studies:

I used the top two graphs to make the illustration that appears at the very top of the post. As you can see, the eight reconstructions are arranged so that the ones which show a high variability in TSI are on the left, and the ones which show a low variability are on the right. What about the “CMIP5” label that shows up on the low-variability graphs? It indicates that those were the graphs used in the IPCC’s Climate Change 2014 Synthesis Report, which I quoted above.

Think about what that means. The IPCC specifically chose from the scientific literature TSI reconstructions that indicate there has been little variation since 1800. Thus, natural variation in TSI cannot explain much of the variation we see in global temperature. However, if they had used one of the reconstructions on the left, their conclusion would have been much different. In fact, the authors of the study from which those eight graphs were taken showed that if you used the top left reconstruction, you could explain the vast majority of the variation we see in the earth’s temperature. Thus, had the IPCC chosen that reconstruction, their conclusion about the effect of human activities on global warming would have been radically different.

Hopefully, you can see what I am driving at. All eight of the reconstructions above are legitimate, peer-reviewed reconstructions of TSI. If you choose the ones on the right, you reach one conclusion about the extent to which human activities have affected global temperature. If you choose the ones on the left, you come to a completely different conclusion. How do you choose? You choose the ones you think are best. In other words, you choose the ones that fit your preconceptions.

Unfortunately, this inconvenient fact is left out of most discussions of climate change. As a result, most people state what the “science” says, when they are utterly ignorant of how much that “science” depends on the preconceptions of the scientists who produced it.

About That New “Dragon Man” Fossil….

A fanciful imagining of what the person who left behind the skull being discussed in the article might have looked like. (image from the scientific paper)

The media is abuzz with a “new” fossil discovery. Consider, for example, this article, which says:

A new species of ancient human dubbed Homo longi, or “Dragon Man,” could potentially change the way we understand human evolution, scientists said Friday.

A reader asked me to comment on the discovery, which I am happy to do. Please remember, however, that I am neither a paleontologist nor a biologist, so my comments are clearly from a non-expert position. Nevertheless, I think I can add a bit of perspective that is sadly missing in most discussions of this fossil find.

First, while this fossil is just making the news, it is anything but new. As one of the three scientific papers written about it informs us, the skull was discovered in 1933 by a man who was part of a team constructing a bridge. He hid it in an abandoned well, apparently with the idea to retrieve it later. However, he never did. Three generations later, the family learned about the skull and recovered it. One of the scientists who wrote the papers learned about it and convinced the family to donate the skull to the Geoscience Museum of Hebei GEO University back in 2018.
So what’s “new” isn’t the discovery of the fossil; it’s the analysis contained in the scientific papers.

Of course, one part of the analysis tried to answer the question of how old the skull is. Two different radioactive dating methods were used (comparing the ratio of two thorium isotopes as well as comparing the ratio of one thorium isotope and one uranium isotope). As is typical with radioactive dating, the two different methods didn’t agree with one another, and even the same method gave different ages depending on where the sample was taken from the skull. In the end, the ages based on these analyses ranged from 62±3 thousand years old to 296±8 thousand years old. Based on many factors, the authors said that the youngest this fossil could be was 146,000 years old.

Before I move on, I want to use these data to highlight something I have discussed before. The numbers that follow the “±” sign represent what scientists call error bars. They are supposed to tell you the most likely range over which the measurement can actually fall. When you read “62±3 thousand years old,” that is supposed to mean, “62,000 years is the most likely date, but it could be as low as 59,000 years or as high as 65,000 years.” In fact, the actual date could be lower or higher, but the most likely range is 59,000 years to 65,000 years. Notice, of course, that these error bars are utterly meaningless, since the measurements ranged from 59,000 years old to 304,000 years old. This is one of the many problems with radioactive dating. Error bars that are utterly meaningless are constantly reported, giving the illusion of a very precise measurement, when in reality, the measurement is anything but precise!
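A standard way to check whether reported error bars are believable is to compare the scatter among the measurements to the size of the stated uncertainties. Here is a minimal sketch of that kind of consistency check, using just the two extreme dates quoted above (ages in thousands of years; this is my own illustration, not a calculation from the papers):

```python
# The two extreme dates reported for the skull: 62±3 and 296±8 thousand years
ages = [62.0, 296.0]
errors = [3.0, 8.0]

# Inverse-variance weighted mean of the measurements
weights = [1.0 / e**2 for e in errors]
mean = sum(w * a for w, a in zip(weights, ages)) / sum(weights)

# If the error bars were realistic, each date would sit within a few
# error bars of the mean, giving a chi-square of order 1 per measurement
chi_square = sum(((a - mean) / e) ** 2 for a, e in zip(ages, errors))
print(round(mean, 1))     # weighted mean, ≈ 90.8 thousand years
print(round(chi_square))  # ≈ 750, wildly inconsistent with the stated ±'s
```

A chi-square in the hundreds for two measurements means the dates disagree by dozens of their own error bars, which is the quantitative version of calling the error bars meaningless.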

With the “measured” age of the skull out of the way, let’s discuss the main issue. The scientific papers, as well as the media reports, indicate that this is a new species of human. It doesn’t represent “modern” humans, and it doesn’t represent other known “archaic” humans. There is one big problem with this idea. The paper says that based on the characteristics of this skull, the human it represents is more closely related to modern humans than is Neanderthal Man. The problem, of course, is that we know Neanderthal Man is fully human, because we know that Neanderthal Man interbred with modern humans. Thus, Neanderthal Man is from the same species as modern man. Since this skull indicates its owner is even more closely related to modern humans, it is a modern human as well.

So while this skull can tell us something about the variety that has existed among humans over the years, it tells us nothing about the “story” of human evolution. It simply represents another variety of human being.

The American Biology Teacher Uses False Statements to Reassure Teachers

When dinosaur fossils like this one are tested, they contain carbon-14 in significant quantities, which is not possible if they are millions of years old.
Quite some time ago, a reader sent me this article from The American Biology Teacher. It attempts to assure biology teachers that the large amounts of carbon-14 found in dinosaur fossils are compatible with the scientifically irresponsible idea that those fossils are millions of years old. The reader asked me to comment on the article, since I have said several times that carbon-14 in dinosaur bones is a very strong indication that the bones are not millions of years old. The author of this article (Dr. Philip J. Senter), however, is confident that this is not a problem at all. How can he be so confident? Because he seems to believe a lot of false information.

Early on, he makes a statement that indicates he has not studied carbon-14 dating very seriously:

…bone mineral is usually useless for radiocarbon dating, even though the carbonate that bone mineral incorporates during life contains 14C. The uselessness of bone mineral for radiocarbon dating is due to the fact that bone mineral accumulates new 14C after death, yielding a falsely young radiocarbon “age.”

This statement is utterly false, and anyone who knows carbon-14 dating would know that. Hundreds of radiocarbon dates have been published in the scientific literature using bioapatite, a bone mineral. This study extensively examined the use of bioapatite in carbon-14 dating, comparing it to two other substances commonly used in the method. It concluded:

Most Holocene samples exhibit reliable 14C ages on the bioapatite fraction. Late Pleistocene samples have shown reliable results even for extremely poorly preserved bone in the case of samples derived from a non-carbonate environment.

The Holocene supposedly dates back to about 11,700 years, while the Pleistocene supposedly goes back to 2.6 million years. “Late Pleistocene” samples, then, would be samples that go back to the limits of carbon-14 dating (about 50,000 years old). Indeed, in that study, one of the samples had a carbon-14 date of 37,000 years old, which is older than most of the dinosaur bones that have been dated with carbon-14. So the idea that minerals from bone are “useless” for carbon-14 dating is demonstrably false.
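The ~50,000-year limit mentioned above falls directly out of the decay law: with a half-life of 5,730 years, the fraction of the original carbon-14 remaining after t years is 0.5^(t/5730). A quick sketch of that arithmetic (my own illustration):

```python
# Fraction of the original carbon-14 remaining after t years
def c14_fraction(t_years, half_life=5730.0):
    return 0.5 ** (t_years / half_life)

# After one half-life, half remains; after ~50,000 years, less than 0.3%
# of the original carbon-14 is left, near the method's detection limit
print(c14_fraction(5730))    # 0.5
print(c14_fraction(50000))   # ≈ 0.0024
print(c14_fraction(37000))   # ≈ 0.011, a measurable "Late Pleistocene" level
```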

Dr. Senter tries to back up his statement with a reference, but the reference doesn’t invalidate the use of bioapatite in carbon-14 dating at all. It does indicate that when you compare the date derived from collagen (a non-mineral that is often used in carbon-14 dating) to the date derived from bioapatite, the bioapatite date is often younger. However, the results depend heavily on where the fossil was found. More importantly, we know this has nothing to do with the carbon-14 dates of dinosaur bones, since many dinosaur fossils have been dated using both mineral and non-mineral samples (including collagen), and the ages are similar. In a hadrosaur fossil, for example, bioapatite dated as 25,670 years old, while the collagen dated as 23,170 years old. Note that contrary to the study Senter cites, in this case, the bioapatite age is older than the collagen age.

Senter then tries to explain why dinosaur bones read so young with carbon-14 dating. Most of his arguments boil down to the idea that modern carbon has gotten into the fossils, and since the modern carbon is very young, it makes the fossil read young. The problem with that, of course, is that if modern carbon is getting into the fossil from the environment, there must be more contamination near the surface of the fossil and less near the center of the fossil. Thus, the carbon-14 age of the bone should vary depending on where in the bone the sample was taken. However, that’s not what is seen. In this study, Figure 7 has circles around the dates for samples taken from different parts of the same bone. They show very good agreement, indicating that what is being detected is not from contamination.
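The contamination hypothesis can be put into numbers. If a bone were truly carbon-14 “dead” (millions of years old) but absorbed a small fraction of modern carbon, its apparent age would be t = −τ ln(f), where f is the modern-carbon fraction and τ = 5730/ln(2) ≈ 8,267 years is the mean life of carbon-14. A sketch of that arithmetic (my own illustration, not a calculation from either study):

```python
import math

MEAN_LIFE = 5730.0 / math.log(2)  # mean life of carbon-14, ≈ 8,267 years

def apparent_age(modern_fraction):
    """Apparent radiocarbon age of a carbon-14-dead sample whose carbon
    includes the given fraction of modern-carbon contamination."""
    return -MEAN_LIFE * math.log(modern_fraction)

# The apparent age depends steeply on the contamination fraction, so if
# contamination were the explanation, samples from the surface and the
# center of the same bone should give noticeably different ages
for f in (0.05, 0.01, 0.001):
    print(f, round(apparent_age(f)))  # 1% contamination → ≈ 38,000 years
```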

There is one argument Dr. Senter makes which isn’t about contamination. He says that radioactive materials can be absorbed by a bone, and those radioactive materials can cause nuclear reactions which will add carbon-14 to the bone, making it look young. Once again, he cites studies to support his claim, such as this one, but once again, the studies don’t support his claim. For example, the study I just linked shows how uranium decay can lead to the production of carbon-14, but as anyone who understands nuclear reactions would tell you, the effect is ridiculously small. Indeed, the study shows that nuclear reactions can account for no more than one hundredth (1/100) of the lowest amount of carbon-14 detected in dinosaur bones! Thus, there is no way that nuclear reactions are a viable means of explaining away the carbon-14 found in dinosaur bones.

In the end, then, we see that Dr. Senter must use false information to assure his readers that carbon-14 in dinosaur bones doesn’t invalidate the dogma that they are millions of years old. Unfortunately, since many teachers read the magazine in which his article was published, I am sure that this false information will be spread around, fooling unsuspecting students. Nevertheless, the more this is investigated, the more we will see that it poses a huge problem for those who are committed to believing that dinosaurs lived millions of years ago.

Radioactive Honey

In some regions, the honey is more radioactive than it should be. (click for credit)
Radioactivity is everywhere. You can’t get away from it. Even the foods you eat are radioactive, since the molecules in your food have chains of carbon atoms in them, and some of those carbon atoms are radioactive. However, one of the most radioactive foods is the lowly banana. It contains a lot of potassium, and about 0.01% of that potassium is radioactive, spewing beta particles and gamma rays into the surroundings. In fact, there is an informal radiation unit called the banana equivalent dose, which allows you to get an idea of how dangerous common forms of radiation are. A dental X-ray, for example, gives you about as much radiation as 2.5 bananas, while flying across the country exposes you to about 400 bananas’ worth of radiation.
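The banana equivalent dose is usually taken to be roughly 0.1 microsievert, so converting any dose into “bananas” is a one-line calculation. A sketch assuming that commonly quoted figure (the conversion constant and the flight dose are the assumptions here, not numbers from this article):

```python
BANANA_DOSE_USV = 0.1  # commonly quoted banana equivalent dose, in microsieverts

def dose_in_bananas(dose_usv):
    """Express a radiation dose (in microsieverts) as a number of bananas."""
    return dose_usv / BANANA_DOSE_USV

# A cross-country flight is often quoted at roughly 40 µSv:
print(dose_in_bananas(40))   # ≈ 400 bananas, matching the figure above
```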

The radiation in a banana is natural, and because your body contains the same atoms as the foods you eat, you are naturally radioactive as well. Of course, technology has increased the amount of radiation in food and people, and sometimes, that increased radiation can be traced to specific events. For example, scientists from the College of William & Mary as well as the University of Maryland have studied the radioactive content of honey in the United States. They have found that it is surprisingly rich in radioactive cesium, which is not natural. It is produced by nuclear weapons and the tests associated with them. Plants require potassium to live, but cesium and potassium have similar chemical properties (anyone who has taken my chemistry course should know why), so plants can take up and use cesium instead, including the radioactive cesium made by an atomic bomb.

The researchers found that while there were no atomic bombs exploded in or around the eastern United States, the honey found there has detectable amounts of radioactive cesium. More importantly, there is a definite pattern to how much is found. There is significantly more radiation found in honey made in the southeastern United States, particularly Florida. In fact, the honey found in Florida can be as much as 500 times more radioactive than the honey found farther north. Now that sounds scary, but in fact, even the most radioactive honey they analyzed had only 18% as much radiation as a banana. Thus, it poses no threat to people who eat it, but we don’t know if it affects the bees that make and eat it. However, it does tell scientists that the radioactive materials released in nuclear explosions are not dispersed evenly throughout the world. Instead, weather patterns tend to concentrate them in specific regions.

The other thing the researchers show is that the more potassium found naturally in the soil, the less radioactive cesium found in the honey. Thus, even in Florida, honey from regions where the soil had a high potassium content was less radioactive than honey from regions where the potassium content was lower. As a result, if you want less radiation in your honey, you need to make sure the flowers from which it is made are grown in soil that is rich in potassium.

A Monkey/Man Embryo

An illustration from the scientific paper being discussed.
A few readers have sent me news stories like this one, entitled “Human-Monkey Hybrid Embryo Created by Joint China-U.S. Scientist Team.” Obviously, the concept of mixing the embryos of people and animals is abhorrent, so these readers wanted my view on the experiment and what it means. Since “science journalists” know little about science and even less about journalism, I ignored what has been written in the press and went straight to the actual scientific paper to find out what had been done. While the paper alleviated some of my initial concerns, it most certainly didn’t alleviate all of them.

What did the researchers actually do? They first developed a method for fertilizing the eggs of a long-tailed macaque (Macaca fascicularis) so that the monkey embryo could start development in a lab dish instead of the body of a female macaque. Once they accomplished that, they took human pluripotent stem cells (hPSCs) that had already been developed and injected them into the lab-dish monkey embryos once they reached a specific stage in development, called the “blastocyst stage.” They then followed the development of these mixed embryos over time.

Why would someone want to do something like this? The scientists on the project were mostly interested in trying to understand how the cells in an embryo communicate with one another so that they can do the jobs they need to do to make a baby. They thought that if they could see how the human cells communicated with one another in this kind of system, it would help them understand human embryonic development better. Of course, to “sell” the research, they also indicated that it could help produce animals that could grow human organs for transplant, since many people die waiting for an organ transplant.

Were they successful? Not really, but they did get some results. They started with 132 separate monkey embryos and injected 25 human stem cells into each of them once it reached the blastocyst stage. Stem cells have the ability to develop into many different kinds of cells, so the researchers hoped that the human cells would communicate with one another and mimic what happens during early human embryonic development. However, over the course of 20 days, most of the embryos died. Only three of them survived for 19 days, and there was significant interaction between the human and monkey cells. As a result, it's not clear how much of what happened to the human cells reflects actual human embryonic development.
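The raw numbers reported make the low yield easy to quantify. This trivial calculation uses only the figures given above:

```python
# Survival arithmetic from the figures given in the text.
embryos = 132          # monkey embryos injected with human stem cells
cells_per_embryo = 25  # human stem cells injected into each embryo
survivors_day19 = 3    # embryos still alive at day 19

total_human_cells = embryos * cells_per_embryo  # human cells injected overall
survival_rate = survivors_day19 / embryos       # fraction of embryos surviving

print(f"Human cells injected in total: {total_human_cells}")
print(f"Embryo survival to day 19:     {survival_rate:.1%}")
```

A survival rate of barely 2% is a big part of why the results are hard to interpret.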

Now remember, I said the paper alleviated some of my concerns. That’s because the source of the human cells is not immoral. There are three ways to get human stem cells. The first way is to extract them from a human embryo. This kills the baby, so murders must be committed in order to use such cells. Thus, no one with any sense of morality should be involved in such research. Fortunately, this experiment didn’t use such cells.

The second way to get human stem cells is to harvest them from the umbilical cord after birth or from certain parts of an adult. This doesn’t kill anyone, so there is nothing unethical about their use. Indeed, several successful medical treatments are based on such cells, which are usually called “adult stem cells.” This experiment didn’t use those cells, either. Instead, it used mature cells (from an adult) that had been chemically reprogrammed to act like stem cells. Such cells are usually called “induced pluripotent stem cells,” and once again, no one is killed in the process. Thus, at least the source of human stem cells used in the experiment is a moral one.

However, my concerns are not fully alleviated. After all, this experiment could be done by others who are immoral enough to use stem cells that require babies to be murdered. Indeed, there are probably scientists reading this very paper thinking that one reason the survival rate was so low was because the stem cells did not come from a human embryo. As a result, they might try to replicate this experiment with immorally-sourced stem cells.

Of course, the other serious issue is that this experiment could be altered so that the mixed embryo is implanted back into a female monkey, in hopes that some kind of creature would actually be born. I seriously doubt such a thing could actually happen, because embryonic development is so complex that the communication between monkey and human cells would eventually cause the process to break down. Nevertheless, the possibility exists that such a monster could be produced, which is morally repugnant.

Now I do have to say that the authors of the study went to great pains to make sure that their experiment was deemed “ethical.” However, that doesn’t comfort me in the slightest. After all, the medical community thinks abortion is ethical, and it is clearly not. Given this fact, I don’t think this line of research should be extended. There are just too many ways it could be done immorally, and the medical community has shown that its sense of morality is, at best, stunted.

About That New Physics….

Fermilab’s Muon g-2 storage ring, where some are saying new physics has been confirmed. (click for credit)

A reader sent me this article over the weekend. The long title indicates something exciting is happening:

A new experiment has broken the known rules of physics, hinting at a mysterious, unknown force that has shaped our universe

I have been sent similar articles by others. Most of them have the same breathless excitement: physicists have found that the laws of physics as we know them can’t be right, because an experiment at Fermilab shows that they are being broken. If true, this is really exciting news. However, much like the “faster-than-light neutrino” results that were later found to be incorrect, I remain very skeptical that there is any reason to think that the laws of physics as we understand them are wrong.

So what’s the story here? Twenty years ago, physicists at Brookhaven National Lab were studying muons, which are particles that have the same negative charge as the electron but are roughly 200 times heavier. Because they are charged, they produce a magnetic field, just like the electron does. The physics that we know right now (collectively referred to as the “Standard Model”) predicts the behavior of particles that produce magnetic fields, and the way the electron behaves agrees perfectly with the Standard Model’s prediction. Because muons are heavier than electrons, their behavior is more complex, so they can be used as an additional test for the Standard Model. In the Brookhaven experiments, the muon’s behavior differed very slightly from the predictions of the Standard Model. However, because of the limits of the experiment, the physicists couldn’t rule out the idea that the result was a fluke, so the team made no concrete statement about the accuracy of the Standard Model when it comes to muon magnetic fields.

Recently, Fermilab announced that they had replicated the Brookhaven experiment, using the same basic setup (in fact, the very same storage ring, which was shipped from New York to Illinois), but with higher precision. They confirmed the Brookhaven experiment’s results, and based on the quality of their data, statistics indicate there is only about a 1 in 40,000 chance that random fluctuation alone would produce a discrepancy this large. Because of this, there is a lot of excitement in some parts of the physics community. After all, if the Standard Model can’t correctly predict something, that means there is something wrong with it, and that means there is “new physics” to discover.

Of course, this line of reasoning ignores one inconvenient fact. There is another possible reason the Standard Model’s prediction for the muon is wrong: The prediction itself could be incorrect. It turns out that the muon’s complex behavior causes the math in the Standard Model to be very difficult to solve. As a result, the prediction against which these experiments are compared used a well-accepted shortcut: it incorporated some independent experimental results into the calculations to make things easier. Of course, this leads to a problem. Those experimental results produce uncertainties, because all experiments have error in them. As a result, what’s really going on here is that an uncertain prediction is being compared to an experimental result, which has its own uncertainties. When uncertain things are compared to other uncertain things, it’s not clear what the difference between them means.
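To make the "comparing uncertain things" point concrete, here is how physicists quantify such a discrepancy. The numbers below are the widely reported 2021 values of the muon's anomalous magnetic moment (in units of 10^-11): the experimental world average, 116,592,061 ± 41, versus the data-driven Standard Model prediction, 116,591,810 ± 43. The two uncertainties are combined in quadrature, and the resulting ~4.2 "sigma" corresponds to roughly the 1-in-40,000 fluke odds quoted in press reports:

```python
import math

# Widely reported 2021 muon g-2 numbers, in units of 1e-11.
a_exp, err_exp = 116_592_061, 41  # experimental world average (Brookhaven + Fermilab)
a_sm, err_sm = 116_591_810, 43    # data-driven Standard Model prediction

difference = a_exp - a_sm                    # 251
combined_err = math.hypot(err_exp, err_sm)   # uncertainties add in quadrature
significance = difference / combined_err     # ~4.2 "sigma"

# Two-sided probability that pure chance produces a deviation this large.
p_value = math.erfc(significance / math.sqrt(2))

print(f"Discrepancy: {significance:.1f} sigma")
print(f"Fluke odds:  about 1 in {1 / p_value:,.0f}")
```

Notice that both the experimental number and the prediction carry uncertainties of similar size, so the significance depends on both. Shrink either error bar (or shift either central value), and the "sigma" changes.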

To solve this problem, Borsanyi and colleagues did the tough math, using millions of CPU hours on a supercomputer so that the prediction they produced was purely mathematical. They found that their prediction agreed with the experimental results at both Brookhaven and Fermilab. Thus, as far as they are concerned, there is no discrepancy between the behavior of the muon and the predictions of the Standard Model. Is that the end of the story? Of course not! It could be that Borsanyi and colleagues are wrong. However, I would think their conclusion is more reliable, since a prediction made with pure mathematics has less uncertainty in it.

So my bet is that there is no new physics here, and the Standard Model has been vindicated once again. Of course, only time will tell whether or not I am right. However, there is a lesson to be learned here, and it is an important one. Borsanyi and colleagues’ calculation was published at the same time as the Fermilab results. However, most science “journalists” aren’t bothering to mention their conclusion. Why? Either because it isn’t exciting, or because they haven’t bothered to see what other physicists are saying about the situation. Either way, it’s truly unfortunate, and it confirms what I have said many times before: most “science journalists” know little about science and even less about journalism.