Science is supposed to be self-correcting. The history of science is full of mistakes, but over time, those mistakes are usually found, and the findings are communicated to the scientific community so that the mistakes no longer influence scientific thinking. Unfortunately, one of the main ways that findings are communicated is through scientific journals, and there are times when scientific journals are not interested in correcting mistakes, especially when those mistakes reflect badly on the journal’s reputation. I recently ran across a story that illustrates this point.
Back in 2008, the most prestigious scientific journal in the United States, the journal Science, published a study that attempted to understand the root causes of political beliefs. The researchers exposed a group of participants to images and sounds designed to evoke fear and correlated the participants’ responses with their political beliefs. Based on their results, the authors concluded:
…individuals with measurably lower physical sensitivities to sudden noises and threatening visual images were more likely to support foreign aid, liberal immigration policies, pacifism, and gun control, whereas individuals displaying measurably higher physiological reactions to those same stimuli were more likely to favor defense spending, capital punishment, patriotism, and the Iraq War. Thus, the degree to which individuals are physiologically responsive to threat appears to indicate the degree to which they advocate policies that protect the existing social structure from both external (outgroup) and internal (norm-violator) threats.
In other words, if you are prone to fear, you are more likely to be a conservative. If not, you are more likely to be a liberal.
The study was ground-breaking, and it has strongly influenced scientific research in the field. Indeed, at the time of this posting, the study has been referenced in 257 subsequent studies. There’s only one problem. It probably isn’t correct. How do we know? Because some researchers who were initially interested in expanding on the results of the study began running experiments of their own, but their results didn’t seem to support the conclusions of the 2008 study. In an attempt to see what they were doing wrong, the researchers contacted the authors of the 2008 study so that they could replicate their methodology. They weren’t trying to demonstrate that the 2008 study was wrong. In fact, they were trying to use its methodology to “calibrate” their own study so that they could get consistent results.