Science is supposed to be self-correcting. The history of science is full of mistakes, but over time, those mistakes are usually found, and the findings are communicated to the scientific community so that the mistakes no longer influence scientific thinking. Unfortunately, one of the main ways that findings are communicated is through scientific journals, and there are times when scientific journals are not interested in correcting mistakes, especially when those mistakes reflect badly on the journal’s reputation. I recently ran across a story that illustrates this point.
Back in 2008, the most prestigious scientific journal in the United States, the journal Science, published a study that attempted to understand the root causes of political beliefs. The researchers exposed participants to images and sounds designed to evoke fear and correlated the participants’ responses with their political beliefs. Based on their results, the authors concluded:
…individuals with measurably lower physical sensitivities to sudden noises and threatening visual images were more likely to support foreign aid, liberal immigration policies, pacifism, and gun control, whereas individuals displaying measurably higher physiological reactions to those same stimuli were more likely to favor defense spending, capital punishment, patriotism, and the Iraq War. Thus, the degree to which individuals are physiologically responsive to threat appears to indicate the degree to which they advocate policies that protect the existing social structure from both external (outgroup) and internal (norm-violator) threats.
In other words, if you are prone to fear, you are more likely to be a conservative. If not, you are more likely to be a liberal.
The study was ground-breaking, and it has strongly influenced scientific research in the field. Indeed, at the time of this posting, the study has been referenced in 257 subsequent studies. There’s only one problem. It probably isn’t correct. How do we know? Because some researchers who were initially interested in expanding on the results of the study began running experiments of their own, but their results didn’t seem to support the conclusions of the 2008 study. In an attempt to see what they were doing wrong, the researchers contacted the authors of the 2008 study so that they could replicate their methodology. They weren’t trying to demonstrate that the 2008 study was wrong. In fact, they were trying to use its methodology to “calibrate” their study so that they could get consistent results.
What they found is that they could not replicate the results of the previous study. What is supposed to happen when something like this occurs? It is supposed to be communicated, right? After all, researchers need to know that there was something wrong with a study that is still influencing the field. Where should this be communicated? In the journal that published the original study, of course. That’s what these authors tried to do. They submitted the above-linked paper to the journal Science, and the editors refused to even consider publishing the paper.
When a scientific journal considers publishing a paper, it sends the paper out for peer review. That means other scientists in the field evaluate it to see if they can find any errors in the study. If the reviewers think that small errors exist, the journal might still publish the paper, as long as the authors correct those small errors. If the reviewers think there are serious errors in the paper, the journal will not publish the paper. The problem is that in this case, Science refused to even send the paper out for peer review. They just summarily rejected it, without any evaluation as to its quality.
Why? The editors thought that the field had “moved on” from the initial study (it was 11 years ago, after all), so there was no reason to publish the fact that the initial study was probably wrong. However, we know that is simply false. Of the 257 papers that cite the study, nine come from this year, 25 from last year, and 41 from the year before. Clearly, then, the initial study is still influencing research in the field, and the journal Science refuses to let scientists in the field know that the study is probably wrong!
Science requires us to have the courage to let our beautiful theories die public deaths at the hands of ugly facts…Our takeaway is not that the original study’s researchers did anything wrong. To the contrary, members of the original author team — Kevin Smith, John Hibbing, John Alford and Matthew Hibbing — were very supportive of the entire process, a reflection of the understanding that science requires us to go where the facts lead us. If only journals like Science were willing to lead the way.