The Problem with Science

Bad Science

…is that so much of it simply isn’t

by John Hartnett

In the opening sentence of an article titled “Scientific Regress”, author William Wilson remarks:

“Scientific claims rest on the idea that experiments repeated under nearly identical conditions ought to yield approximately the same results, but until very recently, very few had bothered to check in a systematic way whether this was actually the case.”

The article is about science and the repeatability of scientific results published in the peer-reviewed scientific literature.

Claims not replicated

A group called the Open Science Collaboration (OSC) tried to evaluate research claims by replicating the results of published experiments. They replicated one hundred published psychology experiments and found that 65% of the replications failed to show any statistical significance, and that many of the remainder showed much weaker significance than originally reported. The OSC group even used the original experimental materials and sometimes performed the experiments under the guidance of the original researchers.

The problem, though, is not confined to psychology, which I don’t consider a hard science anyway.

In 2011 a group of researchers at Bayer looked at 67 recent drug discovery projects based on preclinical cancer biology research. In more than 75% of cases they could not replicate the published data, even though the original findings had appeared in reputable journals, including Science, Nature, and Cell.

The author suggested that many new drugs may have proved ineffective because the research on which they were based was invalid: the original findings were simply false.

Then there is the issue of fraud.

“In a survey of two thousand research psychologists conducted in 2011, over half of those surveyed admitted outright to selectively reporting those experiments which gave the result they were after.”

This involves experimenter bias. The apparent success of a research program may be all that is needed to secure the next round of funding. So what might begin as a mere character weakness in the experimenter can end up as outright fraud. The article states that many have no qualms about

“ … reporting that a result was statistically significant when it was not, or deciding between two different data analysis techniques after looking at the results of each and choosing the more favorable.”
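As a rough illustration (mine, not from the article), the sketch below simulates this practice: data with no real effect are analysed two different ways, and whichever test gives the smaller p-value is “reported”. The sample sizes, choice of tests, and 5% threshold are my own assumptions.

```python
# Minimal sketch: choosing the more favourable of two analyses
# inflates the false-positive rate above the nominal 5% level.
# Assumptions (not from the article): normally distributed data with
# no true group difference, 10,000 simulated "experiments".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n_per_group, alpha = 10_000, 30, 0.05
false_positives = 0

for _ in range(n_experiments):
    a = rng.normal(size=n_per_group)   # group A: no real effect
    b = rng.normal(size=n_per_group)   # group B: no real effect
    p_t = stats.ttest_ind(a, b).pvalue                                   # analysis 1: t-test
    p_u = stats.mannwhitneyu(a, b, alternative="two-sided").pvalue       # analysis 2: Mann-Whitney U
    if min(p_t, p_u) < alpha:          # "report" whichever analysis looks better
        false_positives += 1

print(f"Reported false-positive rate: {false_positives / n_experiments:.3f}")
```

An honest, pre-registered single test would flag a spurious “effect” about 5% of the time; picking the better-looking of two analyses after seeing the results pushes that rate above the nominal 5%, which is how selective reporting biases a literature toward positive findings.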

One writer

“… theorized that the farther from physics one gets, the more freedom creeps into one’s experimental methodology, and the fewer constraints there are on a scientist’s conscious and unconscious biases. If all scientists were constantly attempting to influence the results of their analyses, but had more opportunities to do so the ‘softer’ the science, then we might expect that the social sciences have more papers that confirm a sought-after hypothesis than do the physical sciences, with medicine and biology somewhere in the middle. This is exactly what the study discovered: A paper in psychology or psychiatry is about five times as likely to report a positive result as one in astrophysics [emphasis added].”

Retracted claims in the hard sciences

I work in the field of physics (experimental and theoretical), and I know first-hand the pressure to publish findings. I believe fraud is harder to commit in physics, but opportunities still exist, particularly in areas that are difficult to check: where the work has a heavy theoretical content, and/or where statistical analyses are critical to the finding. Such detection problems arise in areas like particle physics and astrophysics.

Two major claims have recently been retracted.

One was the announced discovery of both cosmic inflation and gravitational waves by the BICEP2 experiment in Antarctica, which I covered extensively in 2014/15. It was retracted only about a year after the initial announcement. In 2011 there was the report of an alleged discovery of superluminal neutrinos in the OPERA experiment, in which neutrinos beamed from CERN to the Gran Sasso laboratory in Italy appeared to travel faster than light. As is typical, the claim was later retracted with far less fanfare than when it was first published. A year after the OPERA claim, the co-located ICARUS experiment reported neutrino velocities consistent with the speed of light in the same short-pulse beam OPERA had measured.

In both cases physics was central, and independent measurements were able to check the validity of the initial claim. This, thankfully, occurs far more often in the hard sciences than in other fields. Sometimes a false hypothesis endures for a time, but eventually it is overturned. Unfortunately, this is often not the case with the ‘softer sciences’, if they can be called that.

Evolutionary biology masquerades as hard science

So-called evolutionary biology, for example, masquerades as a hard science when, in fact, much of it is not operational science. Operational science is testable and repeatable, open to criticism, and subject to fraud detection. After all, science without debate is propaganda!

But evolutionary so-called science is more like forensic science: it is weak because it is not subject to the same testable criteria…
