How Good Is Your Bullshit Detector?

The ability to spot questionable research has never been more important. Scientific studies often look credible on the surface, but poorly designed research is far more common than most people realize, and telling solid science from nonsense takes a bit of practice.

First, a little history. In the early 2010s, researchers uncovered a "replication crisis": a startling number of published studies couldn't be reproduced, meaning that when independent teams followed identical methods, they failed to get the same results. The problem had been brewing since the late 1960s, but it exploded into public view after a 2011 study claimed to find evidence of extrasensory perception. Researchers quickly dismantled the study's flawed methodology, but they were disturbed to realize that those same flaws were standard practice across psychological research. The scope of the problem became clear in 2015, when researchers attempted to replicate 100 psychological studies. Only 39% produced the same results, and although accounting for various factors suggests the true replication rate might reach 60%, that is still a sobering figure for a field built on empirical evidence.

This crisis doesn't invalidate science itself, and the solution isn't to abandon scientific thinking. It is to become a more discerning reader who applies critical thinking when consuming scientific literature. By asking the right questions about research design, methodology, and conclusions, you can separate reliable findings from studies that lack scientific substance.

Here are the four questions you should ask when evaluating any scientific research you read:

1. What is the effect size? Consider the magnitude of the study's findings. Results should report effect sizes to convey practical relevance; tiny or implausibly large effects warrant suspicion (the sketch after this list shows how effect size, sample size, and p-values fit together).
2. What was the sample size? Verify that enough participants were studied. Small samples produce unreliable conclusions, and detecting smaller effects with confidence requires larger samples.
3. What was the p-value? Check the study's p-value, the probability of obtaining a result at least as extreme as the one observed if there were no real effect. Reliable studies typically report low p-values, but be wary of results sitting just below 0.05, as this can suggest "p-hacking," methodological manipulation aimed at clearing the bar for publication.
4. What was the study design? Scrutinize whether the design supports the claimed conclusions, distinguishing observational studies from randomized experiments. Only randomized trials can robustly establish causation; observational studies may show mere associations.
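To make the first three questions concrete, here is a minimal sketch in Python using simulated data and the scipy and statsmodels libraries. It computes an effect size (Cohen's d), a two-sample t-test p-value, and the sample size needed to detect that effect with 80% power. The numbers are invented for illustration, not taken from any study.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(42)

# Hypothetical outcomes for a control and a treatment group (simulated data).
control = rng.normal(loc=100.0, scale=15.0, size=50)
treatment = rng.normal(loc=106.0, scale=15.0, size=50)

# Question 1 - effect size: Cohen's d = mean difference / pooled standard deviation.
n1, n2 = len(control), len(treatment)
pooled_sd = np.sqrt(((n1 - 1) * control.var(ddof=1) +
                     (n2 - 1) * treatment.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

# Question 3 - p-value: probability of a difference at least this extreme
# if there were no real effect, from a two-sample t-test.
t_stat, p_value = stats.ttest_ind(treatment, control)

# Question 2 - sample size: participants per group needed to detect this
# effect size with 80% power at the conventional 0.05 significance level.
n_required = TTestIndPower().solve_power(effect_size=cohens_d, power=0.8, alpha=0.05)

print(f"Cohen's d:                 {cohens_d:.2f}")
print(f"p-value:                   {p_value:.4f}")
print(f"n per group for 80% power: {n_required:.0f}")
```

Notice how the three quantities interact: the smaller the effect, the larger the sample needed to detect it reliably, which is exactly why a tiny effect reported from a small sample, with a p-value just under 0.05, should raise your suspicion.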

No single indicator guarantees reliability, but together these questions make you a more discerning consumer of scientific literature.

P.S. Citation count and journal reputation can be misleading: sensational or flawed papers often rack up citations and land in prestigious journals, so neither guarantees reliability.
