Why you should be a skeptical scientist
[This post has been reproduced with permission from Tim van der Zee. It was originally published on his blog.]
Don’t take my word for it, but being a scientist is about being a skeptic.
About not being satisfied with easy answers to hard problems.
About not believing something merely because it seems plausible…
…nor about reading a scientific study and believing its conclusions because, again, they seem plausible.
"In some of my darker moments, I can persuade myself that all assertions in education:
(a) derive from no evidence whatsoever (adult learning theory),
(b) proceed despite contrary evidence (learning styles, self-assessment skills), or
(c) go far beyond what evidence exists."
– Geoff Norman
The scientific literature is biased. Positive results are published widely, while negative and null results gather dust in file drawers1, 2. This bias operates at many levels, from which papers are submitted to which papers are published3, 4. In turn, this incentivizes researchers to (consciously or unconsciously) engage in questionable research practices such as running many statistical tests but only reporting the ‘successful’ ones, known as p-hacking5. Furthermore, researchers often give a biased interpretation of their own results, use causal language when this isn’t warranted, and cite others’ results misleadingly6. For example, close to 28% of citations are faulty or misleading, which typically goes undetected as most readers do not check the references7.
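To make the mechanics of p-hacking concrete, here is a minimal sketch in Python (using NumPy and SciPy). The sample sizes and number of outcomes are arbitrary assumptions for illustration, not figures from any of the cited studies: it runs twenty t-tests on pure noise and "reports" only the ones that happen to cross the significance threshold.

```python
# Minimal p-hacking sketch: many unrelated outcomes, no true effect,
# but only the "significant" tests get reported.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_per_group, n_outcomes = 30, 20  # hypothetical study design (assumed values)

# Both groups are drawn from the same distribution, so every null hypothesis is true.
group_a = rng.normal(size=(n_per_group, n_outcomes))
group_b = rng.normal(size=(n_per_group, n_outcomes))

p_values = [stats.ttest_ind(group_a[:, i], group_b[:, i]).pvalue
            for i in range(n_outcomes)]

significant = [i for i, p in enumerate(p_values) if p < 0.05]
print(f"'Significant' outcomes out of {n_outcomes}: {significant}")

# With 20 independent tests at alpha = .05, the chance of at least one
# false positive is about 1 - 0.95**20, roughly 64%, even though no
# real effect exists anywhere in the data.
```

Running the script a few times with different seeds shows how reliably a "finding" appears when enough outcomes are tried and only the winners are written up.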
This is certainly not all. Studies which have to adhere to a pre-registered protocol, such as clinical trials, often deviate from that protocol by not reporting outcomes or by silently adding new ones8. Such changes are not random, but typically favor reporting positive effects and hiding negative ones9. This is not at all unique to clinical trials; published articles in general frequently include incorrectly reported statistics, with 35% containing substantial errors which directly affect the conclusions10, 11, 12. Meta-analyses from authors with industry involvement are massively published yet fail to report caveats13. Besides, when the original studies are of low quality, a meta-analysis will not magically fix this (aka the ‘garbage in, garbage out’ principle). One such cause of low-quality studies is the lack of control groups, or, what can be even more misleading, inappropriate control groups which can incorrectly imply that placebo effects and other alternative explanations have been ruled out14. Note that these issues are certainly not restricted to quantitative research or (semi-)positivistic paradigms, but are just as relevant for qualitative research from a more naturalistic perspective15, 16, 17.
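The ‘garbage in, garbage out’ point can also be sketched with a small simulation (again Python with NumPy/SciPy; the number of studies and sample sizes are illustrative assumptions). If only positive, significant studies of a true null effect make it into the literature, averaging the published results recovers a sizeable effect where none exists.

```python
# Hedged sketch of the file-drawer problem feeding a meta-analysis:
# simulate many small studies of a true null effect, "publish" only the
# positive and significant ones, and compare the pooled averages.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
n_studies, n_per_group = 500, 25  # hypothetical literature (assumed values)

effects, p_values = [], []
for _ in range(n_studies):
    treat = rng.normal(loc=0.0, size=n_per_group)    # true effect is zero
    control = rng.normal(loc=0.0, size=n_per_group)
    effects.append(treat.mean() - control.mean())
    p_values.append(stats.ttest_ind(treat, control).pvalue)

effects = np.array(effects)
p_values = np.array(p_values)

# The "published" literature: positive direction and p < .05 only.
published = effects[(p_values < 0.05) & (effects > 0)]

print(f"Mean effect across all studies:      {effects.mean():+.3f}")
print(f"Mean effect across 'published' ones: {published.mean():+.3f}")

# A meta-analysis restricted to the published studies reports a clear
# effect; averaging biased inputs does not remove the bias.
```

The pooled estimate of the full set of studies hovers around zero, while the "published" subset shows a consistent positive effect, which is exactly what a naive meta-analysis of a biased literature would conclude.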
Everybody lies
This list could go on for much longer, but the point has been made: everybody lies. In the current system, lying and misleading are not only easy, they are incentivized. Partly this is due to the publication system, which strongly favors positive findings with a good story. In addition to the incentives built into the publication system, individual researchers of course also play a fundamental role. What makes it especially tricky, however, is that the problem is also partly inherent to many fields, especially those that lack ‘proof by technology’. For example, if you claim you can make a better smartphone, you just build it. In fields like psychology this is rarely possible. The variables are often latent and not directly observable. The measurements are indirect, and it is often impossible to prove what they actually measure, if anything.
Bad incentives won’t disappear overnight. People tend to be resistant to change. While there are many who actively fight to improve science, it will be a long, if not never-ending, journey before these efforts bear fruit.
And now what…
Is this an overly cynical observation? Maybe. Either way, it is paramount that we be cautious. We should be skeptical of what we read. What is more, we should be very skeptical about what we do, about our own research.
This is perhaps the prime reason why I started my blog: I am wrong most of the time. But I want to learn and be slightly less wrong over time. We need each other for that, because it is just too easy to fool oneself.
Let’s be skeptical scientists.
Let’s become better scientists.
References
- Dwan, K., Gamble, C., Williamson, P. R., & Kirkham, J. J. (2013). Systematic review of the empirical evidence of study publication bias and outcome reporting bias—an updated review. PLoS ONE, 8(7).
- Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502-1505.
- Coursol, A., & Wagner, E. E. (1986). Effect of positive findings on submission and acceptance rates: A note on meta-analysis bias. Professional Psychology: Research and Practice, 17(2), 136-137.
- Kerr, S., Tolliver, J., & Petree, D. (1977). Manuscript characteristics which influence acceptance for management and social science journals. Academy of Management Journal, 20(1), 132-141.
- Head, M. L., Holman, L., Lanfear, R., Kahn, A. T., & Jennions, M. D. (2015). The extent and consequences of p-hacking in science. PLoS Biol, 13(3).
- Brown, A. W., Brown, M. M. B., & Allison, D. B. (2013). Belief beyond the evidence: using the proposed effect of breakfast on obesity to show 2 practices that distort scientific evidence. The American Journal of Clinical Nutrition, 98(5), 1298-1308.
- Van der Zee, T. & Nonsense, B. S. (2016). It is easy to cite a random paper as support for anything. Journal of Misleading Citations, 33(2), 483-475.
- http://compare-trials.org/
- Jones, C. W., Keil, L. G., Holland, W. C., Caughey, M. C., & Platts-Mills, T. F. (2015). Comparison of registered and published outcomes in randomized controlled trials: a systematic review. BMC medicine, 13(1), 1.
- Bakker, M., & Wicherts, J. M. (2011). The (mis)reporting of statistical results in psychology journals. Behavior Research Methods, 43(3), 666-678.
- Nuijten, M. B., Hartgerink, C. H., van Assen, M. A., Epskamp, S., & Wicherts, J. M. (2015). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior Research Methods, 1-22.
- Nonsense, B. S., & Van der Zee, T. (2015). The reported thirty-five percent is incorrect, it is approximately fifteen percent. The Journal of False Statistics, 33(2), 417-424.
- Ebrahim, S., Bance, S., Athale, A., Malachowski, C., & Ioannidis, J. P. (2015). Meta-analyses with industry involvement are massively published and report no caveats for antidepressants. Journal of Clinical Epidemiology.
- Boot, W. R., Simons, D. J., Stothart, C., & Stutts, C. (2013). The pervasive problem with placebos in psychology: Why active control groups are not sufficient to rule out placebo effects. Perspectives on Psychological Science, 8(4), 445-454.
- Collier, D., & Mahoney, J. (1996). Insights and pitfalls: Selection bias in qualitative research. World Politics, 49(1), 56-91.
- Golafshani, N. (2003). Understanding reliability and validity in qualitative research. The Qualitative Report, 8(4), 597-606.
- Sandelowski, M. (1986). The problem of rigor in qualitative research. Advances in Nursing Science, 8(3), 27-37.