Why you should be a skeptical scientist

Don’t take my word for it, but being a scientist is about being a skeptic.

About not being satisfied with easy answers to hard problems.

About not believing something merely because it seems plausible…

… nor reading a scientific study and believing its conclusions because, again, they seem plausible.

“In some of my darker moments, I can persuade myself that all assertions in education:
(a) derive from no evidence whatsoever (adult learning theory),
(b) proceed despite contrary evidence (learning styles, self-assessment skills), or
(c) go far beyond what evidence exists.”
– Geoff Norman

Why you should be a skeptical scientist

The scientific literature is biased. Positive results are published widely, while negative and null results gather dust in file drawers (1, 2). This bias operates at many levels, from which papers are submitted to which papers are accepted (3, 4). It is one reason why p-hacking is (consciously or unconsciously) used to game the system (5). Furthermore, researchers often give a biased interpretation of their own results, use causal language when it is not warranted, and cite others’ results misleadingly (6). For example: close to 28% of citations are faulty or misleading, which typically goes undetected as most readers do not check the references (7).
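To see why selectively reporting whichever outcome happens to “work” inflates false positives, here is a minimal simulation sketch (illustrative only; the group sizes, number of outcomes, and use of a t-test are my assumptions, not from any study cited above). Every experiment below has no true effect, yet picking the best of five outcomes yields a “significant” result far more often than the nominal 5%:

```python
# Illustrative sketch: outcome-picking under the null inflates false positives.
# Assumed setup: two groups of 30, five independent outcomes, two-sided t-tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 2000
n_outcomes = 5   # outcomes measured per experiment
n = 30           # participants per group

single_hits = 0  # "significant" on the one pre-specified outcome
any_hits = 0     # "significant" on at least one of the five outcomes
for _ in range(n_experiments):
    # Both groups drawn from the same distribution: no real effect exists.
    pvals = [stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
             for _ in range(n_outcomes)]
    single_hits += pvals[0] < 0.05
    any_hits += min(pvals) < 0.05

rate_single = single_hits / n_experiments  # stays near the nominal 0.05
rate_any = any_hits / n_experiments        # near 1 - 0.95**5, about 0.23
print(f"pre-specified outcome: {rate_single:.2f}")
print(f"best of {n_outcomes} outcomes: {rate_any:.2f}")
```

With independent tests the expected “best of five” false-positive rate is 1 − 0.95⁵ ≈ 23%, which is why unreported outcome-switching is so misleading.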

This is certainly not all. Studies that have to adhere to a pre-registered protocol, such as clinical trials, often deviate from that protocol by not reporting outcomes or silently adding new ones (8). Such changes are not random: they typically favor reporting positive effects and hiding negative ones (9). This is not at all unique to clinical trials; published articles in general frequently include incorrectly reported statistics, with 35% containing substantial errors that directly affect the conclusions (10, 11, 12). Meta-analyses from authors with industry involvement are massively published yet fail to report caveats (13). Moreover, when the original studies are of low quality, a meta-analysis will not magically fix this (the ‘garbage in, garbage out’ principle). One cause of low-quality studies is the lack of control groups, or, what can be even more misleading, inappropriate control groups that incorrectly imply that placebo effects and other alternative explanations have been ruled out (14).

Note that these issues are certainly not restricted to quantitative research or (semi-)positivistic paradigms, but are just as relevant for qualitative research from a more naturalistic perspective (15, 16, 17).

Everybody lies

This list could go on for much longer, but the point has been made: everybody lies.

In the current system, lying and misleading are not only easy, they are incentivized. Partly this is due to the publication system, which strongly favors positive findings with a good story. In addition to these cultural aspects, individual researchers of course also play a fundamental role. What makes it especially tricky, however, is that it is also partly inherent to many fields, especially those that lack ‘proof by technology’. For example, if you claim you can make a better smartphone, you just build it. In fields like psychology this is rarely possible. The variables are often latent and not directly observable. The measurements are indirect, and it is often impossible to prove what they actually measure, if anything.

Bad incentives won’t disappear overnight. People are resistant to change. While there are many who actively fight to improve science, it will be a long, if not never-ending, journey.

And now what…

Is this an overly cynical observation? Maybe. Either way, it is paramount that we remain cautious. We should be skeptical of what we read. What is more, we should be very skeptical about what we do: about our own research.

This is perhaps the prime reason why I started this blog: I am wrong most of the time, but I want to learn and become slightly less wrong over time. We need each other for that, because it is just too easy to fool oneself.

In upcoming posts I will continue to focus on issues in science, but more importantly I will attempt to highlight better ways to do science and share practical recommendations.

Let’s be skeptical scientists.

Let’s become better scientists.

References

  1. Dwan, K., Gamble, C., Williamson, P. R., & Kirkham, J. J. (2013). Systematic review of the empirical evidence of study publication bias and outcome reporting bias—an updated review. PloS one, 8(7).
  2. Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502-1505.
  3. Coursol, A., & Wagner, E. E. (1986). Effect of positive findings on submission and acceptance rates: A note on meta-analysis bias. Professional Psychology: Research and Practice, 17(2), 136-137.
  4. Kerr, S., Tolliver, J., & Petree, D. (1977). Manuscript characteristics which influence acceptance for management and social science journals. Academy of Management Journal, 20(1), 132-141.
  5. Head, M. L., Holman, L., Lanfear, R., Kahn, A. T., & Jennions, M. D. (2015). The extent and consequences of p-hacking in science. PLoS Biol, 13(3).
  6. Brown, A. W., Brown, M. M. B., & Allison, D. B. (2013). Belief beyond the evidence: using the proposed effect of breakfast on obesity to show 2 practices that distort scientific evidence. The American journal of clinical nutrition, 98(5), 1298-1308.
  7. Van der Zee, T. & Nonsense, B. S. (2016). It is easy to cite a random paper as support for anything. Journal of Misleading Citations, 33(2), 483-475.
  8. http://compare-trials.org/
  9. Jones, C. W., Keil, L. G., Holland, W. C., Caughey, M. C., & Platts-Mills, T. F. (2015). Comparison of registered and published outcomes in randomized controlled trials: a systematic review. BMC medicine, 13(1), 1.
  10. Bakker, M., & Wicherts, J. M. (2011). The (mis) reporting of statistical results in psychology journals. Behavior Research Methods, 43(3), 666-678.
  11. Nuijten, M. B., Hartgerink, C. H., van Assen, M. A., Epskamp, S., & Wicherts, J. M. (2015). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior research methods, 1-22.
  12. Nonsense, B. S., & Van der Zee, T. (2015). The reported thirty-five percent is incorrect, it is approximately fifteen percent. The Journal of False Statistics, 33(2), 417-424.
  13. Ebrahim, S., Bance, S., Athale, A., Malachowski, C., & Ioannidis, J. P. (2015). Meta-analyses with industry involvement are massively published and report no caveats for antidepressants. Journal of clinical epidemiology.
  14. Boot, W. R., Simons, D. J., Stothart, C., & Stutts, C. (2013). The pervasive problem with placebos in psychology why active control groups are not sufficient to rule out placebo effects. Perspectives on Psychological Science, 8(4), 445-454.
  15. Collier, D., & Mahoney, J. (1996). Insights and pitfalls: Selection bias in qualitative research. World Politics, 49(01), 56-91.
  16. Golafshani, N. (2003). Understanding reliability and validity in qualitative research. The qualitative report, 8(4), 597-606.
  17. Sandelowski, M. (1986). The problem of rigor in qualitative research. Advances in nursing science, 8(3), 27-37.

Note: I published an earlier version of this post here, which is the blog of my research group.
