The Poverty of Parody

In a comment on this post at Statistical Modeling, commenter "Prison Rodeo" linked to a parody published in the BMJ. The full text is here; here is the abstract (format modified; the incoherent use of colons is original):

Objectives To determine whether parachutes are effective in preventing major trauma related to gravitational challenge. Design Systematic review of randomised controlled trials. Data sources: Medline, Web of Science, Embase, and the Cochrane Library databases; appropriate internet sites and citation lists. Study selection: Studies showing the effects of using a parachute during free fall. Main outcome measure Death or major trauma, defined as an injury severity score > 15. Results We were unable to identify any randomised controlled trials of parachute intervention. Conclusions As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.
Let's leave aside the question of whether that last sentence qualifies as tasteful.

Apparently, the authors mean to express their view that in medicine there is an over-emphasis on randomized controlled trials (RCTs). But what exactly is their position? RCTs are often, but not always needed? (And if so, what are the criteria?) RCTs are better than observational studies, but something can also be learned from the latter? RCTs are no better than observational studies? Worse? Useless? We don't learn this because in a parody, your own position always remains implicit.

The authors' actual arguments against whatever position they're criticizing remain implicit as well, which is another defining feature of the parody format. So let's take the article at face value for a moment: what's wrong with it? What's wrong with it is that we don't need RCTs to test the efficacy of parachute use, because we already know it works. The reason is not the observational evidence we have (although that's pretty suggestive), but rather that we can predict the effects of using a parachute from knowledge of physics so basic that even I have it. (1) The lower the speed with which you hit a solid object, such as the earth, the lower your risk of sustaining serious injuries. (2) Parachutes slow you down, due to that air resistance thingy.
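For the sceptical reader, the two physical facts above can even be put in back-of-the-envelope form using the standard terminal-velocity formula. Everything in the sketch below is an illustrative assumption (typical skydiver mass, ballpark drag coefficients and areas), not data from any study:

```python
import math

def terminal_velocity(mass_kg, drag_coeff, area_m2, g=9.81, rho=1.225):
    """Speed (m/s) at which air drag balances gravity in free fall:
    v = sqrt(2*m*g / (rho * Cd * A))."""
    return math.sqrt(2 * mass_kg * g / (rho * drag_coeff * area_m2))

# Belly-down free fall: assumed Cd ~ 1.0, frontal area ~ 0.7 m^2
v_freefall = terminal_velocity(80, 1.0, 0.7)

# Open round canopy: assumed Cd ~ 1.75, canopy area ~ 25 m^2
v_parachute = terminal_velocity(80, 1.75, 25)

print(round(v_freefall), round(v_parachute))  # roughly 43 vs 5 m/s
```

An order-of-magnitude reduction in impact speed, derived from physics alone; no trial enrolment required.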

We also do not need RCTs on whether it's bad for your health to slit your wrists, because we know that the human body needs blood to survive and that slitting your wrists causes lots of blood to leave your body. Do you need more examples? You don't.

Generally, we don't need RCTs to test treatments the outcomes of which we can predict with certainty. Nor have I ever heard anyone advocate this,* so it seems that the authors are (implicitly) attacking a strawman.

It would have been more helpful if the authors had spelled out their arguments rather than taking a cheap shot; that would have enabled their opponents to engage with them. (That's how science is supposed to work, right?) Perhaps they did so somewhere else, but even if they did, the parody adds exactly nothing. I like a good parody as much as the next man, and yes, I enjoyed "The Economics of Brushing Teeth", but the BMJ wasted some of the proverbial valuable journal space by publishing an article that does nothing at all to advance knowledge of the topics at hand. If academics enjoy publishing parodies of their pet peeves on their blogs, that's another matter.

*If you're aware of a counterexample, please point it out in the comments.
