31/05/2009

Previous Views about Human Psychology All Wrong, Groundbreaking New Blogpost Finds

Via Ben Goldacre, a short, ungated article about the accuracy of press releases by medical research centers. As summarized by Goldacre:
In this study, among the press releases covering human research, only 17% promoted the studies with the strongest designs, either randomised trials or meta-analyses. 40% were on the most limited studies: ones without a control group, small samples of less than 30 participants, studies looking at “surrogate primary outcomes” (a blood cholesterol level rather than something concrete like a heart attack, for example), and so on.

[...]

58% - more than half - of all press releases from this representative sample of academic institutions lacked the relevant cautions and caveats about the methods used, and the results reported.
That's a problem in the real world, where journalists don't necessarily go back to the original sources or e-mail the authors to obtain their data for additional analyses.

If you read a bunch of academic publications, you'd likely conclude that scientists are the first people in the world to acknowledge the limitations of their methods and to stress how tentative their conclusions are. So why doesn't this translate into press releases?

One thing to note is that press releases are often written not by the study authors but by PR types, who presumably have their own ideas about "limitations" and "tentative". But I do imagine the researchers read those releases, either because they have to greenlight them or at least after the fact. And in the latter case, you'd expect them to complain if their work had been misrepresented. So, why don't they? A few ideas:

1. Funding: Presumably, the stronger the claims in the press release, the more likely it is that the story finds its way into the media. And it's possible that media coverage raises the likelihood that the next project will be funded - or at least the academics think so. I wouldn't be surprised to find it noted in a grant application that the researchers' work has been covered in the New York Times, etc.

2. Academics are human, too! (pt. 1): Researchers like being in the media - after all, they're at least as vain as the next person; there's a reason academics are renowned for their vanity. And the stronger the findings look...

3. Academics are human, too! (pt. 2): Sure, they include caveats in their papers - because that's what reviewers demand. (In fact, not doing so is a rookie mistake.) But that doesn't mean they strongly believe in them. If you're looking for the person with the strongest faith in a paper's results, look for the author! As soon as they are speaking into journalists' ears, they throw off the shackles of peer review and start generalizing the living daylights out of their meagre findings. Some of them even mix metaphors!

If there's some truth to points 1 and 2, then a more basic problem is that there are no rewards for humility - quite the contrary. This, I think, is a general problem with human psychology: the more confident someone seems, the more likely people are to believe them. While that's a good rule of thumb for within-person comparisons, I've found the opposite to be true when comparing across people: those who are always certain they're right are the ones who are most often wrong.

One wonders why. Assuming one's assumptions are correct, of course, which they might not be, in which case one wouldn't have anything to wonder about.

3 comments:

Andrew Hickey said...

Meta-analysis is the 'strongest' design for research in much the same way that toilet paper is the 'strongest' material for building skyscrapers...

LemmusLemmus said...

Well, all other things being equal, I'd rather look at the results from a lot of cases than from a few. Of course, a good meta-analysis takes differences in the quality of its inputs into account. See earlier post and comments.
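
As a side note on the mechanics: here's a minimal sketch of the standard inverse-variance weighting used in a fixed-effect meta-analysis, with numbers made up purely for illustration. It weights studies by their precision (weighting by methodological quality is a separate, more judgment-laden step), but it shows why pooling isn't just a vote count.

```python
# Minimal sketch: fixed-effect meta-analysis with inverse-variance weighting.
# The effect sizes and standard errors below are invented for illustration.
effects = [0.30, 0.55, 0.10]   # estimated effect from each study
ses     = [0.05, 0.20, 0.15]   # standard errors; smaller = more precise study

weights = [1 / se ** 2 for se in ses]                 # precise studies count more
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5                 # precision adds up across studies

print(f"pooled effect = {pooled:.3f}, SE = {pooled_se:.3f}")
```

With these numbers, the two small, noisy studies barely move the estimate away from the large, precise one - which is the point.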

pj said...

Systematic review plus meta-analysis is probably the strongest design for assessing the state of the literature - well, ideally you'd want something like an unbiased sample of studies, such as FDA data.

The alternative is to rely on subjective impression or the impact made by the largest/most positive studies.