27/02/2009

Academic Journals: More Cynicism

I earlier argued that journal editors should not be expected "to pursue The Truth but to publish articles that are as citable as possible, as citation counts are what the journal’s reputation hinges on."

As far as for-profit journals are concerned, however, the ultimate aim is not reputation, but - you've guessed it - profit. Now Ben Goldacre writes:

The British Medical Journal this week publishes a complex study which is quietly one of the most subversive pieces of research ever printed. It analyses every study ever done on the influenza vaccine [...] looking at whether funding source affected the quality of a study, the accuracy of its summary, and the eminence of the journal in which it was published.

[...]

We already know that industry funded studies are more likely to give a positive result for the sponsor's drug, and in this case too, government funded studies were less likely to have conclusions favouring the vaccines. We already know that poorer quality studies are more likely to produce positive results - for drugs, for homeopathy, for anything - and 70% of the studies they reviewed were of poor quality. And it has also already been shown, in various reviews, that industry funded studies are more likely to overstate their results in their conclusions.

But Tom Jefferson and colleagues looked, for the first time, at where studies are published. Academics measure the eminence of a journal, rightly or wrongly, by its “impact factor”: an indicator of how commonly, on average, research papers in that journal go on to be referred to, by other research papers elsewhere. The average journal impact factor for the 92 government funded studies was 3.74; for the 52 studies wholly or partly funded by industry, the average impact factor was 8.78. Studies funded by the pharmaceutical industry are massively more likely to get into the bigger, more respected journals.

That’s interesting: because there is no explanation for it. There was no difference in methodological rigour, or quality, between the government-funded research, and the industry-funded research. There was no difference in the size of the samples used in the studies. And there’s no difference in where people submit their articles: everybody wants to get into a big famous journal, and everybody tries their arm at it.

An unkind commentator, of course, might suggest one reason why industry trials are more successful with their submissions. Journals are businesses, run by huge international corporations, and they rely on advertising revenue from industry, but also on the phenomenal profits generated by selling glossy “reprints” of studies, and nicely presented translations, which drug reps around the world can then use. Anyone who thought this was an unkind suggestion might need to come up with an alternative explanation for the observed data.

Thankfully, this is not an issue with the journals that I read.
