Meta-analysis is a very handy technique for statistically combining the quantitative results of different studies. It has at least two attractive features. One, compared to a narrative review, it reduces the influence of the reviewer's subjective biases on the conclusions (although it does not eliminate them). Two, it might allow you to find statistically significant relationships that the individual studies couldn't detect because the effect is too small. (My personal rule of thumb is that if an effect can't reach statistical significance in a sample of 300, it's not interesting. But you might disagree with that, for example when you study mortality.)
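To make that second feature concrete, here's a minimal sketch of fixed-effect inverse-variance pooling - the standard way of combining effect estimates - with invented numbers: three small studies, each non-significant on its own, combine into a clearly significant pooled result.

```python
# Minimal fixed-effect inverse-variance pooling. All numbers invented:
# three same-direction effects, each with z < 1.96 (non-significant).
import numpy as np
from scipy import stats

estimates = np.array([0.20, 0.15, 0.25])
ses = np.array([0.12, 0.11, 0.13])
print(estimates / ses)  # individual z-scores: 1.67, 1.36, 1.92

w = 1.0 / ses**2                             # precision weights
pooled = np.sum(w * estimates) / np.sum(w)   # pooled effect ~0.19
pooled_se = np.sqrt(1.0 / np.sum(w))         # pooled SE ~0.069
z = pooled / pooled_se
print(z, 2 * stats.norm.sf(abs(z)))          # z ~2.8, p < 0.01
```

None of the three studies clears the conventional threshold on its own, but the pooled estimate does - that's the extra power.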
Traditionally, meta-analysis has been used on experimental data. Although there are always questions about which studies to include, which metric to use and the like, this is relatively straightforward because in an experiment you can randomize in order to isolate the effect of exactly one variable. (Yeah, I know there's also stuff like 3x3 designs; let's not complicate things here.)
In recent years, however, it has increasingly been used on observational (non-experimental) data. There are at least two problems with that. The first point I read somewhere (another great cite!), the second I came up with myself.
1. Meta-analysis does not account for scientific progress. For example, say there's a theory that suggests a causal relationship between X and Z. Researchers test for that time and again, and sure enough, every time they find a positive association between X and Z. Along comes a researcher who says: "I believe the relationship between X and Z is spurious. The real culprit is Y." How to test for that? Simple: Just take Z as your dependent variable and throw X and Y into the regression at the same time. And boom: The relationship between X and Z disappears. Soon many colleagues agree that the researcher was right. However, if you now did a meta-analysis of all the studies that test for a correlation between X and Z, including the new one, you'd still find a positive relationship between X and Z. But if our researcher is right, as his study suggests, that's highly misleading. (The first sketch after this list illustrates the mechanism.)
2. The second problem occurred to me when reading this meta-analysis of macro-level predictors of crime by Travis C. Pratt. Social scientists will often use publicly available macro-level data - unemployment rates, population densities, homicide rates and the like - for their analyses. "Publicly available" means that any Tom, Dick and Harry can use them - and they do. In the present context the problem with that is this: if you just take every study you can find about the relationship between unemployment and homicide rates and do a meta-analysis using those, you're going to have some observations in your calculation multiple times. In the case of Pratt's study, my guess would be that he has Texas in 1970 in there about fifty times, whereas other entities will be in there only once. His weighting procedures do not account for this. That appears to be a problem. (The second sketch after this list shows why it matters.)
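To illustrate point 1, here's a minimal simulation, assuming the simplest possible linear setup: Y drives both X and Z, so X and Z correlate even though X has no effect on Z. The variable names follow the text above; the coefficients and sample size are made up.

```python
# Spurious X-Z association induced by a common cause Y. Effect sizes
# and sample size are invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 10_000

Y = rng.normal(size=n)             # the real culprit
X = 0.7 * Y + rng.normal(size=n)   # X is driven by Y
Z = 0.5 * Y + rng.normal(size=n)   # Z is driven by Y, not by X

# Naive regression of Z on X alone: a clear "effect" of about 0.23
print(sm.OLS(Z, sm.add_constant(X)).fit().params)

# Throw X and Y into the regression at the same time:
# the X coefficient collapses to roughly zero
print(sm.OLS(Z, sm.add_constant(np.column_stack([X, Y]))).fit().params)
```

A meta-analysis that pools the bivariate X-Z estimates will happily average many versions of that first regression, new study or no new study.

And here's a toy version of point 2, again with invented numbers and assuming standard fixed-effect inverse-variance weighting (I don't know the details of Pratt's actual procedure). Five "studies" that all reanalyse the same dataset enter the pool as if they were independent:

```python
# Duplicated data masquerading as independent studies. Numbers invented.
import numpy as np

def pool(estimates, ses):
    """Fixed-effect inverse-variance pooled estimate and its SE."""
    w = 1.0 / np.asarray(ses) ** 2
    return (np.sum(w * np.asarray(estimates)) / np.sum(w),
            np.sqrt(1.0 / np.sum(w)))

independent = ([0.30], [0.10])           # one genuinely new dataset
duplicated = ([0.25] * 5, [0.10] * 5)    # Texas in 1970, five times over

print(pool(independent[0] + duplicated[0],
           independent[1] + duplicated[1]))  # SE ~0.041
```

There are only two distinct datasets here, but the pooled standard error (about 0.041, versus the 0.071 you'd get from two genuinely independent studies) behaves as if there were six.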
I'm not at all against using meta-analysis on observational data. I'm just saying that when doing such analyses - and reading them - you have to be very, very careful.
That's the end of the lecture.
2 comments:
I was looking at a meta-analysis of this type fairly recently and it nicely illustrated your point (1). All the positive studies had corrected for only a tiny number of confounding variables (age, gender, that's about it), while the negative studies all corrected for known causal factors that could mediate the association. A few studies even over-corrected, controlling for variables that seemed to be alternative proxies for the very thing the study was looking at (like controlling for family income when looking for an association between something and poor or rich areas, and then concluding there is no relationship with poverty) - so no wonder those studies were negative.
Concerning income, I could see a study that looks at both individual/family income and neighbourhood poverty using a multilevel model. In fact, that sounds rather interesting. But in medical contexts I'd probably always wonder what that proxied. Can't afford medication? Too many fast-food places nearby? I'm probably going to post something about that once I get round to reading that paper, so stay tuned.
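(For what it's worth, here's a rough sketch of the kind of multilevel model I mean, with individuals nested in neighbourhoods - all variable names, effect sizes and the "health" outcome are invented:)

```python
# Random-intercept model: individual income at level 1, neighbourhood
# poverty at level 2. All data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_nbhd, per_nbhd = 50, 40
nbhd = np.repeat(np.arange(n_nbhd), per_nbhd)

nbhd_poverty = rng.normal(size=n_nbhd)[nbhd]               # level-2 predictor
income = -0.5 * nbhd_poverty + rng.normal(size=nbhd.size)  # level-1 predictor
u = rng.normal(scale=0.5, size=n_nbhd)[nbhd]               # random intercepts
health = 0.4 * income - 0.3 * nbhd_poverty + u + rng.normal(size=nbhd.size)

df = pd.DataFrame({"health": health, "income": income,
                   "nbhd_poverty": nbhd_poverty, "nbhd": nbhd})

model = smf.mixedlm("health ~ income + nbhd_poverty", df, groups=df["nbhd"])
print(model.fit().summary())  # both income effects estimated in one model
```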
As for the paper you read, it seems the authors presented the results well enough so you could spot the problem. Hats off - that's what I meant by being careful.
I once had the wild idea that one could weight the results of meta-analyses (which I have no special expertise in) by the R-squared of the regressions. But I haven't really thought that through, and the point about overcontrol you mention seems to speak against it.
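(In case it's unclear what I mean, a purely hypothetical sketch - the numbers are invented and, as I said, I haven't thought this through:)

```python
# Weighting per-study effect estimates by the R-squared of each
# study's regression. Purely hypothetical; all numbers invented.
import numpy as np

effects = np.array([0.20, 0.35, 0.10])
r2 = np.array([0.15, 0.40, 0.05])

print(np.sum(r2 * effects) / np.sum(r2))  # ~0.29
```

The catch, per your point: an overcontrolled model can have a high R-squared precisely because of the redundant controls, so it would get extra weight.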