Those tables (click to enlarge) are from an important paper called "Are All Economic Facts Greatly Exaggerated? Theory Competition and Selectivity" (gated, via) by Chris Doucouliagos and T.D. Stanley that was recently published in the Journal of Economic Surveys (rather than the AER, where it probably belongs). The higher the beta-value, the more the publications in that literature are estimated to be biased by selection (what gets published and what doesn't). Hence, the higher the value, the more the literature as a whole exaggerates how homogeneous real-world phenomena are. Hypothetical example (mine, not theirs): Imagine you knew with certainty that, on average, a woman's hair colour has no effect on how attractive men find her. Further imagine that the economic literature consistently showed that "gentlemen prefer blondes". You would then expect this literature to receive a high beta-value in the table. (The computation of the value is somewhat complicated, but it rests on the idea that a literature is selective if it features many results that are only just significant.)
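The intuition behind that parenthetical can be shown with a toy simulation (my sketch, not the authors' actual estimator, and all numbers below are made up for illustration): suppose the true effect is zero, let each study's estimate carry sampling noise, and "publish" only the statistically significant results. The published literature then clusters at just-significant effect sizes well away from zero, exaggerating the effect.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.0    # assume no real effect (as in the blondes example)
N_STUDIES = 10_000   # hypothetical number of studies
SE = 0.1             # assumed standard error of each study's estimate

# Each study reports one noisy estimate of the (zero) true effect.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# Publication selection: only estimates with |t| > 1.96 get published.
published = [e for e in estimates if abs(e / SE) > 1.96]

print(f"mean of all estimates:          {statistics.mean(estimates):+.4f}")
print(f"mean |effect| among published:  {statistics.mean(abs(e) for e in published):.4f}")
print(f"share of studies published:     {len(published) / N_STUDIES:.1%}")
```

Under this selection rule only about 5% of studies survive, and the published ones all report effects of roughly two standard errors or more, even though the true effect is exactly zero — a truncated distribution of results of the kind the abstract describes.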
The authors go on to estimate what predicts beta. Here's the paper's abstract:
There is growing concern and mounting evidence of selectivity in empirical economics. Most empirical economic literatures have a truncated distribution of results. The aim of this paper is to explore the link between publication selectivity and theory contests. This link is confirmed through the analysis of 87 distinct empirical economics literatures, involving more than three and a half thousand separate empirical studies, using objective measures of both selectivity and contests. Our meta–meta-analysis shows that publication selection is widespread, but not universal. It distorts scientific inference with potentially adverse effects on policy making, but competition and debate between rival theories reduces this selectivity and thereby improves economic inference.
Besides being a very important contribution on the trustworthiness of different literatures, the paper addresses a question that I've been thinking about quite a bit: everybody knows that implausible results get double-checked more often than plausible ones (I once helped a friend who had found, using matching, that the results were exactly the opposite of what you should expect. Can you guess the reason?). So results that seem plausible get a leg up. Isn't that an unfair advantage for plausible results? Well, that depends on how good your prior theories are. If they're really good, then confirming results should get a leg up. But how do you know that they're good? Only by looking at empirical results. Etc., ad infinitum.
The authors run a regression to see what predicts the selectivity of literatures. It turns out that selectivity is higher when there is only one reigning theory: that is, when available theory makes a clear prediction about which way the results ought to go, the estimates are particularly untrustworthy. This suggests that results conforming to theory are given too much of a leg up, probably through a number of processes including, but not limited to, double-checking. This, it seems to me, is a very, very important result.