... until the associations started flowing. Sometimes I wish I could switch my brain to "basic functions only". Come to think of it, one could imagine other useful switches. Oh, I'm digressing.
Anyway, I have found that there is a negative association between the quality of the posts on this blog, as judged by me, and the number of comments. The correlation is about -.4. This finding, which I don't have an explanation for, suggests this post will attract a few comments.
If I were an economist, I'd call this finding "counterintuitive", land it in the JPE and get tenure.
It surprises me how people tout their findings as being "counterintuitive", as though this were unambiguously a good thing. It isn't. In the terminology of Bayesian reasoning, "counterintuitive" means that the finding contradicts the priors. Sure, if you have new information you should adjust your estimate accordingly, but the priors don't suddenly become meaningless: a hypothesis that starts out with a low prior needs correspondingly strong evidence before the posterior comes out in its favour. In other words, your counterintuitive findings are likely to be, well, wrong. If you didn't make a mistake and didn't deliberately cook the data, it might just be a statistical blip.
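If you like, here is a back-of-the-envelope version of that argument in Python. All the numbers are made up for illustration (a 5% prior for the counterintuitive hypothesis, 80% power, significance at the 5% level); the point is the shape of the calculation, not the particular values.

    # Rough sketch: P(hypothesis true | significant result), by Bayes' rule.
    # Every number here is hypothetical, chosen purely for illustration.

    def posterior_true(prior, power=0.8, alpha=0.05):
        # A significant result can arise in two ways:
        true_pos = power * prior          # real effect, correctly detected
        false_pos = alpha * (1 - prior)   # no effect, significant by chance
        return true_pos / (true_pos + false_pos)

    print(posterior_true(0.5))    # intuitive hypothesis: posterior ~ 0.94
    print(posterior_true(0.05))   # counterintuitive one: posterior ~ 0.46

Under these made-up numbers, a significant counterintuitive result is true with less than even odds, which is all I mean by "likely to be wrong".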
Indeed, I believe there was a study not so long ago which showed that findings published in top-flight econ journals were less likely to replicate than findings published in lower-tier journals. I've never read the study and I may be wrong here, but when I read about it, I immediately thought: "No surprise there. Top-flight journals like to publish counterintuitive results." In other words, the study's own finding struck me as rather intuitive. (I'd like to read the study. If you have a cite, or even a link, please leave a comment. Thank you.)
What's more, people sometimes laud their theoretical models for yielding counterintuitive predictions. If I developed a model that yielded counterintuitive predictions, my first reaction would be to think: "Darn, probably something wrong with the assumptions."
Then again, you don't want to get too counterintuitive. The psychologist David Lykken once surveyed colleagues, describing a fanciful psychoanalytic theory (I think it had something to do with frogs) together with varying levels of supporting evidence. He found that people wouldn't believe the theory no matter how strong the evidence was. (Presumably, his sample didn't include any psychoanalysts.) He commented that if your theory is just too stupid, people are not going to believe it, full stop. (Disclaimer: This description is based on his autobiographical sketch, which I read years ago. It doesn't seem to be online anymore, so I may have got some details wrong. Again, if you have a link, etc.)
Of course, it is understandable that people prefer results that are counterintuitive to some extent: those results are interesting, whereas a result showing that most people prefer higher pay to lower pay is not. Most people, including myself, have thought upon reading something: "Did we really need a study to show that?" There is, however, an old classic from an empirical research textbook: the author goes, "Did we really need a study to show that...", lists some findings, and then says: "Haha, fooled you! In each case the exact opposite was found." The deeper point is that in science, "everybody knows" just isn't good enough.
I wish I had some clever punchline to finish this post off, but I don't. So instead, a Bayesian joke:
Q: "Why did the Bayesian cross the road?"
A: "This question can't be answered; more information is needed."
Yeah, I know it's not for everybody.