21/09/2009

The No Fucking Clue Constructivist View of Survey Answers

Eric Crampton points me to this:
Recent surveys that have tried to gauge Americans' opinions about capitalism reveal either a public terribly confused about it, or remarkably perceptive about differences between its theory and its American manifestation. In the dark days of December 2008, as General Motors careened toward bankruptcy, a poll by Rasmussen Reports found 70% of voters endorsing a "free market" over any economy steered by government. A subsequent poll, just months later, found only 53% endorsing "capitalism" over socialism, while a third, around the same time, found that two out of three Americans believe government and big business collude in ways that hurt consumers and investors. "The fact that a 'free-market economy' attracts substantially more support than 'capitalism' may suggest some skepticism about whether capitalism in the United States today relies on free markets," said pollster Scott Rasmussen, trying to square the results. Americans seem to believe in free markets; they're just not sure they're getting them.
That's not the only explanation.

First off, one shouldn't really put too much faith in poll results unless one has seen the questionnaire, and here we're comparing results from different questionnaires. But let's assume they are comparable and that the differing results don't reflect a true shift in opinion or differences in sampling. Then I still have another explanation: people don't have a fucking clue what they're talking about, and anything with "free" in it sounds good.

This points to a more basic problem. Almost everybody, when writing about surveys, seems to think of them as follows: there is an opinion stored in the respondent's head, and surveying him is the process of downloading a true copy of that opinion into the pollster's database. But that is best seen as one endpoint of a continuum on which answers can fall; the other endpoint is opinions constructed on the spot, as a reaction to the situation of being asked, and hence strongly influenced by how the questions are asked. We know such an influence exists because even the order in which questions are asked can make a huge difference. For example, if you want to produce only weak support for a law allowing pregnant women to freely choose abortion under any circumstances, take a question stem asking under which circumstances the respondent thinks a woman should have the right to abort, and start your list with some really gruesome scenario: say, the child is the result of a rape, will be severely disabled, and the birth will put the mother's life seriously at risk. Ask about "normal" circumstances only last. If you want strong support for women's choice, do it the other way around.

This problem is exacerbated because many people give an opinion even when they don't have one. We know this because respondents have offered opinions about fictional ethnic groups and claimed to know politicians who don't exist (8% of them, in the textbook example* I'm looking at).

This is the kind of stuff I would think about before coming up with theories about semantics as influenced by bailouts.

Added: Also via Eric Crampton, the taking-the-piss view of survey answers.

______
*Andreas Diekmann, 1996: Empirische Sozialforschung: Grundlagen, Methoden, Anwendungen. 2nd ed. Reinbek: Rowohlt, p. 386
