Long post

Fast food. Questionnaires. Publication strategies. They're all connected. Trust me.

Paul Gowder writes:

All too often, I, like all too many Americans, will walk into a fast food joint. As is well known, the fast food industry has, for a good number of years now, been pushing combination meals -- a single order will purchase a main course (classically, burger), a side order (fries) and a drink (coke). As is also well known (pdf), people respond to cues like this in judging how much to consume -- if something is packaged as a meal, we process it as a meal. (In case that link doesn't work, it's to Brian Wansink & Koert van Ittersum, "Portion Size Me: Downsizing Our Consumption Norms" Journal of the American Dietetic Association 107:1103-1106 (2007).)

All this stuff is old news. But, I wouldn't expect myself to fall for it (which is the point of this post: I did). [...] I flatter myself by thinking I'm somewhat intelligent. And I'm well aware of the above research.

Yet every few weeks until today, I'd walk into a Taco Bell and order one of those combo meals. This is so even though I often don't particularly want one of the items on the combo -- I'm usually fairly indifferent between, say, having a soda and just drinking water. Since water's free and soda isn't, rationally, I should just drink water every time. So why do I order the combo meal? Well, it's in a combo meal -- presumably, it's cheaper than buying the items separately. I'm saving money! [...] Or, at least, this is the rationalization my brain would supply, on a level just below consciousness except on those rare, fleeting, and unproductive moments when I'd bother to think before ordering.


I fell for this kind of stupidity even though I know the research. Do you?

I really think this bears emphasis. I know this research really well, and I have known it for over a decade. If they can get me, they can get anyone.

My first reaction to this was that Dr. Gowder seems to be an unusually irrational person. I must have been to McDonald's at least a hundred times in my life, but I don't think I've ever ordered a combo meal - for the simple reason that I don't like at least one of the items included. (I'm not much one for fries, and they always seem to include fries.) And this isn't coming from someone who never makes any stupid decisions. Oh, nonononono!

But what he's really saying is that he went with the standard. That's a nice parallel to a finding from research on question effects.

As you may know, the problem with using questionnaires is that the respondents have to be willing and able to give valid answers. I'm certainly not going to exhaust this topic in this blog post - that would take a book - but I highly recommend this overview article by Norbert Schwarz on fairly recent findings in the area (low-quality pdf). My favourite example from the article: when people were asked how much TV they watch daily and the answer alternatives ranged from "up to half an hour" to "more than two and a half hours", 16.2% reported more than two and a half hours of TV consumption - but when the alternatives ranged from "up to two and a half hours" to "more than four and a half hours", 37.5% did. It seems that people tend to adjust their estimates towards the standard, and the middle category is perceived as the standard.

In short, how you ask a question can have a huge influence on the answers you get. This is first-semester knowledge. Given this, it is hard to see why social science journals do not require their authors to publish the questionnaires on which their articles are based. But they don't. In the olden days this may have been justified by "valuable journal space", but nowadays we have the Internet. Authors could be required to include a footnote along the lines of "The questionnaire this analysis is based on can be found at www.superuni.edu/interestingstudy/questionnaire.pdf", or there could be a requirement to let the journal publish the questionnaire as an online supplement.

(Most articles include verbal descriptions of the questions. But even the order in which you ask the questions can influence the answers. My favourite example in this respect: if you ask people how satisfied they are with their recent dating history and then ask them about their life satisfaction, the correlation between the two answers is in the region of .6. If you ask the questions in the other order, the correlation is in the region of .0.)

While I'm at it: data! Authors should be required to make their data public. This wouldn't help against actual fraud, but it would make it easier for other researchers to detect simple errors (trust me, there are lots of errors to make in data analysis) or to try out a different, possibly better, analysis strategy. You might think this is a non-problem, since you can just email the researchers for their data and they will send it to you. To which I say: Hahahahaha. There was an article in American Psychologist not so long ago, written by a group of researchers who emailed a large group of colleagues and asked for their data for reanalyses. If I remember correctly, they got about 50% of the datasets they had asked for. The other colleagues either a) didn't react at all, b) for months kept sending emails to the effect that they would soon send the dataset, or c) claimed there had been some fatal computer crash and there was no backup.

I was going to write that I simply don't understand why journal editors don't adopt the measures I proposed above, as it's not really hard to come up with these ideas. But I may have the answer. It is clear why authors want their work to appear unassailable: they want it to look good and get cited a lot. The editors of the journals they get published in have basically the same interest.
