13/04/2011

Unpacking Karl Smith on Experiments and Regressions (An Introduction to Causality and How to Measure It), Part III

Part I, Part II

Internal Validity (Part C)
Smith writes: "Notably, double-blind experiments are an attempt in medicine to go beyond simple randomness because simple randomness not enough." If I understand Smith correctly, he brings up a very interesting problem here: the independent variable you're actually interested in and the treatment you are in fact administering can be two different things, which can bring about new confounding issues. I've never heard a general term for this problem, so let's call it treatment confounding. The classic example is from medicine, as mentioned by Smith. Researchers are actually interested in the consequences of introducing a medical agent into the body. But if subjects in the treatment group are given a medicine while subjects in the control group are given nothing, the two groups differ on more than the introduction of the agent into the body: they now also differ on the expectation of getting help, the act of taking a medicine, and so on. Using placebos means matching treatment and control groups on these aspects. Making the administering person blind means matching the groups on that person's expectations. Randomized double-blind means that treatment and control groups differ on nothing but the introduction of the agent into the body (the independent variable of interest).
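To make the logic concrete, here's a minimal simulation sketch. The outcome model and all effect sizes are made-up assumptions; the point is just that without a placebo the estimated "treatment effect" bundles the agent effect with the expectation effect, while the placebo-controlled comparison isolates the agent effect.

```python
# A toy model of the medical example above. All numbers are made-up
# assumptions: the outcome depends on the agent itself AND on the
# subject's expectation of getting help.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # subjects per arm

AGENT_EFFECT = 2.0        # hypothetical effect of the agent in the body
EXPECTATION_EFFECT = 1.5  # hypothetical effect of expecting help

def outcome(gets_agent, expects_help):
    noise = rng.normal(0.0, 1.0, n)
    return AGENT_EFFECT * gets_agent + EXPECTATION_EFFECT * expects_help + noise

# Design 1: treatment gets the medicine, control gets nothing.
# The arms differ on the agent AND on the expectation of help.
print("no placebo:  ",
      outcome(gets_agent=1, expects_help=1).mean()
      - outcome(gets_agent=0, expects_help=0).mean())   # ~3.5, not 2.0

# Design 2: control gets a placebo, so both arms expect help.
# The arms now differ on nothing but the agent.
print("with placebo:",
      outcome(gets_agent=1, expects_help=1).mean()
      - outcome(gets_agent=0, expects_help=1).mean())   # ~2.0
```

The numbers themselves don't matter; what matters is that design 1 answers a different question than the one about the agent.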

The treatment confounding problem is not confined to medicine. For example, you might do a psychological experiment on aggression in the lab. You're interested in the effect of aggressive affect on aggressive behaviour. To instil aggressive affect in the treatment group, you make them write essays on "a situation in the past that made you feel really aggressive"; the control group writes essays about something else. You measure aggressive behaviour afterwards. Did you really measure the effect of aggressive affect? Perhaps what you actually measured was the effect of signalling that it is OK to express aggression (an experimenter effect), or of making aggressive scripts more accessible.

So, there's a potential problem to keep in mind. But our topic is comparing lab experiments and regressions with respect to the treatment confounding problem. Where do you think the problem is bigger, in multivariate regressions on observational data or in randomized lab experiments? That's not such a tough one, is it?
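One way to see why observational data makes it worse, again as a sketch with made-up numbers: in observational data nobody administers a treatment at all, so "the treatment" as measured travels together with everything correlated with it, here an unmeasured health-consciousness trait. A regression on what you can observe then estimates the bundle, not the agent.

```python
# A toy observational dataset, again with made-up numbers. "Treatment"
# (taking the medicine) is self-selected and travels together with an
# unmeasured trait (health consciousness) that also affects the outcome.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

health_conscious = rng.normal(0.0, 1.0, n)               # unobserved
takes_medicine = (health_conscious + rng.normal(0.0, 1.0, n) > 0).astype(float)
outcome = (2.0 * takes_medicine                           # true agent effect
           + 1.0 * health_conscious                       # bundled-in trait
           + rng.normal(0.0, 1.0, n))

# Regressing outcome on takes_medicine alone (OLS slope via polyfit):
slope = np.polyfit(takes_medicine, outcome, 1)[0]
print(slope)  # ~3.1, well above the true 2.0: the estimate measures the
              # medicine PLUS everything that goes with taking it.
```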
