[...] effects, which are all much smaller and have confidence intervals that include zero (i.e., no impact). You read this over and over again: confidence interval including zero equals no effect. Which it doesn't. I'm afraid we'll have to briefly revisit the basics for this one.

What does "significance" mean anyway - in a statistics context, that is? OK, you have a group of objects about which you want to know something. That's called the "universe". Inconveniently, you rarely have the opportunity to study the whole universe, say all living children in a certain country; hence, you take a sample - say, children who take an IQ test and about whom you know how many years of preschool they were exposed to. You then observe certain qualities in the sample. For example, you might be interested in whether there is an association between children's IQ and years of preschool. Let's say that each additional year of preschool is associated with two extra IQ points. But you don't really care about your sample. You want to know whether that's true in the universe. Enter significance tests.

Given your results, your best estimate about the universe (your "point estimate") is that one year of preschool is associated with two extra IQ points. If certain assumptions hold, significance tests tell you the likelihood that in the universe the association between years of preschool and IQ falls within a certain range, the midpoint of which is identical to the point estimate. For example, the test may tell you that there is a 95% likelihood that in the universe an extra year of preschool is associated with between 1 and 3 extra IQ points. In this case, 1 to 3 is the "confidence interval": you can be 95% confident that the association in the universe is within that interval. Alternatively, your confidence interval might range from -1 to 5, in which case it obviously *includes zero*. When the paper says that an association is "significant", it means that the confidence interval does not include zero.
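The logic above can be sketched in a few lines of Python. The data are simulated - the 2-points-per-year effect is built in by assumption, mirroring the running example, not taken from any real study:

```python
# A minimal sketch of where a confidence interval for an association
# comes from. Simulated data: the "2 IQ points per preschool year"
# effect is assumed, echoing the example in the text.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
years = rng.integers(0, 4, size=n)            # 0-3 years of preschool
iq = 100 + 2 * years + rng.normal(0, 10, n)   # true slope: 2 points/year

fit = stats.linregress(years, iq)             # point estimate of the slope
# Approximate 95% confidence interval: estimate +/- 1.96 standard errors
ci_low = fit.slope - 1.96 * fit.stderr
ci_high = fit.slope + 1.96 * fit.stderr
print(f"point estimate: {fit.slope:.2f}, 95% CI: ({ci_low:.2f}, {ci_high:.2f})")
```

With a large sample and modest noise, the interval comes out narrow and excludes zero; shrink the sample or add noise and the same true effect can yield an interval that straddles zero.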

But wait, wouldn't "significance" then seem to be continuous rather than binary? Why do people talk about it as though it were the latter? It's a good question; I'll say two things here:

1. The convention is the 95% confidence interval; if the paper only says "significant" or "not significant", the 95% confidence level is hence implied. (Otherwise it's misleading.)

2. That's pretty much the point I'm trying to make: significance tests tell you the likelihood with which the association in the universe is zero or on the opposite side of zero (in the example: with which there is a *negative* association between years of preschool and IQ). If your confidence interval ranged from -1 to 5 and you had to place a bet on the association between preschool and IQ in the universe, your best bet would still be 2 points per year of preschool, not zero ("no impact"). That's despite the test telling you that the chance of the association being positive is less than 95%.
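To put a number on that bet: under the usual normal approximation, you can recover the standard error from a reported interval and compute the chance that the association in the universe is positive. A sketch, using the hypothetical -1 to 5 interval from the example (these are illustrative numbers, not real study results):

```python
# Back out the probability of a positive association from a reported
# point estimate and 95% CI (normal approximation; the numbers are the
# hypothetical example from the text, not real data).
from scipy.stats import norm

estimate = 2.0                 # point estimate: 2 IQ points per year
ci_low, ci_high = -1.0, 5.0    # 95% confidence interval, includes zero

se = (ci_high - ci_low) / (2 * 1.96)   # standard error implied by the CI
p_positive = norm.cdf(estimate / se)   # chance the true association > 0
print(f"P(association > 0) = {p_positive:.0%}")  # roughly 90%
```

So "not significant" here still means the positive effect is the heavy favourite - which is exactly why treating it as "no impact" misleads.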

I'm not writing all of this out of a love of maths, but because this is where things get important. Suppose the study was designed so that we have good reason to believe the observed association is causal (significance tests tell you nothing at all about this). If your best point estimate is 2 points per year of preschool and the likelihood of the association being positive is 94% (i.e., not significant), then, given the cost of preschool, would you want your taxes invested in more preschool? Would your answer change if the likelihood of the effect being positive were only 64%? What about 6%, with a 94% chance of it being negative or zero?

According to the prevailing logic, those numbers shouldn't make a difference: It's all "no impact". And that nonsense is spread not by politicians or journalists, but by the people who should know best.
