SCIENCE WITHOUT SENSE

The Risky Business of Public Health Research

by
Steven Milloy

Copyright © 1995 by Steven J. Milloy. All rights reserved. First edition. Published by the Cato Institute, 1000 Massachusetts Avenue, N.W., Washington, D.C. 20002. Library of Congress Catalog Number: 95-72177. International Standard Book Number: 0-9647463-2-8.


Chapter 3

The Significance of Significance


Assuming you've panned for a relative risk worthy of pursuit, you've got to ask yourself a question ... "Was I just lucky?" All your critics will ask that question, and, in this case, being lucky is not good.

You need to "prove" you weren't just lucky, that your relative risk was not a mere random occurrence, that it's real, that it's statistically significant. (Of course, if you can't do that, you can always try to ignore it. You might get away with it. In part, this will depend on how good a job you did picking a risk to target.)

Statistical significance is an expression of how sure you are that your results did not occur by luck or chance, of how sure you are that they're not a fluke. Traditionally, conventionally and historically, a relative risk is statistically significant when we are 95 percent sure that it did not occur by chance.

Now, although the 95 percent level is not a law of nature and is not etched in stone, most scientists would be pretty embarrassed to label their results statistically significant at anything lower than 95 percent. In fact, many won't even consider publishing results that aren't statistically significant at a 95 percent level. You, however, cannot afford the luxury of being so finicky.

So, how do you perform the necessary statistical mumbo-jumbo to figure out whether your results are statistically significant at 95 percent? My advice is to find some qualified statisticians and leave the divining to them. And here's what you do if your statisticians say your "p-value" is greater than 0.05 or your "95 percent confidence interval" includes a relative risk of 1.0 or less. Either means your relative risk is not statistically significant at 95 percent.

BORING TECHNICAL STUFF: The p-value indicates the probability that your statistical association (relative risk) is a fluke. The smaller the number, the better. A p-value of 0.05 or less means there is only a 5 percent or smaller chance that your relative risk is a fluke. You are halfway to achieving statistical significance at 95 percent.
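For the curious, here is a minimal sketch of the arithmetic in Python. The cohort counts are made up purely for illustration, and the log-scale Wald test shown is just one common way a statistician might compute a p-value for a relative risk; it is not the only way, and it is not anything prescribed in this book.

    import math

    def normal_cdf(z):
        # Standard normal cumulative distribution via the error function.
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    # Hypothetical cohort: exposed and unexposed groups with case counts.
    exposed_cases, exposed_total = 60, 1000
    unexposed_cases, unexposed_total = 50, 1000

    relative_risk = (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

    # Standard error of log(relative risk) under the usual Wald approximation.
    se_log_rr = math.sqrt(1/exposed_cases - 1/exposed_total
                          + 1/unexposed_cases - 1/unexposed_total)

    z = abs(math.log(relative_risk)) / se_log_rr
    p_value = 2 * (1 - normal_cdf(z))  # two-sided test against a relative risk of 1.0

    print(f"relative risk = {relative_risk:.2f}, p-value = {p_value:.2f}")
    # With these made-up counts: relative risk = 1.20, p-value = 0.33.
    # Well above 0.05 -- "very bad news."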

Unfortunately, the p-value is immutable. Say your p-value is greater than 0.05 (very bad news). You're going to be stuck with it unless you (1) "adjust" your raw data or (2) ignore the p-value altogether. But if you play with the raw data, you'll probably have to recalculate your relative risk as well. In addition to a "good" p-value, you could also wind up with a higher (better) relative risk.

As for ignoring the p-value, most people don't even know it exists. Those who do know may think it's just another indecipherable statistical hieroglyphic. So if most people don't know or care, why worry about it?

The confidence interval represents the range within which the "true" relative risk is likely to fall. Let's say your statistician calculated a relative risk of 1.2 (a very weak association) and a 95 percent confidence interval of 1.05 to 1.35. Even though we calculated a relative risk of 1.2, we are almost 100 percent certain it does not equal the exact "true" risk; 1.2 is sort of a "best guess" of the true relative risk.

Even though we are nearly positive that 1.2 is not the precise and correct relative risk, we are 95 percent sure the "true" risk falls somewhere between the interval 1.05 to 1.35. All's well as long as the lower end of the risk range is above 1.0.
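Here is a minimal sketch, again in Python, of how a 95 percent confidence interval for a relative risk is typically computed on the log scale. The point estimate and standard error below are made-up inputs for illustration and are not meant to reproduce the 1.05-to-1.35 interval quoted above exactly.

    import math

    relative_risk = 1.2   # hypothetical point estimate
    se_log_rr = 0.06      # hypothetical standard error of log(relative risk)

    z_95 = 1.96           # two-sided 95 percent critical value
    lower = math.exp(math.log(relative_risk) - z_95 * se_log_rr)
    upper = math.exp(math.log(relative_risk) + z_95 * se_log_rr)

    print(f"95% confidence interval: {lower:.2f} to {upper:.2f}")
    # Roughly 1.07 to 1.35: the lower end stays above 1.0, so this
    # relative risk would be statistically significant at 95 percent.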

If the lower end of the range is above 1.0, then you can be at least 95 percent sure your relative risk is above 1.0. But if the lower end of the range is below 1.0, this means there is less than 95 percent confidence that your risk is real. This is bad. This means your relative risk is not statistically significant at a 95 percent level. What can you do about it?

If the lower end of your range is hovering just below 1.0, the simple solution is to narrow your confidence interval so that the lower end creeps above 1.0. How do you do that? You simply calculate a 90 percent confidence interval instead of the customary 95 percent confidence interval. Logically, less confidence means a narrower confidence interval while greater confidence means a wider confidence interval, and, after all, who needs greater confidence?
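A minimal sketch of that trick, with made-up numbers: shrinking the critical value from 1.96 (95 percent) to 1.645 (90 percent) narrows the interval and can pull the lower end above 1.0.

    import math

    relative_risk = 1.15  # hypothetical point estimate
    se_log_rr = 0.08      # hypothetical standard error of log(relative risk)

    for label, z in [("95 percent", 1.96), ("90 percent", 1.645)]:
        lower = math.exp(math.log(relative_risk) - z * se_log_rr)
        upper = math.exp(math.log(relative_risk) + z * se_log_rr)
        print(f"{label} confidence interval: {lower:.2f} to {upper:.2f}")

    # 95 percent: about 0.98 to 1.35 -- the lower end dips below 1.0, not significant.
    # 90 percent: about 1.01 to 1.31 -- the lower end creeps above 1.0.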

So you've lowered your standard for statistical significance from 95 percent to 90 percent. But you do have a defense in this case. Just say "Hey, 90, 95 percent, what's the difference? Close enough for government work."

You can try to "fix" your p-value or you can ignore it. You can adjust your confidence interval from 95 to 90 percent. Either way, it's sneaky. But remember: In risk assessment, it's not how you play the game; it's whether you win or lose.





