A few weeks ago I posted some interesting analysis from Steve Kass on the difference between death rates and survival rates when it comes to breast cancer. Last week FactCheck.org ran an excellent piece debunking an ad that ran before the healthcare vote and mangled breast cancer statistics in a fear-mongering anti-reform pitch. Their reasoning also turns on the difference between survival and mortality rates:
As for breast cancer survival rates, early screening certainly improves those. What’s less clear is whether screening actually improves survival, versus improving the statistics we use to measure it. We’ve written about this a few times before — including in our analysis of a previous misleading ad featuring Walsh.
Walsh’s claim that survival rates for breast cancer are notably higher in the U.S. than in the E.U. is backed up by a study published in the medical journal Lancet, which showed five-year relative survival rates of 83.9 percent in the U.S. and 73.1 percent for the European average. Five-year relative survival rates show the number of cancer patients who are still alive five years after diagnosis, compared with how many people would be expected to be alive in a healthy population. That means that early detection will always improve the five-year relative survival rate — more patients will be alive five years after diagnosis if their cancer is caught early in its course, regardless of whether they ultimately die from the disease. Breast cancer mortality rates — the number of people who died from breast cancer within a given period — are remarkably similar in the U.S. and the U.K., which recommends mammograms every three years starting at age 50.
We talked to a number of experts for our previous article who said that mortality rates were a more accurate statistic for comparing disease outcomes of different countries. The USPSTF’s conclusion is that the improvement in breast cancer outcomes from yearly mammograms starting at age 40 doesn’t outweigh the potential harm associated with the test, mostly harm from potential false positives. Mortality rate comparisons back up that assessment, and survival rate comparisons don’t necessarily challenge it.
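The lead-time effect the quoted passage describes can be sketched with a toy calculation. All of the numbers below are hypothetical, chosen only to illustrate the mechanism, and have nothing to do with the actual Lancet figures: if every patient dies at the same fixed time after the cancer's biological onset, then diagnosing earlier raises the five-year survival rate without changing mortality at all.

```python
# Toy model of lead-time bias (all numbers hypothetical): assume every
# patient dies exactly 7 years after the cancer's biological onset,
# regardless of when the cancer is detected.
DEATH_AFTER_ONSET = 7.0

def five_year_survival(detection_delay):
    """Fraction of patients alive 5 years after diagnosis, when
    diagnosis happens `detection_delay` years after biological onset."""
    years_from_diagnosis_to_death = DEATH_AFTER_ONSET - detection_delay
    return 1.0 if years_from_diagnosis_to_death > 5.0 else 0.0

# Screening catches the cancer 1 year after onset; without screening it
# is found clinically 3 years after onset.
screened = five_year_survival(detection_delay=1.0)
unscreened = five_year_survival(detection_delay=3.0)

print(f"5-year survival, screened:   {screened:.0%}")    # 100%
print(f"5-year survival, unscreened: {unscreened:.0%}")  # 0%
# Yet every patient dies at exactly the same time after onset:
# the mortality rate is identical in both groups.
```

In this extreme case the survival statistic moves from 0% to 100% purely because the diagnostic clock starts earlier, which is why the quoted experts prefer mortality rates for cross-country comparisons.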
Point taken: statistics don’t lie, but the way they are manipulated can be dangerously misleading.