An ounce of prevention is worth a pound of cure, according to Ben Franklin. But what if the prevention causes more harm than benefit? And what if the prevention doesn't prevent much of anything at all?

We've all heard the calls to get screened - mammograms, prostate exams, and more. And screening does save lives - a few minutes, some slight discomfort, and you can drastically reduce your chances of being surprised down the road by an undetected disease. But is screening 100 percent benefit, or are there associated harms? Reading articles in the lay press, you'd be hard-pressed to find much discussion of the pros and cons. After all, isn't it better to get the masses in for screening than to have them question the value of a mammogram and perhaps elect not to be screened?

Preventing an avoidable death = good

Screening is considered an overall positive, as treatment options and survival are generally related to the stage of the cancer. Detect it early, better prognosis. Catch it late, well, you don't have a lot of options. Screening may reduce morbidity (disease) and mortality (death). Who can argue that reducing morbidity and mortality (at least, preventable mortality) is a bad thing?

True or false?

According to the National Cancer Institute, "several potential harms must be considered against any potential benefit of screening for cancer."

  • Although most cancer screening tests are noninvasive or minimally invasive, some involve small risks of serious complications that may be immediate (e.g., perforation with colonoscopy) or delayed (e.g., potential carcinogenesis from radiation).
  • Another harm is the false-positive test result, which may lead to anxiety and unnecessary invasive diagnostic procedures. These invasive diagnostic procedures carry higher risks of serious complications.
  • A less familiar harm is overdiagnosis, i.e., the diagnosis of a condition that would not have become clinically significant had it not been detected by screening. This harm is becoming more common as screening tests become more sensitive at detecting tiny tumors.
  • Finally, a false-negative screening test may falsely reassure an individual with subsequent clinical signs or symptoms of cancer and thereby actually delay diagnosis and effective treatment.

False positives and negatives (and true positives and negatives) in the context of screening can be a bit confusing. False positives and negatives are just as they sound - a result that is false, or incorrect. A non-cancer example: You take a pregnancy test and it comes out positive, but you aren't pregnant. (If you are into statistics, this is a type I or alpha error.) Or, you take a pregnancy test and it comes out negative, but you are pregnant (a type II or beta error).
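
For readers who want the mapping spelled out, here is a minimal sketch in Python (the function name and labels are mine, purely for illustration) of how a test result and the true condition combine into the four possible outcomes:

    # Minimal sketch: how a test result and the true condition map onto the
    # four possible outcomes. Function name and labels are illustrative only.
    def classify(test_positive: bool, actually_has_condition: bool) -> str:
        if test_positive and actually_has_condition:
            return "true positive"
        if test_positive and not actually_has_condition:
            return "false positive (type I / alpha error)"
        if not test_positive and actually_has_condition:
            return "false negative (type II / beta error)"
        return "true negative"

    # The pregnancy-test examples above:
    print(classify(test_positive=True, actually_has_condition=False))   # false positive
    print(classify(test_positive=False, actually_has_condition=True))   # false negative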

But how often do false positives occur? Unfortunately, more often than you'd think. Take mammograms - current estimates suggest anywhere from 5 to 15 percent of mammograms result in a false positive. And there is variation even among doctors and age groups, according to the American Cancer Society. It is estimated that a woman who has yearly mammograms between ages 40 and 49 has about a 30 percent chance of having a false-positive mammogram at some point in that decade. And in one study described on the ACS site, the chance of false positives ranged from 15 percent to 90 percent, just among the radiologists in the study! Or take ovarian cancer - in the U.K., researchers found that one-stage tests (screening straight to scalpel) such as ultrasound performed appallingly. Among surgeries following an ultrasound, only one in 50 found a true case of cancer. That means 49 of 50 surgeries were unnecessary.
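
To see how a modest per-test false-positive rate compounds over a decade of yearly screening, here is a back-of-the-envelope sketch. It assumes each year's result is independent, and the per-test rates are illustrative rather than exact figures for any particular test:

    # Back-of-the-envelope: chance of at least one false positive over
    # repeated screening rounds, assuming each round is independent.
    def cumulative_false_positive(per_test_rate: float, rounds: int) -> float:
        # P(at least one) = 1 - P(no false positive in any round)
        return 1 - (1 - per_test_rate) ** rounds

    for rate in (0.035, 0.05, 0.10):  # illustrative per-mammogram rates
        print(f"{rate:.1%} per test, 10 yearly screens -> "
              f"{cumulative_false_positive(rate, 10):.0%} chance of a false positive")

    # The U.K. ovarian cancer example: 1 true cancer per 50 surgeries is a
    # positive predictive value of about 2 percent at the surgical stage.
    print(f"One-stage pathway PPV: {1 / 50:.0%}")

Even the low end of that 5-to-15-percent range compounds to roughly a 40 percent chance of at least one false positive over ten independent screens - the same ballpark as the ACS's 30 percent decade-level estimate.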

Once a woman receives a false-positive result, she may be more likely to come back routinely for screening. But maybe not - is the anxiety worth it? And if one result is falsely positive, how much stock do you put in the next positive result - which may turn out to be true?
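
The standard way to put a number on that "how much stock" question is Bayes' theorem: the weight of a positive result depends on how common the cancer is among those screened as well as on the test's error rates. A small sketch, with prevalence, sensitivity, and false-positive numbers that are purely hypothetical:

    # Illustrative only: how believable is a positive screen? It depends on
    # prevalence, sensitivity, and the false-positive rate of the test.
    def prob_disease_given_positive(prevalence: float,
                                    sensitivity: float,
                                    false_positive_rate: float) -> float:
        # Bayes' theorem: P(disease | positive test)
        true_pos = prevalence * sensitivity
        false_pos = (1 - prevalence) * false_positive_rate
        return true_pos / (true_pos + false_pos)

    # Hypothetical numbers: the cancer is present in 0.5% of those screened,
    # and the test catches 90% of real cases but flags 7% of healthy people.
    ppv = prob_disease_given_positive(0.005, 0.90, 0.07)
    print(f"Chance a positive result is a true positive: {ppv:.0%}")  # about 6%

With numbers like these, most positive results are false alarms - not because the test is bad, but because the disease is rare in the population being screened.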

There are a few journalists out there with the resources and support to forage in the gray area of early prevention, and who do so to the benefit of the public. For relatively recent examples, read Gina Kolata's Dec. 8 article in the NY Times, or Rob Stein's Oct. 8 article in the Washington Post. Then there are others, like Bill Hendrick's Oct. 8 story in the Atlanta Journal-Constitution, which failed to quote experts who weren't already on board with the procedure, failed to discuss the strength of the study and its results in context or the alternatives to this new and exciting scan, got some of the facts about cancer and polyps wrong, and even predicted HHS would cover the scan despite the evidence to the contrary.

When journalism succeeds, we all win

One of the best screening stories I've ever read is in the January 2009 issue of Wired, by Thomas Goetz. His article - "Why Early Detection Is The Best Way To Beat Cancer" - captures the screening trade-off in an engaging and informative way.

Goetz first introduces Brenda Rosenthal, a stage IIIc ovarian cancer survivor (and a breast cancer survivor 20 years earlier) for whom earlier detection could have been an overwhelming positive. "I could live 10 or 15 years more, but still won't have the quality of life I would've if we'd found the cancer early," Rosenthal says in the article. Goetz uses her experience to argue for early screening initiatives, citing examples of the failures of the "cure-driven approach" - billions of dollars are spent trying to save late-stage patients, but the overall cancer mortality rate has fallen only 8 percent since 1975.

We are so consumed by the quest to save the 566,000 [who will die of cancer this year] that we overlook the far more staggering statistic at the other side of the survival curve: more than a third of all Americans - some 120 million people - will be diagnosed with cancer sometime in their lives. ... Find and treat their cancers early and that 566,000 will shrink.

This is the potential of early detection, Goetz says - to use data instead of drugs, to reveal a cancer before it reveals itself, and to leave the miracles for the patients who really need them.

The next argument for early detection comes in the story of Don Listwin, creator of the Canary Foundation. Listwin had impetus - his mother died of ovarian cancer after twice being misdiagnosed with a bladder infection - and the money to support researchers asking the right questions: Why does survival drop off so steeply? What happens in these later-stage cancers that makes them so lethal? And why can't we find these lethal cancers earlier?

In the last 30 years, Goetz says, early detection has contributed to deaths from skin cancer falling 10 percent, and incidence and mortality rates from cervical cancer falling 67 percent. Yet the NCI spent just 8 percent of its 2007 budget, or less than $400 million, on detection and diagnosis research. The rest went to fund investigations into cures.

But even diagnosis and detection research has its problems. Goetz segues into a discussion of proteomics and blood protein biomarkers, and the riddle of early detection. Studies themselves should be subject to scrutiny: many proteomic trials are case/control studies, not the randomized controlled trial gold standard. And signals detected in the cases may not be related to cancer at all. Goetz cites the example of prolactin, a pituitary hormone thought to be a biomarker for ovarian cancer. A company released a commercial test for ovarian cancer (OvaSure), including prolactin in the armamentarium of biomarkers. But they were wrong - it's a biomarker for stress. (OvaSure was withdrawn from the market after FDA got involved.)

The riddle: finding a biomarker with a proven link to cancer, finding the marker in the blood, and detecting it accurately and consistently in the broad population.

Another example of dashed hopes: a stunning 2006 article in the NEJM made the case for widespread use of CT scans as a screening test for the early detection of lung cancer. Of 30,000 smokers scanned, 484 cases of potential cancer turned up, and 85 percent of those were true positives. And of the 375 patients who opted for surgery, 92 percent were still alive 10 years later. A follow-up review of CT scans showed that yes, the scans picked up a huge number of cancers. And of course the number of surgeries increased, thanks to the scans. The problem? There was no difference in mortality rates between the people who had CT scans and the people who did not. So scans successfully detected cancers, but didn't prevent mortality. That's a lot of unnecessary radiation. (The scans did pick up cancers, so they weren't necessarily false positives - these just weren't the types of tumors that were lethal. And the NCI is conducting a study to assess the true usefulness of CT scans for lung cancer, Goetz said, and early results could appear this year.)
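
One way to see how that happens - more cancers found, better-looking survival numbers, yet no change in deaths - is to run the arithmetic of overdiagnosis. The figures below are purely hypothetical, chosen only to show the mechanism:

    # Purely hypothetical numbers: screening that also detects indolent,
    # non-lethal tumors inflates case counts and apparent survival without
    # changing how many people die.
    def survival_rate(lethal_cases: int, nonlethal_cases_detected: int) -> float:
        # For illustration, assume every lethal case dies and every
        # non-lethal case survives.
        diagnosed = lethal_cases + nonlethal_cases_detected
        return nonlethal_cases_detected / diagnosed

    lethal = 100  # same number of lethal cancers with or without screening

    # Without screening, only the lethal cancers surface clinically.
    print(f"No screening:   survival {survival_rate(lethal, 0):.0%}, deaths {lethal}")

    # With screening, 300 indolent tumors are also found (and treated).
    print(f"With screening: survival {survival_rate(lethal, 300):.0%}, deaths {lethal}")

Survival among the diagnosed jumps from 0 to 75 percent, but the same 100 people die either way - which is why mortality, not survival among the screened, is the number that settles whether a screening test works.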

So, what to do? O'Brian et al. review the benefit of cancer-related decision aids in the Journal of Clinical Oncology (Jan. 5 e-pub ahead of print), suggesting that education and communication may be effective tools to help patients weigh the risks and benefits knowledgeably.

Goetz suggests early detection is probabilistic - more calculation than divination. Early detection, steeped as it is in probability and statistics, just makes these calculations more transparent than we're used to encountering, he says. Early detection will always be a numbers game.

For more, see: Cancer Screening Overview, National Cancer Institute (targeted toward health professionals, but has a nice overview of measures of risk and the hierarchy of evidence)