Most Effects Are Smaller Than We Think


I saw a patient the other week who complained of intolerable hot flashes for the last several months.  They were happening day and night, often awakening her from sleep, and after a series of questions, I realized they were significantly interfering with the quality of her life.  So I suggested she begin hormone-replacement therapy.

“What about the increased risk of breast cancer?” she asked, alarmed.

“Anyone in your family who’s had it?” I asked.

She shook her head.

“Then your baseline risk is average,” I told her.  “It’s true that studies have shown an increased risk of breast cancer in women who take hormone replacement, but that increase is smaller than people commonly think.”

“I’m nervous…” she said.

I told her I understood her concern.  Then I explained how I think about risk and benefit when trying to make a decision to start a therapy.


Absolute risk represents one’s baseline risk of something bad happening, usually expressed as the risk over a year or a lifetime.  For example, the average lifetime risk of developing breast cancer for a woman in the U.S. is 12.7% (several things can make that risk higher, of course:  a positive family history of breast cancer in a first-degree relative, for example, or the presence of a BRCA mutation).  But in the general population, most women won’t get breast cancer.  In fact, 87.3% of them won’t.

Relative risk, in contrast, represents the percentage increase or decrease over and above one’s baseline risk that one experiences as a result of belonging to one population compared to another (being a teenager compared to being an octogenarian) or as a result of one intervention compared to another (taking hormone replacement or not).  A recent study, for example, reported that women using combination hormone replacement (an estrogen and progestin) for 15 years or more had an 83% increased risk of developing breast cancer (though incidentally the same study showed estrogen-only replacement conferred only a 19% increased risk).

This seems at first glance to flip-flop the risk.  Rather than a woman having an 87.3% lifetime chance of not getting breast cancer, it now appears that if she uses combination hormone replacement therapy for more than 15 years, she’ll have an 83% chance of getting breast cancer.

But if this is what you concluded, you’d be wrong.  Why?  We have to remember the 83% risk is a relative risk, meaning we can only interpret its significance in terms of its effect on our absolute risk.

Because the average absolute lifetime risk of an American woman developing breast cancer is 12.7%, if she took combination hormone replacement for more than 15 years, her new absolute risk wouldn’t be 83%.  It would be her baseline risk plus an increase of 12.7% x 83% = 10.5 percentage points, which added to the baseline absolute risk of 12.7% yields 23.2%.

Now, a lifetime risk of getting breast cancer of 23.2% isn’t insignificant.  But it’s far less than the 83% relative risk implies.
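The conversion from relative to absolute risk above can be sketched in a few lines of Python (the helper function is just illustrative; the figures are the ones from this example):

```python
def adjusted_absolute_risk(baseline, relative_change):
    """Convert a relative risk change into a new absolute risk.

    baseline: baseline absolute risk (e.g. 0.127 for 12.7%)
    relative_change: relative increase (+) or decrease (-),
                     e.g. +0.83 for an 83% increased relative risk
    """
    return baseline * (1 + relative_change)

# Lifetime breast cancer risk with 15+ years of combination
# hormone replacement: 12.7% baseline, 83% relative increase
risk = adjusted_absolute_risk(0.127, 0.83)
print(f"{risk:.1%}")  # prints 23.2%, not 83%
```

The point the arithmetic makes is that a relative risk multiplies the baseline; it never replaces it.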

The best way to decide whether or not to take the hormone replacement, I told my patient, was by weighing how miserable the hot flashes were making her against her fear of a 23.2% lifetime absolute risk of getting breast cancer.  And that, I told her, was a personal judgment.  In response, she told me I’d actually made the decision harder for her: given the severity of her symptoms, hormone therapy hadn’t been tempting at a lifetime absolute risk of breast cancer of 83%, but it was at 23.2%.


Unfortunately, just as the increases in absolute risk for most interventions turn out to be smaller than most studies imply, so do the decreases in absolute risk they offer.  Take the example of aspirin.

Studies show that in patients who’ve had a heart attack, taking one aspirin a day reduces the relative risk of having another heart attack over nearly a 10-year period by almost 50%.  In patients over the age of 80, for example, whose absolute risk of a second heart attack can be as high as 12% in just the first six months following the first, this amounts to a recalculated absolute risk of 6%.  Arguably still significant, but not nearly as much as the 50% relative risk reduction commonly bandied about in medical circles.

On the other hand, in men without known coronary disease (importantly, the same hasn’t been demonstrated in women), studies suggest taking an aspirin a day confers a relative risk reduction of 32%.  Not quite 50%, but not too bad.  Again, though, because this 32% is a relative risk reduction, we can only work out the absolute risk reduction it represents by first knowing the baseline absolute risk of men without known coronary disease.  That population, it turns out (depending, again, on their risk factors), may have as low as a 2% 10-year risk of having a heart attack.  Which means a 32% relative risk reduction translates into an absolute risk reduction of 2% x 32% = 0.6%, which subtracted from the baseline absolute risk yields a recalculated absolute risk of 1.4%.  When we consider also that aspirin use increases the absolute risk of peptic ulcers by about 0.5% per year (5% over ten years), the benefit of using aspirin to prevent heart attacks in low-risk individuals (dropping the absolute risk from 2% to 1.4%) seems outweighed by the risk of peptic ulcers (at least 5% over the same period, or more, depending on one’s baseline absolute risk) such aspirin use poses.
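The aspirin trade-off works out the same way; here is the arithmetic as a quick Python sketch (the figures are the rough ones quoted above, used purely for illustration):

```python
# 10-year figures from the aspirin example (illustrative only)
baseline = 0.02   # 2% 10-year heart-attack risk, low-risk men
rrr = 0.32        # 32% relative risk reduction from daily aspirin

new_risk = baseline * (1 - rrr)          # about 1.4% absolute risk
absolute_benefit = baseline - new_risk   # about 0.6 percentage points

ulcer_risk = 0.005 * 10  # ~0.5% per year of peptic ulcer -> ~5% over ten years

print(f"heart-attack risk: {baseline:.1%} -> {new_risk:.1%}")
print(f"absolute benefit:  {absolute_benefit:.1%}")
print(f"added ulcer risk:  {ulcer_risk:.1%}")
```

Laid out side by side, the roughly 0.6-point benefit against a roughly 5-point harm makes the conclusion in the paragraph above easy to see.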

An interesting question arises:  why do most studies in the medical literature tend to report both risk and benefit statistics in terms of relative risk?  I don’t think it’s as a result of a conscious attempt to make risks and benefits seem greater than they are (in most cases, at least).  I do suspect there’s an unconscious bias at work, however.

We all want to have interventions available to us that work and work well.  If you scan the medical literature with a full knowledge of the difference between relative and absolute risk, however, it becomes clear that the true magnitude of impact most interventions have is actually quite modest.

This isn’t to say medicines don’t work, that we shouldn’t use them, or that their effects aren’t often wondrous.  But in attempting to modify risk, we may all be guilty (researchers, doctors, and patients alike) of believing we’re altering our destinies to a greater degree than we actually are.  I sometimes find myself surprised by how significant some researchers consider what strike me as small changes in absolute risk reduction, and I have to remind myself that what counts as a significant reduction in risk isn’t set in stone by a committee but decided by each individual according to his or her life circumstances and proclivities.

My patient, for example, was being made so miserable by her hot flashes that, after a prolonged discussion, she decided to try hormone replacement therapy for six months.  I suggested that if it worked we could then taper the dose gradually and perhaps stave off her symptoms’ return, exposing her to only a small increase in her absolute lifetime risk of breast cancer (covering women with hormone replacement therapy in the immediate post-menopausal period often leaves them free of hot flashes thereafter).  I told her the decision was hers, as she was the one experiencing life with frequent hot flashes.  I just wanted to make sure she understood the risks correctly.  Almost nothing good in medicine, or in life for that matter, comes without counterbalancing risks that give us pause.  Which is why it takes courage to embark on almost any course of treatment: courage to accept that even when we think everything through and make our choices as carefully as we can, things still sometimes go wrong.

Next Week: When A Beloved Pet Dies