Before the fisking, a digression.
May I beg for a ban on titles for articles about fat issues featuring faux-clever wordplay? I’m talking about titles and subtitles like “Weight of Evidence” or “fat haters’ arguments flabby” or Cathy Young’s latest entry, “A Fistful of Lard.” It’s not clever, folks. It’s an embarrassing negation of cleverness; it’s saying “I like the idea of giving my article a clever title, but I’m fresh out of clever at the moment, so here’s some unbelievably obvious crap instead.”
Ahem. Pardon my digression.
Cathy Young’s latest Boston Globe column takes a critical look at a recent JAMA study, written by Katherine Flegal and colleagues, which found that being mildly “overweight” is in fact associated with a longer lifespan than being the so-called “normal” weight, and that overall only 25,000 “extra” deaths per year are caused by obesity and overweight combined. (That’s actually over 110,000 obesity-related deaths, minus the deaths “saved” per year because overweight people live longer than “normal” weight people). Unfortunately, Young’s analysis is flabby – pardon me, lacks weight – oh, I mean, pound for pound it’s not so sound – no, it’s thin on facts – no, it’s a glutton for…
Oh, heck. It just isn’t very good, all right?
Flaws in the “300,000 to 400,000 deaths caused by fat” studies
Cathy Young writes:
The new study by Katherine Flegal and her colleagues claims that excess weight causes “only” about 25,000 deaths in the United States annually, far below the earlier Centers for Disease Control and Prevention figure of 365,000. Yet, significantly, the CDC is not revising its official estimate…which is based on six previous major studies. It’s not unusual for different studies to yield contradictory results; the scientific consensus emerges from an overview of all available research.
It’s true that the CDC’s official estimate uses data from six previous studies, whereas the Flegal study draws data from only three previous studies. In fact, there are many studies supporting both points of view. What matters isn’t how many studies are done, but whether they’re done well or poorly. I will argue that the CDC studies supporting the idea that there is an obesity crisis, with hundreds of thousands dying every year, are poorly done.
There is no single study supporting the CDC’s high death counts; they’ve revised their official number multiple times, and each revision represents a new study. However, all the studies share a common methodology, and share some terrible errors. W. Gibbs, in the current (June ’05) issue of Scientific American, criticizes one of the CDC’s high-death studies (written by Dr. David Allison and colleagues), but his criticisms apply to all of them:
[The CDC’s study] drew statistics on the riskiness of high weights from six different studies. Three were based on self-reported heights and weights, which can make the overweight category look riskier than it really is (because heavy people tend to lie about their weight). Only one of the surveys was designed to reflect the actual composition of the U.S. population. But that survey, called NHANES I, was performed in the early 1970s, when heart disease was much more lethal than it is today. NHANES I also did not account as well for participants’ smoking habits as later surveys did.
That matters because smoking has such a strong influence on mortality that any problem in subtracting its effects could distort the true mortal risks of obesity. Allison and his colleagues also used an incorrect formula to adjust for confounding variables, according to statisticians at the CDC and the National Cancer Institute.
Perhaps the most important limitation noted in the 1999 paper was its failure to allow the mortality risk associated with a high BMI to vary…in particular, to drop…as people get older.
Surprisingly, none of these problems was either mentioned or corrected in a March 2004 paper by CDC scientists, including the agency’s director, that arrived at a higher estimate of 400,000 deaths using [the same] method, incorrect formula and all.
That the CDC “fat crisis” studies don’t adjust for the effect of age on mortality is enough, by itself, to justify throwing them into the garbage. However, there are other reasons to doubt the CDC’s high obesity death counts.
The original 300,000 statistic wasn’t based on weight at all; former Surgeon General C. Everett Koop simply misrepresented a study of deaths associated with unhealthy eating and inactivity as being a study of deaths caused by obesity. (Outside the world of anti-fat hysteria, this is called “lying.”)
In the years since, studies have tried to match Dr. Koop’s lie, using what the editors of the New England Journal of Medicine describe as “weak or incomplete data.” In addition to the flaws described in the Scientific American article quoted above, the CDC’s high-fat-death studies failed to account for confounding factors like socio-economic status, discrimination, eating habits, history of yo-yo dieting, body shape, and activity levels – factors that haven’t been accounted for even in Flegal’s better-designed study. Accounting for these factors could easily lead to a massive reduction of the “obesity death count.”
Risk Ratios can vastly exaggerate actual risk
What’s probably most important to understand – and least likely to be intelligently discussed by the mainstream media, unfortunately – are the limitations of the “risk ratio” (also called “hazard ratio”) studies we’re discussing.
In “risk ratio” studies, a very small actual increase in deaths can be made to look like a huge percentage increase. For instance, if one of a group of 4,000,000 pink lollypop lickers dies, while two of a group of 4,000,000 blue lollypop lickers die, a risk ratio study might say that blue lolly lickers have a relative risk of 2.0. “Twice as much” may sound high, but in terms of real-world significance, the difference for an individual lolly-eater between 1 in 4 million and 1 in 2 million is pretty slight; in either case, it’s amazingly unlikely that an otherwise healthy lolly eater will die of their lolly choice.
And that’s assuming that the lolly had any relationship at all to the death. But correlation does not prove causation. When risk ratios are low, and many possibly relevant factors aren’t accounted for, any little thing (random deaths entirely unrelated to lolly licking, a tiny measurement error, a single confounding factor not accounted for) could be the real cause of that difference, not blue lolly licking.
Nonetheless, if blue lollies were treated the way fat is, the CDC would write up a press release proclaiming that studies prove blue lolly licking causes a “100% increase in death!” The press would pick up the claim and trumpet it uncritically. And a much better-supported finding from the same data set – which is that neither pink nor blue lolly lickers face a high risk of death, and blue lolly lickers should not panic – would be entirely ignored.
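To make the lolly arithmetic concrete, here’s a short sketch (using the hypothetical numbers from the example above, not real data) showing how a “doubled” relative risk can coexist with a vanishingly small absolute risk:

```python
# Hypothetical lolly data from the example above -- illustrative only.
pink_deaths, pink_total = 1, 4_000_000
blue_deaths, blue_total = 2, 4_000_000

pink_risk = pink_deaths / pink_total       # 1 in 4,000,000
blue_risk = blue_deaths / blue_total       # 1 in 2,000,000

relative_risk = blue_risk / pink_risk      # 2.0 -- "100% increase in death!"
absolute_increase = blue_risk - pink_risk  # one extra death per 4 million lickers

print(relative_risk)       # 2.0
print(absolute_increase)   # 2.5e-07
```

The headline-friendly number (relative risk of 2.0) and the number that matters to an individual (an extra 0.000025% chance of death) come from the same two fractions; which one gets reported is a choice.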
[I’ve removed a couple of paragraphs I can’t stand behind; they are preserved in the comments, however. –Amp]
Let’s return to Dr. Allison’s CDC study, which the CDC uses to claim that fat kills 350,000 Americans each year. As table 3 from Dr. Allison’s study shows, in every one of the six data sets used, the risk ratio (or “hazard ratio”) associated with overweight is below 1.7 – that is, less than a 70 percent increase – and sometimes barely above 1 at all. In a couple of the table’s cells, the risk ratio is less than one – meaning that in that data cell, the “overweight” subjects had a lower mortality than “normal” weight subjects did.
Even looking only at the most obese category (people with BMIs above 35), in 9 of the 12 measurements the risk ratio is below 2.0 – and in every measurement, the risk ratio remains below 3.0.
What do those low risk ratios mean? They mean that even in the study which supposedly shows fat is deadly, for all but the very fattest people, the increased risk is minimal. Even for the fattest, the risk isn’t particularly large. And at all BMI levels, the association between increased fat and increased death is weak, and would probably be dramatically reduced if unaccounted-for factors – such as age, yo-yo dieting, risky anti-obesity treatments, body shape, discrimination, and socioeconomic class, to name a few – could be accounted for.
The NHANES Trilogy (Star NHANES, The NHANES Strikes Back, and Return of the NHANES)
Let’s return to Young’s attempt to discredit the Flegal study:
Science writer Michael Fumento […] notes that the national survey data used in the study were collected at three points. The sample surveyed earliest had death rates close to the previous CDC findings. The much lower figure comes from adding the later data which, Fumento says, did not allow enough time for higher weight-related mortality to show up.
The national survey data Young is talking about is the National Health And Nutrition Examination Survey, or NHANES for short. To date there have been three NHANES surveys – NHANES I, II, and III – each gathered and updated at different times. NHANES I (1971-1975) and NHANES II (1976-1980) had follow-ups through 1992, and NHANES III (1988-1994) has been followed up through 2000.
What Young doesn’t tell her readers is that Flegal and her co-authors wondered if there might be a longer follow-up effect, ran the data – and found that it made no difference. From the Flegal study:
To examine whether the higher relative risks in NHANES I might be due to the longer follow-up in NHANES I, we compared the relative risks from the first phase of NHANES I through the 1982-1984 follow-up with the relative risks from NHANES II and III. Thus, the follow-up period was similar for all surveys (10 years for NHANES I, 14 years for NHANES II, 9 years for NHANES III). The NHANES I relative risks over the first 10 years of follow-up were higher in almost every BMI-age subgroup than were the relative risks from the other surveys (data not shown). Thus, even after controlling for length of follow-up, NHANES I tended to have higher relative risks than the other surveys.
Michael Fumento dismisses this, claiming that the longer follow-up is the most important factor – but his claim contradicts the evidence. If longer follow-ups created a higher measurement of risk, then NHANES I with a ten-year follow-up would show lower risk ratios than NHANES II with a fourteen-year follow-up. But that’s not what the data shows. Even when NHANES II has a longer follow-up period, it still shows lower risk ratios.
Fumento also quotes a long list of experts saying that the data must be wrong. That’s called “argument by authority”; instead of providing any actual logic or evidence, you parade a bunch of experts and hope that settles the matter. But argument from authority is a very weak argument, and it’s irrelevant when – as in this case – there are well-credentialed authorities on both sides of a question. What matters isn’t a list of names, but evidence – and as the Flegal study showed, evidence indicates that mortality from obesity is in fact lower in NHANES II and III than it was in NHANES I, regardless of follow-up period.
If NHANES I hadn’t been included, the new Flegal study would barely have found any increased mortality at all associated with obesity – and virtually none associated with being overweight. So why did NHANES I find such different results?
Probably part of the problem with NHANES I is that it failed to adequately account for the effects of smoking, as the Scientific American article pointed out. And, as Flegal and her co-authors argue, medicine has recently made enormous strides in preventing and treating heart disease and other conditions; conditions that might have killed fat people thirty years ago simply aren’t as deadly now. It’s also possible that Americans are now eating healthier diets and exercising more, compared to the early 1970s.
Does a High BMI Really Mean You’re Fat?
Next, Young questions how relevant BMI is:
Others note that many people classified as overweight in the study may not be “fat” at all. The study relied on Body Mass Index, a measurement that does not distinguish between weight from body fat and muscle, and even consigns some professional athletes to the ranks of the overweight.
This is all true. However, the same thing is just as true of the older BMI studies which found a high number of deaths associated with high BMI. It’s inconsistent of Young and other critics of Flegal to bring this up when they want to discredit a study, but ignore the exact same flaw in studies whose results they like better.
Even the Flegal study may overstate the risk of obesity
Cathy Young finds a silver lining: It’s still terribly dangerous to be very fat. Or so Young argues:
But let’s look at what the Flegal study actually said…and didn’t say.
The study didn’t say that you don’t need to exercise. Nor did the study say that severe obesity is harmless: Its death toll was estimated at 112,000 a year. (The 25,000 figure was obtained by subtracting the estimated 86,000 fewer deaths among the moderately overweight compared to people of “normal” weight.)
112,000 a year? Well, yes and no.
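For readers checking the subtraction behind that “25,000” figure, here’s the arithmetic using the rounded numbers quoted above (note that these round inputs give 26,000; the study’s own unrounded estimates net out to roughly 25,000):

```python
# Rounded figures as quoted in Young's column; Flegal et al.'s unrounded
# estimates net out to roughly 25,000 rather than the 26,000 these give.
obesity_excess_deaths = 112_000    # estimated annual deaths attributed to obesity
overweight_fewer_deaths = 86_000   # estimated fewer deaths among the "overweight"
                                   # compared to "normal" weight

net_excess_deaths = obesity_excess_deaths - overweight_fewer_deaths
print(net_excess_deaths)  # 26000 with these rounded inputs
```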
The Flegal study is a significant improvement over earlier CDC studies, because it accounts for essential factors such as age, and the more recent data means that improvements in medical technology are included in the results.
But many of the limitations of earlier studies are also present in the Flegal study. A lot of potentially important confounding factors, like yo-yo dieting, weight loss surgeries and drugs, discrimination, activity levels, unhealthy diets, and socioeconomic status aren’t accounted for in the Flegal study, either.
And the risk ratios are once again very low – even for the very fat. For non-smokers with BMIs above 35, the Flegal study found that fat people under age 59 had a risk ratio of 1.25; fat people ages 60-69 had a risk ratio of 2.3; and fat people age 70 and up had a risk ratio of 1.12.
What do the low risk ratios tell us? First of all, that even if you’re very fat, these findings shouldn’t make you panic. (The Flegal study describes the increased risk as “modest.”) And secondly, these low risk ratios suggest that a study that accounted for currently-unaccounted for factors could dramatically lower that 112,000 number. As Flegal and company themselves wrote:
Other factors associated with body weight, such as physical activity, body composition, visceral adiposity, physical fitness, or dietary intake, might be responsible for some or all of the apparent associations of weight with mortality.
(By the same token, I’d also say that the risk ratios suggest that the “risk” of being “normal” weight rather than slightly overweight found by the Flegal study is not meaningful. Although it’s a fun mathematical game to say “this study shows that it’s safer to be overweight than ‘normal’ weight,” a more accurate assessment is that this study shows that neither “normal” nor “overweight” people should worry about their weight’s effect on their health.)
There’s another reason that a 30-year-old obese person may face less risk than the Flegal study indicates: medical technology has probably not stopped improving. Thirty years from now, how good will treatments for heart disease, high blood pressure, and other conditions be? There is no reason to think that a 60-year-old with high blood pressure in 2035 will face as high a risk as a 60-year-old in 2005 does. But it is current risks – not future risks – that the Flegal study measures.
Finally, consider that most of the increased risk for higher BMIs the Flegal study found came from the NHANES I database. If only NHANES II and III had been considered, the risks found would have been much lower. But there are strong reasons to believe that NHANES II and NHANES III are more accurate data sources than NHANES I. If so, then the Flegal study is significantly overstating the current risk associated with obesity – and understating the benefits of being slightly heavier than “normal” weight.
Let’s go back to Cathy Young. She continues:
The researchers concluded that being more than 40 pounds overweight is indeed hazardous. Yet the activists who agitate for “fat acceptance” want us to believe there’s nothing wrong with 400 pounds of excess fat.
What the researchers actually found is significantly increased risk beginning at BMIs of 35 and above. For someone “40 pounds overweight” to have a BMI of 35, they’d have to be around four-foot-eight; for more average people, “70-90 pounds overweight” is a more accurate figure. But saying “40 pounds” is much scarier, isn’t it?
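The height dependence is easy to check with the standard CDC conversion (BMI = 703 × weight in pounds ÷ height in inches squared). This sketch computes how many pounds separate the BMI-25 line (the usual top of “normal”) from the study’s BMI-35 cutoff at a few illustrative heights; the heights and the choice of BMI 25 as the baseline are my assumptions, since “X pounds overweight” depends on which baseline you pick:

```python
def pounds_between(bmi_low, bmi_high, height_in):
    """Pounds separating two BMI values at a given height (inches),
    using the standard conversion BMI = 703 * pounds / inches**2."""
    return (bmi_high - bmi_low) * height_in ** 2 / 703

# Illustrative heights, not from the study.
for label, height in [("4'8\"", 56), ("5'4\"", 64), ("5'9\"", 69)]:
    print(label, round(pounds_between(25, 35, height)))
```

At 4′8″ the gap is about 45 pounds – close to Young’s “40 pounds.” At more typical heights it’s roughly 58-68 pounds from the BMI-25 line, and larger still if you measure from the middle of the “normal” range, which is consistent with the 70-90 pound figure above.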
More importantly, the researchers didn’t say the risks, even for the fattest group of subjects, were “hazardous”; they said there was a “modestly” increased risk. But, again, Young’s version, while less accurate, is much scarier-sounding.
Do fat acceptance activists say that “there’s nothing wrong with 400 pounds of excess fat”? Well, morally, there is nothing wrong with it. No one is helped, and no one’s health is improved, because Cathy Young looks down her nose at the very obese.
Do I think that being 400 pounds overweight has no health consequences?
My impression, from having skeptically read a great deal of medical research, is that extreme obesity probably is an independent risk factor leading to earlier death – although current research, by failing to control for associated factors, exaggerates the risk. (It would please me immensely if I could say that extreme obesity carried no risk at all, but alas, that’s probably not true.)
However, I don’t think that the danger to the extremely obese justifies the CDC-fueled anti-fat hysteria. First of all, even for the very obese, the risk doesn’t appear to be that huge. Second, the proportion of the population that’s 400 pounds overweight is tiny. There are other national health problems that should be much higher priorities for the government.
Furthermore, even for someone who weighs 500 or 600 pounds, I’m not convinced that weight-loss plans – the only solution ever advocated by anti-fat hysterics – are their healthiest alternative. No diet has ever been shown in clinical trials to turn obese people into non-obese people over the long run; nor has anyone ever been able to run a clinical trial showing that losing weight improves health over the long run. Furthermore, some studies have found that losing weight deliberately actually shortens life – especially for yo-yo dieters. Why prescribe a “cure” that probably won’t work, and that could shorten life, for a “disease” that simply isn’t that threatening?
On the other hand, clinical trials clearly show that frequent, mild exercise has reliable, significant health benefits even for people who don’t lose any weight. At any size, the best health plan isn’t losing weight; it’s healthy eating and regular exercise.
That, of course, is the sensible message said over and over by the majority of fat activists: in four words, Health at Every Size (HAES). But rather than giving her readers a fair picture of fat activists, Young attributes nonsense like this to us:
There’s nothing liberated about eating one’s way into diabetes, strokes, heart attacks, and other ailments.
Gee, is anyone saying that “eating one’s way into… ailments” is liberating? Of course not.
What do non-imaginary fat activists say? We say health at every size. We say that no one, at any size, should face contempt or discrimination for their weight. We say that evidence shows that the current anti-fat hysteria is just that – hysteria, unsupported by the best evidence. And we say that health care for fat people should be based on evidence of what actually works best for fat people, not on stereotypes, dubious data, or society’s thinly-hidden disgust.
(Link to Cathy Young article via Hit and Run.)