Monday, August 22, 2011

Meta-analyses and why I don't fear saturated fat

"The great tragedy of Science (is) the slaying of a beautiful hypothesis by an ugly fact"
 - Thomas Henry Huxley
I eat butter.  Plenty of it.  And I also started rendering lard and beef tallow for cooking.  Not many people my age (26) have ever seen someone render lard, let alone use it regularly.  The reason I use these fats is because they are delicious, and because I can buy them at the farmers market.  The conventional wisdom is that I should be scared - or nearly terrified - of saturated fat.  Just take a look at this YouTube video from the U.K. (see below).*  The eerie lighting and tone of the television announcer is enough to make me worry, but let's take a look at some recent evidence regarding saturated fat and heart disease to see if the sink analogy is apropos.

The reason that saturated fat has been demonized by the nutrition community is because it is the cornerstone of the Lipid Hypothesis.  The Lipid Hypothesis, which is more of a concept than a working hypothesis, proposes that dietary saturated fat elevates cholesterol in the blood, specifically LDL cholesterol, which in turn causes atherosclerosis.  Cardiovascular disease (CVD) is characterized by atherosclerosis (plaque in the arteries) and includes coronary heart disease (CHD; atherosclerosis of blood vessels in the heart) and cerebrovascular disease (atherosclerosis of blood vessels in the brain leading to stroke).

Source:  Wikipedia.  Myristic acid, a saturated fat
The Lipid Hypothesis has always had its critics, but it has generally been accepted as fact.  However, the totality of evidence is a bit fuzzy, as many studies are contradictory or only show a small benefit from restricting saturated fat.  But since CVD is the leading cause of death in the U.S., public health authorities argue that any intervention, even if it's small or somewhat uncertain, will be beneficial to the health of the population.  These circumstances are ideal for what medical researchers call a "meta-analysis."

Simply put, a meta-analysis is a single study that combines the results from numerous smaller studies to form an artificial mega-study with enough statistical power (ability to detect a difference between the control and experimental groups) to determine the true impact of an intervention.  The goal is to walk away with an actual number, such as a relative risk or mortality statistic.  For a meta-analysis to be valid, the included studies must be sufficiently similar and test the same exposure-outcome hypothesis, e.g. dietary saturated fat causes heart disease.  And researchers will further restrict their inclusion criteria to well designed studies.  But even with these criteria, you might ask: if I lump all these studies together, doesn't that falsely give equal credibility to both good and not-so-good studies?

Researchers address the issue of good, better, and best studies by "weighting" the different study results.  This means that each study will have more or less impact on the final outcome measurement - again, such as the relative risk for heart disease - depending on the quality of the study.  What makes one study better than another?  Usually the size of the study (10,000 subjects is likely more accurate than 1,000), the number of confounders adjusted for (older studies might only correct for age and smoking status, whereas a newer study might have adjusted for age, smoking, socioeconomic status, cholesterol, fasting glucose, etc.), and the quality of the methods used (in a diet study, a trial that provided all of the food for the subjects is much more reliable than giving subjects a questionnaire to determine what they ate).  Once you have all the studies tabulated and weighted, then you can get a summary outcome measurement and a neat graph that looks something like this:

Fig 1.  Anatomy of a meta-analysis

Every meta-analysis has a graphic like this (Fig. 1), and usually several more depending on how many hypotheses are being investigated.  The x-axis represents the relative risk.  If you remember from my last post, a relative risk of 1 means no difference in risk between groups, whereas a relative risk greater than 1 indicates that the "experimental" group has more risk than the control group.  Each study included in the graph is represented by a hash; the length of the hash represents the 95% confidence interval for the relative risk of that study.  If you are unfamiliar with statistics, the 95% confidence interval shows the range of numbers that we are reasonably confident includes the true effect of the experiment.  A narrower interval means a more precise estimate.  All you need to know is that if the confidence interval intersects the vertical line, then our safest bet is to conclude that there is no difference between the groups, since the relative risk is likely to be 1.  If the confidence interval does not intersect the vertical line, then we can conclude with reasonable certainty that there is a difference in risk between the groups.  At the bottom of the graph there is a diamond that represents the confidence interval derived from all the (weighted) studies included in the meta-analysis.  As you can see in the example, the diamond does not intersect the vertical line, and the relative risk of all the studies combined is 0.85.  This would mean that the totality of the evidence, based on this meta-analysis, indicates that the treatment reduces the risk of whatever outcome by 15%.
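For the curious, the weighting and pooling described above can be sketched in a few lines of Python.  This is a minimal illustration with made-up study numbers (not data from any of the papers discussed below): each study contributes its log relative risk, weighted by the inverse of its variance, and the pooled result is checked against the vertical line at RR = 1.

```python
import math

# Three hypothetical studies: (log relative risk, standard error).
# Pooling is done on the log scale; larger, more precise studies have
# smaller standard errors and therefore larger weights.
studies = [(-0.22, 0.10), (-0.05, 0.25), (-0.15, 0.08)]

weights = [1 / se ** 2 for _, se in studies]        # inverse-variance weights
pooled_log = sum(w * lr for (lr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

pooled_rr = math.exp(pooled_log)
ci_low = math.exp(pooled_log - 1.96 * pooled_se)    # 95% confidence interval
ci_high = math.exp(pooled_log + 1.96 * pooled_se)

# The "diamond" test: if the CI contains 1, conclude no difference.
if ci_low <= 1.0 <= ci_high:
    verdict = "no detectable difference between groups"
elif ci_high < 1.0:
    verdict = f"treatment reduces risk by about {round((1 - pooled_rr) * 100)}%"
else:
    verdict = f"treatment increases risk by about {round((pooled_rr - 1) * 100)}%"

print(f"Pooled RR = {pooled_rr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}): {verdict}")
```

With these invented inputs, the pooled RR lands around 0.84 with a confidence interval entirely below 1 - the same shape of result as the diamond in Fig. 1.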

This is virtually all that you need to know in order to interpret a meta-analysis.  And if you are still reading this post, now it's time to talk about saturated fat.  Several large meta-analyses have been published in the past couple of years, and they all seem to give roughly the same answer.

In 2010, Siri-Tarino et al. published a meta-analysis of prospective cohort studies that evaluated the association of saturated fat with cardiovascular disease.  Based on 21 studies, they found no difference in the risk of CVD (the confidence interval contained 1), and concluded that "there is insufficient evidence from prospective epidemiologic studies to conclude that dietary saturated fat is associated with an increased risk of CHD, stroke, or CVD."  Interestingly, they assert that there is evidence of publication bias.**  But as the authors correctly point out, this meta-analysis was limited to cohort studies and not powered enough (not large enough) to analyze the effect of replacing saturated fat with specific nutrients, such as carbohydrates or polyunsaturated fats (PUFA; think walnuts and seed oils).  Fortunately, other meta-analyses have done just that.

Mozaffarian et al. performed a meta-analysis of randomized controlled trials that replaced dietary saturated fat with PUFA.  They only looked at myocardial infarction (heart attack) and CHD death; these are known as "hard endpoints," as heart attacks and death are not usually misdiagnosed.  They show that increased PUFA intake (from 5% of daily calories to 15%) in place of saturated fat reduces the combined risk of heart attack and CHD death by 19% (Fig 2).  However, when the analysis isolated people who did not have pre-existing CHD, the aforementioned benefit disappeared (became statistically insignificant).  And there was no benefit seen in total mortality.  That is, replacing saturated fat with PUFA did reduce CHD and CHD death, but the risk of dying from all causes remained the same.

Fig 2.  Source:  Mozaffarian et al. PLoS Medicine

What saturated fat is replaced with is not trivial.  Mozaffarian et al. also analyzed studies that replaced saturated fat with carbohydrates and monounsaturated fat (think olive oil).  The single randomized controlled trial showed that replacing saturated fat with carbohydrate had no benefit, and in cohort studies, carbohydrates appear to increase the risk of CHD.  Monounsaturated fat is expected to lower the risk of CHD because of its beneficial effects on the cholesterol profile, but this has not been tested in a randomized controlled trial, and pooled analysis of available cohort data show a borderline increased risk of CHD (Fig. 3).  Weird, huh?

Fig 3.  Source:  Mozaffarian et al.  PLoS Medicine

And lastly, The Cochrane Collaboration has recently published an updated meta-analysis on the effect of dietary fat reduction and/or modification (PUFA instead of saturated fat) interventions, in randomized controlled trials, on cardiovascular outcomes.  Similar to their previous study and the aforementioned meta-analyses, they find that reducing and/or modifying dietary fat intake, for greater than six months, reduces the risk of CVD (events, not deaths) by 14%.  This decrease is attributable to:
"studies of fat modification and reduction (not studies of fat reduction alone), seen in studies of at least two years duration, in studies of men (and not those of women), and in those with moderate or high cardiovascular risk at baseline (not general population groups)."
This means that there is a small benefit from replacing some dietary saturated fat with unsaturated fats, but this may only apply to men and those who are at risk of or already have CVD.  And again, with a "high quality of evidence" given the sheer number and size of the studies included, reduction or modification of fat intake did not decrease the risk of CVD mortality or total mortality.

To be a bit critical, meta-analyses are far from perfect.  Remember, they are simply a pooling of results that improves statistical power in order to tease out a result.  They do not improve the quality of the data or the individual studies themselves.  An accurate colloquialism is that a meta-analysis of garbage is still garbage.  Given their limitations, I wouldn't go out and replace all of my butter with vegetable oil and expect a precisely 14% decrease in my risk of CVD.  But they give a nice summary of the evidence.

In the case of saturated fat, there is consistency between these analyses.  Total dietary fat is irrelevant to heart disease.  Replacing saturated fat with polyunsaturated fat modestly reduces the risk of cardiovascular disease, whereas replacing saturated fat with carbohydrate has no effect and may be harmful (if it's refined carbohydrates or sugar).  But at the end of the day, modifying or decreasing saturated fat likely does not decrease the risk of dying from heart disease and certainly has no effect on total mortality.  So after looking over these meta-analyses and bouncing them off of my current understanding of diet and disease, here is my conclusion: there is clearly no overwhelming evidence that saturated fat is bad, and in fact, there doesn't really seem to be any evidence.  And if it replaces sugar (butter instead of jam on toast), then it might actually be "healthy."  And yes, I'm aware of how crazy that notion sounds.  So what is one to do?

Source: Wikipedia, photo by Steve Karg

There are plenty of people who have given up butter and whole fat milk because of trepidation about saturated fat bringing them to an early grave.  Or in the words of Michael Pollan from In Defense of Food, "over the last several decades, mom lost much of her authority over the dinner menu, ceding it to scientists and food marketers (p. 3)."  Since the message to restrict saturated fat was loud enough to disrupt dinner, it is shocking that the evidence seems to have vanished.  And this is why the conventional wisdom will not change overnight.  Marion Nestle, a nutrition professor whose schtick I otherwise like, wrote a post on her blog to acknowledge these recent publications, but she inexplicably fell short of saying that saturated fat is probably harmless.  So in my opinion, it seems that the facts have ruined yet another hypothesis, because clearly, butter isn't out to get you.

This post is shared on Real Food Whole Health's Traditional Tuesday's Blog Hop.


*   Ignore the impossibility that the "saturated fat" in the video is liquid in the refrigerator but solid at room temperature in the drain.  Saturated fats (coconut oil, beef tallow, butter) are solid in the refrigerator AND at room temperature.

**  Smaller studies showed an increased risk of CVD from dietary saturated fat, but larger studies, which will always be published since they are well-known and anticipated, showed an equal distribution of increased, decreased, and neutral risk.  The implication is that smaller studies that showed a detrimental effect of saturated fat were published, whereas smaller studies that showed no effect or a beneficial effect of saturated fat were either not submitted for publication or not accepted for publication.

Friday, August 12, 2011

A Thai Meal in NorCal

This meal was actually about a month ago, but the memory still remains.   When we were visiting the girlfriend's parents, we made a Thai dinner to enjoy on the patio on a mild NorCal summer evening.  Talking about isolated nutrients can get you into trouble because they are naturally part of a whole food.  But the same is true for foods.  They are normally a part of a meal.  I want to share our entire meal (recipes) because I think that it was truly greater than the sum of its parts.

Grilled chicken with red curry sauce

  1. In a medium bowl, combine 10 oz. of coconut milk, juice from 2 limes and 1/2 tsp of lime zest, 1 tbsp fish sauce, and 1 tbsp red curry paste.  In a heavy-bottomed sauce pan, heat 1/3 cup water and 1/3 cup sugar and allow to boil for 3 minutes - this will create a nice thick simple syrup.  Remove pan from heat, add coconut mixture, return to heat, and allow to simmer just under 10 minutes; whisk frequently to ensure the syrup does not lump together.  Reserve 1/3 cup of sauce for serving.
  2. Prepare 1.5 lb bone-in (1 lb boneless) chicken thighs by drying with a paper towel and liberally seasoning with salt and pepper.
  3. Grill on a pre-heated grill to your liking.  I personally grill chicken thighs on medium-high until a rich dark brown, turning relatively frequently.  Cooking thighs until roughly 180°F will render them less gummy, and they will not dry out at this temperature like breast meat will.
  4. Once the chicken is a dark golden-brown, baste red curry sauce onto each piece, flip and cook for one minute, baste the opposite side, and then flip once again for another minute.  Repeat if there is extra sauce.
  5. Serve chicken with grilled zucchini (from the garden if you have it!) and the reserved red curry sauce.  Garnish with limes and scallions.

Asian-style cucumber salad
  1. Peel and thinly slice 4 cucumbers.  Set in the fridge to chill while preparing dressing.
  2. Simmer 1/3 cup rice wine vinegar until reduced to 2 tbsp; pour into bowl to cool.
  3. While the vinegar cools, whisk in juice from 1/2 of a lime, 2 tsp. honey, 1 tbsp. fish sauce, and 2 tsp. olive oil.
  4. Mince 1-2 seeded serrano chiles, and finely chop 1/4 cup of fresh mint and 1/4 cup fresh basil
  5. In a large bowl, combine cucumbers, dressing, and chopped chiles and herbs.  Season with salt and pepper, to taste.

Coconut rice
  1. I used this recipe.  Garnish with cilantro and scallions.  Full-fat coconut milk, please. 

I occasionally eat white rice.  But to mitigate its nasty effects on blood sugar, I drowned it in coconut milk and crowded it out with meats and veggies :)


This post was submitted to Food Renegade's Fight Back Fridays.

Sunday, August 7, 2011

Be skeptical of small numbers

A nuclear bomb is far scarier than a fire cracker.  Both are dangerous, but a nuclear bomb is clearly more destructive.  Not exactly rocket science.  In science-speak, the magnitude of this destruction is called the effect size.  Researchers spend a lot of time determining if an effect is real and how big the effect actually is.  Unfortunately, this information tends to distill down to "there was an effect" or "there was no effect."  This post is inspired by a lunch conversation with the girlfriend's parents, as it seems that nearly every food is out to get us.  It is one thing to say that a food has an effect on our health, but it's just as important to ask how big the effect actually is.

Effect size is an easy concept to measure in the laboratory.  A treated neuron might depolarize 5 times per second while a control neuron depolarizes 2 times per second - an increase of 3 depolarizations per second.  Differences in blood concentrations of a hormone, weight gain in rodents, and increased muscle mass are all easily recognized as an effect size.  In nutritional epidemiology, and epidemiology in general, the effect size is the strength of the association between an exposure (a food) and an outcome (a disease or mortality).  This is often measured as a relative risk.

Before I can talk about relative risk, I should explain absolute risk.  Absolute risk is the probability that an individual will develop a health outcome during a stated period of time (Fig 1).  Absolute risk, often measured as an incidence rate, is only meaningful if we have the number of outcomes AND the size of the population at risk AND a period of time.  The statements "4 men had heart attacks" and "4 out of 10 men had heart attacks" do not contain enough information to draw meaningful conclusions.  Rather, we need to know that "4 out of 10 men had heart attacks over the 5 year study period."  If we have valid rate information about one group of people, we can compare it to another group's.  Absolute risk is vital for understanding the real-world impact of an exposure, but we rely on relative risk to get a grasp on the effect size of an exposure.
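The paragraph's example works out as simple arithmetic; here is a quick sketch using the hypothetical numbers above:

```python
# "4 out of 10 men had heart attacks over the 5 year study period"
heart_attacks = 4
men_at_risk = 10
years = 5

absolute_risk = heart_attacks / men_at_risk              # 0.4, i.e. 40% over 5 years
incidence_rate = heart_attacks / (men_at_risk * years)   # 0.08 per person-year

print(f"{absolute_risk:.0%} over {years} years, or {incidence_rate} per person-year")
```

Drop any one of the three ingredients (outcomes, population, time) and neither number can be computed - which is exactly why "4 men had heart attacks" tells us nothing.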

Fig 1.  Absolute Risk

Relative risk is simply the ratio of the absolute risk in the exposed group to that in a non-exposed group (i.e. the control group).  If there is no difference in incidence rates of disease, then the RR will be 1.  If the exposed group has a higher rate, then the RR will be greater than 1.  And if the exposed group has a lower rate, then the RR is less than 1.  Relative risks are often discussed as percents (e.g. an RR of 1.3 means a 30% increased risk in the experimental group compared to the control).  Scientific journals will report rate ratios, hazard ratios, observed-to-expected ratios, and odds ratios - all of which are permutations of relative risk that are particular to different study designs.  Now that we're up to speed on relative risk, let's talk about effect size.
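A minimal sketch of the ratio, with hypothetical rates chosen to give the RR of 1.3 mentioned above:

```python
def relative_risk(risk_exposed, risk_control):
    """Ratio of absolute risks; 1.0 means no difference between groups."""
    return risk_exposed / risk_control

# Hypothetical incidence: 7.8% in the exposed group vs 6.0% in the controls
rr = relative_risk(0.078, 0.060)
percent_change = (rr - 1) * 100   # an RR of 1.3 reads as "a 30% increased risk"

print(f"RR = {rr:.1f}, i.e. a {percent_change:.0f}% increased risk")
```

Note that the same RR of 1.3 can come from rates of 7.8% vs 6%, or from 0.0013% vs 0.001% - which is the whole point of this post.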

Fig 2.  Relative Risk

Effect size can help determine if an association seen in a study is causal.  Provided that the study is reasonably well conducted, a large relative risk suggests a causal association between the exposure and the outcome.  But how large is large?  Smoking and lung cancer are a textbook example of this principle.  Lung cancer is exceedingly rare in populations that do not smoke, especially if there are no industrial hazards.  Based upon an average of relative risks derived from several cohort studies (remember the limitations), men and women who smoke more than 20 cigarettes per day are 16 times more likely to die of lung cancer than non-smokers.  That's a whopping 1,500% increase in the risk of dying from lung cancer!  More moderate smokers have a considerably lower risk than the heaviest smokers, but are still far more susceptible, with relative risks of 5.0 and 9.0 for women and men, respectively.  The sheer size of the effect provides evidence that smoking can cause lung cancer.  So what about not-so-large effects?

Because nutritional epidemiology relies heavily on observation rather than randomized controlled trials, the strength of an association can be distorted by confounding variables.  In fact, chances are that every observed effect is confounded by myriad unmeasured variables; many are insignificant, but some are important.  A study from the Health Professionals Follow-Up cohort demonstrated that men who consumed the most sugar-sweetened beverages had a 25% increased risk of developing type 2 diabetes over the 20 year follow-up.  Men who drank the most artificially sweetened beverages (e.g. diet soda) were 91% more likely to develop the disease compared to those who drank the least.  However, after adjusting for the known confounding variables, the sugar-sweetened beverages still increased the risk by 24%, whereas the risk seen in the diet drinkers was completely abolished.  It is easy to see how a relatively large effect size suggests causality, but does not prove it.  But what if the effect persists after adjusting for confounders?

Source:  Wikipedia:  Processed meat

A relatively recent article in the American Journal of Clinical Nutrition reported that men who reported eating the most processed meat (2 ounces or greater per day) compared to those who ate the least (less than 0.7 ounces per day) had a 23% greater chance of having a stroke over the course of the 10 year study.  Fresh red meat had no effect.  23% sounds fairly alarming; should we go to our fridge and throw out all of our salami and deli meat?  Looking at it another way, the average man in this study had a 6% chance (2,409 out of 40,291 men) of having a stroke over an average follow-up of 10.1 years.  By eating the highest amount of processed meat, his chances now increase to 7.4% (6% x 1.23).  His absolute risk increased by 1.4 percentage points*.
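That back-of-the-envelope conversion, written out using the study's reported counts:

```python
# 2,409 strokes among 40,291 men over roughly 10 years of follow-up
baseline_risk = 2409 / 40291        # ~0.06: the average man's 10-year stroke risk
rr = 1.23                           # relative risk in the top processed-meat group

risk_if_exposed = baseline_risk * rr                   # ~0.074
absolute_increase = risk_if_exposed - baseline_risk    # ~0.014, i.e. 1.4 points

print(f"{baseline_risk:.1%} -> {risk_if_exposed:.1%} "
      f"(+{absolute_increase:.1%} absolute)")
```

The same 23% relative increase would translate to a much smaller absolute change in a population where strokes were rarer.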

This may seem like a lot to you.  But also bear in mind that obesity and heavy smoking each increase the risk of stroke by 100% compared to lean persons and non-smokers, respectively.  Using the average Swedish man above, each factor would increase the risk of stroke from 6% to 12%.  Trading processed meat for fresh meat surely doesn't cause any harm, and this potential risk may simply be worth avoiding.  But think about how we need to approach this as scientific evidence.  Given that this has all the standard caveats of a prospective cohort study; and that the food record was based on a single survey given at the beginning of the study; and that you can never measure all of your confounders (they forgot sugar); are studies like this actually capable of detecting a true 23% increase in the risk of a specific mortality from a single type of food?  And is it worth constantly changing our diets when we're presented with these kinds of results?

Next time you hear a claim about a food's effect on health, or read another headline, make sure you find out how strong the effect actually is.  More often than not, you will only have access to the relative effect.  So keep in mind that if a disease is exceptionally rare, it will take a very high relative risk to have any real impact.  The risk of non-Hodgkin's lymphoma is 0.003 per 1,000 people over 1 year, which is so unlikely that an increased risk of 15% probably doesn't reflect a true association, and even if it does, it is virtually irrelevant.  The relative risk allows us to better comprehend the effect, but the absolute risk is what matters to the individual.
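To see why, run the numbers on the lymphoma example (a sketch using the figures quoted above):

```python
baseline = 0.003 / 1000      # risk of non-Hodgkin's lymphoma per person per year
elevated = baseline * 1.15   # after a scary-sounding 15% relative increase

extra_per_million = (elevated - baseline) * 1_000_000
print(f"About {extra_per_million:.2f} extra cases per million people per year")
```

A 15% relative increase on a risk this small works out to less than one extra case per million people per year - a number far too small to matter to any individual.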

The problem with nutrition is that when you change something in your diet, it has to be replaced by something else.  How can you know you are making a change for the better?  And enjoying your food is important as well.  There are few things better than salami with cheese and wine, and bacon is arguably the best food there is.  The goal is not to disparage every study, but for the sake of health and culture, be skeptical about small numbers.

*The baseline risk I am using for this example is a crude estimate.  By simply using the number of strokes divided by the number of study participants over 10.1 years, I am ignoring the fact that some men were followed for less time while some were followed for more.  However, this crude estimate approximated the stroke statistics in the U.S. that I came across.  So don't hate!