
Monday, August 22, 2011

Meta-analyses and why I don't fear saturated fat

"The great tragedy of Science (is) the slaying of a beautiful hypothesis by an ugly fact"
 - Thomas Henry Huxley
I eat butter.  Plenty of it.  And I also started rendering lard and beef tallow for cooking.  Not many people my age (26) have ever seen someone render lard, let alone use it readily.  I use these fats because they are delicious, and because I can buy them at the farmers market.  The conventional wisdom is that I should be scared - or nearly terrified - of saturated fat.  Just take a look at this YouTube video from the U.K. (see below).*  The eerie lighting and tone of the television announcer is enough to make me worry, but let's take a look at some recent evidence regarding saturated fat and heart disease to see if the sink analogy is apropos.



Saturated fat has been demonized by the nutrition community because it is the cornerstone of the Lipid Hypothesis.  The Lipid Hypothesis, which is more of a concept than a working hypothesis, proposes that dietary saturated fat elevates cholesterol in the blood, specifically LDL cholesterol, which in turn causes atherosclerosis.  Cardiovascular disease (CVD) is characterized by atherosclerosis (plaque in the arteries) and includes coronary heart disease (CHD; atherosclerosis of blood vessels in the heart) and cerebrovascular disease (atherosclerosis of blood vessels in the brain leading to stroke).

Source:  Wikipedia.  Myristic acid, a saturated fat
The Lipid Hypothesis has always had its critics, but it has generally been accepted as fact.  However, the totality of evidence is a bit fuzzy, as many studies are contradictory or only show a small benefit from restricting saturated fat.  But since CVD is the leading cause of death in the U.S., public health authorities argue that any intervention, even if it's small or somewhat uncertain, will be beneficial to the health of the population.  These circumstances are ideal for what medical researchers call a "meta-analysis."

Simply put, a meta-analysis is a single study that combines the results from numerous smaller studies to form an artificial mega-study that will have enough statistical power (ability to detect a difference between the control and experimental group) to determine what the true impact of an intervention is.  The goal is to walk away with an actual number, such as a relative risk or mortality statistic.  For a meta-analysis to be valid, the included studies must be sufficiently similar and test the same exposure-outcome hypothesis, e.g. dietary saturated fat causes heart disease.  And researchers will further restrict their inclusion criteria to well-designed studies.  But even with these criteria, you might ask: if I lump all these studies together, doesn't that falsely give equal credibility to both good and not-so-good studies?

Researchers address the issue of good, better, and best studies by "weighting" the different study results.  This means that each study will have more or less impact on the final outcome measurement - again, such as the relative risk for heart disease - depending on the quality of the study.  What makes one study better than another?  Usually the size of the study (10,000 subjects is likely more accurate than 1,000), the number of confounders adjusted for (older studies might only correct for age and smoking status, whereas a newer study might have adjusted for age, smoking, socioeconomic status, cholesterol, fasting glucose, etc.), and the quality of the methods used (in a diet study, a trial that provided all of the food for the subjects is much more reliable than giving subjects a questionnaire to determine what they ate).  Once you have all the studies tabulated and weighted, then you can get a summary outcome measurement and a neat graph that looks something like this:

Fig 1.  Anatomy of a meta-analysis

Every meta-analysis has a graphic like this (Fig. 1), and usually several more depending on how many hypotheses are being investigated.  The x-axis represents the relative risk.  If you remember from my last post, a relative risk of 1 means no difference in risk between groups, whereas a relative risk greater than 1 indicates that the "experimental" group has more risk than the control group.  Each study included in the graph is represented by a hash mark; the length of the hash represents the 95% confidence interval for the relative risk of that study.  If you are unfamiliar with statistics, the 95% confidence interval shows the range of numbers that we are reasonably confident includes the true effect of the experiment.  A narrower interval means a more precise estimate.  All you need to know is that if the confidence interval crosses the vertical line, then our safest bet is to conclude that there is no difference between the groups, since the relative risk may well be 1.  If the confidence interval does not cross the vertical line, then we can conclude with reasonable certainty that there is a difference in risk between the groups.  At the bottom of the graph there is a diamond that represents the confidence interval derived from all the (weighted) studies included in the meta-analysis.  As you can see in the example, the diamond does not cross the vertical line, and the relative risk of all the studies combined is 0.85.  This would mean that the totality of the evidence, based on this meta-analysis, indicates that the treatment reduces the risk of whatever outcome by 15%.
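For the numerically inclined, here is a minimal sketch of how the pooling works.  The study results below are invented for illustration, and the weighting is the simplest inverse-variance scheme on the log relative risk; real meta-analyses use fancier models, but the idea is the same.

```python
import math

# Made-up study results: (relative risk, lower 95% CI, upper 95% CI).
# These numbers are invented purely for illustration.
studies = [
    (0.92, 0.70, 1.21),
    (0.78, 0.61, 0.99),
    (1.05, 0.80, 1.38),
    (0.81, 0.70, 0.94),
]

weighted_sum, total_weight = 0.0, 0.0
for rr, lo, hi in studies:
    log_rr = math.log(rr)
    # On the log scale, the CI spans about 2 * 1.96 standard errors.
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    weight = 1 / se**2            # bigger, more precise studies get more weight
    weighted_sum += weight * log_rr
    total_weight += weight

pooled_log_rr = weighted_sum / total_weight
pooled_se = math.sqrt(1 / total_weight)
pooled_rr = math.exp(pooled_log_rr)
ci = (math.exp(pooled_log_rr - 1.96 * pooled_se),
      math.exp(pooled_log_rr + 1.96 * pooled_se))

print(f"Pooled RR = {pooled_rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
# If the CI excludes 1, the pooled result is "statistically significant";
# a pooled RR of 0.85 corresponds to a 15% reduction in relative risk.
```

The pooled estimate and its confidence interval are exactly what the diamond at the bottom of a forest plot depicts: precise studies pull the summary toward themselves, imprecise ones barely budge it.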

This is virtually all that you need to know in order to interpret a meta-analysis.  And if you are still reading this post, now it's time to talk about saturated fat.  Several large meta-analyses have been published in the past couple of years, and they all seem to give roughly the same answer.

In 2010, Siri-Tarino et al. published a meta-analysis of prospective cohort studies that evaluated the association of saturated fat with cardiovascular disease.  Based on 21 studies, they find no difference in the risk of CVD (the confidence interval contained 1), and conclude that "there is insufficient evidence from prospective epidemiologic studies to conclude that dietary saturated fat is associated with an increased risk of CHD, stroke, or CVD."  Interestingly, they assert that there is evidence of publication bias.**  But as the authors correctly point out, this meta-analysis was limited to cohort studies and not powered enough (not large enough) to analyze the effect of replacing saturated fat with specific nutrients, such as carbohydrates or polyunsaturated fats (PUFA; think walnuts and seed oils).  Fortunately, other meta-analyses have done so.

Mozaffarian et al. performed a meta-analysis of randomized controlled trials that replaced dietary saturated fat with PUFA.  They only looked at myocardial infarction (heart attack) and CHD death; these are known as "hard endpoints," as heart attacks and death are not usually misdiagnosed.  They show that increased PUFA intake (from 5% of daily calories to 15%) in place of saturated fat reduces the combined risk of heart attack and CHD death by 19% (Fig 2).  However, when the analysis isolated people who did not have pre-existing CHD, the aforementioned benefit disappeared (became statistically insignificant).  And there was no benefit seen in total mortality.  That is, replacing saturated fat with PUFA did reduce CHD events and CHD death, but the risk of dying from all causes remained the same.


Fig 2.  Source:  Mozaffarian et al. PLoS Medicine


What saturated fat is replaced with is not trivial.  Mozaffarian et al. also analyzed studies that replaced saturated fat with carbohydrates and monounsaturated fat (think olive oil).  The single randomized controlled trial showed that replacing saturated fat with carbohydrate had no benefit, and in cohort studies, carbohydrates appear to increase the risk of CHD.  Monounsaturated fat is expected to lower the risk of CHD because of its beneficial effects on the cholesterol profile, but this has not been tested in a randomized controlled trial, and pooled analysis of available cohort data shows a borderline increased risk of CHD (Fig. 3).  Weird, huh?

Fig 3.  Source:  Mozaffarian et al.  PLoS Medicine

And lastly, The Cochrane Collaboration recently published an updated meta-analysis of randomized controlled trials on the effect of dietary fat reduction and/or modification (PUFA instead of saturated fat) on cardiovascular outcomes.  Similar to their previous study and the aforementioned meta-analyses, they find that reducing and/or modifying dietary fat intake for greater than six months reduces the risk of CVD (events, not deaths) by 14%.  This decrease is attributable to:
"studies of fat modification and reduction (not studies of fat reduction alone), seen in studies of at least two years duration, in studies of men (and not those of women), and in those with moderate or high cardiovascular risk at baseline (not general population groups)."
This means that there is a small benefit from replacing some dietary saturated fat with unsaturated fats, but this may only apply to men and those who are at risk of or already have CVD.  And again, with a "high quality of evidence" given the sheer number and size of the studies included, reduction or modification of fat intake did not decrease the risk of CVD mortality or total mortality.

To be a bit critical, meta-analyses are far from perfect.  Remember, they are simply a pooling of results that improves statistical power in order to tease out an effect.  They do not improve the quality of the data or the individual studies themselves.  An apt colloquialism is that a meta-analysis of garbage is still garbage.  Given their limitations, I wouldn't go out and replace all of my butter with vegetable oil and expect a precisely 14% decrease in my risk of CVD.  But they give a nice summary of the evidence.

In the case of saturated fat, there is consistency between these analyses.  Total dietary fat is irrelevant to heart disease.  Replacing saturated fat with polyunsaturated fat modestly reduces the risk of cardiovascular disease, whereas replacing saturated fat with carbohydrate has no effect and may be harmful (if it's refined carbohydrates or sugar).  But at the end of the day, modifying or decreasing saturated fat likely does not decrease the risk of dying from heart disease and certainly has no effect on total mortality.  So after looking over these meta-analyses and bouncing them off of my current understanding of diet and disease, here is my conclusion: there is clearly no overwhelming evidence that saturated fat is bad, and in fact, there doesn't really seem to be any evidence.  And if it replaces sugar (butter instead of jam on toast), then it might actually be "healthy."  And yes, I'm aware of how crazy that notion sounds.  So what is one to do?

Source: Wikipedia, photo by Steve Karg


There are plenty of people who have given up butter and whole fat milk because of trepidation about saturated fat bringing them to an early grave.  Or in the words of Michael Pollan from In Defense of Food, "over the last several decades, mom lost much of her authority over the dinner menu, ceding it to scientists and food marketers (p. 3)."  Since the message to restrict saturated fat was loud enough to disrupt dinner, it is shocking that the evidence seems to have vanished.  And this is why the conventional wisdom will not change overnight.  Marion Nestle, a nutrition professor whose schtick I otherwise like, wrote a post on her blog to acknowledge these recent publications, but she inexplicably fell short of saying that saturated fat is probably harmless.  So in my opinion, it seems that the facts have ruined yet another hypothesis, because clearly, butter isn't out to get you.

This post is shared on Real Food Whole Health's Traditional Tuesday's Blog Hop.

___________________________

*   Ignore the impossibility that the "saturated fat" in the video is liquid in the refrigerator but solid at room temperature in the drain.  Saturated fats (coconut oil, beef tallow, butter) are solid in the refrigerator AND at room temperature.

**  Smaller studies showed an increased risk of CVD from dietary saturated fat, but larger studies, which will always be published since they are well-known and anticipated, showed an equal distribution of increased, decreased, and neutral risk.  The implication is that smaller studies that showed a detrimental effect of saturated fat were published, whereas smaller studies that showed no effect or a beneficial effect of saturated fat were either not submitted for publication or not accepted for publication.

Sunday, August 7, 2011

Be skeptical of small numbers

A nuclear bomb is far scarier than a fire cracker.  Both are dangerous, but a nuclear bomb is clearly more destructive.  Not exactly rocket science.  In science-speak, the magnitude of this destruction is called the effect size.  Researchers spend a lot of time determining if an effect is real and how big the effect actually is.  Unfortunately, this information tends to distill down to "there was an effect" or "there was no effect."  This post is inspired by a lunch conversation with the girlfriend's parents, as it seems that nearly every food is out to get us.  It is one thing to say that a food has an effect on our health, but it's just as important to ask how big the effect actually is.

Effect size is an easy concept to measure in the laboratory.  A treated neuron might depolarize 5 times per second while a control neuron depolarizes 2 times per second - an increase of 3 depolarizations per second.  Differences in blood concentrations of a hormone, weight gain in rodents, and increased muscle mass are all easily recognized as an effect size.  In nutritional epidemiology, and epidemiology in general, the effect size is the strength of the association between an exposure (a food) and an outcome (a disease or mortality).  This is often measured as a relative risk.

Before I can talk about relative risk, I should explain absolute risk.  Absolute risk is the probability that an individual will develop a health outcome during a stated period of time (Fig 1).  Absolute risk, often measured as an incidence rate, is only meaningful if we have the number of outcomes AND the size of the population at risk AND a period of time.  The statements "4 men had heart attacks" and "4 out of 10 men had heart attacks" do not contain enough information to draw meaningful conclusions.  Rather, we need to know that "4 out of 10 men had heart attacks over the 5 year study period."  If we have valid rate information about one group of people, we can compare it to another group's.  Absolute risk is vital for understanding the real-world impact of some exposure, but we rely on relative risk to get a grasp on the effect size of an exposure.

Fig 1.  Absolute Risk


Relative risk is simply the ratio of the absolute risk in the exposed group to that in a non-exposed group (i.e. control group).  If there is no difference in incidence rates of disease, then the RR will be 1.  If the exposed group has a higher rate, then the RR will be greater than 1.  And if the exposed group has a lower rate, then the RR is less than 1.  Relative risks are often discussed as percents (e.g. an RR of 1.3 means a 30% increased risk in the experimental group compared to the control); the short sketch below works through this arithmetic.  Scientific journals will report rate ratios, hazard ratios, observed-to-expected ratios, and odds ratios - all of which are variations on relative risk that are particular to different study designs.  Now that we're up to speed on relative risk, let's talk about effect size.

Fig 2.  Relative Risk
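Before moving on to effect size, here is the absolute and relative risk arithmetic from the last two paragraphs in a short sketch.  All of the counts below are made up for illustration.

```python
# Absolute risk: outcomes / population at risk, over a stated time period.
exposed_cases, exposed_n = 40, 1000      # 40 heart attacks among 1,000 exposed men
control_cases, control_n = 30, 1000      # 30 among 1,000 unexposed men
years = 5

absolute_risk_exposed = exposed_cases / exposed_n    # 0.04, i.e. 4% over 5 years
absolute_risk_control = control_cases / control_n    # 0.03, i.e. 3% over 5 years

# Relative risk: ratio of the two absolute risks.
rr = absolute_risk_exposed / absolute_risk_control   # ~1.33

# Expressing an RR as a percent change in risk.
percent_change = (rr - 1) * 100
print(f"RR = {rr:.2f} -> {percent_change:.0f}% higher risk over {years} years")
# An RR of 1.3 is a 30% increase; an RR of 16 (heavy smoking and
# lung cancer, discussed below) is a 1,500% increase.
```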


Effect size can help determine if an association seen in a study is causal.  Provided that the study is reasonably well conducted, a large relative risk suggests a causal association between the exposure and the outcome.  But how large is large?  Smoking and lung cancer are a textbook example of this principle.  Lung cancer is exceedingly rare in populations that do not smoke, especially if there are no industrial hazards.  Based upon an average of relative risks derived from several cohort studies (remember the limitations), men and women who smoke more than 20 cigarettes per day are 16 times more likely to die of lung cancer than non-smokers.  That's a whopping 1,500 percent increase in the risk of dying from lung cancer!  More moderate smokers have a considerably lower risk than the heaviest smokers, but are still far more susceptible, with relative risks of 5.0 and 9.0 for women and men, respectively.  The sheer size of the effect provides evidence that smoking can cause lung cancer.  So what about not-so-large effects?

Because nutritional epidemiology relies heavily on observation rather than randomized controlled trials, the strength of an association can be distorted by confounding variables.  In fact, chances are that every observed effect is confounded by myriad unmeasured variables; many are insignificant, but some are important.  A study from the Health Professionals Follow-Up cohort demonstrated that men who consumed the most sugar-sweetened beverages had a 25% increased risk of developing type 2 diabetes over the 20 year follow-up.  Men who drank the most artificially sweetened beverages (e.g. diet soda) were 91% more likely to develop the disease compared to those who drank the least.  However, after adjusting for the known confounding variables, the sugar-sweetened beverages still increased the risk by 24%, whereas the risk seen in the diet drinkers was completely abolished.  It is easy to see how a relatively large effect size suggests causality, but does not prove it.  But what if the effect persists after adjusting for confounders?

Source:  Wikipedia:  Processed meat


A relatively recent article in the American Journal of Clinical Nutrition reported that men who ate the most processed meat (2 ounces or greater per day) had a 23% greater chance of having a stroke over the course of the 10 year study than those who ate the least (less than 0.7 ounces per day).  Fresh red meat had no effect.  23% sounds fairly alarming; should we go to our fridge and throw out all of our salami and deli meat?  Looking at it another way, the average man in this study had a 6% chance (2,409 out of 40,291 men) of having a stroke over an average follow-up of 10.1 years.  By eating the highest amount of processed meat, his chances now increase to 7.4% (6% x 1.23).  His absolute risk increased by 1.4 percentage points*.
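To make that relative-to-absolute translation explicit, here is the same back-of-the-envelope calculation, using the crude baseline risk described in the footnote.

```python
# Crude baseline risk from the study: 2,409 strokes among 40,291 men
# over an average 10.1 years of follow-up (see footnote).
baseline_risk = 2409 / 40291          # ~0.06, i.e. about 6%

rr_processed_meat = 1.23              # the reported 23% relative increase

risk_high_intake = baseline_risk * rr_processed_meat   # ~0.074, about 7.4%
absolute_increase = risk_high_intake - baseline_risk   # ~0.014

print(f"Baseline risk: {baseline_risk:.1%}")
print(f"Risk at highest processed-meat intake: {risk_high_intake:.1%}")
print(f"Absolute increase: {absolute_increase:.1%} over ~10 years")
```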

This may seem like a lot to you.  But also bear in mind that obesity and heavy smoking each increase the risk of stroke by 100% compared to lean persons and non-smokers, respectively.  Using the average Swedish man above, each factor would increase the risk of stroke from 6% to 12%.  Trading processed meat for fresh meat surely doesn't cause any harm, and this potential risk may well be worth avoiding.  But think about how we need to approach this as scientific evidence.  Given that this has all the standard caveats of a prospective cohort study; and that the food record was based on a single survey given at the beginning of the study; and that you can never measure all of your confounders (they forgot sugar); are studies like this actually capable of detecting a true 23% increase in the risk of a specific disease from a single type of food?  And is it worth constantly changing our diets when we're presented with these kinds of results?

Next time you hear a claim about a food's effect on health, or read another headline, make sure you find out how strong the effect actually is.  More often than not, you will only have access to the relative effect.  So keep in mind that if a disease is exceptionally rare, it will take a very high relative risk to have any real impact.  The risk of non-Hodgkin's lymphoma is .003 per 1,000 people over 1 year, which is so low that an increased risk of 15% probably doesn't reflect a true association, and even if it does, it is virtually irrelevant.  The relative risk allows us to better comprehend the effect, but the absolute risk is what matters to the individual.
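The same arithmetic shows why a modest relative risk on a rare outcome barely moves the absolute risk; this quick sketch uses the figure quoted above.

```python
# Non-Hodgkin's lymphoma risk quoted above: 0.003 per 1,000 people per year.
baseline = 0.003 / 1000        # 3 cases per million person-years
elevated = baseline * 1.15     # a 15% relative increase

extra_cases_per_million = (elevated - baseline) * 1_000_000
print(f"Extra cases per million person-years: {extra_cases_per_million:.2f}")
# Roughly half an extra case per million people per year - it would take a
# very large relative risk before the absolute risk became meaningful.
```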

The problem with nutrition is that when you change something in your diet, it has to be replaced by something else.  How can you know you are making a change for the better?  And enjoying your food is important as well.  There are few things better than salami with cheese and wine, and bacon is arguably the best food there is.  The goal is not to disparage every study, but for the sake of health and culture, be skeptical about small numbers.



*The baseline risk I am using for this example is a crude estimate.  By simply using the number of strokes divided by the number of study participants over 10.1 years, I am ignoring the fact that some men were followed for less time while some were followed for more.  However, this crude estimate approximated stroke statistics in the U.S. that I came across.  So don't hate!

Tuesday, July 12, 2011

On Cohorts and Coffee

Coffee beans, courtesy Elvis John Ferrao on Flickr.
Are you a big coffee drinker?  How many cups per day?  2? 4?  Well, if it's six or more, and if you're a man, then you may be lowering your risk of lethal prostate cancer.  That's great news for those of us who liberally indulge.  But if you don't drink coffee, should you?  Headlines that tout the benefits of individual foods are common.  Many of these findings are produced by observational epidemiology, so making an informed decision about integrating these foods into your diet requires an understanding of how these studies are designed, their strengths, and their weaknesses.  As in the aforementioned headline, much of the science of nutrition is derived from cohort studies.

A prospective cohort study, or longitudinal study, is an epidemiological study that defines two or more groups of people with various exposures (e.g. hormone replacement therapy, coffee drinking), and then follows this cohort to measure any differences in outcomes (disease) between the groups in order to infer a causal association (see figure below).  Ideally, researchers ascertain a breadth of exposures and characteristics to discover associations and to improve the validity of such discoveries.  If an exposure is rare in the general population, a "special exposure cohort" can be used to follow a uniquely exposed group, such as vegetarians among Seventh-day Adventists, and compare the special group's outcomes to a similar non-exposed group or the general population.  There are a few major prospective studies in nutritional epidemiology that warrant some attention.


 Source: Wikipedia.  Note that the investigator ascertains the exposures (black/white) prior to the unknown outcome.

One of the most influential diet studies is the Nurses' Health Study.  This study is technically composed of two phases, NHS I and NHS II.  NHS I began in 1976 to identify potential long-term complications of the oral contraceptives that many women had begun to take.  It was later expanded to include diet and quality of life data.  NHS II began in 1989 and recruited younger nurses for the purpose of collecting data on oral contraception, diet, and lifestyle factors that began earlier in life.  Major findings from this study include: smoking has a strong positive association with cardiovascular disease that diminishes with smoking cessation, obesity increases the risks of several chronic diseases, and a Mediterranean-type diet appears protective.  However, the spurious idea that hormone replacement therapy would prevent coronary heart disease in all post-menopausal women was also produced by this study.

The Health Professionals Follow-Up Study (HPFS) began in 1986 as the male complement to the NHS.  And it produced the coffee-prostate cancer study above.  The cohort comprises roughly 51,000 health professionals who are not medical doctors; over half of them are dentists and the vast majority are white.  Here is an example of the long form survey sent to participants.  Both the NHS and HPFS recruited motivated healthcare practitioners because this population is expected to accurately report disease outcomes and has the occupational commitment to maintain follow-up.  In fact, the NHS has retained a 90% response rate.

The European Prospective Investigation into Cancer and Nutrition, or EPIC, is a European equivalent.  This study has recruited over half a million people from ten European countries, and studies the general population rather than healthcare practitioners.  Here's a neat infographic depicting the reported diets of "health conscious" and general population groups; it's a nice example of how people do not just aggregate around one food choice or another, but rather around a whole pattern of eating.

Cohort studies can be prohibitively expensive and are generally restricted to relatively common diseases or outcomes.  But they offer substantial benefits over other types of observational studies for establishing a causal association.  If putative exposures and outcomes are measured at the same time, such as in cross-sectional studies, one cannot say with absolute certainty which one preceded the other.  Cohort studies are better capable of determining this information, or more technically, establishing the direction of causality.  Additionally, cohort studies often use real-time medical records, physical examination, or biological tests, and sometimes all three, to provide valid measurements of the exposures rather than relying on subjective recall.  However, unlike randomized controlled trials, the exposure status is chosen by the subjects and not the researchers.

In a prospective cohort study, the investigator ascertains the exposure status of the subjects and then groups them accordingly.  If science were easy, these populations would just so happen to be the same with the exception of the exposure of interest.  But science can be cruel, and there are myriad reasons why individuals "choose" different exposures, which biases the results.  In our coffee study, it is possible that men who were developing lethal prostate cancer avoided coffee due to subclinical symptoms related to the impending prostate cancer diagnosis, which would bias cancer-prone individuals away from coffee exposure.  This is called self-selection bias and can only be avoided by assigning exposure.  The investigators attempted to correct for this reverse causation by doing a sub-analysis with urinary symptoms to ensure that these types of symptoms were not associated with lower coffee consumption.  But such a bias could still have occurred from an unknown non-urinary symptomatology or "drive" for cancer-prone men to drink less coffee.  As such, the causal association between exposure and outcome from a cohort study is only inferred (e.g. "heavy coffee consumption protects against lethal prostate cancer"), and can technically only be interpreted as "people who choose, or are otherwise driven by unknown factors, to consume large amounts of coffee tend to have a lower risk of lethal prostate cancer."  And then there's the issue of what we failed to measure.

The second major problem with the validity of cohort studies is the effect of confounding variables.  Because the groups are not randomized, the population with the exposure of interest may also have another exposure that associates with the outcome.  The classic example is the apparent positive association between coffee consumption and lung cancer.  This association is entirely explained by the fact that coffee drinkers also tend to smoke.  In the coffee and prostate cancer study, the investigators made a Herculean effort to control for confounding by adding numerous potential confounders into their risk model.  These included race, BMI, smoking, multivitamin use, PSA test history, and many more, for a total of seventeen variables.  While adjusting for more and more confounders does enhance the validity of the association, remember that this type of manipulation is limited to "prostate cancer risk factors previously identified in this cohort and in other studies."  We cannot know what we have not measured, and it is always possible that there is at least one unknown variable that confounds our association of interest.
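The coffee-smoking-lung cancer example can be made concrete with a small simulation.  Everything here is invented: smoking drives both coffee drinking and lung cancer, coffee itself does nothing, and yet the crude comparison makes coffee look harmful until we stratify by smoking status (a simple stand-in for the multivariable adjustment the investigators actually used).

```python
import random

random.seed(1)

def simulate_person():
    smoker = random.random() < 0.3
    # Smokers are more likely to be heavy coffee drinkers (the confounding path).
    coffee = random.random() < (0.7 if smoker else 0.3)
    # Lung cancer depends on smoking only, not on coffee.
    cancer = random.random() < (0.02 if smoker else 0.002)
    return smoker, coffee, cancer

people = [simulate_person() for _ in range(200_000)]

def risk(group):
    return sum(1 for _, _, cancer in group if cancer) / len(group)

coffee_drinkers = [p for p in people if p[1]]
non_drinkers = [p for p in people if not p[1]]

# Crude (unadjusted) relative risk: coffee appears to roughly double the risk.
print("Crude RR:", round(risk(coffee_drinkers) / risk(non_drinkers), 2))

# Stratified (adjusted) relative risks: the association disappears.
for smoker_status in (True, False):
    stratum = [p for p in people if p[0] == smoker_status]
    drinkers = [p for p in stratum if p[1]]
    others = [p for p in stratum if not p[1]]
    label = "smokers" if smoker_status else "non-smokers"
    print(f"RR among {label}:", round(risk(drinkers) / risk(others), 2))
```

Within each smoking stratum the relative risk sits near 1, which is exactly what adjustment is trying to reveal; the catch, as above, is that you can only stratify or adjust on confounders you thought to measure.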

For what it's worth, I have a soft spot for cohort studies.  The idea of a "natural experiment" is somehow quaint and very appealing.  They offer a lot of validity compared to other types of observational studies, but they always have important flaws, namely selection bias and potential confounding variables.  At the risk of pessimism, good science requires that we highlight the flaws of each experiment.  Read the headlines (and preferably the whole article!) with a critical eye.  Take note of the study design, and always ask how it fits into the greater scheme of the evidence.  As the authors concluded, "it is premature to recommend that men increase coffee intake to reduce advanced prostate cancer risk based on this single study."  Given the nature of this study, I will remain skeptical that coffee is therapeutic, although I am more confident that it is harmless.  But keep in mind that my opinion is heavily biased, as I've invested too much into my habit to stop any time soon.





Friday, July 1, 2011

Introducing Kefir

Kefir
After coming across various blog posts on fermented dairy (yogurt, kefir, cultured butter), I was intrigued.  I placed an order for kefir grains from The KefirLady and started a batch as soon as they arrived in the mail.  Once you start fermenting your own dairy, you've crossed into another world of hippie-locavore-homecooking that you cannot return from.

Kefir is simply fermented milk.  Its origins have been traced to the Caucasus, where the Muslim population regarded kefir as a gift from Allah.  The fermentation is achieved by inoculating a container of milk with a combination of bacteria and yeast, collectively known as kefir grains.  The grains metabolize the nutrients in the milk - the lactose and some fatty acids - and in turn produce carbon dioxide, lactic acid, and small amounts of a host of other compounds, including ethanol.  These benign microorganisms out-compete and actually fend off pathogenic bacteria, thus preserving the milk without refrigeration.  Kefir is unique because it contains both bacteria and yeast, unlike yogurt which only contains bacteria, and because the grains are re-used again and again with each new batch.  Similar to other fermented foods, kefir is considered a probiotic.

Kefir grains
A probiotic is defined as "microbial cell preparations or components of microbial cells that have a beneficial effect on the health and well being of the host."  Many of the purported health benefits of fermented products are commonly attributed to improved gut health and modulation of the immune system (via the gut), although some have investigated possible anti-cancer properties.  While everyone is familiar with probiotics because of the yogurt craze, the idea that fermented foods are healthful is not new.  As implied above, the people of the Caucasus had much appreciation for kefir.  But despite the precedent, we cannot yet call probiotic foods a panacea.

The Cochrane Collaboration has found no benefit of probiotics for inducing or maintaining remission in Crohn's disease, which perhaps argues against how strongly probiotics can modulate the immune response.  In regard to general gut health, Cochrane did find limited evidence for reduced C. difficile re-infection and minor improvements in childhood diarrhea.  These trials commonly used isolated probiotics, so perhaps whole food probiotics are more beneficial.

There is some evidence that yogurt and the lactobacillus bacteria it contains can improve gut health in humans.  However, the most compelling evidence seems to be in regard to relieving constipation, particularly in children and women.  From what I can gather, lactobacillus-containing yogurt does appear to relieve constipation, although not necessarily with the consistency asserted by advertisements.  To be sure, my cursory review of the literature does not warrant a definitive assessment for or against the benefits of probiotic foods, but I think there are a few things to keep in mind.

The whole area of probiotic research suffers from inconsistent trial design, inconsistent use of bacteria and probiotic sources, and trouble defining exactly what "gut health" is.  The role of gut bacteria in health is compelling, but our understanding is still in its infancy.  We know that illness is associated with different gut microflora, such that obese children and those with Crohn's disease have different gut microflora compositions than normal controls.  And as I learned from this past weekend's American Diabetes Association conference, these microflora are partially malleable, e.g. bariatric surgery changes the prevalence of different bacteria within the same individual.  However, we still do not understand what these associations and changes mean.  And while this research will likely provide improved treatment for gut disorders and diseases, we are still unsure how readily one food (kefir, yogurt, fermented vegetables) will change our gut bacteria and subsequently our health.  But I still think trying kefir is worthwhile.

From my experience, and I know that I am not alone, conventional medicine does a poor job with general gut health.  Conventional medicine clearly has a grasp on serious digestive diseases such as colon cancer, but more nuanced problems, such as food intolerances or sensitivities, usually evoke eye-rolling or a psycho-somatic diagnosis.  The diagnosis-by-exclusion Irritable Bowel Syndrome is likely a symptom of this general ignorance.  Although to be fair, I am not convinced that alternative medicine understands gut health as much as it purports to.  So given that fermented foods are both traditional and whole foods (avoid sugar containing yogurts!), and given their potential benefit, I think they are absolutely worth trying.  And because I make the kefir myself, I know that it has the probiotics that it is supposed to.  And if it is nothing but nutritional mysticism, at least it is low carb mysticism (the bacteria eat the milk sugar so that I don't have to).  And hey, instead of a cat to keep me company, now I have kefir grains to come home to.



Tuesday, June 7, 2011

Not-So-Placebo Controlled Trials

Source: Paul at FreeDigitalPhotos
Imagine that you are a new parent.  Your wife has had diabetes since childhood, and after being tested, you find out that you have a diabetes-associated HLA genotype.  Together, this means that your newborn has a genetic risk for diabetes (I'm referring to type 1 or insulin-dependent diabetes for this post), and if the right environmental exposure is present, she could develop the disease.  Being as proactive as you can, you search for available clinical trials that are testing interventions to prevent diabetes.  You acknowledge that it is important to advance scientific understanding of medicine and disease, but let's be honest, you are primarily concerned with your child.  Given that you are not the only participant with this mind-set, how are these experiments influenced by the evolving and sentient nature of human subjects?  After all, lab rats don't know what hypothesis is being tested, but you just might.

Randomized placebo controlled trials (controlled trials, for short) are arguably the best experimental design for understanding human biology and behavior.  Unlike observational epidemiology, such as cohort studies that can only infer causal associations, controlled trials are able to determine the causality between an exposure - think treatment - and an outcome.  This is because controlled trials best approximate the "counterfactual," the impossible ideal whereby the same subject is studied at the same time with and without an exposure.  This is accomplished by randomizing sufficiently large numbers of people into a control and an experimental group(s) so that any confounding characteristics are equally distributed between the groups, removing their influence on the exposure-outcome relationship and thereby studying the "same people" at the same time.  However, this assumes that both groups maintain randomization.
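A toy simulation illustrates why randomization does this heavy lifting.  The subjects below are invented; randomly splitting a large enough group produces two arms with nearly identical characteristics, measured or not, without anyone having to adjust for anything.

```python
import random
from statistics import mean

random.seed(42)

# Invented subjects with two potential confounders: age and BMI.
subjects = [{"age": random.gauss(50, 10), "bmi": random.gauss(27, 4)}
            for _ in range(2000)]

# Random assignment: shuffle, then split down the middle.
random.shuffle(subjects)
control, treatment = subjects[:1000], subjects[1000:]

for name, group in (("control", control), ("treatment", treatment)):
    print(f"{name}: mean age {mean(s['age'] for s in group):.1f}, "
          f"mean BMI {mean(s['bmi'] for s in group):.1f}")
# Both arms end up with nearly the same averages - and the same logic
# applies to characteristics nobody thought to measure.
```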

If a trial is sufficiently long, let's say between six months and a year or more, then some number of subjects will drop out of the trial.  As mentioned above, subjects are randomized into two or more groups, and then demonstrated to be identical at baseline, at least for any known pertinent characteristics.  For example, groups will be shown to have the same distribution of body mass indexes, ages, and exercise levels.  However, as the trial progresses, some subjects will leave the trial, which could result in 20% to 40% attrition.  Every publication of a trial documents the similar baseline characteristics, the number of subjects in each group, and the attrition (see figure).  Researchers must keep track of who leaves the trial and why.


A particular concern is that subjects will learn what group they are in, and if they are unsatisfied with their assignment, may leave the trial and create a selection bias since the groups are no longer identical.  This is one of the reasons to blind the subjects, that is, give them a placebo treatment.  But this is usually easier said than done, especially in dietary trials, which is a concern for this blog.  One problem is that there is no sugar pill equivalent for a diet, since we are all very aware of what we are eating and have some idea, whether right or wrong, of the nutritional implications of our food.  The A-to-Z trial* compared the effectiveness of several popular diets, including the Atkins Diet and the Ornish Diet, on weight loss and metabolic parameters.  The Ornish Diet is essentially a whole-foods, nearly-vegan diet, while the Atkins Diet is, well, the Atkins Diet.  Unless you've been living under a rock, you would already have a preconceived notion of the Atkins Diet and might even leave the experiment after watching an episode of Dr. Oz halfway through the trial.  Fortunately, the investigators documented how many people left the trial and why, and there was no apparent bias.  Although the degree of adherence was another matter.

The placebo is also necessary to ensure that the groups are exposed to the treatment in the desired manner - this is to avoid what is known as bias by differential misclassification.  This was particularly evident in a primary prevention trial for children at high risk for diabetes.  Previous non-trial research had shown a link between the age of dietary gluten introduction and the onset of diabetes, so researchers hypothesized that delaying the introduction of gluten from 6 months of age to 12 months of age would prevent pancreatic islet autoimmunity and subsequent diabetes.  While this was a preliminary study designed to test the feasibility and safety of such an intervention, it was still disappointing that the results were null.  But for our purposes, it demonstrated the placebo problem.  Subjects, by and large, did not leave the trial, but many opted to ignore their assignment.  At least 15% of subjects switched from the control group to the experimental group because they perceived a greater chance of success by delaying gluten exposure.  The researchers did analyze the initial assignments, and also the "new" experimental and control groups, to avoid the differential misclassification bias.  However, this latter analysis means that the groups were no longer randomized and that the experimental group now had a greater proportion of participants who were willing to switch groups - and any other characteristics (highly motivated, well-informed, aggressive?) that these types of people have.  Thus, the actions of the subjects can make our initial assumptions, that both groups are random and appropriately exposed, invalid, or at least, less valid.

Experimental trials are the most sound means of determining the causal association between an exposure and an outcome, but they still have inherent flaws that we must anticipate.  And this extends beyond mere methodological considerations and offers practical implications.  First, it may help to interpret clinical trials based on how well you match the study population - are you the same sex, age, and BMI classification, and would you yourself participate in such a trial (this may help gauge your behavioral similarity to the study population)?  Second, the results of an experimental trial should not be regarded as dogma.  One clinical trial with little grounding in basic science (e.g. homeopathic medicine) likely needs much more investigation, whereas several clinical trials that refute modest epidemiological evidence (e.g. dietary saturated fat and heart disease risk factors) probably clarify the true association.  We cannot study ourselves as though we are lab rats, so let's not interpret our studies as though we are.

*In case you didn't look at the link, the Atkins Diet surpassed all of the other diets, including the Ornish Diet, in terms of weight loss and improvements in cholesterol/metabolic markers after one year in pre-menopausal women.  Perhaps this changes your preconceptions a bit?