
Friday 16 April 2010

The Study Everyone Talks About Part 1: Correlation is NOT Causation

The China Study

Whenever I get to talking paleo with people, it comes up.  Inevitably.

"Have you heard of The China Study?"

"But what about The China Study?"

"The China Study is based on tons of RESEARCH, where is yours?" and

"My friend/cousin/neighbor/sibling/pet became a vegan/vegetarian after reading The China Study--it was THAT convincing!"

Sigh.

Okay, so let's compile those reviews and research on why The China Study is NOT an insta-kill to the paleo/primal diet or low-carb approach.  I am breaking this topic into bite-sized portions since it is GINORMOUS (yes, that's a word)!  So today we'll tackle what the study said and the limitations of a study that large.

*cracks knuckles*  *takes a deep breath*


The China Study: What Is It?

From The China Study website (my emphasis in bold):
"The research project culminated in a 20-year partnership of Cornell University, Oxford University, and the Chinese Academy of Preventive Medicine, a survey of diseases and lifestyle factors in rural China and Taiwan. More commonly known as the China Study, “this project eventually produced more than 8000 statistically significant associations between various dietary factors and disease.” 
The findings? “People who ate the most animal-based foods got the most chronic disease … People who ate the most plant-based foods were the healthiest and tended to avoid chronic disease. These results could not be ignored,” said Dr. Campbell.
Wikipedia further elaborates on the study's size:
"The China Study," referred to in the title is the China Project, a "survey of death rates for twelve different kinds of cancer for more than 2,400 counties and 880 million (96%) of their citizens" conducted jointly by Cornell University, Oxford University, and the Chinese Academy of Preventive Medicine over the course of twenty years.
and provides the authors' recommendations:
The authors recommend that people eat a whole food, plant-based diet and avoid consuming beef, poultry, eggs, fish and milk as a means to minimize and/or reverse the development of chronic disease. The authors also recommend that people take in adequate amounts of sunshine in order to maintain sufficient levels of Vitamin D and consider taking dietary supplements of vitamin B12 in case of complete avoidance of animal products. The authors criticize "low carb" diets (such as the Atkins diet), which include restrictions on the percentage of calories derived from complex carbohydrates.
Bottom line: through observational (epidemiological) studies and meta-analyses, Dr. T. Colin Campbell and his team found that animal protein in the diet correlated with increased risk of disease.  His recommendation?  Go vegan.


The Limitation of Epidemiological Studies: Correlation Is NOT Causation, Peoples!

One of the major limitations of this kind of research is its breadth.  You can pull so much data together that it becomes muddied with confounding factors, and the linkages you make are tenuous at best.  Dr. Eades, author of Protein Power, has gotten so tired of arguing against these studies that he posted a reference about them:
Observational studies – also called prospective or cohort studies and sometimes even epidemiological studies – are the kind most often reported in the media simply because there are so many of them.  These are the studies in which researchers look for disease disparities between large populations of people with different diets, lifestyles, medications, incomes, etc.  If disease disparities are found to exist between groups, then researchers try to make the case that the difference in diet, lifestyle, medication, etc. is the driving force behind the disparity.
And meta-analyses:
For those who don’t know, meta-analyses are compilation studies in which researchers comb the medical literature for papers on a particular subject and then combine all the data  from the individual studies together into one large study.  This combining is often done to bring together a collection of studies, none of which contain data that has reached statistical significance, to see if the aggregate of all the data in the studies reaches statistical significance.  I think these types of meta-analyses are highly suspect, because they can lead to conclusions not warranted by the actual data. 
and the problem:
Researchers using meta-analyses set up selection criteria to pick which studies will be included in their final product, which leaves the door open for all kinds of mischief.   
Dr. Eades has a great analogy to share, so read the original reference, but it boils down to:
Problem is they can never possibly think of all the differences between the groups.  As a consequence, they never have a perfect study with exactly the same number, sex, age, lifestyle, etc. on both sides with the only difference being the study parameter. And so they don’t really ever prove anything.  
Observational studies only show correlation, not causation, a fact that everyone doing research and reading about research should have tattooed on their foreheads. 
Correlation can create a hypothesis for further testing.  That's it.  No light-bulb-over-the-head, ah-HA! moment of realization.  Sorry.
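To make the confounding problem concrete, here is a toy simulation of my own (not anything from the study or from Dr. Eades; the variable names are purely hypothetical).  A hidden confounder, call it "wealth," drives both a diet score and a disease score.  Diet has ZERO causal effect on disease in this made-up world, and yet the two still correlate:

import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# A hidden confounder (hypothetical "wealth") that never gets measured.
wealth = rng.normal(size=n)

# Diet score and disease score each depend on wealth plus noise.
# Diet has NO direct effect on disease in this toy world.
diet = 0.8 * wealth + rng.normal(size=n)
disease = 0.8 * wealth + rng.normal(size=n)

r = np.corrcoef(diet, disease)[0, 1]
print(f"diet-disease correlation: {r:.2f}")  # ~0.4, despite zero causation

A naive observer would report that diet "is associated with" disease.  The association is perfectly real; the cause lives entirely in the third variable nobody measured.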


So where does The China Study come in?  The problem is that although The China Study definitely covers the Correlation is not Causation topic, it doesn't heed its own warnings.  It still slips down the rabbit hole when it says:
This does not mean that correlations are useless.  When they are properly interpreted, correlations can be effectively used to study nutrition and health relationships.  The China Study, for example, has over 8,000 statistically significant correlations, and this is of immense value.  When so many correlations like this are available, researchers can begin to identify patterns of relationships between diet, lifestyle and disease.  These patterns, in turn, are representative of how diet and health processes, which are usually complex, truly operate.  However, if someone wants proof that a single factor causes a single outcome, a correlation is not good enough.
Okay, I am getting a little nervous.  Correlations are now "patterns."  Are they trying to make correlations sound more concrete?  Patterns are usually obvious, and if something is obvious, does that mean it is real?  Continuing down that hole:
After obtaining the results from a variety of studies, we can then begin to use these tools and concepts to assess the weight of the evidence.  Through this effort, we can begin to understand what is most likely to be true, and we can behave accordingly.  Alternative hypotheses no longer seem plausible, and we can be very confident in the result.  Absolute proof, in the technical sense, is unattainable and unimportant.  But common sense proof (99% certainty) is attainable and critical.  
Now my fears are realized.  The China Study authors have leapt from correlation to 'truth,' and "behave accordingly" sounds a whole lot like a prescription to me.  In one fell swoop, they've eliminated other hypotheses and don't even have to test their own, thanks to their 'confidence in the result.'  Heck, why even bother seeking proof, since it's "unattainable" and "unimportant"?  "Common sense" is enough.  Well, maybe for them, but NOT for me.  And throwing out the animal-based diet that we evolved upon, and that may even have been the impetus for our evolutionary path, is NOT common sense to me!

Okay, take a deep breath, Kristy...  Watch those cortisol levels.  I am just flustered at how well they can lay out the Correlation is not Causation topic and then, with a final jab, leave the average reader feeling satisfied that the sheer breadth of the research is enough to provide 'truth' and prescription.  Who the hell needs causation when you have those?


Think I am just crazy?  Think The China Study makes perfect sense?  Read the thorough New York Times article by Gary Taubes (author of Good Calories, Bad Calories).  He describes the leap from epidemiological study to preventive medicine as skipping the vital experimental testing of the hypotheses that epidemiological studies produce.  Why is this leap so often made?  Well, it's complicated:
The randomized-controlled trials needed to ascertain reliable knowledge about long-term risks and benefits of a drug, lifestyle factor or aspect of our diet are inordinately expensive and time consuming. By randomly assigning research subjects into an intervention group (who take a particular pill or eat a particular diet) or a placebo group, these trials “control” for all other possible variables, both known and unknown, that might affect the outcome: the relative health or wealth of the subjects, for instance. This is why randomized trials, particularly those known as placebo-controlled, double-blind trials, are typically considered the gold standard for establishing reliable knowledge about whether a drug, surgical intervention or diet is really safe and effective.
But clinical trials also have limitations beyond their exorbitant costs and the years or decades it takes them to provide meaningful results. They can rarely be used, for instance, to study suspected harmful effects. Randomly subjecting thousands of individuals to secondhand tobacco smoke, pollutants or potentially noxious trans fats presents obvious ethical dilemmas. And even when these trials are done to study the benefits of a particular intervention, it’s rarely clear how the results apply to the public at large or to any specific patient. Clinical trials invariably enroll subjects who are relatively healthy, who are motivated to volunteer and will show up regularly for treatments and checkups. As a result, randomized trials “are very good for showing that a drug does what the pharmaceutical company says it does,” David Atkins, a preventive-medicine specialist at the Agency for Healthcare Research and Quality, says, “but not very good for telling you how big the benefit really is and what are the harms in typical people. Because they don’t enroll typical people.”
These limitations mean that the job of establishing the long-term and relatively rare risks of drug therapies has fallen to observational studies, as has the job of determining the risks and benefits of virtually all factors of diet and lifestyle that might be related to chronic diseases. The former has been a fruitful field of research; many side effects of drugs have been discovered by these observational studies. The latter is the primary point of contention.
That latter is the basis for The China Study.  Still not convinced?  Read Lierre Keith's scouring of epidemiological studies in The Vegetarian Myth, and heed her warning:
...until all the variables are controlled and the results reproducible, no conclusions can be drawn.  
Can you even begin to imagine the variables in a study as large as The China Study happily professes to be?  Remember, it is based upon a "survey of death rates for twelve different kinds of cancer for more than 2,400 counties and 880 million (96%) of their citizens."  Even small studies have nearly unlimited variables: diet, sleep, exercise, family, work, stress, transportation, socioeconomic status, environmental differences, genetic endowment, life history, having an ingrown toenail, etc.  So how is anything meaningful ever said?  Researchers find correlations that create testable hypotheses, and then they test them.  Over and over again.  Once they get the same results over and over again, THEN they can say something meaningful, like X leads to greater risk of Y.  Until then, one study of 20 people and one study compiling the results of 300 different studies are equally meaningless when it comes to predictive power.
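And if you want to see how a pile of "statistically significant" correlations can fall out of nothing at all, here is another quick sketch of mine (a toy illustration of the multiple-comparisons problem, NOT the study's actual data; the counts are arbitrary).  Screen 1,000 pure-noise "dietary factors" against a pure-noise "disease rate" and roughly 5% of them will pass the usual p < 0.05 bar by chance alone:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_units, n_factors = 100, 1_000  # e.g., 100 counties, 1,000 measured factors

# Everything here is random noise: no factor truly relates to the outcome.
factors = rng.normal(size=(n_units, n_factors))
outcome = rng.normal(size=n_units)

hits = sum(
    stats.pearsonr(factors[:, j], outcome)[1] < 0.05  # [1] is the p-value
    for j in range(n_factors)
)
print(f"{hits} of {n_factors} factors are 'statistically significant'")
# Expect roughly 50 (5%) to pass, every single one of them meaningless.

Run enough comparisons and "significant" findings are guaranteed.  Only repeated, controlled testing of the hypotheses they suggest separates real signal from noise.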

Gary Taubes (in the same New York Times article) provides a suggestion for critically evaluating scientific research:
So how should we respond the next time we’re asked to believe that an association implies a cause and effect, that some medication or some facet of our diet or lifestyle is either killing us or making us healthier? We can fall back on several guiding principles, these skeptical epidemiologists say. One is to assume that the first report of an association is incorrect or meaningless, no matter how big that association might be. After all, it’s the first claim in any scientific endeavor that is most likely to be wrong. Only after that report is made public will the authors have the opportunity to be informed by their peers of all the many ways that they might have simply misinterpreted what they saw. The regrettable reality, of course, is that it’s this first report that is most newsworthy. So be skeptical.
If the association appears consistently in study after study, population after population, but is small — in the range of tens of percent — then doubt it. For the individual, such small associations, even if real, will have only minor effects or no effect on overall health or risk of disease. They can have enormous public-health implications, but they’re also small enough to be treated with suspicion until a clinical trial demonstrates their validity.
If the association involves some aspect of human behavior, which is, of course, the case with the great majority of the epidemiology that attracts our attention, then question its validity. If taking a pill, eating a diet or living in proximity to some potentially noxious aspect of the environment is associated with a particular risk of disease, then other factors of socioeconomic status, education, medical care and the whole gamut of healthy-user effects are as well. These will make the association, for all practical purposes, impossible to interpret reliably.
The exception to this rule is unexpected harm, what Avorn calls “bolt from the blue events,” that no one, not the epidemiologists, the subjects or their physicians, could possibly have seen coming — higher rates of vaginal cancer, for example, among the children of women taking the drug DES to prevent miscarriage, or mesothelioma among workers exposed to asbestos. If the subjects are exposing themselves to a particular pill or a vitamin or eating a diet with the goal of promoting health, and, lo and behold, it has no effect or a negative effect — it’s associated with an increased risk of some disorder, rather than a decreased risk — then that’s a bad sign and worthy of our consideration, if not some anxiety. Since healthy-user effects in these cases work toward reducing the association with disease, their failure to do so implies something unexpected is at work.
All of this suggests that the best advice is to keep in mind the law of unintended consequences. The reason clinicians test drugs with randomized trials is to establish whether the hoped-for benefits are real and, if so, whether there are unforeseen side effects that may outweigh the benefits. If the implication of an epidemiologist’s study is that some drug or diet will bring us improved prosperity and health, then wonder about the unforeseen consequences. In these cases, it’s never a bad idea to remain skeptical until somebody spends the time and the money to do a randomized trial and, contrary to much of the history of the endeavor to date, fails to refute it.

I think I have given your brain enough food for thought for today.  At least now your first line of defense against The China Study question is that it can provide correlation but NOT causation.  The China Study offers NO predictive evidence that an animal-based diet causes chronic disease or that a plant-based one prevents it.  No X-leads-to-Y conclusions are possible.  Period.

Feast upon it and we'll come back to discuss the reviews and research.  Enjoy your weekend!

Here is the next part: The Study Everyone Talks About Part 2: The Ravaging Reviews
