Determining the validity of scientific evidence has come up in TWO of my courses in the last week, and there’s been a lot of attention on this measles outbreak because people are ignoring the “science” and refusing to have their children vaccinated. I imagine that a lot of my readers who aren’t versed in research methods don’t realize that any time the media presents a single new ground-breaking study about our health or happiness or environment, it should be taken with a grain of salt.
Just to be clear, this is quite relevant to my areas of interest, as all nutrition and health advice given by the government and health practitioners is supposed to be based on sound scientific evidence, according to established scientific methods.
There are a lot of directions I could go with this, but I think one of the most important lessons for the scientifically illiterate is correlation vs. causation. A very popular research method in public health and many other disciplines is the observational survey. This means that the researchers don’t control or manipulate any variables, as opposed to an experimental study, where researchers would manipulate one or more variables. For example, some researcher decades ago said “Hey, maybe this cigarette smoking thing is related to lung cancer–let’s recruit a group of people, some with and some without lung cancer, and see if and how much they smoke.” I’m guessing that that researcher found that, on average, the people who had lung cancer smoked a lot more than the people who didn’t. That is what we call a correlation: smoking in this case was positively correlated with having lung cancer. Even though we are now quite certain that smoking causes lung cancer, an observational study like this can’t establish causation, because there are too many of what we call confounding factors.
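To make “correlation” concrete, here’s a toy sketch, with invented numbers rather than real smoking data, that computes a Pearson correlation between cigarettes smoked per day and a 0/1 lung-cancer indicator for ten hypothetical people:

```python
from math import sqrt

# Invented numbers for illustration only: five people with lung cancer
# (indicator 1) and five without (indicator 0), with hypothetical
# cigarettes-per-day figures for each.
cigs_per_day = [30, 25, 40, 20, 35, 5, 0, 10, 2, 8]
has_cancer   = [ 1,  1,  1,  1,  1, 0, 0, 0, 0, 0]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

r = pearson(cigs_per_day, has_cancer)
print(round(r, 2))  # strongly positive: the cancer group smokes more
```

A coefficient near +1 means the two variables rise together, near −1 means one falls as the other rises, and near 0 means no linear relationship. Note that nothing in this calculation says anything about *why* the variables move together.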
A confounding factor is a third variable that’s correlated with both the exposure (here, smoking) and the outcome (lung cancer), so you can’t tell whether the exposure or the confounder is responsible. For instance, one of the studies brought up in my classes showed that women who took supplemental estrogen around the time of menopause had fewer heart attacks than women who didn’t. When this result got out to the public, post-menopausal women everywhere started taking estrogen. Then researchers did a controlled experiment, where they randomly assigned some women to take an estrogen tablet regularly and some to take a placebo (something that resembles the treatment but has no active ingredients; in theory, it should have no effect). That study found that the women taking the estrogen actually had more heart attacks than the women taking the placebo. The only way to reconcile this finding with the original observational finding is to acknowledge one or more confounding factors. In this case, experts have suggested that women who voluntarily took estrogen were probably more health conscious than those who didn’t, and more likely to take care of themselves in other ways, like diet, exercise, and following their doctors’ instructions. This fundamental difference between the women who took estrogen and those who didn’t is said to have confounded the results of the study.
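The estrogen story can be sketched as a small simulation. Every probability below is made up purely for illustration: I give the tablet a genuinely harmful effect, yet because health-conscious women are (by assumption) both more likely to take it and less likely to have heart attacks, the raw observational comparison makes the tablet look protective:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

N = 100_000
takers, non_takers = [], []

for _ in range(N):
    health_conscious = random.random() < 0.5
    # Assumption: health-conscious women are far more likely to take estrogen...
    takes_estrogen = random.random() < (0.8 if health_conscious else 0.2)
    # ...and have a much lower baseline heart-attack risk.
    risk = 0.05 if health_conscious else 0.15
    if takes_estrogen:
        risk += 0.02  # the tablet itself is (by assumption) slightly harmful
    attack = random.random() < risk
    (takers if takes_estrogen else non_takers).append(attack)

rate_takers = sum(takers) / len(takers)
rate_non_takers = sum(non_takers) / len(non_takers)
# Observationally, takers look BETTER off, despite the harmful true effect.
print(f"takers: {rate_takers:.3f}  non-takers: {rate_non_takers:.3f}")
```

The takers’ heart-attack rate comes out lower than the non-takers’, even though the code explicitly adds risk for taking the tablet. That is confounding in miniature: the comparison groups differ in something other than the treatment.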
In the case of nutritional science, theories based on observational studies are more complex, more confusing, and can be more misleading. For example, survey after survey has shown that people who eat more foods containing Vitamin A have less cancer than those who eat fewer of them. In other words, Vitamin A consumption is negatively correlated with cancer incidence. Consequently, a lot of people take Vitamin A supplements. However, as far as I’m aware, no study has been able to show that people who are randomly assigned to take Vitamin A supplements have any better health outcomes than those randomly assigned to a placebo. Therefore, we cannot say with any confidence that taking Vitamin A supplements alone will reduce your risk of cancer.
Another huge problem is that the research that tends to get the most public attention is the research with novel findings, findings whose validity does not yet have broad academic consensus. So when one study observes that people on gluten-free diets are thinner than people who eat gluten, someone might start cutting out gluten even though so many other explanations for that finding haven’t yet been ruled out by other studies.
So how do we prove (or rather infer or confidently conclude, as nothing can really be proven) causation, especially when it comes to food and health? To me, there are three important criteria. One is the amount of evidence. I base my plant-based diet health claims mostly on this one. It would be almost impossible to test in a controlled, experimental way whether increasing the proportion of your diet that comes from whole, plant-based foods (as opposed to heavily processed foods and animal products) universally improves human health outcomes. However, researchers have shown in countless studies, across practically every population, that people who eat relatively more whole grains, fruits, and vegetables, and relatively less highly processed food and meat, have better health outcomes, and that is enough for me to be very confident in my current dietary pattern.
The second criterion is scientific plausibility. There should be an explanation, a pathway, something that can tell you why variable A is causing variable B. For example, smoking very plausibly releases carcinogens into one’s lungs that facilitate the development of cancer. Vitamin A (when in food, not supplements) may prevent cancer by acting as an antioxidant. There is not, I would argue, a plausible scientific explanation for why ice cream consumption is correlated with murder, or why living in a poor country is associated with larger penis size (both real correlations).
The third criterion, which I alluded to above, is experimental evidence. In an ideal experiment according to accepted scientific methods, you assemble a large sample of subjects, you assign each of them randomly to one of two or more conditions (e.g. Vitamin A supplement vs. placebo), and you try to control for any other factors that might affect the results. For example, you’d want to make sure that everyone took their vitamin every day throughout the course of the study. Or if you were studying the benefits of eating breakfast, you’d want to control for the time at which breakfast is eaten and possibly the size and composition of the meal, as well as the size and composition of other meals throughout the day.
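As a hedged sketch (again with invented probabilities), here’s why random assignment is so powerful: give a supplement a genuinely harmful effect and let a lifestyle confounder exist in the population, but assign the supplement by coin flip rather than by choice. The confounder now splits evenly between the two arms, and the harmful effect becomes visible:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

N = 100_000
treated, placebo = [], []

for _ in range(N):
    health_conscious = random.random() < 0.5
    # Random assignment: getting the supplement no longer depends on lifestyle.
    gets_supplement = random.random() < 0.5
    # Same assumed confounder: health-conscious subjects have lower baseline risk.
    risk = 0.05 if health_conscious else 0.15
    if gets_supplement:
        risk += 0.02  # assumed harmful effect of the supplement itself
    bad_outcome = random.random() < risk
    (treated if gets_supplement else placebo).append(bad_outcome)

rate_treated = sum(treated) / len(treated)
rate_placebo = sum(placebo) / len(placebo)
# The treated arm correctly shows the HIGHER rate of bad outcomes.
print(f"treated: {rate_treated:.3f}  placebo: {rate_placebo:.3f}")
```

Because the coin flip is independent of health consciousness, both arms contain the same mix of lifestyles on average, so any remaining difference between the arms can be attributed to the treatment itself.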
So if you don’t eat breakfast but are perfectly healthy, I wouldn’t worry about the next scientific study implying that you must eat breakfast to be healthy. There is plenty of evidence, strong and weak, on both sides of the debate. The same goes for dairy, gluten, soy, and low-carb diets.
Ah, controversy. Anyway, that’s my spiel on science. And I could have gone on for longer. In summary, I think that we can get pretty close to the truth in a lot of cases. But don’t base the “truth” on any one study, especially any one observational study.