True or False: A Spotter’s Guide to Bad Science Reporting
I’ve been writing this column for a few years now, and I’ve learned a lot in the process. Although many of the columns have covered topics that were familiar to me, sometimes the answers I found when I started looking through the research in more detail surprised me. Often things weren’t as clear-cut as I was expecting, and the final article turned out to be more complicated and interesting than the one I had in mind.
Sadly, this is my final column in the series. So, to wrap up, I’m going to give some DIY myth-busting tips that you can apply to whatever the latest fitness fad of the month happens to be.
Correlation does not imply causation
If you remember just one thing from my columns, please make it this one. Just because two things are correlated, it doesn’t mean that one causes the other. Most people die in bed, but you’re unlikely to live longer by sleeping on the floor.
There are lots of good examples of causal links being incorrectly implied – but one of my favorites is the “link” between diet drinks and depression. Yes, depression is more common in people who drink more diet soda. But before throwing away all your Pepsi Max, perhaps consider that drinking diet soda is also more common amongst people who are… dieting. And being underfed and unhappy with your body image may have more to do with the increased risk of depression than a small amount of aspartame.
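If you like to tinker, here’s a minimal sketch in Python – with entirely invented numbers – of how a third factor (dieting, in this case) can make two things look linked even when neither has any effect on the other:

```python
# Toy simulation with made-up numbers: "dieting" raises the chances of both
# drinking diet soda and feeling low, but soda itself does nothing at all.
import random

random.seed(0)
n = 100_000
drinks_soda, depressed = [], []
for _ in range(n):
    dieting = random.random() < 0.3                      # 30% of people are dieting
    soda = random.random() < (0.7 if dieting else 0.2)   # dieters drink more diet soda
    low = random.random() < (0.25 if dieting else 0.10)  # dieters also feel lower
    drinks_soda.append(soda)
    depressed.append(low)

def rate(in_group):
    """Depression rate within a subgroup."""
    return sum(d for d, g in zip(depressed, in_group) if g) / sum(in_group)

print(f"Depression rate, soda drinkers:     {rate(drinks_soda):.1%}")
print(f"Depression rate, non-soda drinkers: {rate([not s for s in drinks_soda]):.1%}")
# The drinkers come out noticeably worse off, even though the code gives soda
# no effect whatsoever – the correlation is entirely down to the confounder.
```

Run it and the soda drinkers show a depression rate several percentage points higher than the non-drinkers, purely because dieting drives both.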
Cherry picking
Scientific studies that show that something didn’t work are important, but often they’re just not very interesting to the general public. If your research shows that there’s no link between eating blueberries and winning Olympic medals (for example), it’s unlikely that many journalists will be banging on your door. Depending on how conscientious you are, you may not even publish it. If your study happens to find that there is a link, though, you can almost guarantee some press coverage. This means that even if most studies investigating something come out negative, the few positive results are likely to get much more attention. What’s more, someone trying to sell you blueberries is likely to pick out any positive results and quietly ignore the negative ones. They’re not lying to you, exactly—they’re just not telling you the whole truth.
Good scientists don’t attach too much significance to any one particular study; they know it’s important to look at it in the context of all the work that’s been done on the topic. It’s all too easy nowadays, though, for someone who wants to prove a point to look up a couple of studies that support their point of view. Without the field as a whole for context, those hand-picked studies are often meaningless.
Low-quality research
Not all scientific research is equal. There are a lot of really bad scientific studies out there, including many that have been published in some journal or other. When I finally track down a piece of sports science research referenced in an article, I’m often unsurprised to discover that it was conducted on half a dozen sports science students. Of course, when this is reported in the press, no mention is made of details like the sample size, or other factors affecting the quality of the research.
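Why does a sample of six matter so much? Here’s a rough illustration in Python, again with invented numbers, comparing how much the result of a tiny study bounces around next to a larger one:

```python
# Rough illustration with invented numbers: simulate the same "true" training
# effect many times, once with tiny studies and once with larger ones, and see
# how much the measured result varies from study to study.
import random
import statistics

random.seed(1)
TRUE_EFFECT = 1.0   # the real average improvement (arbitrary units)
SPREAD = 3.0        # individual variation around that average

def run_study(n_participants):
    """Average measured improvement in one simulated study."""
    return statistics.mean(random.gauss(TRUE_EFFECT, SPREAD)
                           for _ in range(n_participants))

for n in (6, 200):
    estimates = [run_study(n) for _ in range(1000)]
    print(f"n = {n:3d}: results range roughly from {min(estimates):+.1f} "
          f"to {max(estimates):+.1f} (true effect is {TRUE_EFFECT:+.1f})")
# With six participants the estimate swings wildly and sometimes even has the
# wrong sign; with a couple of hundred it stays close to the true value.
```

Neither result is “wrong” – it’s just that a handful of participants leaves an enormous amount of room for luck.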
Sometimes research isn’t just bad; it’s deliberately bad. In trying to come up with an interesting positive result (perhaps the one they set out to prove in the first place), many researchers have succumbed to the temptation of p-hacking – tweaking the variables and the statistics until they come up with something that is “statistically significant.” This isn’t always easy to spot (though there are some clever statistical methods which can help), but it’s always worth keeping in mind, especially if a study comes up with a “p-value” that is only just statistically significant.
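To see why a p-value only just under 0.05 deserves suspicion, here’s a toy demonstration in Python (made-up data, with no real effect anywhere) of what happens when you simply measure enough different things:

```python
# Toy demonstration: both groups are drawn from exactly the same distribution,
# so every "significant" difference found below is a false positive.
import math
import random
import statistics

random.seed(2)

def p_value(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

N_OUTCOMES = 20      # sleep, mood, sprint time, grip strength, ...
N_PER_GROUP = 30
trials = 1000
lucky_studies = 0
for _ in range(trials):
    hits = 0
    for _ in range(N_OUTCOMES):
        group_a = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
        group_b = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
        if p_value(group_a, group_b) < 0.05:
            hits += 1
    if hits > 0:
        lucky_studies += 1

print(f"Simulated studies that 'found' something significant: {lucky_studies / trials:.0%}")
# Expect roughly 1 - 0.95**20, i.e. around 64%, even though nothing real is
# going on in the data.
```

Real p-hacking is usually subtler than this – dropping outliers, trying different subgroups, adjusting for different variables – but the underlying arithmetic is the same.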
Burying the crucial details
When a report tells you that some food causes cancer, but neglects to mention that the study tested this on rats eating massive quantities of it, or a headline suggests that a “long-term” vegetarian diet increases the risk of cancer and heart disease, but only mentions later on in the article that “long-term” means many generations, not a decade or two – it’s easy for a less careful reader to get the wrong end of the stick. Or, take the headline that “a glass of wine is equivalent to an hour at the gym.” How many readers would stay around long enough to find out that the study took place on rats, not humans, and that it only investigated one substance contained in red wine, not red wine itself? Worse still, the amount of wine you’d have to drink to get an equivalent dose would certainly come with far bigger problems of its own.
Sometimes the crucial details aren’t so much buried as changed beyond all recognition – as in the popular headline that “smelling farts can cure cancer.”
Mythbusting 101
By now you might be wondering whether you can believe any media reports about scientific studies at all. Most science, after all, is not especially interesting or dramatic. It progresses in small steps, and it’s full of caveats, “maybes” and uncertainty. Big dramatic results are rare, and many of those turn out to be unreproducible. So, if you don’t have time to read through the entire scientific literature on a topic, what’s the best way to sift the truth from the bullshit? Here are a few tips.
- Try to maintain a healthy skepticism – both about the results that confirm your existing beliefs and about the ones that don’t. Don’t assume that just because an article quotes a “scientific study,” it means anything. If something sounds too incredible to be true, there’s a very good chance that it is.
- How credible is the source? Although it’s not a bulletproof guarantee, an article on Wikipedia (for example) has probably been more carefully researched and scrutinized than one on a dubious health blogger’s personal website.
- Look up the references. A good article should contain a reference to the studies that it’s talking about; failing that, it’s often possible to find the study by searching on a database like PubMed, or on Google Scholar. Look these up, and at the very least read through the abstract – or ideally the full article if you can get hold of it. Scientific papers don’t always make easy bedtime reading, but it does get easier with practice. If you have friends who are more familiar with the science than you are, ask them what they think of it.
- Look carefully at what the study actually shows. Is it the same as the article claims?
- How credible is the study? Making a thorough assessment of the quality of research isn’t easy if you’re not a scientist, but one thing you can look at is the number of participants in the study. Small studies with very few participants are less likely to be reliable than large studies.
- Pay attention to the consensus of scientists working in that field. Experts are useful because they can summarize all the work in a particular field and put any one result into context. If possible, look for a systematic review that summarizes the results of all the studies on a topic, as this will give a better overview than any single study.
- Always, always remember that correlation is not causation! Two things can be linked without one causing the other. Make a habit of looking for possible alternative explanations.
Rosi Sexton studied math at Cambridge University, and went on to do a PhD in theoretical computer science before realizing that she didn’t want to spend the rest of her life sitting behind a desk, so she became a professional MMA fighter instead. Along the way, she developed an interest in sports injuries, qualified as an osteopath (in the UK), and became the first British woman to fight in the UFC. She retired from active competition in 2014, and these days she divides her time between fixing broken people, doing Brazilian Jiu Jitsu, climbing, writing, picking up heavy things, and taking her son to soccer practice.