How to Read A Scientific Study Part 1: Tips From the Pros
Yael Grauer

When you come across an online health article or blog post citing a scientific study, it’s important to have some strategies to put alarming claims or statistics in perspective.

This month, we’ll look at a handful of suggestions from those in the know. I spoke with Reuters Health executive editor Ivan Oransky, MD (who also teaches medical journalism at NYU), San Diego-based strength coach Brian Tabor (M.S., Exercise Physiology), Examine.com’s lead research editor Kurtis Frank, and Ph.D. candidate Bret Contreras (who co-writes the Strength and Conditioning Research Review).

Next month, we’ll wrap things up by looking at the differences between all of the types of studies out there, and what their individual strengths and weaknesses are, as well as breaking down the different sections of a study, and defining some common scientific terminology. (Feel free to send your most pressing questions to yael@yaelwrites.com.)

Okay. So you’re sitting at your desk and checking Facebook, when along comes the headline, “Eating Red Meat Will Kill You, Says Science.” (This is an actual headline. Check for yourself.) For the next three hours, the blogosphere (read: echo chamber) is repeating the claim, and your inbox is getting filled with messages from your vegan aunt and vegetarian neighbors. What’s a Performance Menu reader to do? Here are some suggestions to heed the next time this happens to you.

1. Read the actual study.


Whether you’re reading about your impending doom from last night’s steak dinner, or come across headlines about how your risk of heart disease could decrease by 500% because of a magical supplement, looking at the study is always a wise first step. “If the claim seems too good to be true, seek out the actual article,” Bret Contreras advises. “At the very least, pull up the abstract, but ideally, you should read the full paper.” Many papers are available for free or can be accessed through educational institutions. Studies can usually be found on PubMed, and sites that cover new research, such as ScienceDaily, typically link directly to the study itself.

2. Put statistics in perspective.


Ivan Oransky adds that it’s important to look at absolute risk, rather than relative risk (e.g., ‘doubled the risk of cancer’). “Was that a difference between 1 percent and 2 percent, or 20 percent and 40 percent? Same doubling…completely different significance,” he points out.

Kurtis Frank agrees. “Percentages are the worst offender here,” he said, using growth hormone research as an example, “where circulating levels of growth hormone are so small normally that a small insignificant spike can reach up to 100-200% of baseline. A small number doubled is still a small number sometimes,” he adds.
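To make that distinction concrete, here is a minimal sketch (the function names and numbers are invented for illustration) showing how the same relative risk can correspond to very different absolute risks:

```python
# Hypothetical illustration: relative vs. absolute risk.
# All rates below are made-up numbers, not real study data.

def relative_risk(exposed_rate: float, control_rate: float) -> float:
    """Risk in the exposed group divided by risk in the control group."""
    return exposed_rate / control_rate

def absolute_risk_increase(exposed_rate: float, control_rate: float) -> float:
    """Raw difference in risk between the two groups."""
    return exposed_rate - control_rate

# Scenario A: risk "doubles" from 1% to 2%.
print(relative_risk(0.02, 0.01))           # 2.0 -- "doubled the risk!"
print(absolute_risk_increase(0.02, 0.01))  # 0.01 -- one extra case per 100 people

# Scenario B: risk also "doubles," but from 20% to 40%.
print(relative_risk(0.40, 0.20))           # 2.0 -- the very same headline
print(absolute_risk_increase(0.40, 0.20))  # 0.2 -- twenty extra cases per 100 people
```

Both scenarios produce the identical “doubled the risk” headline, but the absolute difference, the number of people actually affected, is twenty times larger in the second.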

3. Learn to love the number needed to treat (NNT) or number needed to harm.


This “…would show how many people would need to be treated to see one person benefit -- or be harmed. It's an easy calculation, and a very elegant way of expressing that impact,” Ivan Oransky explained. The calculation (prominently displayed in slide 32 of Oransky’s “Evaluating Medical Evidence” slideshow) is 100 divided by the absolute risk reduction, expressed in percentage points.
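As a quick sketch of that arithmetic (the function name and the example rates are mine, invented for illustration, not from Oransky’s slides):

```python
def number_needed_to_treat(control_event_rate: float, treated_event_rate: float) -> float:
    """NNT = 1 / absolute risk reduction, with rates given as fractions.
    Equivalently, 100 / ARR when the ARR is expressed in percentage points."""
    absolute_risk_reduction = control_event_rate - treated_event_rate
    return 1 / absolute_risk_reduction

# Hypothetical drug that cuts event risk from 4% to 2%:
# ARR = 2 percentage points, so 100 / 2 = 50 people must be
# treated for one person to benefit.
print(number_needed_to_treat(0.04, 0.02))  # 50.0
```

The same formula gives the number needed to harm when the “treated” rate is higher than the control rate (the sign of the result flips).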

4. Put studies into perspective.


“Whenever something says that research proves another thing, I immediately put on my skeptic pants,” Brian Tabor explains. “Research very rarely proves something to be an absolute truth. It tests ideas and is typically expressed in terms of statistical significance and differences, but that is not proof. It is merely supportive evidence for a particular theory. It takes large amounts of supportive proof from multiple studies over time to begin to prove something and there are likely conflicting results in other studies. Just beware of claims that are ‘proven’ by research.”

5. Don’t confuse correlation with causation.

Correlation means that two things might be related, but it doesn’t necessarily mean that one causes the other. For example, the presence of many firemen at a large fire doesn’t mean the firemen caused the fire; the two are simply correlated (in this instance, because more firemen come to put out a bigger fire). Another example comes from an actual study, which found that young children who sleep with the light on are more likely to develop myopia later in life. However, a later study found that the link was actually between myopic parents and the development of child myopia: myopic parents were more likely to leave the light on in their children’s rooms. Despite early reports suggesting otherwise, it is no longer believed that leaving the light on causes myopia. (See Myopia and ambient lighting at night, Night-light may lead to nearsightedness, Night lights don’t lead to nearsightedness, study suggests, and Vision: Myopia and ambient night-time lighting for the details.)

“Also, when the journalist falsely says 'cause' or 'creates' (denoting causation) and then quickly switches to the researcher's opinions, it creates the illusion that the researcher also shares this opinion. Sometimes wordplay can be used cunningly or perhaps accidentally to make correlation seem like causation, and to associate claims with the researcher. Be careful around quotes,” Kurtis Frank warns.

6. Beware of oversimplified articles, where nuance and accuracy are sometimes lost and information is cherry-picked to support a specific opinion or create a compelling headline.

An important part of a science writer’s job involves explaining science clearly, often using anecdotes. But sometimes accuracy and nuance are lost in the process, when conflicting studies are glossed over in an attempt to package research in a way that is easily digestible for the reader.

“Some topics have such a vast database of research that it is difficult to mention all the opinions on the subject and impossible to mention all associated theories without writing a novel,” Kurtis Frank says. “Additionally, some topics that are more microscopic in nature (and tend to have longer terms or acronyms used) just cannot be simplified without being at least somewhat falsified.”

Brian Tabor points out that “writers with a goal in mind or something to sell will always have a particular bias and may cherry pick the studies they use to support their claims. The beauty of peer reviewed research journals though is that all the info is laid out for anyone to make their own judgments, so you can do your own searches and find conflicting research if it exists. The problem with this though is that it is often written at a level that is difficult for many to understand well and it usually makes for pretty dry reading material.” It is the job of writers to put the information into simple language without inaccuracies, “and if their goal is to educate without an opinion or bias they should be able to clearly communicate the conflicting research as well.” (Tabor recommends exrx.net as an example of an unbiased site.)

7. Find higher quality sites as sources of information.


We’ve already mentioned both ScienceDaily and exrx.net, so you have a head start on this suggestion. “I would recommend having a collection of sites that are relatively unbiased as the 'top tier,' where you consider their information 'better' and apply less skepticism, and then put the others in the 'bottom tier,'” Kurtis Frank says. “It is impossible to find a completely unbiased source, so it would be good to just pick the ones that are trying their hardest to work against their own bias (peer review works great in this sense). Bottom tier is stuff that either doesn't have quality control (Livestrong) or is associated with a company (T-Nation); these sites can still have good articles and interesting theories, and I cannot say they should be avoided at all. They should just be approached cautiously; the phrase ‘a diamond in the rough’ exists for a reason, and sometimes you find awesome things in unlikely areas,” he adds.

Frank points out that the site he works with, Examine.com, is a high-quality source of information. “Our quality control is in part only accepting primary articles or peer-reviewed secondary information (think review articles or meta-analyses), and in how we work a bit on a Wikipedia model of user editing (although this is controlled a bit, to avoid vandalism). I'd like to imagine we are one of the more unbiased sources out there, since we are not only not associated with any supplement company but actively avoid them like the plague, since we fear it would ruin the quality of our information,” he said.

8. Recognize that scientists also make mistakes.


“Scientists are human beings too! They have biases and they commit errors. For this reason, the reader needs to understand the basics of scientific research in order to catch author mistakes,” Bret Contreras says.

One rule of thumb I use is to see how honest study authors are about their own challenges and limitations, keeping in mind that there may be others they haven’t mentioned.

9. One study is not the be-all and end-all, but grounding it in existing literature can help.

“A single study never proves something absolutely. It just contributes data to the bigger picture, which will likely contain conflicting information. It is important to see how studies differ in their subjects, design, procedures, etc. to really get a clear idea of what all the different data suggests,” Brian Tabor explains. “And while you may find the data to point to a very certain conclusion, you should always try to keep in mind it’s not likely to be an absolute truth all the time. It takes years and years of research to consider something a truth. I mean, gravity is still a theory and we see it work nearly every second of every day we live!”

10. Be aware that some writers may not pay enough attention to the specific details of the research study.


“Exercise research on strength training can have completely different results based on the experience of the subjects, trained vs. untrained for example, but often things like that are overlooked completely because writers often only have time to read abstracts or only read the discussion portion of the article,” Tabor explains. “Discussion sections are where the researcher gets to provide their personal interpretations of the data and is not necessarily gospel truth. Simply reading abstracts or discussion sections can create a lot of misinformation,” he adds.

11. Older studies and animal studies are not necessarily irrelevant.


Although many science writers prefer to focus on the newest study on humans, “just because a study is new, doesn't mean it's best,” Brian Tabor says. “It may very well have new flaws that weren’t present in older studies. It’s good to see if techniques or procedures have changed that make one study different from another, be it in animals vs. humans or new vs. old. Looking at multiple articles helps you gain a greater perspective to interpret the results as a whole and see a bigger picture than just a single study.”

Kurtis Frank agrees. “Older studies are always relevant, unless a newer study 'replaces' it (does the same techniques and the same population, but with improved technology or statistical measures) or corroborates it (in which case, both studies should be viewed side-by-side). In some cases, if a research artefact exists (an error in experimental technique that was not realized at the time of publication) then the research may be invalid. Most 'newer' studies with large populations fall into the 'corroboration' category, where they merely replicate a pre-existing study but try to fix the errors of the previous study, inching towards the best possible conclusion.”

12. Bias is everywhere—but doesn’t necessarily discount study findings.


Sometimes studies are funded by organizations that may have a stake in what the research shows. “Disclosing conflicts of interest is an important part of reporting,” Ivan Oransky says, but “such conflicts don't invalidate a study; they're just one more factor to take into account. And it's not just financial ones to worry about; science is as cliquish and human as any other endeavor. No study is perfect, and sponsorship is just one issue of many to consider. Still, I'd have to say I'm even more likely to be skeptical if someone doesn't disclose significant conflicts and I find out about them later. Trust but verify,” he said.

“Question everything to a reasonable extent. But we also can't simply reject research because we don’t like the findings,” Tabor says. “I'm sure there are undocumented conflicts of interest, but there always will be. That's why it’s the readers' responsibility to be a skeptic.” Tabor further points out that it may be easy for groups to gain momentum by discounting studies due to bias, “even if they don't provide much substantial proof for their claims of foul play.”

Critiquing studies scientifically is always a better approach than yelling loudly, so stay tuned for next month’s segment, where we’ll break down exactly how to do that.

