Ashley is a registered nurse with a Master of Psychiatric Nursing degree, and has published papers in several nursing journals.
What is Research Literacy?
On a regular basis we hear from the media about the latest research study, often with findings that appear to contradict what was in the news last week. Coffee might be bad for us one week, good for us the next, and bad for us again the week after that. How is anyone supposed to make any sense of all this?
Research literacy is the skill set that helps us to do that. Research literacy refers to the ability to critically read, interpret, and evaluate research studies. That may sound rather daunting, but basic research literacy is still well within the reach of people who haven't done grad school. It really comes down to bringing a healthy dose of skepticism, and making sure your BS-detector is finely tuned.
Research and the Media
While major publications may have science writers with high levels of research literacy, this is not the case for all publications. This means there is the potential for information to get lost in translation from scientific language to common parlance. There is also the possibility that certain findings will be played up for newsworthiness even though they don't accurately reflect the study's overall conclusions. This means it's important to critically evaluate the source of a story, and if you're unsure how reliable it is, it might be worth going back to the original source, which will be covered in a later section on where to find research.
Research Design 101
Research design, which describes how a study is carried out, will determine the type of conclusions that can be reached based on the data that are generated. Quantitative studies generate numerical data that can be analyzed statistically, while qualitative studies produce words to describe phenomena. Under those broad categories there are a number of different designs that can be used. The most common design for biomedical research is the experimental design, as this can allow inferences to be made about causation. An experimental design is not always feasible, and that may mean using a research design that does not support inferences about causation but can still yield valuable data.
The gold standard for a biomedical clinical trial is a randomized, double-blinded, controlled experiment. Let's break down each of those terms.
If there are two arms in a study, e.g. drug and placebo, study participants would be randomly assigned to one arm or the other. This randomization will produce a fairly even distribution of different characteristics between the two groups, which leads to more reliable results.
If you were to give drug X to a group of people and 70% of them got better, you don't know based on that information alone how many people actually got better because of the drug. If you gave another group a placebo, you would see how many people got better because of the placebo effect and/or because they simply would have gotten better anyway. From this, you can then determine how many people got better because of the drug, and statistical calculations can be performed to determine if the difference between the two groups is large enough to indicate that the drug was responsible for the difference.
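The arithmetic behind that comparison can be sketched with some hypothetical numbers (the group sizes and improvement rates below are purely illustrative, not from any real trial):

```python
# Comparing a drug arm to a placebo arm (all numbers are hypothetical).
drug_group_size = 100
placebo_group_size = 100

improved_on_drug = 70     # 70% of the drug group got better
improved_on_placebo = 45  # some people improve on placebo anyway

drug_rate = improved_on_drug / drug_group_size          # 0.70
placebo_rate = improved_on_placebo / placebo_group_size # 0.45

# The improvement attributable to the drug itself is roughly the
# difference between the two rates.
attributable = drug_rate - placebo_rate
print(f"Improvement attributable to the drug: {attributable:.0%}")
```

In this sketch, 25 percentage points of the improvement are attributable to the drug; statistical tests would then be used to determine whether a difference of that size is larger than chance alone could explain.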
Blinding refers to who knows what intervention the patient is actually receiving. Ideally a study would be double-blinded, meaning that both the participant and the researcher measuring participant outcomes would be unaware of whether the participant was receiving the active treatment or placebo.
An experiment produces numerical results, but statistics are needed to find out what those numbers actually mean. Statistics, though, can easily be misinterpreted if someone doesn't understand the underlying concepts, and that can mean inaccurate reporting.
One important concept is distinguishing between different types of risk. Absolute risk is the chance of something occurring, full stop, while relative risk is the chance of one event occurring in relation to another. These numbers may be very different from one another. Let's say the chance of a baby being born with rainbow-coloured hair is one in a trillion. Imagine that eating blueberries may increase the risk by 500%. That 500% figure sounds scary, but it has a negligible effect on the absolute risk. Relative risk on its own has very limited meaning if you don't know what it's being compared to.
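The rainbow-hair example can be worked through numerically. This is a quick sketch using the same hypothetical figures from the text:

```python
# Absolute vs. relative risk, using the hypothetical rainbow-hair
# example (all numbers are illustrative).
baseline_risk = 1 / 1_000_000_000_000  # absolute risk: 1 in a trillion

relative_increase = 5.0                # a 500% increase in risk
new_risk = baseline_risk * (1 + relative_increase)

print(f"Baseline absolute risk: {baseline_risk:.0e}")
print(f"New absolute risk:      {new_risk:.0e}")
# A scary-sounding 500% relative increase still leaves the absolute
# risk at about 6 in a trillion -- effectively negligible.
```

The relative change is dramatic, but the absolute risk barely moves, which is exactly why a relative risk figure on its own can mislead.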
Time frame also matters when it comes to risk. If you look at a long enough timeframe, the risk of death for any human is 100%, with no exceptions. If we're looking at risk of death within the next year, that number is much more meaningful.
Speaking of important, in casual parlance the word significant is used synonymously with important. This is not the case in a statistical context. Statistical significance means that it's unlikely that results obtained from a given test were due to chance. Let's say that 100 people were given a placebo and 100 received a drug. In the placebo group, 40 experienced outcome X. Significance calculations might show that the expected range of variation in results would be 35-45. If fewer than 35 or greater than 45 people who received the drug experienced outcome X, that would be a significant result, meaning it would be unlikely to occur due to chance.
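One way to see where a range like that comes from is to simulate chance variation directly. This is a rough sketch, not a formal significance test, and the 40% outcome rate and group size are just the illustrative numbers from the example above (the exact range you get depends on the confidence level chosen):

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

# Simulate many placebo groups of 100 people, each person having a
# 40% chance of outcome X, to see what range of results chance
# alone produces. (Hypothetical numbers for illustration.)
n_simulations = 10_000
results = []
for _ in range(n_simulations):
    count = sum(1 for _ in range(100) if random.random() < 0.40)
    results.append(count)

results.sort()
low = results[int(0.025 * n_simulations)]   # 2.5th percentile
high = results[int(0.975 * n_simulations)]  # 97.5th percentile
print(f"95% of chance-alone results fall between {low} and {high}")
```

A drug-group result falling outside the simulated chance-alone range would be considered statistically significant, i.e. unlikely to have occurred by chance.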
Significance does not refer to the size of the effect, or the meaning associated with the effect; there are other measures that may be used to describe those. Whether 50 or 90 people in the drug group experienced outcome X, both outcomes would be statistically significant, even though the size of the effect is very different.
Correlation vs. Causation
Perhaps one of the most common stumbling blocks in interpreting research findings is confusing correlation with causation, and coming to erroneous conclusions as a result.
Correlation means there is a pattern in how two variables behave over time. This alone does not mean that one variable's change causes a change in the other variable. As an example, 100% of people breathe oxygen, and 100% of people die. The two variables are correlated, but obviously oxygen does not cause death.
Causation is more difficult to establish, and only certain highly rigorous research designs are able to support inferences that changes in one variable caused changes in another.
Part of the peer review process, which we'll cover in the next section, is to ensure that the research paper doesn't include unfounded claims of causation. That does not, however, prevent media or others commenting on the findings from making inappropriate assumptions around causation that the original research paper never even suggested.
Academic Journals and Journal Articles
Research has little value if no one knows about it. The main way to spread the word is by publishing a paper in an academic journal. Some journals are considered more prestigious, and if you're hearing about a research study in the news, chances are it's been published in a high-profile journal.
To be accepted for publication in an academic journal, a paper must pass peer review, a key quality control step. Peer reviewers are experts in the field, and they are independent of the journal. The researchers who submitted the paper do not learn who the reviewers are, and some journals do not give reviewers the authors' names either. The reviewers evaluate the manuscript and research design, point out areas that need to be addressed, and recommend whether the manuscript is suitable for publication and what changes, if any, are required.
Some journals are "open access". They are freely available for all to read, and their revenue comes from charging authors a publication fee. While some of these journals are high quality, others are predatory. When it comes to open access, there is a far greater variation in quality than with traditional subscription-based journals.
The best way to get right to the point of a research study is to read the article's abstract, which gives a concise overview of the study design and its findings. Journals generally make abstracts available free of charge.
Systematic reviews and meta-analyses are types of research papers that do some of the quality control for you: they evaluate the existing research literature on a topic and, in the case of meta-analyses, pool the results of multiple studies in order to draw broader conclusions.
Where to Find Research
Two great options that are accessible to all are Google Scholar and PubMed.
Google Scholar harnesses Google's search capability to search through academic publications. Many of these results will link to a paper's abstract on the publisher's site, but there are also some links to full-text sources.
PubMed is a site run by the U.S. National Library of Medicine. Studies funded by the National Institutes of Health are available as full-text from PubMed Central, while a large array of other research studies are available as abstracts.
Bringing a Critical Lens
The main take-home point here is to be skeptical about research study results that you hear about in the media. A media report is only going to be as good as the research literacy of the reporter. We all want to understand why things happen, so it can be very tempting to make assumptions about causation when a research paper is only talking about correlations. Try not to fall into that trap.
Going back to the idea of coffee being good or bad for you, multiple studies may be designed quite differently and measure different things, so coffee itself is probably not jumping back and forth between the healthy camp and the unhealthy camp.
Finally, always ask questions. After all, curiosity is how new research knowledge is generated in the first place.
© 2019 Ashley Peterson