How not to be fooled while reading research studies

Even though medical research is expected to generate scientific truth, we unfortunately sometimes see exaggerated and misleading conclusions. These can lead readers into making the wrong decisions.

Most people tend to blindly believe the authors’ conclusions given in the abstract of each study, and look no further. This is unfortunately not an advisable strategy. Research studies often use terminology and jargon that dress up their findings, leading laypeople to believe that the effect was much larger than it actually was.

There are numerous biases associated with research, and these are not limited to healthcare. Awareness of basic statistics helps identify a few pitfalls.

This article is written to highlight one common flaw - something that can easily be spotted.

Let’s start with a typical real-life medical question.

A new blood pressure pill has hit the market, and many people are switching to it because it is apparently “50% safer” than the existing medication. We want to find out the facts, so we decide to look up the research study.

The authors compared two medications, A and B, for hypertension in 1000 patients. Both were equally good at controlling blood pressure. B is new, and more expensive. According to the study, B is claimed to be “50% safer” than A. How do we decide if that’s important?

To answer this question, let us look at another example.

Which taxi to choose?

Imagine the year is 1975, and we are at a taxi stand in Delhi. There are two choices: the Ambassador and the Fiat Premier Padmini. Which one do we choose for safety? Most people who know about cars would happily pick either one.

Now, let’s imagine a hypothetical research study that had studied the safety of taxi cars.

What if the author of that study concluded that “Ambassador taxi is 50% less likely to have an accident than Fiat”?

Would that make us change our decision?

Many people who hear that study conclusion would say, “Oh my God! I didn’t know that. Thanks for letting us know. We will pick the Ambassador. Fiat is too dangerous.”

The truth is, there is no need to panic. Research studies are known to exaggerate their findings to grab attention. That is not the same as fraudulent research or falsification. Statistics is a tool that can be used to present even the most trivial findings in the most dramatic format, without being dishonest.

As the saying goes (often attributed to the economist Ronald Coase), if we torture the data long enough, it will confess to anything we want.

Let’s look at the hypothetical taxi study in detail.

Imagine that the authors studied 100,000 taxis each of Ambassador and Fiat, and tracked their accident history over one year. They found that 3 accidents occurred in the Ambassador group, while 6 occurred in the Fiat group over the same period.

They conclude that travelling in an Ambassador taxi is 50% less likely to result in an accident.

Such conclusions often appear verbatim in news media headlines, without much further detail.

A typical headline will be: “Ambassador taxi 50% safer than Fiat”

Now let’s look at their calculations.

The research question is to compare the accident rate of the two cars, when used as taxis.

Findings:

Ambassador: 3 per 100,000 (0.003%)

Fiat: 6 per 100,000 (0.006%)

The difference is 0.006% - 0.003% = 0.003%

Which means that if we pick the Ambassador car, our accident rate is 0.003% less than that of Fiat.

Wait a minute. How is that even possible, when the study conclusions stated “50% safer”? How could they say that? Could there be a mistake?

There is no mistake here. The authors simply used the most impressive parameter to express their findings.

That’s called RR, or Relative Risk.

It is the ratio of the two rates:

RR = Ambassador accident rate ÷ Fiat accident rate

= (3/100,000) ÷ (6/100,000) = ½ = 50%

Thus, the relative risk (RR) is ½, or 50%.

And it is technically not wrong to write that the Ambassador is 50% safer than the Fiat - according to the study, of course.

What about the 0.003% difference then?

That’s called ARR (absolute risk reduction) - another parameter that could have been used, but one that is not much of a favourite because it reveals the true size of the difference.

ARR = Accident rate in Fiat - Accident rate in Ambassador

= 0.006% - 0.003% = 0.003%

(Note: this is a hypothetical example, created only to illustrate problems in interpreting research. There is no evidence that either car is safer than the other.)
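For readers who like to check the arithmetic, here is a minimal sketch in Python, using only the hypothetical taxi numbers from above:

```python
# Hypothetical taxi data from the example above
ambassador_accidents = 3
fiat_accidents = 6
taxis_per_group = 100_000

# Accident rates, expressed as proportions
rate_ambassador = ambassador_accidents / taxis_per_group
rate_fiat = fiat_accidents / taxis_per_group

# Relative risk: the ratio of the two rates
rr = rate_ambassador / rate_fiat      # 0.5, i.e. "50% safer"

# Absolute risk reduction: the difference between the two rates
arr = rate_fiat - rate_ambassador     # 0.00003, i.e. 0.003%

print(f"RR  = {rr:.0%}")    # RR  = 50%
print(f"ARR = {arr:.3%}")   # ARR = 0.003%
```

Both numbers come from the same data; the only difference is whether we divide the two rates or subtract them.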

Just like a gourmet chef presents his or her creation in the most visually appealing and attractive manner, it is the authors’ choice, privilege and right to present their research findings in a way that grabs the most attention. Whether this misleads readers is another issue.

How to detect this flaw in everyday life?

Although statistics is commonly described as boring, it helps to learn a few basics, especially from people who are able to teach with interesting examples.

For someone who is not motivated to do that, the next best thing is the following: while going through a research study, look for claims along the lines of “90% reduction” or “30% more”. It is quite likely that the authors are attempting to decorate their findings, as most readers tend to relate percentages to their own school grades. For instance, anything more than 90% carries an indirect signal of “excellent”. The question is: what exactly do they mean by these percentages?

A common example of this phenomenon was seen in toothpaste ads, which used slogans along the lines of “9 out of 10 dentists recommend this brand”, without disclosing exactly how they arrived at that conclusion.

A simple and effective trick is to “look for the denominator”. That is, how many people were studied in order to report the findings.

That’s the factor that some research studies try to hide within the narrative. 3 accidents in 100,000 is a really small number - the percentage is just 0.003%. On the other hand, 3 accidents out of 10 means something else entirely. The denominator makes all the difference.

Here’s the best part: even if the denominator in the taxi example above had been 10 billion instead of 100,000, the RR (relative risk) would remain at 50%, and the conclusion would read no differently.
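A few lines of Python make the point, again with the made-up counts of 3 accidents versus 6: the RR stays pinned at 50% no matter how large the denominator, while the ARR shrinks towards zero.

```python
# Same 3-vs-6 accident counts, with ever larger denominators
for denominator in (1_000, 100_000, 10_000_000_000):
    rate_ambassador = 3 / denominator
    rate_fiat = 6 / denominator
    rr = rate_ambassador / rate_fiat   # always 0.5
    arr = rate_fiat - rate_ambassador  # shrinks as the denominator grows
    print(f"n = {denominator:>14,}   RR = {rr:.0%}   ARR = {arr:.10%}")
```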

Let us go back to the example of blood pressure medication.

The authors looked at the incidence of headache among those who took either medication. If the side effect (headache) rate with drug A was 6 per 1000 and with drug B was 3 per 1000, the difference is only 3 per 1000, or 0.3%. In other words, even if we take the cheaper medication A, there is only a 0.3% higher chance of headache than with B. This is the ARR (absolute risk reduction) described above.

That is certainly not the impression given by the authors’ dramatic conclusion of a “50% lower risk of side effects” with the costly medication. Instead of the ARR, the authors chose the RR (relative risk) to present their findings in the most impressive manner possible. Using the RR, 3 per 1000 is half of 6 per 1000, yielding a relative risk of 50%.

Thus, a closer look at the data tells us that 1) the occurrence of headache itself is very rare, and 2) the actual difference is only 0.3%. Therefore, we can safely continue our existing medication, without having to switch to something that’s completely new and expensive.
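The same two lines of arithmetic settle the question; here is a quick sketch with the hypothetical medication numbers:

```python
# Hypothetical side effect (headache) rates from the example above
headache_a = 6 / 1000   # existing, cheaper drug A
headache_b = 3 / 1000   # new, costly drug B

rr = headache_b / headache_a   # 0.5 -> the headline "50% lower risk"
arr = headache_a - headache_b  # 0.003 -> only 0.3% in absolute terms

print(f"Relative risk:           {rr:.0%}")   # 50%
print(f"Absolute risk reduction: {arr:.1%}")  # 0.3%
```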

Other pitfalls: the list is long.

There are several other ways in which research can be misleading, a detailed description of which is beyond the scope of this article. A few that deserve mention are:

  1. Publication bias: studies that report a positive effect are more likely to get published, while those that confirm no difference (negative studies, which are equally important) get rejected.

  2. Commercial bias: manufacturers may selectively publish only those studies that show their product in a good light, while ignoring those that describe side effects or treatment failures. This leads to a skewed public perception in favour of the company.

  3. When such studies are published or quoted by indirect beneficiaries of the company or product, the conflict of interest might not always be obvious to readers.

  4. Ascertainment bias: studying the wrong sample. For example, if we wish to study the dietary preference of people in India, doing a survey on customers visiting an Udupi vegetarian restaurant might lead us to conclude that almost everyone prefers south Indian vegetarian cuisine.

  5. Studies with small sample sizes are prone to erroneous interpretations.

  6. Observational studies are less reliable than randomized trials.

  7. Survivorship bias: patients recruited later into an intervention study (e.g. booster dose, heart transplant) have automatically been selected for surviving the early phase, and will therefore show better outcomes that get falsely attributed to the intervention (the toy sketch after this list illustrates the effect).

  8. Non-peer reviewed studies or preprints have not yet passed the scrutiny of independent experts, and could contain errors.
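To make survivorship bias (item 7) concrete, here is a toy sketch with made-up numbers, in which the “intervention” does nothing at all by construction:

```python
# Toy illustration of survivorship bias.
# Assume a constant 2% monthly risk of a bad outcome, and an
# intervention that has no effect whatsoever.
monthly_risk = 0.02

# Whole cohort, followed for 12 months from day 0
survival_whole_cohort = (1 - monthly_risk) ** 12   # ~78.5%

# "Intervention" group recruited at month 6: by definition, its members
# have already survived 6 months, so only 6 months of risk remain
survival_recruited_late = (1 - monthly_risk) ** 6  # ~88.6%

print(f"Followed from day 0:  {survival_whole_cohort:.1%}")
print(f"Recruited at month 6: {survival_recruited_late:.1%}")
# The late-recruited group looks about 10 points better, even though
# the intervention did nothing - the selection did all the work.
```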

Summary

It is unwise to blindly believe the authors’ conclusions without understanding the full context - regardless of the reputation of the journal. For instance, anytime a study claims “90% reduction” or “30% more”, it helps to look at the raw data, specifically the denominator. Most research papers have supplementary pages that provide useful data.

If the article is difficult to understand, it helps to cross-check with someone who is well-versed in biostatistics and research methodology. As the adage goes, if anything sounds too good (or scary) to be true, it probably is.