The ratio’s trap

Odds ratio vs risk ratio.


Odds ratio and risk ratio are related measures that can be used interchangeably when the prevalence of the effect is low, but not in other situations.

The realm of science is full of traps. They’re everywhere: not even the major medical journals, nor the most prestigious authors, are free of them. Many people take advantage of our ignorance and, instead of the proper indicators, use the ones that present the results in the most self-serving way. For this reason, we have to stay alert and always look at a study’s data to draw our own interpretation.

Unfortunately, we cannot prevent results from being manipulated, but we can fight our ignorance and always make a critical appraisal when reading scientific papers.

An example of what I am talking about is the choice between risk ratio and odds ratio.

Odds ratio vs risk ratio

You know the difference between risk and odds. A risk is the proportion of subjects with an event within a total group of susceptible subjects. Thus, we can calculate the risk of having a heart attack among smokers (infarcted smokers divided by the total number of smokers) and among non-smokers (the same, but with non-smokers). If we go a step further, we can calculate the ratio between the two risks, called the relative risk or risk ratio (RR), which indicates how much more likely the event is in one group compared with the other.
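As a quick illustration, here is a minimal Python sketch that computes both risks and their ratio from a hypothetical 2x2 table (the counts are made up for the example):

```python
# Hypothetical 2x2 table: exposure (smoking) vs event (heart attack), made-up counts
smokers_with_event, smokers_total = 30, 200
nonsmokers_with_event, nonsmokers_total = 15, 300

# Risk = events / total susceptible subjects in each group
risk_smokers = smokers_with_event / smokers_total            # 0.15
risk_nonsmokers = nonsmokers_with_event / nonsmokers_total   # 0.05

# Risk ratio (relative risk): how much more likely the event is in one group
rr = risk_smokers / risk_nonsmokers
print(f"Risk in smokers: {risk_smokers:.2f}")
print(f"Risk in non-smokers: {risk_nonsmokers:.2f}")
print(f"RR = {rr:.2f}")  # 3.00: smokers are three times as likely to have the event
```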

Meanwhile, the odds represent a rather different concept. The odds indicate how much more likely an event is to occur than not to occur (p/(1-p)). For example, the odds of suffering a heart attack among smokers are calculated by dividing the probability of having an attack among smokers (infarcted smokers divided by the total number of smokers, the same as we did with the risk) by the probability of not suffering the attack among smokers (non-infarcted smokers divided by the total number of smokers or, equivalently, one minus the probability of having the attack).

As we did with the risk, we can calculate the ratio of the odds of the two groups to get the odds ratio (OR), which gives us an idea of how much greater the odds of the event are in one group than in the other.
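Continuing with the same made-up counts, the odds and the odds ratio can be computed like this (again, just a sketch):

```python
# Same made-up counts as before
smokers_with_event, smokers_total = 30, 200
nonsmokers_with_event, nonsmokers_total = 15, 300

# Odds = p / (1 - p), i.e. events divided by non-events in each group
odds_smokers = smokers_with_event / (smokers_total - smokers_with_event)              # 30/170
odds_nonsmokers = nonsmokers_with_event / (nonsmokers_total - nonsmokers_with_event)  # 15/285

# Odds ratio: ratio of the two odds
odds_ratio = odds_smokers / odds_nonsmokers
print(f"Odds in smokers: {odds_smokers:.3f}")        # ~0.176
print(f"Odds in non-smokers: {odds_nonsmokers:.3f}") # ~0.053
print(f"OR = {odds_ratio:.2f}")  # ~3.35, already a bit larger than the RR of 3.00
```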

As you can see, they are similar but different concepts. In both cases, the null value is one. A value greater than one indicates that the subjects in the numerator group have a greater risk, whilst a value less than one indicates that they have a lower risk of presenting the event. Thus, an RR of 2.5 would mean that the numerator group has a 150% greater probability of presenting the event we are studying. An OR of 2.5 means that the odds of presenting the event are two and a half times greater in the numerator group.

Put another way, an RR of 0.4 indicates a 60% reduction in the probability of the event in the numerator group. An OR of 0.4 is harder to interpret, but its meaning is roughly the same.

Which of the two should we use? It depends on the type of study. To calculate the RR we first have to calculate the risks in the two groups, and for that we have to know the prevalence or cumulative incidence of the disease, so this measure is often used in cohort studies and clinical trials.

In studies in which the prevalence of the disease is unknown, as in case-control studies, there’s no choice but to use the OR. But the OR is not restricted to this type of study: we can use it whenever we want, instead of the RR. A particular case is when a logistic regression model is used to adjust for the different confounding factors detected, which provides adjusted ORs.
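Just to illustrate where those adjusted ORs come from, here is a hedged sketch using statsmodels with invented, simulated data and made-up variable names: exponentiating the coefficients of a logistic model gives ORs adjusted for the other covariates in the model.

```python
# Sketch: adjusted ORs from a logistic regression (synthetic data, invented names)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "exposure": rng.integers(0, 2, n),  # 0/1 exposure
    "age": rng.normal(50, 10, n),       # confounder
})
# Simulate an outcome that depends on both exposure and age
logit_p = -3 + 0.9 * df["exposure"] + 0.03 * df["age"]
df["event"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("event ~ exposure + age", data=df).fit(disp=False)
# Exponentiating the coefficients gives the adjusted ORs
print(np.exp(model.params))  # OR for exposure adjusted for age (~e^0.9 ≈ 2.5 here)
```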

The difference

In any case, RR and OR values are similar when the frequency of the effect is low, below 10%, although the OR is always slightly lower than the RR for values less than one and a little higher for values greater than one. Just a little? Well, sometimes not so little.

The attached graphic shows, approximately, the relation between RR and OR. As you can see, as the frequency of the event increases, the OR grows much faster than the RR. And here is where the trap lies, since for the same risk the impact may seem much greater if we use an OR than if we use an RR. The OR can be misleading when the event is frequent. Let’s see an example.
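To give a feel for that divergence without the graphic, this small sketch tabulates the OR that corresponds to a fixed RR of 2 at increasing baseline risks (the risk values are chosen arbitrarily):

```python
# How the OR drifts away from a fixed RR of 2 as the event becomes more frequent
rr = 2.0
for baseline_risk in (0.01, 0.05, 0.10, 0.20, 0.30, 0.40):
    exposed_risk = rr * baseline_risk
    odds_exposed = exposed_risk / (1 - exposed_risk)
    odds_baseline = baseline_risk / (1 - baseline_risk)
    odds_ratio = odds_exposed / odds_baseline
    print(f"baseline risk {baseline_risk:.0%}: RR = {rr:.1f}, OR = {odds_ratio:.2f}")
# At a 1% baseline risk the OR is ~2.02; at a 40% baseline risk it reaches 6.00
```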

Imagine that I’m very concerned about obesity among the attendees of a movie theater and I want to prevent them from entering the room with a huge tank of a sugary drink whose brand I’m not going to mention. So I count how many viewers buy the drink and get a proportion of 95% of the attendees. Then, on a different day, I put a sign at the bar warning about the bad health effects of drinking sugary beverages and, much to my satisfaction, I see the proportion drop to 85%.

In our case, the absolute measure of effect is the absolute risk difference, which is only 10 percentage points. That’s something, but it doesn’t look like much: I only get the desired effect in one in ten viewers. Let’s see how the association measures behave.

The RR is calculated as the ratio 95/85 = 1.12. This indicates that the risk of buying the drink is about 12% higher if we don’t put up the sign than if we do. It doesn’t seem too much, does it?

The odds of buying the beverage would be 0.95/(1-0.95) = 95/5 without the sign and 0.85/(1-0.85) = 85/15 with it, so the OR would be equal to (95/5)/(85/15) = 3.35. It means that the odds of buying the beverage are more than three times greater if we don’t put up the sign.
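Here are the same numbers in a few lines of Python, so you can check the arithmetic yourself:

```python
# Proportion of attendees buying the drink with and without the warning sign
risk_no_sign = 0.95
risk_sign = 0.85

risk_difference = risk_no_sign - risk_sign  # 0.10 -> 10 percentage points
rr = risk_no_sign / risk_sign               # ~1.12
odds_ratio = (risk_no_sign / (1 - risk_no_sign)) / (risk_sign / (1 - risk_sign))  # ~3.35

print(f"Absolute risk difference: {risk_difference:.2f}")
print(f"RR = {rr:.2f}")          # modest-looking relative increase
print(f"OR = {odds_ratio:.2f}")  # looks much more impressive for the same data
```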

It’s clear that the RR gives an idea that corresponds much better with the absolute measure (the risk difference), but now I wonder: if my brother-in-law had a factory that makes signs, which indicator do you think he would use? No doubt he would use the OR.

This is why we must always look at the results to check whether we can calculate some absolute indicator from the study data. Sometimes this is not as easy as in our example, as when the authors present the OR provided by a regression model. In these cases, if we know the prevalence of the effect or disease under study in the reference group, we can always calculate the equivalent RR using the following formula:

RR= \frac{OR}{(1-Prev)+(Prev\times OR)}
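A minimal sketch of that conversion, taking Prev as the frequency of the outcome in the reference group (the group with the sign, in our example), and checked against the cinema numbers:

```python
# Convert an OR back to an approximate RR given the prevalence of the outcome
# in the reference (unexposed) group, using the formula above
def or_to_rr(odds_ratio: float, prevalence: float) -> float:
    return odds_ratio / ((1 - prevalence) + prevalence * odds_ratio)

# Check with the cinema example: OR ~3.35, prevalence in the reference group 0.85
print(f"{or_to_rr(3.35, 0.85):.2f}")  # ~1.12, matching the RR computed directly
```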

We’re leaving…

And here we leave the traps for today. You have seen how data, and the way of presenting them, can be manipulated to say what you want without actually lying. There are more examples of the misuse of relative association measures instead of absolute ones, such as using the relative risk difference instead of the absolute risk difference. But that’s another story…
