Ménage à trois

In this post we will give another twist to the issue of the variables that can disturb the harmonious relationship between the couple formed by exposure and effect, so all those dirty minds who were expecting something else after reading the title can move on to the next Google result, which will surely match what they were looking for.

We saw that there are confounding variables, related to both the effect and the exposure, and how they can distort our estimates of the measures of association when they are not evenly distributed among the study groups. We talked about our backdoor, how to avoid it and how to close it, both in cohort studies and in case-control studies.

But the effect of the exposure on the studied outcome is not always the same and can vary in intensity as the value or level of a third variable changes. As with confounding, we see this best by stratifying the results for the analysis, but in this case it is not due to an uneven distribution of the variable: the effect of the exposure is actually modified by the magnitude of this variable, which is therefore called an effect modifier or interaction variable.

Naturally, it is essential to distinguish between confounding and effect-modifying variables. The effect of a confounding variable depends on its distribution among the study groups. In experimental studies, this distribution may vary according to what happened during randomization, so a variable can act as a confounder in one trial and not in another. In observational studies, however, confounders always exert their effect, as they are associated with both the exposure and the effect. When we find a confounding variable, our goal is to control its effect and estimate an adjusted measure of association.

Effect-modifying variables, on the other hand, represent a characteristic of the relationship between exposure and effect whose intensity depends on the ménage à trois created by the interaction with this third variable. If you think about it, when there is effect modification we are not interested in calculating an adjusted measure of association, as we could do with the Mantel-Haenszel test, because it would not be representative of the overall action of the exposure on the effect. Nor is it a good idea to take the simple arithmetic average of the measures of association observed in each stratum. What we have to do, in any case, is describe the modification, not try to control it as we do with confounding variables.

Before we can say that there is an effect-modifying variable, we must rule out that the observed differences are due to chance, confounding or bias in our study. Looking at the confidence intervals of the estimated measures can help rule out chance, which becomes more unlikely if the intervals do not overlap. We can also test whether the differences among the strata are statistically significant, using the appropriate test according to the design of the study.
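If you want to check it with the computer, below is a minimal sketch in Python of one such test: the simple z-test that compares the log relative risks of two strata. The counts are invented just to illustrate the call (they are chosen so the stratum RRs come out as 2 and 3, like in the example further down); they are not the study data.

```python
from math import erf, log, sqrt

def rr_and_se(a, n1, c, n0):
    """Relative risk of a 2x2 stratum and the standard error of its logarithm.
    a = exposed cases, n1 = total exposed, c = unexposed cases, n0 = total unexposed."""
    rr = (a / n1) / (c / n0)
    se_log_rr = sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
    return rr, se_log_rr

def interaction_z_test(stratum1, stratum2):
    """z-test comparing the relative risks of two strata (test for interaction)."""
    rr1, se1 = rr_and_se(*stratum1)
    rr2, se2 = rr_and_se(*stratum2)
    z = (log(rr1) - log(rr2)) / sqrt(se1 ** 2 + se2 ** 2)
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal p-value
    return rr1, rr2, z, p

# Made-up counts: (exposed cases, total exposed, unexposed cases, total unexposed)
younger = (20, 200, 10, 200)   # RR = 2 in this stratum
older = (45, 150, 15, 150)     # RR = 3 in this stratum

rr1, rr2, z, p = interaction_z_test(younger, older)
print(f"RR1 = {rr1:.1f}, RR2 = {rr2:.1f}, z = {z:.2f}, p = {p:.2f}")
```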

And can we estimate an overall measure of the influence of the exposure on the effect that takes into account the existence of an interaction variable? Of course we can, does anyone doubt it?

Perhaps the easiest way is to calculate a standardized measure. To do so we compare two estimates: one that assumes that every member of each stratum carries the risk of the exposed, and another that assumes they carry the risk of the non-exposed. In this way we estimate the measure of association in the overall standard population we have defined. Confused? Let's see an example. We are going to keep boring you to exhaustion with the poor smokers and their coronary artery disease. The first table shows the results of a study, which I have just invented, on smoking and myocardial infarction.

We see that, overall, smokers have a seven times higher risk of suffering an infarction than non-smokers (relative risk, RR = 7). Let's assume that smokers and non-smokers have a similar age distribution, but that when we break the data down into two age groups the risks differ: the RR in those under 50 years is 2, while in those older than 50 the risk of heart attack is three times higher in smokers than in non-smokers.

We will calculate two measures of association: one assuming that everyone smokes and another assuming that nobody smokes. In those younger than 50 years, the risk of myocardial infarction if everybody smoked would be 5/197 = 0.02. Since there are 454 people under 50, the expected number of infarctions would be 454 x 0.02 = 9.1. The risk in non-smokers is 3/257 = 0.01, so we would expect to find 0.01 x 454 = 4.5 cases of infarction among non-smokers.

We do the same calculations for those older than 50 and add up the total number of people (770) and the total number of expected heart attacks among smokers (47.1) and non-smokers (10.8). The standardized risk in smokers in this population is 47.1 / 770 = 0.06 and the standardized risk in non-smokers is 10.8 / 770 = 0.01. Finally, we calculate the standardized RR: 0.06 / 0.01 = 6. This means that, globally, smoking increases the risk of myocardial infarction six-fold, but do not forget that this result is only valid for this standard population and would probably differ in a different one.
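For those who prefer to see the recipe as code, here is a minimal sketch in Python. The under-50 figures are the ones given above; the over-50 expected cases are back-calculated from the quoted totals (47.1 and 10.8). The final division mirrors the rounding used in the text; keeping all the decimals would give a somewhat lower standardized RR, around 4.4.

```python
# Expected cases per stratum if everyone smoked / if nobody smoked.
# Under 50: risks 5/197 and 3/257 applied to the 454 people in the stratum.
# Over 50: back-calculated from the totals quoted in the text (47.1 and 10.8).
expected_if_all_smoke = {"<50": 9.1, ">=50": 47.1 - 9.1}
expected_if_none_smoke = {"<50": 4.5, ">=50": 10.8 - 4.5}
total_population = 770

risk_exposed = sum(expected_if_all_smoke.values()) / total_population      # 47.1 / 770
risk_unexposed = sum(expected_if_none_smoke.values()) / total_population   # 10.8 / 770

# The post rounds both risks to two decimals before dividing, which gives RR = 6;
# with full precision the standardized RR is about 4.4.
standardized_rr = round(risk_exposed, 2) / round(risk_unexposed, 2)
print(round(risk_exposed, 3), round(risk_unexposed, 3), round(standardized_rr, 1))
```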

Just one more thing before finishing. As with confounding variables, the analysis of effect modifiers can also be done with regression, introducing an interaction coefficient into the equation to account for the effect of the modifier. Moreover, these coefficients are very useful because their statistical significance helps us distinguish between confounding and interaction.
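Just to give a taste of what that looks like in practice, here is a minimal sketch using statsmodels with an invented dataset (the variable names mi, smoker and over50 are mine, not from any real study); the coefficient of the smoker:over50 term is the interaction we are talking about.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented individual-level data, built from cell counts so the model converges:
# each tuple is (smoker, over50, mi, number_of_people).
cells = [
    (0, 0, 0, 30), (0, 0, 1, 3),
    (1, 0, 0, 25), (1, 0, 1, 6),
    (0, 1, 0, 20), (0, 1, 1, 6),
    (1, 1, 0, 15), (1, 1, 1, 14),
]
rows = [
    {"smoker": s, "over50": o, "mi": y}
    for s, o, y, n in cells
    for _ in range(n)
]
df = pd.DataFrame(rows)

# "smoker * over50" expands to smoker + over50 + smoker:over50.
# The p-value of the smoker:over50 term is what helps distinguish
# confounding (no real interaction) from effect modification.
model = smf.logit("mi ~ smoker * over50", data=df).fit(disp=False)
print(model.params)
print(model.pvalues["smoker:over50"])
```

But that is another story…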

A matter of pairs

We saw in the previous post how observational studies, in particular cohort studies and case-control studies, are full of traps and pitfalls. One of these traps is the backdoor through which our data may elude us, causing us to obtain erroneous estimates of the measures of association. This backdoor is kept ajar by confounding factors.

We know that there are several ways to control confounding. One of them, matching (or pairing), has its peculiarities depending on whether we use it in a cohort study or in a case-control study.

In cohort studies, matching on the confounding factor allows us to obtain an adjusted measure of association, because we control the influence of the confounding variable on both the exposure and the effect. However, this does not hold when matching is used in a case-control study. The design of this type of study forces us to do the matching once the effect has already occurred. Thus, the patients who act as controls are no longer a set of independent individuals chosen at random, since each control is selected because it fulfills a series of criteria set by the case with which it is paired. This, of course, prevents us from selecting other individuals in the population who do not meet those criteria but who could otherwise have been included in the study. If we forget this little detail and apply the same methodology of analysis that we would use in a cohort study, we will incur a selection bias that will invalidate our results. In addition, although we force a similar distribution of the confounder, we only fully control its influence on the effect, not on the exposure.

So the mindset of the analysis changes slightly when assessing the results of a case-control study in which matching was used to control for confounding factors. While in an unpaired study we analyze the association between exposure and effect in the overall group, in a paired study we must analyze it within the case-control pairs.

We will see this by continuing with the example of the effect of tobacco on the occurrence of laryngeal carcinoma from the previous post.

In the upper table we see the global data of the study. If we analyze them without taking into account that we used matching to select the controls, we obtain an odds ratio of 2.18, as we saw in the previous post. However, we know that this estimate is wrong. What do we do? We consider the effect of the pairs, but only of those that don't get along.

We see in the table below the distribution of the pairs according to their exposure to tobacco. There are 208 pairs in which both the case (the person with laryngeal cancer) and the control are smokers. Since both are exposed, they do not help us estimate the association with the effect. The same is true of the 46 pairs in which neither the case nor the control smokes. The pairs of interest are the 14 in which the control smokes but the case does not, and the 62 in which the case smokes but the control does not.

These discordant pairs are the ones that give us information on the effect of tobacco on the occurrence of laryngeal cancer. If we calculate the odds ratio, it is 62/14 = 4.4, a stronger measure of association than the one previously obtained and certainly much closer to reality.
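If you want to reproduce it with a couple of lines of Python, here is a minimal sketch using the pair counts from the table; the chi-square shown is the classic McNemar statistic without continuity correction.

```python
# Pair counts from the table above.
both_smoke = 208          # case and control both exposed (uninformative)
neither_smokes = 46       # case and control both unexposed (uninformative)
only_case_smokes = 62     # discordant: exposed case, unexposed control
only_control_smokes = 14  # discordant: unexposed case, exposed control

# Matched odds ratio: ratio of the two kinds of discordant pairs.
matched_or = only_case_smokes / only_control_smokes   # 62 / 14 ≈ 4.4

# McNemar's chi-square (1 degree of freedom, no continuity correction).
chi2 = (only_case_smokes - only_control_smokes) ** 2 / (only_case_smokes + only_control_smokes)

print(f"matched OR = {matched_or:.1f}, McNemar chi2 = {chi2:.1f}")
```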

Finally, I want to make three remarks before finishing. First, although it goes without saying, remember that the data are a product of my imagination and that the example is completely fictitious, although it does not seem as silly as others I have invented in other posts. Second, these calculations are usually done with software, using the Mantel-Haenszel or the McNemar test. Third, in all these examples we have used a matching ratio of 1:1 (one control per case), but this need not necessarily be so because, in some cases, we may be interested in using more than one control per case. This has its own implications for the influence of the confounder on the estimated measure of association, and its own considerations when performing the analysis. But that's another story…

Birds of a feather flock together

Sometimes we cannot prevent confounding factors, both known and unknown, from getting involved in our studies. These confounding variables open a backdoor through which our data can slip, so that the measures of association between exposure and effect that we estimate do not correspond to reality.

During the analysis phase, techniques such as stratification or regression models are often used to obtain a measure of association adjusted for the confounding variable. But we can also try to prevent confounding in the design phase. One way is to restrict the inclusion criteria according to the confounding variable. Another strategy is to select controls so that they have the same distribution of the confounding variable as the intervention group. This is what is known as matching, or pairing.

Suppose we want to determine the effect of smoking on the frequency of laryngeal cancer in a population with the distribution you can see in the first table. We can see that 80% of smokers are men, while only 20% of non-smokers are. We have invented a risk of cancer in men of 2%, rising to 6% if they smoke, while the risk in women is 1%, reaching 3% if they smoke. So, although both sexes triple their risk when they indulge in the most antisocial of vices, men always have twice the risk of women for the same exposure to tobacco (and men who smoke have six times the risk of women who do not). In short, sex acts as a confounding factor: it influences both the likelihood of exposure and the likelihood of the effect, but it is not part of the causal chain between smoking and cancer of the larynx. This would have to be taken into account in the analysis, calculating the relative risk adjusted with the Mantel-Haenszel technique or using a logistic regression model.

Let's see another possibility when we know the confounding factor: trying to prevent its effect during the planning phase of the study. Suppose we start from a cohort of 500 smokers, 80% of them men and 20% women. Instead of taking 500 non-smoking controls at random, we include in the unexposed cohort one non-smoking man for each smoking man in the exposed cohort, and do the same with the women. We will thus have two cohorts with a similar distribution of the confounding variable and, of course, also similar in the distribution of the remaining known variables (otherwise we could not compare them).

Have we solved the problem of confounding? Let's check it out.

We see the contingency table of our study with 1000 people, 80% men and 20% women in both the exposed and the unexposed groups. Since we know the risk of developing cancer by sex and smoking status, we can calculate the number of people we expect to develop cancer during the study: 24 male smokers (6% of 400), 8 male non-smokers (2% of 400), 3 female smokers (3% of 100) and 1 female non-smoker (1% of 100).

With these data we can build the contingency tables, global and stratified by sex, that we expect to find at the end of the follow-up. If we calculate the measure of association (in this case, the relative risk) separately in men and women, we see that it is the same (RR = 3). Moreover, it is the same as in the global cohort, so it seems we have managed to close the backdoor. In a cohort study, matching on the confounding factor allows us to counteract its effect.
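Here is a minimal sketch in Python that reproduces these figures, calculating the relative risk in each sex stratum and in the global cohort from the expected cases.

```python
def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# Expected cases in the matched cohorts, taken from the risks given above:
# 6% of 400 male smokers, 2% of 400 male non-smokers,
# 3% of 100 female smokers, 1% of 100 female non-smokers.
rr_men = relative_risk(24, 400, 8, 400)
rr_women = relative_risk(3, 100, 1, 100)
rr_global = relative_risk(24 + 3, 500, 8 + 1, 500)

print(round(rr_men, 2), round(rr_women, 2), round(rr_global, 2))   # 3.0 in each stratum and overall
```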

Now suppose that instead of a cohort study we conduct a case-control study. Can we use matching? Of course we can, who's going to stop us? But there is one problem.

If we think about it, we realize that matching in a cohort study controls the influence of the confounder on both the exposure and the effect. In case-control studies, however, forcing a similar distribution of the confounder only controls its influence on the effect, not the influence it has on the exposure. This is because, by making the groups homogeneous with respect to the confounder, we also make them homogeneous with respect to other related factors, among them the exposure itself. For this reason, matching does not guarantee closing the backdoor in case-control studies.

Someone doesn't buy it? Let's assume that we have 330 people with laryngeal cancer (80% men and 20% women). To do the study, we select a group of similar controls from the same population (what is called a case-control study nested in a cohort).

We know the number of exposed and non-exposed cases from the general population data we gave at the beginning, since we know the risk of developing cancer by sex and exposure to tobacco. In addition, we can also build the table of controls, since we know the percentage of exposure to tobacco by sex.

Finally, with the data from these three tables we can build the contingency tables for the overall study and those for men and women.

In this case, the suitable measure of association is the odds ratio, which has a value of three both in men and in women, but of 2.18 in the overall study population. The figures do not match, which tells us that we have not completely escaped the effect of the confounder even though we used matching to select the control group.
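As a quick check in Python, the crude odds ratio can be reproduced from the pair counts shown earlier in "A matter of pairs", which refer to this same fictitious study; the sex-specific tables live in the post's figures and each gives an odds ratio of about 3.

```python
def odds_ratio(a, b, c, d):
    """OR for a 2x2 table: a, b = exposed and unexposed cases; c, d = exposed and unexposed controls."""
    return (a * d) / (b * c)

# Crude table back-derived from the pair counts (208, 46, 62, 14) quoted earlier:
cases_smokers = 208 + 62        # 270
cases_nonsmokers = 46 + 14      # 60
controls_smokers = 208 + 14     # 222
controls_nonsmokers = 46 + 62   # 108

crude_or = odds_ratio(cases_smokers, cases_nonsmokers, controls_smokers, controls_nonsmokers)
print(round(crude_or, 2))   # ≈ 2.19, the value the post rounds to 2.18
```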

So matching cannot be used in case-control studies? Yes, it can, although the analysis needed to estimate the adjusted measure of association is a little different. But that is another story…

Dividing to conquer

Who has not heard this sentence many times? Although it is quite famous, its origin is surprisingly unclear. Some attribute it to Julius Caesar, but there seems to be no written evidence to prove it. Others say it was an inspiration of Machiavelli, who was much given to screwing his neighbors over for personal gain.

I think it is likely that the credit belongs to neither of them and that the phrase is simply one more piece of the vast cultural heritage of us so-called human beings. What is not in doubt, however, is that it forms the core of a useful strategy for solving problems of some complexity: the problem is divided into smaller parts that are easier to solve, and those partial solutions are then combined to build the solution to the initial, complex problem.

Do you remember the study on tobacco and coronary disease from when we talked about confounding? We showed that the effect of the confounding variable was masking the true effect of tobacco on the disease. Well, let's divide and conquer.

To do this we will use one of the techniques that exist to estimate the effect of the confounding variable: stratification. It consists of creating subgroups from the initial sample so that each subgroup is free of the confounding produced by the factor. Once this is done, we estimate separate measures of association and, if they differ (because of the confounding variable), we calculate an estimate of the association adjusted for the factor by which we have stratified (the confounder).

When the confounding variable is not continuous (e.g., male or female) it is very easy to stratify. However, if the confounder is a continuous variable, such as age, it can be difficult to decide how many strata we need. The more strata we create, the less confounding we will have, although it will be harder to obtain useful information from strata that are too small. Conversely, with too few strata we run the risk of not adjusting the estimate of the measure of association well enough.

Since I am quite lazy and do not want to crunch too many numbers, I am going to present the example stratified into just two age groups: older and younger than 50 years. You can see that the relative risks (RR) are different, which indicates that age is probably acting as a confounding variable. One way to separate out the effect of age and obtain an estimate of the true effect of tobacco on coronary heart disease is to calculate a weighted average RR using the Mantel-Haenszel method.

This method combines and weights the three factors of the contingency table that carry the information about exposure and effect: the frequency of the effect in exposed and unexposed, the relative sizes of the comparison groups and the overall size of each stratum. Of course, these two gentlemen explain it with a huge equation that you will forgive me for not reproducing here. Instead, let's simply see how the new adjusted RR is calculated.

To calculate the weighted risk in the exposed, rather than dividing the number of exposed cases by the total number of exposed, as we normally would (166/591 for those under 50), we multiply it by the total number of unexposed and divide by the total of the stratum, as follows:

– Younger than 50 years: Re = 166 x (605/1196) = 83.97.

– Older than 50 years: Re = 227 x (634/1021) = 140.95.

In a similar way, we calculate the weighted risk in the non-exposed, multiplying the number of non-exposed cases by the total number of exposed and dividing by the total of the stratum:

– Younger than 50 years: Ro = 68 x (591/1196) = 33.60.

– Older than 50 years: Ro = 314 x (387/1021) = 119.01.

Finally, we add the weighted risks in the exposed and divide by the sum of the weighted risks in the non-exposed, obtaining the adjusted RR:

– aRR = (83.97 + 140.95) / (33.60 + 119.01) = 1.47.

It means that the risk of developing coronary heart disease is approximately 50% greater if you smoke, regardless of age.

This simple calculation becomes much less friendly if we are not so sloppy and divide the sample into a greater number of strata. And imagine if the contingency tables get more complicated. Of course, that is what computers and statistical programs are for: they do all this in a jiffy and, whether effortlessly or not, at least without complaining.
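In fact, the whole calculation fits in a few lines of Python; here is a minimal sketch of the weighting we have just done by hand, using the same two strata.

```python
def mantel_haenszel_rr(strata):
    """Adjusted RR from a list of strata, each a dict with:
    a = exposed cases, n1 = total exposed, c = unexposed cases, n0 = total unexposed."""
    num = sum(s["a"] * s["n0"] / (s["n1"] + s["n0"]) for s in strata)
    den = sum(s["c"] * s["n1"] / (s["n1"] + s["n0"]) for s in strata)
    return num / den

strata = [
    {"a": 166, "n1": 591, "c": 68, "n0": 605},   # younger than 50
    {"a": 227, "n1": 387, "c": 314, "n0": 634},  # older than 50
]
print(round(mantel_haenszel_rr(strata), 2))   # 1.47
```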

However, there are other methods to calculate an estimate of the adjusted association. The most fashionable nowadays is logistic regression. With the computing power currently available to any of us, a paper that does not tackle this problem with a regression model is only going to get dirty looks. But that is another story…

The backdoor

I wish I had a time machine! Think about it for a moment. We would not have to work (we would have won the lottery several times), we could anticipate all our misfortunes and always make the best decision… It would be like the movie “Groundhog Day”, but without acting the fool.

Of course, if we had a time machine that worked, some occupations might disappear. Epidemiologists, for example, would have a hard time. If we wanted to know, say, whether tobacco is a risk factor for coronary heart disease, we would only have to take a group of people, tell them not to smoke and see what happened twenty years later. Then we would go back in time, make them smoke, see what happened twenty years later and compare the results of the two experiments. How easy, isn't it? Who would need an epidemiologist and all that complex science of associations and study designs? We could study the influence of the exposure (tobacco) on the effect (coronary heart disease) by comparing these two potential outcomes, also called counterfactual outcomes (pardon the barbarism).

However, not having a time machine, the reality is that we cannot measure both outcomes in the same person and, although it seems obvious, this actually means that we cannot directly measure the effect of the exposure on a particular individual.

So epidemiologists resort to studying populations. Normally a population will contain exposed and unexposed subjects, so we can try to estimate the counterfactual outcome of each group and thus calculate the average effect of the exposure on the population as a whole. For example, the incidence of coronary heart disease in non-smokers may serve to estimate what the incidence would have been in smokers had they not smoked. In this way, the difference in disease between the two groups (the difference between their factual outcomes), expressed as the applicable measure of association, becomes an estimate of the average effect of smoking on the incidence of coronary heart disease in the population.

All that we have said rests on one prerequisite: the counterfactual outcomes have to be interchangeable. In our case, this means that the incidence of disease in smokers, had they not smoked, would have been the same as that observed in non-smokers, who have never smoked. And vice versa: if the non-smokers had smoked, they would have had the same incidence as that observed in the actual smokers. This seems like another truism, but it is not always the case, since the relationship between exposure and effect often has backdoors that make the counterfactual outcomes of the two groups non-interchangeable, so the measures of association cannot be estimated properly. This backdoor is what we call a confounding factor or confounding variable.

Let's clarify this a bit with a fictional example. In the first table I present the results of a cohort study (which I have just invented) that evaluates the effect of smoking on the incidence of coronary heart disease. The risk of disease is 0.36 (394/1090) among smokers and 0.34 (381/1127) among non-smokers, so the relative risk (RR, the relevant measure of association in this case) is 0.36 / 0.34 = 1.05. I knew it! Just as Woody Allen claimed in “Sleeper”: tobacco is not as bad as previously thought. Tomorrow I'll go back to smoking.

Are we sure? Mulling the matter over, it occurs to me that something may be wrong. The sample is large, so it is unlikely that chance has played a trick on us. The study does not appear to have a substantial risk of bias, although one can never be completely sure. So, assuming that Woody Allen was not right in his film, the only remaining possibility is that there is a confounding variable altering our results.

A confounding variable must meet three requirements. First, it must be associated with the exposure. Second, it must be associated with the effect independently of the exposure we are studying. Third, it must not be part of the cause-effect chain between exposure and effect.

This is where the researcher's imagination comes into play: we have to think about what might be acting as a confounder. In this case, the first thing that comes to my mind is age. It fulfills the second condition (older people are at increased risk of coronary heart disease) and the third (however harmful tobacco is, it does not increase your risk of disease by making you older). But does it fulfill the first condition? Is there an association between age and being a smoker? We had not thought about it before, but if there were, it could explain everything. For example, if the smokers were younger, the harmful effect of tobacco could be offset by the “benefit” of their younger age. Conversely, the benefit the older people obtain from not smoking would vanish because of the increased risk that comes with their age.

How can we check this? Let's separate the data of those younger and older than 50 years and recalculate the risks. If the relative risks are different, age is probably acting as a confounding variable; if they are equal, there will be no choice but to agree with Woody Allen. Let's look at the table of the youngest. The risk of disease is 0.28 (166/591) in smokers and 0.11 (68/605) in non-smokers, so the RR is 2.5. Meanwhile, in those older than 50 years, the risk of disease is 0.58 (227/387) in smokers and 0.49 (314/634) in non-smokers, so the RR equals 1.18. Sorry for those of you who smoke, but “Sleeper” was wrong: tobacco is bad.
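For the skeptics, here is a minimal sketch in Python of what we have just done: the crude RR versus the age-stratified RRs (with all the decimals the crude value comes out around 1.07; the 1.05 above comes from rounding the risks first).

```python
def rr(cases_exp, n_exp, cases_unexp, n_unexp):
    return (cases_exp / n_exp) / (cases_unexp / n_unexp)

crude = rr(394, 1090, 381, 1127)   # ≈ 1.07 (the post rounds the risks and quotes 1.05)
under_50 = rr(166, 591, 68, 605)   # ≈ 2.5
over_50 = rr(227, 387, 314, 634)   # ≈ 1.18

print(round(crude, 2), round(under_50, 1), round(over_50, 2))
```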

This example shows how important what we said earlier about interchangeable counterfactual outcomes really is. If the age distribution differs between exposed and unexposed, and we have the misfortune that age is a confounding variable, the outcome observed in smokers will no longer be interchangeable with the counterfactual outcome of the non-smokers, and vice versa.

Can we avoid this effect? We cannot avoid the effect of a confounding variable, and the problem is even bigger when we do not know that it may be playing its trick. That is why it is essential to take a number of precautions when designing the study, to minimize the risk of confounding and of leaving backdoors through which our data can slip away.

One of them is randomization, with which we try to make both groups similar in the distribution of confounding variables, both known and unknown. Another is to restrict inclusion in the study to a particular group, for instance those under 50 years in our example; the problem is that we cannot do this for unknown confounders. A third possibility is matching, so that for every young smoker we include we select a young non-smoker, and the same for the older participants. To apply this paired selection we also need to know beforehand which variables may act as confounders.

And what do we do once we have finished the study and discovered, to our horror, that there is a backdoor? First of all, do not despair. We can always use the many resources of epidemiology to calculate an adjusted measure of association that estimates the relationship between exposure and effect free of the confounding effect. Moreover, there are several methods for doing this analysis, some simpler and some more complex, but all very elegant. But that's another story…

That’s not what it seems to be

I hope, for your own good, that you have never found yourself in a situation where you had to say this sentence. And I hope, also for your own good, that if you have had to say it, it did not begin with the word “darling”. Did it? Let's leave that to everyone's conscience.

The truth is that we may have to use this sentence in a much less scabrous situation: when assessing the results of a cross-sectional study. It goes without saying, of course, that in those cases there is no use for the word “darling”.

Descriptive cross-sectional studies are a type of observational study in which we draw a sample from the population we want to study and then measure the frequency of the disease or effect of interest in the individuals of that sample. When we measure more than one variable, these are called association cross-sectional studies, and they allow us to determine whether there is any kind of association among the variables.

But these studies have two characteristics that we must always keep in mind. First, they are prevalence studies that measure frequency at a given moment, so the result may vary depending on when the variables are measured. Second, since the variables are measured simultaneously, it is difficult to establish a cause-effect relationship, something we all love to do. But it is something we should avoid, because with this type of study things are not always what they seem. Or rather, things can be many more things than what they seem.

What are we talking about? Let's consider an example. I am getting a little bored of going to the gym because I get more and more tired and my physical condition… well, let's just say I get tired, so I want to study whether the effort at least rewards me with better control of my body weight. Thus, I run a survey of 1477 individuals of roughly my age and ask them whether they go to the gym (yes or no) and whether they have a body mass index greater than 25 (yes or no). If you look closely at the results in the table, you will notice that the prevalence of overweight-obesity among those who go to the gym (50/751, about 7%) is higher than among those who do not (21/726, about 3%). Oh my goodness, I think: not only do I get tired, but by going to the gym I have twice the chance of being fat. Conclusion: I'll quit the gym tomorrow.
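For the record, here is the calculation behind that unpleasant surprise as a minimal sketch in Python: the prevalence in each group and their ratio.

```python
# Survey results: overweight-obese / total in each exposure group.
gym = {"overweight": 50, "total": 751}
no_gym = {"overweight": 21, "total": 726}

prevalence_gym = gym["overweight"] / gym["total"]            # ≈ 0.067
prevalence_no_gym = no_gym["overweight"] / no_gym["total"]   # ≈ 0.029

prevalence_ratio = prevalence_gym / prevalence_no_gym        # ≈ 2.3
print(round(prevalence_gym, 3), round(prevalence_no_gym, 3), round(prevalence_ratio, 1))
```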

Do you see how easy it is to reach an absurd (rather stupid, in this case) conclusion? But the data are there, so we have to find an explanation for why they suggest something that goes against common sense. And there are several possible explanations for these results.

The first is that going to the gym actually makes you fat. It seems unlikely, but you never know… Imagine that working out motivates gym-goers to eat like wild beasts for the six hours following a session.

The second is that obese people who go to the gym live longer than those who do not. Suppose that exercise prevents death from cardiovascular disease in obese patients. That would explain why there are proportionally more obese people in the gym than outside it: obese people who go to the gym die less often than those who do not. At the end of the day we are dealing with a prevalence study, so we only see the end result at the time of measurement.

The third possibility is that the disease can influence the frequency of the exposure, which is known as reverse causality. In our example, there could be more obese people in the gym because that is precisely the treatment recommendation they receive: join a gym. This does not sound as ridiculous as the first explanation.

But there are still more possible explanations. So far we have tried to explain an association between the two variables that we have assumed to be real. But what if the association is not real? How could we get a false association between the two variables? Again, there are three possible explanations.

First, our old friend chance. Some of you will tell me that we can calculate statistical significance or confidence intervals, but so what? Even when a result is statistically significant, it only means that we can rule out the effect of chance with some degree of uncertainty. Even with p < 0.05 there is always the possibility of committing a type I error and erroneously ruling out the effect of chance. We can measure chance, but we can never get rid of it.

The second is that we have committed some kind of bias that invalidates our results. Sometimes the characteristics of the disease can lead to a different probability of selecting exposed and unexposed subjects, producing a selection bias. Imagine that instead of a survey (by telephone, for example) we had used medical records. It may happen that obese people who go to the gym take more responsibility for their health and visit the doctor more often than those who do not go to the gym. In that situation, we would be more likely to include obese gym-goers in the study, overestimating the true proportion. Sometimes the study factor is somewhat stigmatizing from a social point of view, so diseased people are less willing to participate in the study (and admit to their disease) than healthy people. In that case, we would underestimate the frequency of the disease.

In our example, it may be that obese people who do not go to the gym lie about their true weight when answering the survey, and so are misclassified. This misclassification bias can occur randomly in both the exposed and unexposed groups, favoring the lack of association (the null hypothesis), so the association, if it exists, will be underestimated. The problem arises when the error is systematic in one of the two groups, as this can either underestimate or overestimate the association between exposure and disease.

And finally, the third possibility is that there is a confounding variable that is distributed differently between exposed and unexposed. It occurs to me that those who go to the gym may be younger than those who do not, and it is possible that younger obese people are more likely to go to the gym. If we stratify the results by the confounding variable, age, we can determine its influence on the association.

To finish, I just want to apologize to all the obese people in the world for using them in the example but, for once, I wanted to leave the smokers alone.

As you can see, things are not always what they seem at first glance, so results should be interpreted with common sense and in the light of existing knowledge, without falling into the trap of establishing causal relationships from associations detected in observational studies. To establish cause and effect we always need experimental studies, the paradigm of which is the clinical trial. But that's another story…