Simplifying the impact

In epidemiological studies it is common to find a set of measures of effect, such as the risks in exposed and non-exposed participants, relative risks and risk reductions. However, for the analysis of a study to be considered complete, measures of effect should be accompanied by a series of impact measures, which are the ones that inform us more precisely about the true effect of the exposure or intervention under study.

Impact numbers

For example, if we conducted a study on the prevention of mortality from a disease with a treatment X, a relative risk of 0.5 would tell us that those treated have half the risk of dying, but we cannot clearly see the impact of the treatment. However, if we calculate the number needed to treat (NNT) and it comes out to be two, we will know that one in every two people treated will avoid death from the disease. This impact measure, the NNT, does give us a clearer idea of the real effect of the intervention in our practice.

There are several impact measures, in addition to the NNT. In the cohort studies, which we are going to focus on today, we can calculate the difference of incidences between exposed and unexposed, the exposed attributable fraction (EAF), the avoidable risk in exposed (ARE) and the population attributable fraction (PAF).

The EAF indicates the part of the risk of presenting the effect in the exposed that is due specifically to that: having been exposed. The ARE informs us of the cases of illness in the exposed group that could have been avoided had the exposure not existed. Finally, the PAF is a specific attributable risk that describes the proportion of cases that could be prevented in the population if the risk factor under study were completely eliminated. As a fourth parameter, considering the presence of exposure and disease, we can calculate the fraction of exposure in cases (FEc), which defines the proportion of exposed cases that is attributable to the risk factor.

In the attached table you can see the formulas for calculating these parameters.
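As a companion to the table, here is a minimal Python sketch of these calculations. It uses the standard epidemiological definitions, and the counts are completely made up for illustration:

```python
# Impact measures from a cohort 2x2 table (standard definitions;
# the counts below are invented for illustration only).

def impact_measures(a, b, c, d):
    """a: exposed and diseased, b: exposed and healthy,
    c: unexposed and diseased, d: unexposed and healthy."""
    i_exp = a / (a + b)                   # incidence in the exposed
    i_unexp = c / (c + d)                 # incidence in the unexposed
    i_total = (a + c) / (a + b + c + d)   # incidence in the whole cohort
    rd = i_exp - i_unexp                  # difference of incidences
    eaf = rd / i_exp                      # exposed attributable fraction
    paf = (i_total - i_unexp) / i_total   # population attributable fraction
    return rd, eaf, paf

# Invented example: 30 of 1000 exposed and 10 of 1000 unexposed fall ill
rd, eaf, paf = impact_measures(30, 970, 10, 990)
print(round(rd, 3), round(eaf, 2), round(paf, 2))  # 0.02 0.67 0.5
```

I have left out the ARE and the FEc here; the attached table gives their exact formulas.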

Other impact numbers

The problem with these impact measures is that they can sometimes be difficult for the clinician to interpret. For this reason, and inspired by the calculation of NNTs, a series of measures called impact numbers have been devised, which give us a more direct idea of the effect of the exposure factor on the disease being studied. These impact numbers are the number of impact on exposed (NIE), the number of impact in cases (NIC) and the number of impact of exposed cases (NIEC).

Let’s start with the simplest. The NIE would be the equivalent of the NNT and is calculated as the inverse of the absolute risk reduction or risk difference. Just as the NNT is the number of people who should be treated to prevent one case compared with the control group, the NIE represents the average number of people who have to be exposed to the risk factor for one new disease event to occur compared with non-exposed people. For example, a NIE of 10 means that for every 10 people exposed, one case of disease attributable to the risk factor will occur.

The NIC is the inverse of the PAF, so it defines the average number of sick people among whom a case is due to the risk factor. A NIC of 10 means that for every 10 cases in the population, one is attributable to the risk factor under study.

Finally, the NIEC is the inverse of the FEc. It is the average number of exposed cases among which one case is attributable to the risk factor.

In summary, these three measures indicate the impact of exposure among all exposed (NIE), among all patients (NIC) and among all patients who have been exposed (NIEC).

An example is the data from the attached table, corresponding to a fictional study on the effect of smoking on coronary mortality. I have used one of the many epidemiological calculators available on the Internet and have calculated a risk difference of 0.0027, a PAF of 0.16 and an FEc of 0.4. We can now calculate our impact numbers.

The NIE value is 1 / 0.0027 ≈ 370. Rounding, out of every 370 smokers, one will die from heart disease attributable to tobacco.

The NIC will be 1 / 0.16 = 6.25. Of every six deaths from heart disease in the population, roughly one will be attributable to tobacco.

Finally, the NIEC will be 1 / 0.4 = 2.5. Approximately, for every two to three deaths from heart disease among smokers, one would be attributable to tobacco addiction.
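Since the impact numbers are just the inverses of these measures, a few lines of Python suffice to check them. The three input values are the ones from the fictional study above:

```python
# Impact numbers as inverses of the corresponding impact measures
# (values taken from the fictional smoking study in the text).
risk_difference = 0.0027
paf = 0.16
fec = 0.4

nie = 1 / risk_difference    # number of impact on exposed
nic = 1 / paf                # number of impact in cases
niec = 1 / fec               # number of impact of exposed cases

print(round(nie), round(nic, 2), round(niec, 1))  # 370 6.25 2.5
```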

We’re leaving…

And here we leave it for today. Do not forget that the data of the example are fictitious and I do not know whether they bear much resemblance to reality.

We have discussed only the point estimates of the impact numbers but, as always, it is preferable to calculate their confidence intervals. All three can be obtained from the limits of the confidence intervals of the measures from which the impact numbers derive, but it is best to use a calculator that does it for us. Calculating the intervals of some parameters, such as the PAF, can be complex. But that is another story…

Together but not in each other’s pockets

Hybrid designs

Observational studies are those in which, as their name suggests, the researcher merely observes what happens. Well, observes and analyzes, but has no active role regarding the exposure or intervention under study. Among these observational studies, cohort studies and case-control studies are the ones we all know and the most commonly used.

The basic designs

In a cohort study, a group or cohort subjected to an exposure is followed over time to compare the frequency of occurrence of the effect with that of an unexposed cohort, which acts as control. On the other hand, in a case-control study we begin with two population groups, one of which suffers the effect or disease under study, and we compare its exposure to a particular factor with that of the group that does not have the disease and acts as control.

The cohort study is the methodologically sounder of the two. The problem is that it often requires long periods of follow-up and large cohorts, especially when the frequency of the disease under study is low, which means managing all the covariates of this entire large cohort and increases the costs of the study.

Hybrid designs

Well, for those cases in which neither case-control nor cohort studies fit the needs of the researchers, epidemiologists have invented a series of designs that sit halfway between the two and can mitigate their shortcomings. These hybrid designs are the nested case-control study and the case-cohort study.

Let’s start with the nested case-control study. Suppose we have carried out a study using a cohort with many participants. Well, we can reuse it in a nested case-control study. We take the cohort and follow it over time, selecting the subjects who develop the disease and assigning them as controls subjects from the same cohort who have not yet presented the disease (although they may do so later). Thus, cases and controls come from the same cohort. It is desirable to match them on confounding variables that are time-dependent, for example the number of years they have been enrolled in the cohort. In this way, the same subject can act as a control on several occasions and later end up as a case, which must be taken into account in the statistical analysis of the study.

Since we detect cases as they arise, we sample based on incidence density, which allows us to estimate relative risks. This is an important difference from conventional case-control studies, which usually provide odds ratios, only comparable to relative risks when the frequency of the effect is very low.

Another difference is that all the information is collected in the cohort at baseline, so there is less risk of the information bias characteristic of classic case-control studies, which are retrospective in nature.

The other type of hybrid observational design we will deal with is the case-cohort study. Here we also start from a large initial cohort, from which we select a more manageable sub-cohort to be used as the comparison group. Then we follow the whole cohort over time to detect the subjects who develop the disease, whether or not they belong to the sub-cohort.

As in the previous design, detecting the cases over time allows us to estimate the incidence density in cases and non-cases and to calculate relative risks from them. As you can imagine, this design is more economical than conventional studies because it greatly reduces the volume of information on healthy subjects to be handled, without losing efficiency when studying rare diseases. The problem is that the sub-cohort has an overrepresentation of cases, so the analysis cannot be done as in a traditional cohort study but requires its own, rather more complicated, methodology.

We’re leaving…

And here we will leave this topic for today. To sum up a little, let us say that the nested case-control study is more like the classic case-control study, while the case-cohort study is more like the conventional cohort study. The fundamental difference between the two is that in the nested study we sample controls by incidence density and with matching, so we have to wait for all the cases to have occurred before the entire control population has been selected. This is not so in the case-cohort study, which is much easier, since the reference population is selected at the beginning of the study.

The drawback of these studies, as we have said, is that the analysis is a bit more complicated than that of conventional observational studies, because it is not enough to analyze the raw data. Instead, results must be adjusted for the possibility that a participant acts both as a control and as a case (in nested studies) and for the overrepresentation of cases in the sub-cohort (in the case-cohort study). But that’s another story…

A matter of pairs

We saw in the previous post how observational studies, in particular cohort studies and case-control studies, are full of traps and pitfalls. One of these traps is the backdoor through which our data may slip away, leading us to erroneous estimates of the measures of association. This backdoor is kept ajar by confounding factors.

We know that there are several ways to control confounding. One of them, pairing, has its peculiarities depending on whether we employ it in a cohort study or in a case-control study.

When it comes to cohort studies, matching by the confounding factor allows us to obtain an adjusted measure of association, because we control the influence of the confounding variable on both the exposure and the effect. However, the above is not fulfilled when the matching technique is used in a case-control study. The design of this type of study forces us to do the pairing once the effect has already occurred. Thus, the patients who act as controls are not a set of independent individuals chosen at random, since each control is selected because it fulfills a series of criteria set by the case with which it is going to be paired. This, of course, prevents us from selecting other individuals in the population who do not meet the specified criteria but could potentially have been included in the study. If we forget this little detail and apply the same methodology of analysis that we would use in a cohort study, we will incur a selection bias that will invalidate our results. In addition, although we force a similar distribution of the confounder, we only fully control its influence on the effect, not on the exposure.

So the approach to the analysis varies slightly when assessing the results of a case-control study in which we have used the matching technique to control for confounding factors. While in an unpaired study we analyze the association between exposure and effect in the overall group, when we pair we must study the effect in the case-control pairs.

We will see this by continuing with the example of the effect of tobacco on the occurrence of laryngeal carcinoma from the previous post.

In the upper table we see the global data of the study. If we analyze the data without considering that we used pairing to select the controls, we obtain an odds ratio of 2.18, as we saw in the previous post. However, we know that this estimate is wrong. What do we do? Consider the effect of the pairs, but only of those that don’t get along.

We see in the table below the distribution of the pairs according to their exposure to tobacco. We have 208 pairs in which both the case (the person with laryngeal cancer) and the control are smokers. Since both are exposed, they don’t help us estimate the association with the effect. The same is true of the 46 pairs in which neither the case nor the control smokes. The pairs of interest are the 14 in which the control smokes but the case doesn’t, and the 62 pairs in which only the case smokes, but not the control.

These discordant pairs are the ones that give us information on the effect of tobacco on the occurrence of laryngeal cancer. If we calculate the odds ratio, it is 62/14 = 4.4, a stronger measure of association than the one previously obtained, and certainly much closer to reality.
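For those who want to check it, the matched analysis fits in a few lines of Python, using only the discordant-pair counts from the table:

```python
# Matched-pairs odds ratio from the discordant pairs of the example:
# 62 pairs where only the case smokes, 14 where only the control does.
case_only = 62
control_only = 14

matched_or = case_only / control_only
print(round(matched_or, 1))  # 4.4

# McNemar's statistic (without continuity correction), to be compared
# against a chi-square distribution with 1 degree of freedom
mcnemar = (case_only - control_only) ** 2 / (case_only + control_only)
print(round(mcnemar, 1))  # 30.3
```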

Finally, I want to make three points before finishing. First, although it goes without saying, remember that the data are a product of my imagination and the example is completely fictitious, although it does not seem as silly as others I have invented in other posts. Second, these calculations are usually made with software, using the Mantel-Haenszel or McNemar tests. Third, in all these examples we have used a pairing ratio of 1:1 (one control per case), but this need not necessarily be so because, in some cases, we may be interested in using more than one control per case. This has its implications for the influence of the confounder on the estimated measure of association, and its own considerations when performing the analysis. But that’s another story…

Birds of a feather flock together

Sometimes we cannot prevent confounding factors, both known and unknown, from getting involved in our studies. These confounding variables open a backdoor through which our data can slip, so that the measures of association between exposure and effect that we estimate do not correspond to reality.

During the analysis phase, techniques such as stratification or regression models are often used to measure the association adjusted for the confounding variable. But we can also try to prevent confounding in the design phase. One way is to restrict the inclusion criteria according to the confounding variable. Another strategy is to select controls so that they have the same distribution of the confounding variable as the intervention group. This is what is known as pairing.

Suppose we want to determine the effect of smoking on the frequency of laryngeal cancer in a population with the distribution you see in the first table. We can see that 80% of smokers are men, while only 20% of non-smokers are. We have invented that the risk of cancer in men is 2%, rising to 6% if they smoke. For their part, the risk in women is 1%, reaching 3% if they smoke. So, although both sexes triple their risk if they practice the most antisocial of vices, men always have twice the risk of women for the same level of exposure to tobacco (male smokers even have six times the risk of non-smoking women). In short, sex acts as a confounding factor: it influences both the likelihood of exposure and the likelihood of the effect, but is not part of the causal sequence between smoking and cancer of the larynx. This would have to be taken into account in the analysis, calculating the adjusted relative risk with the Mantel-Haenszel technique or using a logistic regression model.

Let’s look at another possibility if we know the confounding factor: trying to prevent its effect during the planning phase of the study. Suppose we start with a cohort of 500 smokers, 80% of them men and 20% women. Instead of randomly taking 500 non-smoking controls, we include in the unexposed cohort one non-smoking man for each smoking man in the exposed cohort, and do the same with the women. We will have two cohorts with a similar distribution of the confounding variable and, of course, also similar in the distribution of the remaining known variables (otherwise we could not compare them).

Have we solved the problem of confounding? Let’s check it out.

We see the contingency table of our study with 1000 people, 80% men and 20% women in both the exposed and unexposed groups. As we know the risk of developing cancer by sex and smoking status, we can calculate the number of people we expect to develop cancer during the study: 24 male smokers (6% of 400), eight non-smoking men (2% of 400), three female smokers (3% of 100) and one non-smoking woman (1% of 100).

With these data we can build the contingency tables, global and stratified by sex, that we expect to find at the end of follow-up. If we calculate the measure of association (in this case, the relative risk) in men and women separately, we see that the values coincide (RR = 3). Moreover, it is the same risk as in the global cohort, so it seems we have managed to close the backdoor. In a cohort study, matching on the confounding factor allows us to counteract its effect.
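We can verify the three relative risks with a short Python check, using the expected cases from the example (note the cross-multiplied form, which keeps the arithmetic exact):

```python
# Relative risks in the matched-cohort example: men 24/400 vs 8/400,
# women 3/100 vs 1/100, and the global cohort 27/500 vs 9/500.
def rr(cases_exp, n_exp, cases_unexp, n_unexp):
    # (a/n1) / (c/n0), written as a cross product to avoid float noise
    return (cases_exp * n_unexp) / (n_exp * cases_unexp)

rr_men = rr(24, 400, 8, 400)
rr_women = rr(3, 100, 1, 100)
rr_global = rr(24 + 3, 500, 8 + 1, 500)
print(rr_men, rr_women, rr_global)  # 3.0 3.0 3.0
```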

Now suppose that instead of a cohort study we conduct a case-control study. Can we use pairing? Of course we can, who’s going to stop us? But there is one problem.

If we think about it, we realize that pairing in cohorts controls the influence of the confounder on both the exposure and the effect. However, in case-control studies, forcing a similar distribution of the confounder controls only its influence on the effect, not the influence it has on the exposure. This is so because, by homogenizing according to the confounder, we also homogenize according to other related factors, among them the exposure itself. For this reason, pairing does not guarantee closing the backdoor in case-control studies.

Someone doesn’t buy it? Let’s assume that we have 330 people with laryngeal cancer (80% male and 20% female). To do the study, we select a group of similar controls from the same population (what is called a case-control study nested in a cohort).

We know the numbers of exposed and non-exposed cases from the general population data we gave at the beginning, since we know the risk of cancer by sex and exposure to tobacco. In addition, we can also build the table of controls, since we know the percentage of exposure to tobacco by sex.

Finally, with the data from these three tables we can build the contingency tables for the overall study and those for men and women.

In this case, the suitable measure of association is the odds ratio, which has a value of three for both men and women, but of 2.18 for the overall study population. Thus we see that they do not match, which tells us that we have not completely escaped the effect of the confounder even though we used the pairing technique to select the control group.

So pairing cannot be used in case-control studies? Yes, it can, although the analysis of the results to estimate the adjusted measure of association is a little different. But that is another story…

To what do you attribute it?

It seems like only yesterday. I began my adventures at the hospital and had my first contacts with The Patient. And, by the way, I didn’t know much about diseases, but I knew without thinking what the three questions were with which any good clinical history began: what is bothering you?, how long has it been going on?, and to what do you attribute it?

The fact is that the need to know the why of things is inherent to human nature and, of course, is of great importance in medicine. Everyone is mad about establishing cause-and-effect relationships; sometimes one does it rather loosely and concludes that the culprit of one’s summer cold is the supermarket clerk who has set the air conditioning at maximum power. This is why studies on etiology must be conducted and assessed with scientific rigour. And when we talk about etiology we also refer to harm, including that derived from our own actions (what educated people call iatrogenesis).

This is why studies on etiology and harm share similar designs. The clinical trial is the ideal choice and we can use it, for example, to find out whether a treatment is the cause of a patient’s recovery. But when we study risk factors or harmful exposures, the ethical principle of nonmaleficence prevents us from randomizing exposures, so we have to resort to observational studies such as cohort or case-control studies, although the level of evidence they provide will be lower than that of experimental studies.

To critically appraise a paper on etiology/harm, we’ll resort to our well-known pillars: validity, relevance and applicability.

First, we’ll focus on the VALIDITY or scientific rigour of the work, which should answer the question of whether the factor or intervention studied was the cause of the adverse effect or disease observed.

As always, we’ll assess a series of primary validity criteria. If these are not fulfilled, we’ll leave the paper and devote ourselves to something more profitable. The first is to determine whether the groups compared were similar with regard to important factors other than the exposure studied. Randomization in clinical trials ensures that the groups are homogeneous, but we cannot count on it in observational studies. The homogeneity of the two cohorts is essential and the study is not valid without it. One can always argue that one has stratified by the differences between the two groups or performed a multivariate analysis to control for the effect of known confounders but, what about the unknown ones? The same applies to case-control studies, which are much more sensitive to bias and confounding.

Have exposure and effect been assessed in the same way in all groups? In clinical trials and cohort studies we have to check that the effect had the same likelihood of appearing and of being detected in the two groups. Moreover, in case-control studies it is very important to properly assess previous exposure, so we must investigate whether there is potential bias in data collection, such as recall bias (patients often remember their symptoms better than healthy people do). Finally, we must consider whether follow-up has been long enough and complete. Losses during the study, common in observational designs, can bias the results.

If we have answered yes to all three questions, we’ll turn to the secondary validity criteria. The study’s results have to be evaluated to determine whether the association between exposure and effect satisfies reasonable evidence of causality. One useful tool is Hill’s criteria, named after the gentleman who suggested using a series of items to try to distinguish the causal or non-causal nature of an association. These criteria are: a) strength of association, represented by the risk ratio between exposure and effect, which we’ll consider shortly; b) consistency, or reproducibility in different populations or situations; c) specificity, meaning that a cause produces a single effect and not multiple ones; d) temporality: it is essential that the cause precedes the effect; e) biological gradient: the more intense the cause, the more intense the effect; f) plausibility: the relationship has to be logical according to our biological knowledge; g) coherence: the relationship should not conflict with other knowledge about the disease or the effect; h) experimental evidence, often difficult to obtain in humans for ethical reasons; and finally, i) analogy to other known situations. Although these criteria are rather vintage and some of them may be irrelevant (experimental evidence or analogy), they may serve as guidance. The criterion of temporality is a necessary one, and it is well complemented by biological gradient, plausibility and coherence.

Another important aspect is to consider whether, apart from the intervention under study, both groups were treated similarly. In this type of study, in which double-blinding is absent, there is a greater risk of bias due to co-interventions, especially if these are treatments with a much greater effect than the exposure under study.

Regarding the RELEVANCE of the results, we must consider the magnitude and precision of the association between exposure and effect.

What was the strength of the association? The most common measure of association is the risk ratio (RR), which can be used in trials and cohort studies. However, in case-control studies we don’t know the incidence of the effect (the effect has already occurred when the study is conducted), so we use the odds ratio (OR). As we know, the interpretation of the two parameters is similar, and their values are close when the frequency of the effect is very low. However, the greater the magnitude or frequency of the effect, the more RR and OR differ, with the peculiarity that the OR tends to overestimate the strength of the association when it is greater than 1 and underestimate it when it is less than 1. Anyway, these vagaries of the OR will only exceptionally modify the qualitative interpretation of the results.
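A toy computation (with invented incidences, unrelated to any real study) makes this drift of the OR away from the RR easy to see:

```python
# RR and OR for the same pair of risks: with a rare effect the two
# almost coincide; with a common effect the OR overestimates the RR.
def rr_and_or(risk_exp, risk_unexp):
    rr = risk_exp / risk_unexp
    odds_exp = risk_exp / (1 - risk_exp)
    odds_unexp = risk_unexp / (1 - risk_unexp)
    return rr, odds_exp / odds_unexp

print([round(x, 2) for x in rr_and_or(0.02, 0.01)])  # [2.0, 2.02]
print([round(x, 2) for x in rr_and_or(0.40, 0.20)])  # [2.0, 2.67]
```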

Keep in mind that a test is statistically significant for any value of OR or RR whose confidence interval does not include one, but in observational studies we have to be a little more demanding. Thus, in a cohort study we would like to see RR values greater than or equal to three, and OR values greater than or equal to four in case-control studies.

Another useful parameter (in trials and cohort studies) is the risk difference or incidence difference, which is a fancy way of referring to our well-known absolute risk reduction (ARR) and which allows us to calculate the NNT (or NNH, number needed to harm), the parameter that best quantifies the clinical significance of the association. Also, analogous to the relative risk reduction (RRR), we have the attributable fraction in the exposed, which is the percentage of risk observed in the exposed that is due to the exposure.
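As a quick sketch of these two parameters (the risks below are hypothetical, not taken from any study):

```python
# NNH and attributable fraction in the exposed, from the risk difference
risk_exposed = 0.05      # hypothetical risk in the exposed
risk_unexposed = 0.02    # hypothetical risk in the unexposed

arr = risk_exposed - risk_unexposed   # absolute risk difference
nnh = 1 / arr                         # number needed to harm
eaf = arr / risk_exposed              # attributable fraction in exposed
print(round(nnh, 1), round(eaf, 1))   # 33.3 0.6
```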

And what about the accuracy of the results? As we know, we’ll use our beloved confidence intervals, which serve to determine the accuracy of the parameter’s estimate in the population. It is always useful to have all these parameters, which must be included in the study, or at least it should be possible to calculate them from the data provided by the authors.

Finally, we’ll assess the APPLICABILITY of the results to our clinical practice.

Are the results applicable to our patients? Check whether there are differences that advise against extrapolating the results of the work to our environment. Also, consider the magnitude of the risk in our patients based on the results of the study and their characteristics. And finally, with all this information in mind, we must think about our working conditions, the choices we have and the patient’s preferences to decide whether or not to avoid the studied exposure. For example, if the magnitude of the risk is high and we have an effective alternative, the decision will be clear, but things are not always so simple.

As always, I advise you to use the resources available on the Internet, such as CASP’s, both the design-specific templates and the calculator to assess the relevance of the results.

Before concluding, let me clarify one thing. Although we’ve said that we use the RR in cohort studies and clinical trials and the OR in case-control studies, we can actually use the OR in any type of study (not so the RR, for which we must know the incidence of the effect). The problem is that ORs are somewhat less accurate, so we prefer to use RR and NNT whenever possible. However, the OR is increasingly popular for another reason: its use in logistic regression models, which allow us to obtain estimates adjusted for confounding variables. But that’s another story…

The table

There are plenty of tables, and they play a great role throughout our lives. Perhaps the first one that strikes us during early childhood is the multiplication table. Who doesn’t remember, at least the older among us, how we used to repeat like parrots that two times one equals two, two times… until we learned it by heart? But as soon as we mastered the multiplication tables we bumped into the periodic table of the elements. Again memorizing, this time aided by idiotic and impossible mnemonics about some Indians who Gained Bore I-don’t-know-what.

But it was over the years that we found the worst table of all: the food composition table, with its cells full of calories. This table pursues us even in our dreams. And it’s because eating a lot has many drawbacks, most of which are found out with the aid of another table: the contingency table.

Contingency tables are used very frequently in epidemiology to analyze the relationship between two or more variables. They consist of rows and columns. Groups by level of exposure to the study factor are usually represented in the rows, while the categories related to the health problem under investigation are placed in the columns. Rows and columns intersect to form cells in which the frequency of each particular combination of variables is represented.

The most common table represents two variables (our beloved 2×2 table), one dependent and one independent, but this is not always true. There may be more than two variables and, sometimes, there may be no direction of dependence between variables before doing the analysis.

The simplest 2×2 tables allow us to analyze the relationship between two dichotomous variables. Depending on the content and the design of the study they belong to, their cells may have slightly different meanings, and different parameters can be calculated from the data in the table.

The first tables we’re going to talk about are those of cross-sectional studies. This type of study represents a sort of snapshot of our sample that allows us to study the relationship between the variables. They are, therefore, prevalence studies and, although data can be collected over a period of time, the result only represents the snapshot we have already mentioned. The dependent variable (disease status) is placed in the columns and the independent variable (exposure status) in the rows, so we can calculate a series of frequency, association and statistical significance measures.

The frequency measures are the prevalence of disease among the exposed (EXP) and unexposed (NEXP) and the prevalence of exposure among the diseased (DIS) and non-diseased (NDIS). These prevalences represent the proportions of sick, healthy, exposed and unexposed relative to each group total, estimated at a given moment.

The measures of association are the ratios between the prevalences just mentioned according to exposure and disease status, and the odds ratio, which tells us how much more likely the disease is to occur in exposed (EXP) than in non-exposed (NEXP) people. If these parameters have a value greater than one, the exposure factor is a risk factor for the disease. On the contrary, a value greater than zero and less than one indicates a protective factor. And if the value equals one, it will be neither fish nor fowl.

Finally, as in all the types of table that we’ll mention, statistical significance measures can be calculated: mainly chi-square with or without correction, Fisher’s exact test and the p value, one- or two-tailed.
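The measures of these last paragraphs can be sketched in Python for a cross-sectional 2×2 table; the counts are invented, and the chi-square is computed from the expected cell frequencies:

```python
# Cross-sectional 2x2 table: rows = exposure, columns = disease status
a, b = 40, 160    # exposed: diseased, non-diseased (invented counts)
c, d = 20, 180    # unexposed: diseased, non-diseased

prev_exp = a / (a + b)          # prevalence of disease in the exposed
prev_unexp = c / (c + d)        # prevalence in the unexposed
prev_ratio = prev_exp / prev_unexp
odds_ratio = (a * d) / (b * c)  # cross-product odds ratio

# Pearson's chi-square statistic from the expected cell counts
n = a + b + c + d
expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
            (c + d) * (a + c) / n, (c + d) * (b + d) / n]
chi2 = sum((obs - exp) ** 2 / exp
           for obs, exp in zip([a, b, c, d], expected))

print(prev_exp, prev_unexp, prev_ratio, odds_ratio, round(chi2, 2))
```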

Very much like the tables we’ve just seen are those of case-control studies. This study design tries to find out whether different levels of exposure can explain different levels of disease. Cases and controls are placed in the columns and exposure status (EXP and NEXP) in the rows.

The frequency measures we can calculate are the proportion of exposed cases (relative to the total number of cases) and the proportion of exposed controls (relative to the total number of controls). Obviously, we can also obtain the proportions of non-exposed by calculating the complements of the values just mentioned.

The key measure of association is the odds ratio, which we already know and on which we are not going to spend much time. We all know that, in the simplest case, we can calculate its value as the ratio of the cross products of the table and that it tells us how much more likely the disease is to occur in exposed than in non-exposed people. The other measure of association is the exposed attributable fraction (ExpAR), which indicates the number of patients who are sick as a direct effect of the exposure.

With this type of table we can also calculate a measure of impact: the population attributable fraction (PopAR), which tells us what would happen in the population if we eliminated the exposure factor. If the exposure factor is a risk factor, the impact will be positive. Conversely, if it is a protective factor, the impact of its elimination will be negative.

With this study design, the statistical significance measures will differ depending on whether we are managing paired (McNemar’s test) or unpaired data (chi-square, Fisher’s exact test and p value).

The third type of contingency table corresponds to cohort studies, although its structure differs slightly depending on whether you count the total cases over the entire study period (cumulative incidence) or take into account the time of onset of disease in the cases and the different follow-up times among groups (incidence rate or incidence density).

Tables from cumulative incidence (CI) studies are similar to those we have seen so far: disease status is represented in the columns and exposure status in the rows. In contrast, incidence density (ID) tables represent in the first column the number of patients and, in the second column, the follow-up in patient-years, so that those with longer follow-up carry greater weight when calculating the measures of frequency, association, etc.

The frequency measures are the EXP risk (Re) and the NEXP risk (Ro) for CI studies, and the EXP and NEXP incidence rates for ID studies.

We can calculate the ratios and differences of the above measures to obtain the association measures: relative risk (RR), absolute risk reduction (ARR) and relative risk reduction (RRR) for CI studies, and the incidence rate difference (IRD) for ID studies. In addition, we can also calculate the ExpAR, as we did in the case-control study, as well as a measure of impact: the PopAR.
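A short sketch of both layouts, with invented counts and patient-years, may make the difference clearer:

```python
# Cohort measures: cumulative incidence (CI) vs incidence density (ID)
cases_exp, n_exp = 30, 1000       # invented: exposed cohort
cases_unexp, n_unexp = 10, 1000   # invented: unexposed cohort

# CI layout: risks and their ratio/difference
re = cases_exp / n_exp
ro = cases_unexp / n_unexp
rr = (cases_exp * n_unexp) / (n_exp * cases_unexp)  # relative risk
rd = re - ro                                        # risk difference

# ID layout: the same cases, but over patient-years of follow-up
py_exp, py_unexp = 4800, 5000     # invented patient-years
rate_ratio = (cases_exp / py_exp) / (cases_unexp / py_unexp)
rate_diff = cases_exp / py_exp - cases_unexp / py_unexp

print(rr, round(rd, 2), round(rate_ratio, 3), round(rate_diff, 5))
```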

We can also calculate odds ratios if we want, but they are generally much less used in this type of design. In any case, we know that the RR and the odds ratio are very similar when the prevalence of the disease is low.

To end with this kind of table, we can calculate the statistical significance measures: chi-square, Fisher’s test and p value for CI studies, and other specific measures for ID studies.

As always, all these calculations can be done by hand, although I recommend you use a calculator, such as the one available on the CASPe site. It’s easier and faster, and we also get all these parameters with their confidence intervals, so we can estimate their precision as well.

And with this we come to the end. There are more types of tables, with multiple levels for managing more than two variables, stratified according to different factors and so on. But that’s another story…