Science without sense…double nonsense

Pills on evidence-based medicine


The guard’s dilemma


The world of medicine is a world of uncertainty. We can never be 100% sure of anything, however obvious a diagnosis may seem, but neither can we lash out left and right with ultramodern diagnostic techniques or treatments (which are never harmless) every time we face the decisions that continually haunt us in our daily practice.

That’s why we are always immersed in a world of probabilities, where certainties are almost as rare as so-called common sense which, as almost everyone knows, is the least common of the senses.

Imagine you are in the clinic and a patient comes in because he has been kicked in the backside, quite hard at that. Being the good doctors we are, we ask the classic questions: what’s wrong?, since when?, and what do you attribute it to? Then we proceed to a complete physical examination, discovering with horror that he has a hematoma on the right buttock.

Here, my friends, the diagnostic possibilities are numerous, so the first thing we do is a comprehensive differential diagnosis. To do this, we can take four different approaches. The first is the possibilistic approach: listing all possible diagnoses and trying to rule them out simultaneously by applying the relevant diagnostic tests. The second is the probabilistic approach: sorting the diagnoses by relative probability and then acting accordingly. It looks like a post-traumatic hematoma (known as the kick-in-the-backside syndrome), but someone might think that the kick was not that hard, so maybe the poor patient has a bleeding disorder, or a blood dyscrasia with secondary thrombocytopenia, or even an atypical inflammatory bowel disease with extraintestinal manifestations and gluteal vascular fragility. We could also use a prognostic approach and try to confirm or rule out the possible diagnoses with the worst prognosis, so the kick-in-the-backside syndrome would lose interest and we would set out to rule out a chronic leukemia. Finally, we could use a pragmatic approach, with particular interest in first confirming the diagnoses that have the most effective treatment (the kick would be, once again, number one).

It seems that the right thing to do is to use a judicious combination of the pragmatic, probabilistic and prognostic approaches. In our case we will check whether the intensity of the injury justifies the magnitude of the bruising and, if so, we will prescribe some hot towels and refrain from further diagnostic tests. This example may seem like nonsense, but I can assure you that I know people who draw up the complete list and order all the diagnostic tests at the slightest symptom, regardless of expense or risk. And, besides, someone I can think of would consider performing some exotic diagnostic test that I cannot even imagine, so the patient should be grateful if the diagnosis does not require a forced anal sphincterotomy. And that is so because, as we have already said, the waiting list to get some common sense is many times longer than the surgical waiting list.

Now imagine another patient with a less stupid and absurd symptom complex than in the previous example. For instance, let’s think about a child with symptoms of celiac disease. Before we perform any diagnostic test, our patient already has a certain probability of suffering the disease. This probability is conditioned by the prevalence of the disease in the population she comes from and is called the pre-test probability. It will lie somewhere between two thresholds: the diagnostic threshold and the therapeutic threshold.

The usual thing is that the pre-test probability of our patient does not allow us to rule out the disease with reasonable certainty (it would have to be very low, below the diagnostic threshold) nor to confirm it with sufficient certainty to start treatment (it would have to be above the therapeutic threshold).

We will then perform the indicated diagnostic test and obtain a new probability of disease depending on its result, the so-called post-test probability. If this probability is high enough to make the diagnosis and initiate treatment, we will have crossed our first threshold, the therapeutic one. There will be no need for additional tests, as we will have enough certainty to confirm the diagnosis and treat the patient, always within a range of uncertainty.

And what determines our therapeutic threshold? Well, several factors are involved. The greater the risk, cost or adverse effects of the treatment in question, the higher the threshold we will demand before treating. On the other hand, the more serious the consequences of missing the diagnosis, the lower the therapeutic threshold we will accept.

But it may also be that the post-test probability is so low that it allows us to rule out the disease with reasonable confidence. We will then have crossed our second threshold, the diagnostic one, also called the no-test threshold. Clearly, in this situation, neither further diagnostic tests nor, of course, treatment are indicated.

However, very often the step from pre-test to post-test probability still leaves us in no man’s land, without reaching either of the two thresholds, so we will have to perform additional tests until we reach one of the two limits.

And this is our everyday need: to know the post-test probability of our patients in order to decide whether we discard or confirm the diagnosis, whether we leave the patient alone or lash out at her with our treatments. And this is so because the simplistic approach that a patient is sick if the diagnostic test is positive and healthy if it is negative is totally wrong, even if it is the general belief among those who order the tests. We will have to look, then, for some parameter that tells us how useful a specific diagnostic test can be for the purpose we need: to know the probability that the patient suffers the disease.

And this reminds me of the enormous problem a brother-in-law asked me about the other day. The poor man is very concerned about a dilemma that has arisen. The thing is that he is going to start a small business and wants to hire a security guard to stand at the entrance and watch for those who take something without paying for it. And the problem is that there are two candidates and he does not know which of the two to choose. One of them stops nearly everyone, so no thief escapes. Of course, many honest people are offended when they are asked to open their bags before leaving, and next time they will shop elsewhere. The other guard is the opposite: he stops almost no one, but whoever he does stop is almost certainly carrying something stolen. He offends few honest people, but too many thieves get away. A difficult decision…

Why does my brother-in-law come to me with this story? Because he knows that I face similar dilemmas every day, each time I have to choose a diagnostic test to find out whether a patient is sick and whether I have to treat her. We have already said that a positive test does not guarantee the diagnosis, just as a client’s shifty look does not guarantee that the poor man has robbed us.

Let’s see it with an example. When we want to know the usefulness of a diagnostic test, we usually compare its results with those of a reference or gold standard, which is a test that, ideally, is always positive in sick people and negative in healthy ones. Now let’s suppose that I carry out a study in my hospital practice with a new diagnostic test to detect a certain disease and I get the results shown in the attached table (the sick are those with a positive reference test and the healthy those with a negative one).

Let’s start with the easy part. We have 1598 subjects, 520 of them sick and 1078 healthy. The test gives us 446 positive results, 428 true (TP) and 18 false (FP). It also gives us 1152 negatives, 1060 true (TN) and 92 false (FN). The first thing we can determine is the ability of the test to distinguish between healthy and sick, which leads me to introduce the first two concepts: sensitivity (Se) and specificity (Sp). Se is the probability that the test correctly classifies a sick person or, in other words, the probability that a sick person gets a positive result. It is calculated by dividing TP by the total number of sick. In our case it equals 0.82 (if you prefer to use percentages, just multiply by 100). Moreover, Sp is the probability that the test correctly classifies a healthy person or, put another way, the probability that a healthy person gets a negative result. It is calculated by dividing TN by the total number of healthy. In our example, it equals 0.98.

Someone may think that we have finished assessing the value of the new test, but we have only just begun. And this is because with Se and Sp we somehow measure the ability of the test to discriminate between healthy and sick, but what we really need to know is the probability that an individual with a positive result is actually sick and, although they may seem similar concepts, they are actually quite different.

The probability that a positive is sick is known as the positive predictive value (PPV) and is calculated by dividing the number of sick with a positive test by the total number of positives. In our case it is 0.96. This means that a positive has a 96% chance of being sick. Moreover, the probability that a negative is healthy is expressed by the negative predictive value (NPV), which is the quotient of healthy with a negative test by the total number of negatives. In our example it equals 0.92 (an individual with a negative result has a 92% chance of being healthy). This is already looking more like what we said at the beginning that we needed: the post-test probability that the patient is really sick.
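If you want to check the arithmetic, here is a minimal sketch in Python that reproduces these calculations from the counts of the table (428 TP, 18 FP, 1060 TN and 92 FN); the function name is just illustrative.

```python
# Minimal sketch: Se, Sp, PPV and NPV from the 2x2 table of the example.

def diagnostic_parameters(tp, fp, tn, fn):
    """Return sensitivity, specificity and predictive values from a 2x2 table."""
    sick = tp + fn           # subjects with a positive gold standard
    healthy = tn + fp        # subjects with a negative gold standard
    se = tp / sick           # probability of a positive result in the sick
    sp = tn / healthy        # probability of a negative result in the healthy
    ppv = tp / (tp + fp)     # probability that a positive is really sick
    npv = tn / (tn + fn)     # probability that a negative is really healthy
    return se, sp, ppv, npv

se, sp, ppv, npv = diagnostic_parameters(tp=428, fp=18, tn=1060, fn=92)
print(f"Se={se:.2f}, Sp={sp:.2f}, PPV={ppv:.2f}, NPV={npv:.2f}")
# Se=0.82, Sp=0.98, PPV=0.96, NPV=0.92
```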

And from here on is when the neurons begin to overheat. It turns out that Se and Sp are two intrinsic characteristics of the diagnostic test. Their results will be the same whenever we use the test in similar conditions, regardless of the subjects tested. But this is not so with the predictive values, which vary depending on the prevalence of the disease in the population in which we use the test. This means that the probability that a positive is sick depends on how common or rare the disease is in the population. Yes, you read that right: the same positive test expresses a different risk of being sick and, for the unbelievers, here is another example.

Suppose that this same study is repeated by one of my colleagues who works at a community health center, where the population is proportionally healthier than at my hospital (logical, they have not suffered the hospital yet). If you check the results in the table and take the trouble to calculate them, you will come up with a Se of 0.82 and a Sp of 0.98, the same as I came up with in my practice. However, if you calculate the predictive values, you will see that the PPV equals 0.9 and the NPV 0.95. And this is so because the prevalence of the disease (sick divided by total) is different in the two populations: 0.32 at my practice versus 0.19 at the health center. That is, when prevalence is higher a positive result is more valuable to confirm the diagnosis, but a negative is less reliable to rule it out. And conversely, if the disease is very rare a negative result will reasonably rule it out, but a positive will be less reliable when it comes to confirming it.
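A quick way to see this dependence on prevalence is to recompute the predictive values from Se, Sp and the prevalence using Bayes’ theorem. The sketch below uses the rounded Se and Sp of the example, so the figures may differ by a hundredth or two from those obtained from the exact counts of each table.

```python
# Sketch: predictive values as a function of prevalence, for fixed Se and Sp.

def predictive_values(se, sp, prevalence):
    p = prevalence
    ppv = (se * p) / (se * p + (1 - sp) * (1 - p))
    npv = (sp * (1 - p)) / (sp * (1 - p) + (1 - se) * p)
    return ppv, npv

for setting, prev in [("hospital", 0.32), ("health center", 0.19)]:
    ppv, npv = predictive_values(se=0.82, sp=0.98, prevalence=prev)
    print(f"{setting}: prevalence={prev:.2f}, PPV={ppv:.2f}, NPV={npv:.2f}")
# The same test gives a higher PPV (and a lower NPV) where the disease is more prevalent.
```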

We see that, as almost always happens in medicine, we are moving on the shaky ground of probability, since all (absolutely all) diagnostic tests are imperfect and make mistakes when classifying healthy and sick. So when is a diagnostic test worth using? If you think about it, any given subject has a probability of being sick even before performing the test (the prevalence of the disease in her population) and we are only interested in using a diagnostic test if it changes that probability enough to justify starting the appropriate treatment (otherwise we would have to do yet another test to reach the threshold probability that justifies treatment).

And here is where this issue starts to get a little unfriendly. The positive likelihood ratio (PLR) indicates how much more likely it is to get a positive result in a sick person than in a healthy one. The proportion of positives among the sick is represented by Se. The proportion of positives among the healthy are the FP, which are the healthy without a negative result or, what is the same, 1-Sp. Thus, PLR = Se / (1 – Sp). In our case (hospital) it equals 41 (the value is the same whether or not we use percentages for Se and Sp). This can be interpreted as: it is 41 times more likely to get a positive in a sick person than in a healthy one.

It is also possible to calculate the NLR (negative likelihood ratio), which expresses how much more likely it is to find a negative result in a sick person than in a healthy one. The negatives among the sick are those who do not test positive (1-Se) and the negatives among the healthy are the same as the TN (the test’s Sp). So, NLR = (1 – Se) / Sp. In our example, 0.18.
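As a quick check, here is the same arithmetic in Python, using the rounded Se and Sp from the example (the exact table counts would give a somewhat higher PLR):

```python
# Sketch: likelihood ratios from the rounded Se and Sp of the example.
se, sp = 0.82, 0.98

plr = se / (1 - sp)   # how much more likely a positive result is in the sick
nlr = (1 - se) / sp   # how much more likely a negative result is in the sick
print(f"PLR = {plr:.0f}, NLR = {nlr:.2f}")   # PLR = 41, NLR = 0.18
```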

A ratio of 1 indicates that the result of the test does not change the likelihood of being sick. If it is greater than 1 the probability increases and, if less than 1, it decreases. This is the parameter used to determine the diagnostic power of the test. Values > 10 (or < 0.1) indicate a very powerful test that strongly supports (or contradicts) the diagnosis; values of 5-10 (or 0.1-0.2) indicate low power of the test to support (or rule out) the diagnosis; 2-5 (or 0.2-0.5) indicate that the contribution of the test is questionable; and, finally, 1-2 (0.5-1) indicate that the test has no diagnostic value.

The likelihood ratio does not express a direct probability, but it helps us to calculate the probability of being sick before and after a positive test by means of Bayes’ rule, which says that the post-test odds equal the pre-test odds multiplied by the likelihood ratio. To transform the prevalence into pre-test odds we use the formula odds = p / (1-p). In our case, it would be 0.47. Now we can calculate the post-test odds by multiplying the pre-test odds by the likelihood ratio. In our case, the positive post-test odds are 19.27. And finally, we transform the post-test odds into post-test probability using the formula p = odds / (odds + 1). In our example it is 0.95, which means that if our test is positive the probability of being sick goes from 0.32 (the pre-test probability) to 0.95 (the post-test probability).
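The whole chain from pre-test probability to post-test probability fits in a few lines. This is only a sketch using the figures of the example (prevalence 0.32, PLR 41):

```python
# Sketch: from pre-test probability to post-test probability through the odds.

def posttest_probability(pretest_p, likelihood_ratio):
    pretest_odds = pretest_p / (1 - pretest_p)         # 0.47 in our case
    posttest_odds = pretest_odds * likelihood_ratio    # about 19.3 for a positive result
    return posttest_odds / (posttest_odds + 1)

print(round(posttest_probability(0.32, 41), 2))   # 0.95
```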

If there is still anyone reading at this point, I will say that we do not need all this gibberish to get the post-test probability. There are multiple websites with online calculators that provide all these parameters from the initial 2 by 2 table with minimal effort. In addition, the post-test probability can be easily calculated using a Fagan’s nomogram (see attached figure). This graph shows, in three vertical lines from left to right, the pre-test probability (represented inverted), the likelihood ratios and the resulting post-test probability.

To calculate the post-test probability after a positive result, we draw a line from the prevalence (pre-test probability) to the PLR and extend it to the post-test probability axis. Similarly, in order to calculate post-test probability after a negative result, we would extend the line between prevalence and the value of the NLR.

In this way, with this tool we can directly calculate the post-test probability by knowing the likelihood ratios and the prevalence. In addition, we can use it in populations with different prevalence, simply by modifying the origin of the line in the axis of pre-test probability.

So far we have defined the parameters that help us quantify the power of a diagnostic test, we have seen the limitations of sensitivity, specificity and predictive values, and how the most generally useful parameters are the likelihood ratios. But, you will ask, what makes a good test: a sensitive one, a specific one, or both?

Here we are going to return to the guard’s dilemma that troubles my poor brother-in-law, because we have left him abandoned and we have not yet answered which of the two guards we recommend he hire: the one who asks almost everyone to open their bags, thereby offending many honest people, or the one who almost never bothers honest people but, since he stops almost no one, lets many thieves get away.

And what do you think is the better choice? The simple answer is: it depends. Those of you who are still awake will have noticed that the first guard (the one who checks many people) is the sensitive one, while the second is the specific one. What suits us better, the sensitive or the specific guard? It depends, for example, on where our shop is located. If the shop is in a well-heeled neighborhood, the first guard will not be the best choice because, in fact, few people will be thieves and we would rather not offend our customers so they do not fly away. But if our shop is located in front of the Cave of Ali Baba, we will be more interested in detecting the maximum number of clients carrying stolen goods. It can also depend on what we sell in the store. If we run a flea market we can hire the specific guard even though someone may escape (at the end of the day, we will lose only a small amount of money). But if we sell diamonds we will want no thief to escape and we will hire the sensitive guard (we would rather bother someone honest than let anybody walk away with a diamond).

The same happens in medicine with the choice of diagnostic tests: we have to decide in each case whether we are more interested in being sensitive or specific, because diagnostic tests do not always have both a high sensitivity (Se) and a high specificity (Sp).

In general, a sensitive test is preferred when the drawbacks of a false positive (FP) are smaller than those of a false negative (FN). For example, suppose that we are going to vaccinate a group of patients and we know that the vaccine is deadly in those with a particular metabolic error. It is clear that our interest is that no patient goes undiagnosed (to avoid FN), but nothing much happens if we wrongly label a healthy person as having the metabolic error (FP): it is preferable not to vaccinate a healthy person thinking he has a metabolopathy (although he does not) than to kill a patient with our vaccine assuming he was healthy. Another less dramatic example: in the midst of an epidemic our interest will be to be very sensitive and isolate the largest possible number of patients. The problem here is for the unfortunate healthy person who tests positive (FP) and gets isolated with the rest of the sick. No doubt we would be doing him a disservice with the maneuver. Of course, we could subject all the positives from the first test to a second, very specific confirmatory test in order to spare the FP these bad consequences.

On the other hand, a specific test is preferred when it is better to have a FN than a FP, as when we want to be sure that someone is actually sick. Imagine that a positive test result implies surgical treatment: we will have to be quite sure about the diagnosis so that we do not operate on any healthy people.

Another example is a disease whose diagnosis can be very traumatic for the patient, or that is almost incurable or has no treatment. Here we will prefer specificity, so as not to cause unnecessary distress to a healthy person. Conversely, if the disease is serious but treatable, we will probably prefer a sensitive test.

So far we have talked about tests with a dichotomous result: positive or negative. But what happens when the result is quantitative? Let’s imagine that we measure fasting blood glucose. We must decide up to which level of glycemia we consider the result normal and above which we will consider it pathological. And this is a crucial decision, because Se and Sp will depend on the cutoff point we choose.

To help us choose we have the receiver operating characteristic, known worldwide as the ROC curve. We plot Se on the ordinate (y axis) and the complement of Sp (1-Sp) on the abscissa, and draw a curve in which each point corresponds to a possible cutoff point of the test. The diagonal of the graph would represent the “curve” of a test with no ability to discriminate healthy from sick.

As you can see in the figure, the curve usually has a segment of steep slope in which Se increases rapidly with hardly any change in Sp: if we move up this part we can increase Se with practically no increase in FP. But there comes a point where we reach the flat part. If we keep moving to the right, from that point on Se will hardly increase any more, but FP will. If we are interested in a sensitive test, we will stay in the first part of the curve. If we want specificity, we will have to go further to the right. And, finally, if we have no predilection for either (we are equally concerned about FP and FN), the best cutoff point will be the one closest to the upper left corner. For this, some people use the so-called Youden’s index, which optimizes the two parameters at once and is calculated by adding Se and Sp and subtracting 1. The higher the index, the fewer patients misclassified by the diagnostic test.
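As an illustration, here is a small sketch with made-up (Se, Sp) pairs for a handful of glycemia cutoffs (none of these numbers come from a real study): it picks the cutoff with the highest Youden’s index and approximates the area under the curve, discussed next, by the trapezoidal rule.

```python
# Hypothetical (cutoff in mg/dl, Se, Sp) triplets, purely for illustration.
cutoffs = [(90, 0.99, 0.55), (100, 0.95, 0.70), (110, 0.88, 0.84),
           (120, 0.75, 0.93), (130, 0.60, 0.98)]

# Youden's index for each cutoff: J = Se + Sp - 1; keep the cutoff that maximizes it.
best = max(cutoffs, key=lambda c: c[1] + c[2] - 1)
print("Best cutoff by Youden:", best[0], "mg/dl, J =", round(best[1] + best[2] - 1, 2))

# Rough AUC by the trapezoidal rule over the (1 - Sp, Se) points,
# closing the curve at (0, 0) and (1, 1).
points = sorted([(0.0, 0.0)] + [(1 - sp, se) for _, se, sp in cutoffs] + [(1.0, 1.0)])
auc = sum((x2 - x1) * (y1 + y2) / 2 for (x1, y1), (x2, y2) in zip(points, points[1:]))
print("Approximate AUC:", round(auc, 2))
```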

A parameter of interest is the area under the curve (AUC), which represents the probability that the diagnostic test correctly classifies a randomly taken healthy-sick pair (see attached figure). An ideal test with Se and Sp of 100% has an area under the curve of 1: it always hits. In clinical practice, a test whose ROC curve has an AUC > 0.9 is considered highly accurate, between 0.7 and 0.9 moderately accurate, and between 0.5 and 0.7 of low accuracy. On the diagonal, the AUC equals 0.5, which means the test does no better than deciding whether the patient is sick by flipping a coin. Values below 0.5 indicate that the test is even worse than chance, since it will systematically classify sick people as healthy and vice versa.

Curious things, these ROC curves, aren’t they? Their usefulness is not limited to assessing the goodness of diagnostic tests with quantitative results. ROC curves also serve to determine the goodness of fit of a logistic regression model to predict dichotomous outcomes, but that is another story…

King of Kings


There is no doubt that when doing research in biomedicine we can choose from a large number of possible designs, all with their advantages and disadvantages. But in such a diverse and populous court, among jugglers, wise men, gardeners and purple flautists, the true Crimson King of epidemiology reigns over them all: the randomized clinical trial.

The clinical trial is an interventional analytical study, with antegrade direction and concurrent temporality, and with sampling of a closed cohort with control of exposure. In a trial, a sample of a population is selected and divided randomly into two groups. One of the groups (intervention group) undergoes the intervention that we want to study, while the other (control group) serves as a reference to compare the results. After a given follow-up period, the results are analyzed and the differences between the two groups are compared. We can thus evaluate the benefits of treatments or interventions while controlling the biases of other types of studies: randomization favors that possible confounding factors, known or not, are distributed evenly between the two groups, so that if in the end we detect any difference, this has to be due to the intervention under study. This is what allows us to establish a causal relationship between exposure and effect.

From what has been said so far, it is easy to understand that the randomized clinical trial is the most appropriate design to assess the effectiveness of any intervention in medicine and the one that provides, as we have already mentioned, the highest quality evidence to demonstrate the causal relationship between the intervention and the observed results.

But to enjoy all these benefits it is necessary to be scrupulous in the design and methodology of the trials. There are checklists published by experts who understand these issues very well, such as the CONSORT list, which can help us assess the quality of a trial’s design. But among all these aspects, let us give some thought to those that are crucial for the validity of the clinical trial.

Everything begins with a knowledge gap that leads us to formulate a structured clinical question. The only objective of the trial should be to answer this question, and it is enough for it to answer a single question properly. Beware of clinical trials that try to answer many questions, since, in many cases, they end up answering none of them well. In addition, the approach must be based on what the inventors of methodological jargon call the equipoise principle, which simply means that, deep in our hearts, we do not really know which of the two interventions is more beneficial for the patient (from the ethical point of view, it would be anathema to make the comparison if we already knew with certainty which of the two interventions is better). It is curious in this sense how trials sponsored by the pharmaceutical industry are more likely to breach the equipoise principle, since they have a preference for comparing with placebo or with “non-intervention” in order to demonstrate the efficacy of their products more easily.

Then we must carefully choose the sample on which we will perform the trial. Ideally, all members of the population should have the same probability not only of being selected, but also of ending up in either of the two branches of the trial. Here we face a small dilemma. If we are very strict with the inclusion and exclusion criteria, the sample will be very homogeneous and the internal validity of the study will be strengthened, but it will be more difficult to extrapolate the results to the general population (this is the explanatory attitude of sample selection). On the other hand, if we are not so rigid, the results will be more applicable to the general population, but the internal validity of the study may be compromised (this is the pragmatic attitude).

Randomization is one of the key points of the clinical trial. It is the one that assures us that we can compare the two groups, since it tends to distribute the known variables equally and, more importantly, also the unknown variables between the two groups. But do not relax too much: this distribution is not guaranteed at all, it is only more likely to happen if we randomize correctly, so we should always check the homogeneity of the two groups, especially with small samples.

In addition, randomization allows us to perform masking appropriately, with which we perform an unbiased measurement of the response variable, avoiding information biases. These results of the intervention group can be compared with those of the control group in three ways. One of them is to compare with a placebo. The placebo should be a preparation of physical characteristics indistinguishable from the intervention drug but without its pharmacological effects. This serves to control the placebo effect (which depends on the patient’s personality, their feelings towards the intervention, their love for the research team, etc.), but also the side effects that are due to the intervention and not to the pharmacological effect (think, for example, of the percentage of local infections in a trial with medication administered intramuscularly).

The other option is to compare with the treatment accepted as the most effective so far. If there is a treatment that works, the logical (and more ethical) thing is to use it when investigating whether the new one brings benefits. It is also the usual comparator in equivalence or non-inferiority studies. Finally, the third possibility is to compare with non-intervention, although in reality this is a far-fetched way of saying that only the usual care that any patient would receive in their clinical situation is applied.

It is essential that all participants in the trial are submitted to the same follow-up guideline, which must be long enough to allow the expected response to occur. All losses that occur during follow-up should be detailed and analyzed, since they can compromise the validity and power of the study to detect significant differences. And what do we do with those that get lost or end up in a different branch to the one assigned? If there are many, it may be more reasonable to reject the study. Another possibility is to exclude them and act as if they had never existed, but we can bias the results of the trial. A third possibility is to include them in the analysis in the branch of the trial in which they have participated (there is always one that gets confused and takes what he should not), which is known as analysis by treatment or analysis by protocol. And the fourth and last option we have is to analyze them in the branch that was initially assigned to them, regardless of what they did during the study. This is called the intention-to-treat analysis, and it is the only one of the four possibilities that allows us to retain all the benefits that randomization had previously provided.

As a final phase, we would have to analyze and compare the data to draw the conclusions of the trial, using for this the measures of association and impact which, in the case of the clinical trial, are usually the response rate, the risk ratio (RR), the relative risk reduction (RRR), the absolute risk reduction (ARR) and the number needed to treat (NNT). Let’s see them with an example.

Let’s imagine that we carry out a clinical trial in which we test a new antibiotic (let’s call it A, so as not to rack our brains) for the treatment of a serious infection of the location we are interested in studying. We randomize the selected patients and give them the new drug or the usual treatment (our control group), according to what chance dictates. In the end, we measure how many of our patients fail treatment (present the event we want to avoid).

Thirty six out of the 100 patients receiving drug A present the event to be avoided. Therefore, we can conclude that the risk or incidence of the event in those exposed (Ie) is 0.36. On the other hand, 60 of the 100 controls (we call them the group of not exposed) have presented the event, so we quickly calculate that the risk or incidence in those not exposed (Io) is 0.6.

At first glance we can already see that the risk is different in each group but, as in science we have to measure everything, we can divide the risks of exposed and not exposed, thus obtaining the so-called risk ratio (RR = Ie / Io). An RR = 1 means that the risk is the same in the two groups. If RR > 1, the event is more likely in the exposed group (the exposure under study is a risk factor for the event) and if RR is between 0 and 1, the risk is lower in the exposed. In our case, RR = 0.36 / 0.6 = 0.6. It is easier to interpret an RR > 1. For example, an RR of 2 means that the probability of the event is twice as high in the exposed group. Following the same reasoning, an RR of 0.3 would tell us that the event is about one third as frequent in the exposed as in the controls. You can see in the attached table how these measures are calculated.

But what we are really interested in is knowing how much the risk of the event decreases with our intervention, in order to estimate how much effort is needed to prevent each one. For this we can calculate the RRR and the ARR. The RRR is the risk difference between the two groups relative to the control (RRR = [Io – Ie] / Io). In our case it is 0.4, which means that the intervention tested reduces the risk by 40% compared with the usual treatment.

The ARR is simpler: it is the difference between the risks of the controls and the exposed (ARR = Io – Ie). In our case it is 0.24, which means that for every 100 patients treated with the new drug there will be 24 fewer events than if we had used the control treatment. But there is still more: we can know how many patients we have to treat with the new drug to avoid one event, either by a simple rule of three (24 is to 100 as 1 is to x) or, easier to remember, by calculating the inverse of the ARR. Thus, NNT = 1 / ARR = 4.2. In our case we would have to treat about four patients to avoid one adverse event. The context will always tell us the clinical importance of this figure.

As you can see, the RRR, although technically correct, tends to magnify the effect and does not clearly quantify the effort required to obtain the results. In addition, it may be similar in situations with totally different clinical implications. Let’s see it with another example that I also show in the table. Suppose another trial with a drug B in which we get three events among the 100 treated and five among the 100 controls. If you do the calculations, the RR is 0.6 and the RRR is 0.4, as in the previous example, but if you calculate the ARR you will see that it is very different (ARR = 0.02), with an NNT of 50. It is clear that the effort needed to avoid one event is much greater (4 versus 50) despite the same RR and RRR.
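The calculations for both drugs fit in a short script. This is a sketch that simply reproduces the arithmetic above (36 and 60 events per 100 for drug A and its controls, 3 and 5 per 100 for drug B):

```python
# Sketch: effect measures of a trial from the event counts in each arm.

def trial_measures(events_treated, n_treated, events_control, n_control):
    ie = events_treated / n_treated    # incidence in the treated (exposed)
    io = events_control / n_control    # incidence in the controls
    rr = ie / io                       # risk ratio
    rrr = (io - ie) / io               # relative risk reduction
    arr = io - ie                      # absolute risk reduction
    nnt = 1 / arr                      # number needed to treat
    return rr, rrr, arr, nnt

for drug, ev_treated, ev_control in [("A", 36, 60), ("B", 3, 5)]:
    rr, rrr, arr, nnt = trial_measures(ev_treated, 100, ev_control, 100)
    print(f"Drug {drug}: RR={rr:.1f}, RRR={rrr:.1f}, ARR={arr:.2f}, NNT={nnt:.0f}")
# Drug A: RR=0.6, RRR=0.4, ARR=0.24, NNT=4
# Drug B: RR=0.6, RRR=0.4, ARR=0.02, NNT=50
```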

So, at this point, let me give you a piece of advice. Since the data needed to calculate the RRR are the same as those needed for the simpler ARR (and the NNT), if a scientific paper offers you only the RRR and hides the ARR, distrust it, and do as you would with the brother-in-law who offers you wine and cured cheese: ask him why he does not put out a plate of Iberian ham instead. Well, what I really mean is that you should ask yourselves why they do not give you the ARR, and calculate it yourselves using the information in the article.

So far all that we have said refers to the classical design of parallel clinical trials, but the king of designs has many faces and, very often, we can find papers in which it is shown a little differently, which may imply that the analysis of the results has special peculiarities.

Let’s start with one of the most frequent variations. If we think about it for a moment, the ideal design would be one that allowed us to test in the same individual the effect of the study intervention and of the control intervention (placebo or standard treatment), since the parallel trial is an approximation that assumes that the two groups respond equally to the two interventions, which always implies a risk of bias that we try to minimize with randomization. If we had a time machine we could try the intervention on everyone, write down what happens, turn back the clock and repeat the experiment with the control intervention, so we could compare the two effects. The problem, as the more alert among you will have already imagined, is that the time machine has not been invented yet.

But what has been invented is the cross-over clinical trial, in which each subject acts as their own control. As you can see in the attached figure, in this type of trial each subject is randomized to a group, subjected to the first intervention, allowed a wash-out period and, finally, subjected to the other intervention. Although this solution is not as elegant as the time machine, the defenders of cross-over trials point out that intra-individual variability is smaller than inter-individual variability, so the estimate can be more precise than that of the parallel trial and, in general, smaller sample sizes are needed. Of course, before using this design you have to make a series of considerations. Logically, the effect of the first intervention should not produce irreversible or very prolonged changes, because it would contaminate the effect of the second. In addition, the wash-out period must be long enough to avoid any residual effect of the first intervention.

It is also necessary to consider whether the order of the interventions can affect the final result (sequence effect), with which only the results of the first intervention would be valid. Another problem is that, having a longer duration, the characteristics of the patient can change throughout the study and be different in the two periods (period effect). And finally, beware of the losses during the study, which are more frequent in longer studies and have a greater impact on the final results than in parallel trials.

Imagine now that we want to test two interventions (A and B) in the same population. Can we do it with the same trial and save costs of all kinds? Yes, we can, we just have to design a factorial clinical trial. In this type of trial, each participant undergoes two consecutive randomizations: first it is assigned to intervention A or to placebo (P) and, second, to intervention B or placebo, with which we will have four study groups: AB, AP, BP and PP. As is logical, the two interventions must act by independent mechanisms to be able to assess the results of the two effects independently.

Usually, one intervention related to a more plausible and mature hypothesis and another with a less well-supported hypothesis are studied, making sure that the evaluation of the second does not influence the inclusion and exclusion criteria of the first. In addition, it is important that neither of the two interventions has many bothersome effects or is poorly tolerated, because poor compliance with one treatment usually leads to poor compliance with the other. In cases where the two interventions are not independent, the effects could be studied separately (AP versus PP and BP versus PP), but the advantages of the design are lost and the necessary sample size increases.

At other times it may happen that we are in a hurry to finish the study as soon as possible. Imagine a very bad disease that kills lots of people while we are testing a new treatment. We want to have it available as soon as possible (if it works, of course), so after a certain number of participants we will stop and analyze the results and, if we can already demonstrate the usefulness of the treatment, we will consider the study finished. This is the design that characterizes the sequential clinical trial. Remember that in the parallel trial the correct thing is to calculate the sample size beforehand. In this design, with a more Bayesian mentality, a statistic is established whose value determines an explicit termination rule, so that the sample size depends on the previous observations. When the statistic reaches the predetermined value we feel confident enough to reject the null hypothesis and we finish the study. The problem is that each stop and analysis increases the probability of rejecting the null hypothesis when it is actually true (type 1 error), so it is not advisable to do many interim analyses. In addition, the final analysis of the results is complex because the usual methods do not work and others that take the interim analyses into account must be used. This type of trial is very useful with very fast-acting interventions, so it is common to see them in titration studies of opioid doses, hypnotics and similar poisons.

There are other occasions when individual randomization does not make sense. Imagine we have taught the doctors of one center a new technique to inform their patients better and we want to compare it with the old one. We cannot tell the same doctor to inform some patients in one way and others in another, since there would be many opportunities for the two interventions to contaminate each other. It would be more logical to teach the doctors of one group of centers and not those of another group and then compare the results. Here what we would randomize is the centers, to train their doctors or not. This is the design of the trial with assignment by groups. The problem with this design is that we do not have many guarantees that the participants of the different groups behave independently, so the sample size needed can increase a lot if there is great variability between the groups and little within each group. In addition, the analysis of the results has to be done on aggregated data because, if it is done individually, the confidence intervals are falsely narrowed and we can find false statistical significance. The usual thing is to calculate a weighted synthetic statistic for each group and make the final comparisons with it.

The last of the series that we are going to discuss is the community trial, in which the intervention is applied to population groups. Since they are carried out under real conditions on populations, they have great external validity and often allow cost-effective measures to be derived from their results. The problem is that it is often difficult to establish control groups, it can be harder to determine the necessary sample size and it is more complex to make causal inferences from their results. It is the typical design for evaluating public health measures such as water fluoridation, vaccinations, etc.

I’m done now. The truth is that this post has been a bit long (and I hope not too hard), but the King deserves it. In any case, if you think that everything is said about clinical trials, you have no idea of all that remains to be said about types of sampling, randomization, etc., etc., etc. But that is another story…

From the hen to the egg


Surely someone overflowing with genius has asked you on some occasion, with a smug look, what came first, the hen or the egg? Well, the next time you meet someone like that, you can answer with another question: do the hen and the egg actually have anything to do with each other? Because we must first know not only whether, in order to have hens, we need to have eggs beforehand, but also how likely we are to end up having hens, with or without eggs (some twisted mind will say that the question could be posed the other way around, but I am among those who think that the first thing we have to have, no offense intended, are eggs).

This approach leads us to the design of a case-control study, an observational and analytical study in which sampling is done on the basis of presenting a certain disease or effect (the cases), and that group is compared with another group that does not present it (the controls), in order to determine whether there is a difference in the frequency of exposure to a certain risk factor between the two groups. These studies have retrograde directionality and mixed temporality, so most of them are retrospective although, as was the case with cohort studies, they can also be prospective (perhaps the most useful key to distinguish between the two is the sampling of each one: based on the exposure in cohort studies and based on the effect in case-control studies).

In the attached figure you can see the typical design of a case-control study. These studies start from a specific population from which a sample of cases, usually including all diagnosed and available cases, is compared with a control group consisting of a balanced sample of healthy subjects from the same population. However, it is increasingly common to find variations of the basic design that combine characteristics of cohort and case-control studies, comparing the cases that appear in a stable cohort over time with controls drawn from a partial sample of that same cohort.

The best known of these mixed designs is the nested case-control study. Here we start from a well-known cohort in which we identify the cases as they occur. Each time a case appears, it is paired with one or more controls also taken from the initial cohort. If we think about it briefly, it is possible for a subject initially selected as a control to become a case over time (by developing the disease under study). Although it may seem that this could bias the results, it should not, since what is measured is the exposure at the time of the analysis. This design can be carried out with smaller cohorts, so it can be simpler and cheaper. In addition, it is especially useful in very dynamic cohorts with many entries and exits over time, especially if the incidence of the disease under study is low.

Another variant of the basic design is the case-cohort study. Here we start with a very large cohort from which we select a smaller sub-cohort. The cases are the patients that appear in either of the two cohorts, while the controls are the subjects of the smaller (and more manageable) sub-cohort. These studies have a somewhat more complicated method of analysis than the basic designs, since they have to compensate for the fact that the cases are over-represented because they come from both cohorts. The great advantage of this design is that it allows several diseases to be studied at the same time, comparing the different cohorts of patients with the sub-cohort chosen as control.

Finally, the last variation we are going to discuss is the polysemic case-crossover study, also known as crossed case-control or self-controlled case design. In this paired design, each individual serves as their own control, comparing the exposure during the period of time closest to the onset of the disease (case period) with the exposure during a previous period of time (control period). This approach is useful when the exposure is short, has a predictable time of action and produces a disease of short duration. It is widely used, for example, to study the adverse effects of vaccines.

As in cohort studies, case-control studies allow the calculation of a whole series of association and impact measures. Of course, here we have a fundamental difference with cohort studies. In cohort studies we started from a cohort without patients in which the cases appeared during follow-up, which allowed us to calculate the risk of becoming ill over time (the incidence). Thus, the quotient between the incidences in exposed and unexposed gave us the risk ratio, the main measure of association.

However, as can be deduced from the design of case-control studies, here we cannot make a direct estimate of the incidence or prevalence of the disease, since the proportion of patients is determined by the selection criteria of the researcher and not by the incidence in the population (a fixed number of cases and controls is selected at the start, but we cannot calculate the risk of being a case in the population). Thus, since it is impossible to calculate the risk ratio, we resort to the calculation of the odds ratio (OR), as you can see in the second figure.

The OR has a similar interpretation to that of the risk ratio and can take values from zero to infinity. An OR = 1 means that there is no association between exposure and effect. An OR < 1 means that the exposure is a protective factor against the effect. Finally, an OR > 1 indicates that the exposure is a risk factor, and the higher the OR, the stronger the association.
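For those who want to see the arithmetic, here is a minimal sketch; the counts of exposed and unexposed among cases and controls are invented for illustration only.

```python
# Sketch: odds ratio from the 2x2 table of a case-control study (invented counts).
exposed_cases, unexposed_cases = 40, 60
exposed_controls, unexposed_controls = 20, 80

odds_cases = exposed_cases / unexposed_cases            # odds of exposure among cases
odds_controls = exposed_controls / unexposed_controls   # odds of exposure among controls
odds_ratio = odds_cases / odds_controls                 # equivalently (a*d) / (b*c)
print(round(odds_ratio, 2))   # 2.67: the exposure behaves as a risk factor
```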

Anyway and only for those who like getting into trouble, I will tell you that it is possible to calculate the incidence rates from the results of a case-control study. If the incidence of the disease under study is low (below 10%), OR and risk ratio can be comparable, so we can estimate the incidence in an approximate way. If the incidence of the disease is greater, the OR tends to overestimate the risk ratio, so we cannot consider them to be equivalent. In any case, in these cases, if we previously know the incidence of the disease in the population (obtained from other studies), we can calculate the incidence using the following formulas:

I0 = It / (OR x Pe + P0)

Ie = I0 x OR,

where It is the total incidence, Ie the incidence in exposed, I0 the incidence in not exposed, Pe the proportion of exposed, and P0 the proportion of not exposed.
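Applying these formulas is straightforward once It and Pe are known from external sources. The sketch below uses invented figures just to show the mechanics:

```python
# Sketch: estimating incidences from the OR when the total incidence (It) and
# the proportion of exposed (Pe) are known from other studies. Invented numbers.
it = 0.05          # total incidence of the disease in the population
pe = 0.30          # proportion of exposed in the population
p0 = 1 - pe        # proportion of unexposed
odds_ratio = 2.67  # OR obtained from the case-control study

i0 = it / (odds_ratio * pe + p0)   # incidence in the unexposed
ie = i0 * odds_ratio               # incidence in the exposed
print(round(i0, 3), round(ie, 3))  # approximately 0.033 and 0.089
```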

Although the OR allows us to estimate the strength of the association between exposure and effect, it does not tell us about the potential effect that eliminating the exposure would have on the health of the population. For that, we have to resort to the measures of attributable risk (as we did with cohort studies), which can be absolute or relative.

There are two absolute measures of attributable risk. The first is the attributable risk in exposed (ARE), which is the difference between the incidence in exposed and not exposed and represents the amount of incidence that can be attributed to the risk factor in the exposed. The second is the population attributable risk (PAR), which represents the amount of incidence that can be attributed to the risk factor in the general population.

On the other hand, there are also two relative measures of attributable risk (also known as proportions or attributable or etiological fractions). First, the attributable fraction in exposed (AFE), which represents the difference of risk relative to the incidence in the group of exposed to the factor. Second, the population attributable fraction (PAF), which represents the difference in risk relative to the incidence in the general population.

In the attached table I show you the formulas for the calculation of these parameters, which is somewhat more complex than in the case of cohort studies.

The problem with these impact measures is that they can sometimes be difficult for the clinician to interpret. For this reason, and inspired by the calculation of the number needed to treat (NNT) of clinical trials, a series of measures called impact numbers have been devised, which give us a more direct idea of the effect of the exposure factor on the disease under study. These impact numbers are the number of impact in exposed (NIE), the number of impact in cases (NIC) and the number of impact in exposed cases (NIEC).

Let’s start with the simplest one. The NIE would be the equivalent of the NNT and would be calculated as the inverse of the absolute risk reduction or of the risk difference between exposed and not exposed. The NNT is the number of people who should be treated to prevent a case compared to the control group. The NIE represents the average number of people who have to be exposed to the risk factor so that a new case of illness occurs compared to the people who are not exposed. For example, a NIE of 10 means that out of every 10 exposed there will be a case of disease attributable to the risk factor studied.

The NIC is the inverse of the PAF, so it defines the average number of sick people among which a case is due to the risk factor. An NIC of 10 means that for every 10 patients in the population, one is attributable to the risk factor under study.

Finally, the NIEC is the inverse of the AFE. It is the average number of patients among which a case is attributable to the risk factor.

In summary, these three parameters measure the impact of exposure among all exposed (NIE), among all patients (NIC) and among all patients who have been exposed (NIEC). It will be useful for us to try to calculate them if the authors of the study do not do so, since they will give us an idea of the real impact of the exposure on the effect. In the second table I show you the formulas that you can use to obtain them.

As a culmination of the previous three, we could estimate the effect of the exposure on the entire population by calculating the number of impact in the population (NIP), which is simply the inverse of the PAR. Thus, a NIP of 3000 means that for every 3,000 subjects in the population there will be one case of illness due to the exposure.
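Since all four impact numbers are just inverses of the attributable risk measures, a small sketch makes the relationships explicit. The incidences below are invented; in a real case-control study they would have to be estimated from external data, as explained above.

```python
# Sketch: attributable risk measures and impact numbers from (invented) incidences.
ie, i0, pe = 0.08, 0.02, 0.25      # incidence in exposed, in unexposed, proportion exposed
it = ie * pe + i0 * (1 - pe)       # total incidence in the population (0.035 here)

are = ie - i0                      # attributable risk in exposed
par = it - i0                      # population attributable risk
afe = (ie - i0) / ie               # attributable fraction in exposed
paf = (it - i0) / it               # population attributable fraction

nie = 1 / are                      # number of impact in exposed
nic = 1 / paf                      # number of impact in cases
niec = 1 / afe                     # number of impact in exposed cases
nip = 1 / par                      # number of impact in the population
print(round(nie), round(nic, 1), round(niec, 1), round(nip))   # 17, 2.3, 1.3, 67
```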

In addition to assessing the measures of association and impact, when appraising a case-control study we will have to pay special attention to the presence of biases, since they are the observational studies that have the greatest risk of presenting them.

Case-control studies are relatively simple to carry out, generally cost less than other observational studies (including cohort studies), allow us to study several exposure factors at the same time and to see how they interact, and are ideal for diseases of very low frequency. The problem with this type of design is that you have to be extremely careful when selecting cases and controls, as it is very easy to commit any of a list of biases that, to this day, has no known end.

In general, the selection criteria should be the same for cases and controls, but as to be a case one has to be diagnosed and be available for the study, it’s very likely that cases are not fully representative of the population. For example, if the diagnostic criteria are not sensitive and specific enough we’ll get many false positives and negatives, with the consequent dilution of the effect of the exposure to the factor.

Another possible problem depends on whether we select incident (newly diagnosed) or prevalent cases. Prevalence-based studies favor the selection of survivors (as far as we know, no dead person has ever agreed to participate in a study) and, if survival is related to the exposure, the risk identified will be smaller than with incident cases. This effect is even more evident when the exposure factor is of good prognosis, a situation in which prevalence studies produce a greater overestimation of the association. As an example to understand these issues better, let’s suppose that the risk of suffering a heart attack is higher the more one smokes. If we include only prevalent cases we will exclude those who died of the more severe heart attacks, who would probably be the heaviest smokers, so the effect of tobacco could be underestimated.

But if selecting the cases seems complicated, it is nothing compared with a good selection of controls. Ideally, the controls should have had the same probability of exposure as the cases or, put another way, should be representative of the population from which the cases were drawn. In addition, this must be combined with the exclusion of those who have any illness related, positively or negatively, to the exposure factor. For example, if we want to waste our time studying the association between thrombophlebitis in air passengers and prior aspirin intake, we must exclude from the control group those who have any other disease treated with aspirin, even if they had not taken it before the journey.

We also have to be careful with certain habits of control selection. For instance, patients who come to the hospital for reasons other than the one under study are close at hand and tend to be very cooperative and, being sick, they surely recall past exposure to risk factors better. But the problem is that they are ill, so their pattern of exposure to risk factors may differ from that of the general population.

Another resource is to include neighbors, friends, relatives, etc. These are usually very comparable and cooperative, but we run the risk that their exposure habits are paired with those of the cases, which can distort the study results. All these problems are avoided by taking the controls from the general population, but this is more costly in effort and money, they are usually less cooperative and, above all, much more forgetful (healthy people recall less about past exposures to risk factors), so the quality of the information we obtain from cases and controls can be very different.

Just one more comment to end this enjoyable topic. Case-control studies share a characteristic with the rest of the observational studies: they detect the association between exposure and effect, but they do not allow us to establish causal relationships with certainty; for that we need other types of studies, such as randomized clinical trials. But that is another story…

One about Romans


What fellows, those Romans! They came, they saw, they conquered. With those legions, each with ten cohorts, each cohort with almost five hundred Romans in their skirts and strappy sandals. The cohorts were groups of soldiers who were within reach of the harangue of the same commander. They always went forward, never retreating. That is how you can conquer Gaul (though not entirely, as is well known).

In epidemiology, a cohort is also a group of people who share something but, instead of the boss’s harangue, it is the exposure to a factor that is studied over time (neither the skirt nor the sandals are essential). Thus, a cohort study is an observational, analytical design, of anterograde directionality and concurrent or mixed temporality, that compares the frequency with which a certain effect occurs (usually a disease) in two different groups (cohorts), one of them exposed to a factor and the other not exposed to it (see attached figure).

Therefore, sampling is related to exposure to the factor. Both cohorts are studied over time, which is why most of the cohort studies are prospective or of concurrent temporality (they go forward, like the Roman cohorts). However, it is possible to do retrospective cohort studies once both the exposure and the effect have occurred. In these cases, the researcher identifies the exposure in the past, reconstructs the experience of the cohort over time and attends in the present to the appearance of the effect, which is why they are studies of mixed temporality.

We can also classify the cohort studies according to whether they use an internal or external comparison group. Sometimes we can use two internal cohorts belonging to the same general population, classifying the subjects in one or another cohort according to the level of exposure to the factor. However, other times the exposed cohort will interest us because of its high level of exposure, so we will prefer to select an external cohort of subjects not exposed to make the comparison between both.

Another important aspect when classifying the cohort studies is the time of inclusion of the subjects in the study. When we only select the subjects that meet the inclusion criteria at the beginning of the study, we speak of a fixed cohort, whereas we will speak of an open or dynamic cohort when subjects continue to enter the study throughout the follow-up. This aspect will be important, as we will see later, when calculating the association measures between exposure and effect.

Finally, and as a curiosity, we can also do a study with a single cohort if we want to study the incidence or evolution of a certain disease. Although we can always compare the results with other known data from the general population, this type of design lacks a comparison group in the strict sense, so it is included within the longitudinal descriptive studies.

When followed over time, cohort studies allow us to calculate the incidence of the effect in the exposed and the unexposed and, from these, a series of association measures and specific impact measures.

In studies with closed cohorts, in which the number of participants does not change, the measure of association is the relative risk (RR), which is the ratio between the incidence in the exposed (Ie) and in the unexposed (I0): RR = Ie / I0.

As we know, the RR can take values from 0 to infinity. A RR = 1 means that there is no association between exposure and effect. A RR < 1 means that the exposure protects against the effect. Finally, a RR > 1 indicates that the exposure is a risk factor, and the greater the RR, the stronger the association.
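As a quick illustration of this calculation, here is a minimal sketch in Python with invented counts (they are not data from any real study), just to make the arithmetic explicit:

```python
# Hypothetical closed-cohort data (counts invented for illustration)
cases_exposed, n_exposed = 30, 1000
cases_unexposed, n_unexposed = 10, 1000

ie = cases_exposed / n_exposed      # incidence in the exposed (Ie)
i0 = cases_unexposed / n_unexposed  # incidence in the unexposed (I0)
rr = ie / i0                        # relative risk

print(f"RR = {rr:.1f}")  # RR = 3.0, so the exposure behaves as a risk factor
```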

The case of studies with open cohorts, in which participants can enter and leave during the follow-up, is a bit more complex, since instead of incidences we calculate incidence densities, a term that refers to the number of cases of the effect or disease per amount of person-time of follow-up (for example, number of cases per 100 person-years). In these cases, instead of the RR we calculate the incidence density ratio, which is the quotient of the incidence density in the exposed divided by that in the unexposed.

These measures allow us to estimate the strength of the association between the exposure to the factor and the effect, but they do not inform us about the potential impact that exposure has on the health of the population (the effect that eliminating this factor would have on the health of the population). For this, we will have to resort to the measures of attributable risk, which can be absolute or relative.

There are two absolute measures of attributable risk. The first is the attributable risk in the exposed (ARE), which is the difference between the incidence in the exposed and in the unexposed and represents the amount of incidence that can be attributed to the risk factor in the exposed. The second is the population attributable risk (PAR), which represents the amount of incidence that can be attributed to the risk factor in the general population.

On the other hand, there are also two relative measures of attributable risk (also known as attributable proportions or etiological fractions). The first is the attributable fraction in the exposed (AFE), which represents the risk difference relative to the incidence in the group exposed to the factor. The second is the population attributable fraction (PAF), which represents the risk difference relative to the incidence in the general population.

In the attached table you can see the formulas that are used for the calculation of these impact measures.
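Since the table itself is not reproduced here, the usual definitions can be sketched as follows (Ie, I0 and It are the incidences in the exposed, the unexposed and the whole population; the figures are invented, only to show the formulas at work):

```python
# Invented incidences, only to illustrate the usual formulas
ie = 0.030  # incidence in the exposed (Ie)
i0 = 0.010  # incidence in the unexposed (I0)
it = 0.015  # incidence in the whole population (It)

are = ie - i0         # attributable risk in exposed (ARE)
par = it - i0         # population attributable risk (PAR)
afe = (ie - i0) / ie  # attributable fraction in exposed (AFE)
paf = (it - i0) / it  # population attributable fraction (PAF)

print(round(are, 3), round(par, 3), round(afe, 2), round(paf, 2))  # 0.02 0.005 0.67 0.33
```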

The problem with these impact measures is that they can sometimes be difficult for the clinician to interpret. For this reason, and inspired by the calculation of the number needed to treat (NNT) of clinical trials, a series of measures called impact numbers have been devised, which give us a more direct idea of the effect of the exposure factor on the disease under study. These impact numbers are the number of impact in the exposed (NIE), the number of impact in cases (NIC) and the number of impact in exposed cases (NIEC).

Let’s start with the simplest one. The NIE would be the equivalent of the NNT and would be calculated as the inverse of the absolute risk reduction or of the risk difference between exposed and not exposed. The NNT is the number of people who should be treated to prevent a case compared to the control group. The NIE represents the average number of people who have to be exposed to the risk factor so that a new case of illness occurs compared to the people who are not exposed. For example, a NIE of 10 means that out of every 10 exposed there will be a case of disease attributable to the risk factor studied.

The NIC is the inverse of the PAF, so it defines the average number of sick people among which a case is due to the risk factor. An NIC of 10 means that for every 10 patients in the population, one is attributable to the risk factor under study.

Finally, the NIEC is the inverse of the AFE. It is the average number of patients among which a case is attributable to the risk factor.

In summary, these three parameters measure the impact of exposure among all exposed (NIE), among all patients (NIC) and among all patients who have been exposed (NIEC). It will be useful for us to try to calculate them if the authors of the study do not do so, since they will give us an idea of the real impact of the exposure on the effect. In the second table I show you the formulas that you can use to obtain them.

To round off the previous three, we could estimate the effect of the exposure on the entire population by calculating the number of impact in the population (NIP), for which we only have to take the inverse of the PAR. Thus, a NIP of 3000 means that for every 3,000 subjects of the population there will be a case of illness due to the exposure.
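Putting the formulas of the two tables together, a minimal sketch of the four impact numbers, using the same invented incidences as in the previous sketch, could be:

```python
# Same invented incidences as in the previous sketch
ie, i0, it = 0.030, 0.010, 0.015

afe = (ie - i0) / ie  # attributable fraction in exposed
paf = (it - i0) / it  # population attributable fraction
par = it - i0         # population attributable risk

nie = 1 / (ie - i0)  # number of impact in exposed: ~50 exposed per attributable case
nic = 1 / paf        # number of impact in cases: ~3 cases per case due to the factor
niec = 1 / afe       # number of impact in exposed cases: ~1.5 exposed cases per attributable case
nip = 1 / par        # number of impact in the population: ~200 people per attributable case

print(round(nie), round(nic, 1), round(niec, 1), round(nip))  # 50 3.0 1.5 200
```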

Another aspect that we must take into account when dealing with cohort studies is their risk of bias. In general, observational studies have a higher risk of bias than experimental studies, as well as being susceptible to the influence of confounding factors and effect modifying variables.

The selection bias must always be considered, since it can compromise the internal and external validity of the study results. The two cohorts should be comparable in all aspects, in addition to being representative of the population from which they come.

Another very typical bias of cohort studies is the classification bias, which occurs when participants are misclassified in terms of their exposure or the detection of the effect (basically, it is just another information bias). The classification bias is non-differential when the error occurs randomly, independently of the study variables. This type of classification bias favors the null hypothesis, that is, it makes it harder for us to detect the association between exposure and effect, if it exists. If, despite the bias, we detect the association, then nothing bad will happen, but if we do not detect it, we will not know whether it does not exist or whether we fail to see it because of the misclassification of the participants. On the other hand, the classification bias is differential when it occurs differently in the two cohorts and is related to some of the study variables. In this case there is no forgiveness or possibility of amendment: the direction of this bias is unpredictable and mortally compromises the validity of the results.

Finally, we should always be alert to the possibility of confounding bias (due to confounding variables) or interaction bias (due to effect-modifying variables). The ideal is to prevent them in the design phase, but it is not superfluous to control for confounding in the analysis phase, mainly through stratified analyses and multivariate studies.

And with this we come to the end of this post. We see, then, that cohort studies are very useful to calculate the association and the impact between effect and exposure but, careful, they do not serve to establish causal relationships. For that, other types of studies are necessary.

The problem with cohort studies is that they are difficult (and expensive) to perform adequately, often require large samples and sometimes long follow-up periods (with the consequent risk of losses). In addition, they are not very useful for rare diseases. And we must not forget that they do not allow us to establish causal relationships with sufficient security, although for this purpose they fare better than their cousins, the case-control studies. But that is another story…

Which family do you belong to?

As we already know from previous posts, the systematic approach of evidence-based medicine begins with a knowledge gap that drives us to ask a structured clinical question. Once we have formulated the question, we use its components to do a bibliographic search and obtain the best available evidence to resolve our doubt.

And here comes, perhaps, the most feared task of evidence-based medicine: the critical appraisal of the evidence found. Actually, it is not such a big deal since, with a little practice, critical reading consists only of systematically applying a series of questions to the article we are analyzing. The problem sometimes lies in knowing which questions we have to ask, since this system varies according to the design of the study we are evaluating.

Here, by design we mean the set of procedures, methods and techniques used with the study participants, during data collection and during the analysis and interpretation of the results to obtain the conclusions of the study. And there is a myriad of possible study designs, especially in recent times, when epidemiologists have taken to designing mixed observational studies. In addition, the terminology can sometimes be confusing and use terms that do not make clear what design we have in front of us. It is like arriving at the wedding of someone from a large family and meeting a cousin we cannot place. Even if we look for physical resemblances, we will most likely end up asking him: and you, which family do you belong to? Only then will we know whether he belongs to the groom's side or the bride's.

What we are going to do in this post is something similar. We will try to establish a series of criteria for classifying studies to finally establish a series of questions whose answers allow us to identify which family they belong to.

To begin with, the type of clinical question that the work tries to answer can give us some guidance. If the question is of a diagnostic nature, it is most likely that we will be faced with what is called a diagnostic test study, which is usually a design in which a series of participants are subjected, in a systematic and independent way, to the test under study and to the reference standard (the gold standard). It is a type of design especially suited to this kind of question, but do not take this for granted: sometimes we find diagnostic questions that are addressed with other types of studies.

If the question is about treatment, it is most likely that we are facing a clinical trial or, sometimes, a systematic review of clinical trials. However, there are not always trials on everything we look for and we may have to settle for an observational study, such as a case-control or a cohort study.

In case of questions of prognosis and etiology/harm we may find ourselves reading a clinical trial, but the most usual thing is that it is not possible to carry out trials and we only have observational studies.

Once this aspect has been analyzed, we may still have doubts about the type of design we are facing. It will then be time to turn to our questions about six criteria related to the methodological design: general objective of the clinical question, direction of the study, type of sampling of the participants, temporality of the events, assignment of the study factors and study units used. Let's see in detail what each of these six criteria means, which you can see summarized in the attached table.

According to the objective, studies can be descriptive or analytical. A descriptive study is one that, as its name suggests, only has the descriptive purpose of telling how things are, without intending to establish causal relationships between the risk factor or exposure and the effect studied (a certain disease or health event, in most cases). These studies answer fairly simple questions such as how many?, where? or in whom?, so they are usually simple and serve to generate hypotheses that will later need more complex studies for their demonstration.

By contrast, analytical studies do try to establish such relationships, answering questions like why?, how to deal with it? or how to prevent it? Logically, to establish such relationships they need a group to compare with (the control group). This is a useful clue to distinguish between analytical and descriptive studies if we have any doubt: the presence of a comparison group is typical of analytical studies.

The directionality of the study refers to the order in which the exposure and the effect of that exposure are investigated. The study has an anterograde directionality when the exposure is studied before the effect and a retrograde directionality when the opposite is done. For example, if we want to investigate the effect of smoking on coronary mortality, we can take a group of smokers and see how many die of coronary disease (anterograde) or, conversely, take a group of deaths from coronary heart disease and look at how many of them smoked (retrograde). Logically, only studies with anterograde directionality can ensure that the exposure precedes the effect in time (which is not the same as saying that one is the cause of the other). Finally, sometimes we find studies in which exposure and effect are assessed at the same time; we then speak of simultaneous directionality.

The type of sampling has to do with how the study participants are selected. They can be chosen because they are subject to the exposure factor of interest, because they have presented the effect, because of a combination of the two, or even according to criteria other than exposure and effect.

Our fourth criterion is temporality, which refers to the relationship in time between the researcher and the exposure factor or the effect studied. A study will have a historical temporality when effect and exposure have already occurred when the study begins. On the other hand, when these events take place during the study, it will have a concurrent temporality. Sometimes the exposure can be historical and the effect concurrent, speaking then of mixed temporality.

Here I would like to make a point about two terms used by many authors that will be more familiar to you: prospective and retrospective. Prospective studies would be those in which exposure and effect have not yet occurred at the beginning of the study, while those in which the events have already occurred at the time of the study would be retrospective. To complicate things further, when both situations are combined we talk about ambispective studies. The problem with these terms is that they are sometimes used interchangeably to express directionality or temporality, which are different concepts. In addition, they are usually associated with specific designs: prospective with cohort studies and retrospective with case-control studies. It may be better to use the specific criteria of directionality and temporality, which express these aspects of the design more precisely.

Two other terms related to temporality are cross-sectional and longitudinal studies. Cross-sectional studies are those that provide us with a snapshot of how things are at a given moment, so they do not allow us to establish temporal or causal relationships. They tend to be prevalence studies and are always descriptive in nature.

On the other hand, in longitudinal studies variables are measured over a period of time, so they do allow temporal relationships to be established, but the researcher does not control how the exposure is assigned to participants. These may have an anterograde directionality (as in cohort studies) or a retrograde one (as in case-control studies).

The penultimate of the six criteria that we are going to take into account is the assignment of the study factors. In this sense, a study is observational when the researchers are mere observers who do not act on the assignment of the exposure factors. In these cases, the relationship between exposure and effect may be affected by other factors, known as confounders, so these studies do not allow conclusions to be drawn about causality. On the other hand, when the researcher assigns the exposure in a controlled manner according to a previously established protocol, we talk about experimental or intervention studies. These experimental studies with randomization are the only ones that allow cause-effect relationships to be established and are, by definition, analytical studies.

The last of the criteria refers to the study units. Studies can be carried out on individual participants or on population groups. The latter are ecological studies and community trials, which have specific design characteristics.

In the attached figure you can see a scheme of how to classify the different epidemiological designs according to these criteria. When you have doubts about which design corresponds to the work you are appraising, follow this scheme. The first step is to decide whether the study is observational or experimental. This is usually simple, so we move on to the next point. A descriptive observational study (without a comparison group) will correspond to a case series or a cross-sectional study.

If the observational study is analytical, we look at the type of sampling, which may be by disease or study effect (case-control study) or by exposure to the risk or protective factor (cohort study).

Finally, if the study is experimental, we check whether the exposure or intervention has been assigned randomly and whether there is a comparison group. If so, we are dealing with a randomized controlled clinical trial. If not, it is probably an uncontrolled trial or another type of quasi-experimental design.
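Just to show how the scheme works, here is a minimal sketch of the decision tree; the function and its argument names are mine, invented for illustration, not part of any library or formal classification tool:

```python
def classify_design(experimental: bool,
                    comparison_group: bool,
                    randomized: bool = False,
                    sampling_by: str = "exposure") -> str:
    """Rough sketch of the classification scheme described above."""
    if experimental:
        if randomized and comparison_group:
            return "randomized controlled clinical trial"
        return "uncontrolled trial or other quasi-experimental design"
    # observational designs
    if not comparison_group:
        return "descriptive study: case series or cross-sectional study"
    if sampling_by == "effect":
        return "case-control study"
    return "cohort study"


# An analytical observational study sampled by disease (the effect):
print(classify_design(experimental=False, comparison_group=True, sampling_by="effect"))
# -> case-control study
```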

And here we will stop for today. We have seen how to identify the most common types of methodological designs. But there are many more. Some with a very specific purpose and their own design, such as economic studies. And others that combine characteristics of basic designs, such as case-cohort studies or nested studies. But that is another story…

Simplifying the impact

In epidemiological studies it is common to find a set of measures of effect such as risks in exposed and non-exposed, relative risks and risk reductions. However, in order for the analysis of a study to be considered well done, measures of effect should be accompanied by a series of impact measures, which are the ones that inform us more precisely about the true effect of the exposure or intervention we are studying.

For example, if we conducted a study on the prevention of mortality from a disease with a treatment X, a relative risk of 0.5 would tell us that those treated have half the risk of dying, but it does not let us see clearly the impact of the treatment. However, if we calculate the number needed to treat (NNT) and it comes out to be two, we know that for every two people treated we will avoid one death from that disease. This impact measure, the NNT, does give us a clearer idea of the real effect of the intervention in our practice.

There are several impact measures, in addition to the NNT. In the cohort studies, which we are going to focus on today, we can calculate the difference of incidences between exposed and unexposed, the exposed attributable fraction (EAF), the avoidable risk in exposed (ARE) and the population attributable fraction (PAF).

The EAF indicates the part of the risk of presenting the effect in the exposed that is due specifically to that, to having been exposed. The ARE tells us about the cases of illness in the exposed group that could have been avoided had the exposure not existed. Finally, the PAF is a specific attributable risk that describes the proportion of cases that could be prevented in the population if the risk factor under study were completely eliminated. As a fourth parameter, considering the presence of exposure and disease, we can calculate the fraction of exposure in cases (FEc), which defines the proportion of exposed cases that are attributable to the risk factor.

In the table that I attach you can see the formulas for the calculation of these parameters.

The problem with these impact measures is that they can sometimes be difficult for the clinician to interpret. For this reason, and inspired by the calculation of NNTs, a series of measures called impact numbers have been devised, which give us a more direct idea of the effect of the exposure factor on the disease being studied. These impact numbers are the number of impact in the exposed (NIE), the number of impact in cases (NIC) and the number of impact in exposed cases (NIEC).

Let’s start with the simplest. The NIE would be the equivalent of the NNT and would be calculated as the inverse of the absolute risk reduction or the risk difference. The NNT is the number of people who should be treated to prevent a case compared to the control group. The NIE represents the average number of people who have to be exposed to the risk factor for a new disease event to occur compared to non-exposed persons. For example, a NIE of 10 means that out of every 10 exposed a case of disease will occur that will be attributable to the risk factor.

The NIC is the inverse of the PAF, so it defines the average number of sick people among whom a case is due to the risk factor. A NIC of 10 means that for every 10 cases in the population, one is attributable to the risk factor under study.

Finally, the NIEC is the inverse of the FEc. It is the average number of cases among which a case is attributable to the risk factor.

In summary, these three measures indicate the impact of exposure among all exposed (NIE), among all patients (NIC) and among all patients who have been exposed (NIEC).

An example is given by the data from the attached table, corresponding to a fictional study on the effect of smoking on coronary mortality. I have used one of the many epidemiological calculators available on the Internet and have obtained a risk difference of 0.0027, a PAF of 0.16 and an FEc of 0.4. We can now calculate our impact numbers.

The NIE is 1 / 0.0027 ≈ 370. Rounding, out of every 370 smokers, one will die from heart disease attributable to tobacco.

NIC will be 1 / 0.16 = 6.25. Of every six deaths from heart disease in the population, one will be attributable to tobacco.

Finally, NIEC will be 1 / 0.4 = 2.5. Approximately, for every three deaths from heart disease among those who smoked, one would be attributable to tobacco addiction.
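If you want to reproduce the arithmetic, here is a minimal sketch using the rounded figures given above (with more decimal places the NIE would come out slightly different, which explains small discrepancies with the calculator):

```python
# Figures from the fictional smoking example (rounded)
risk_difference = 0.0027  # incidence difference between smokers and non-smokers
paf = 0.16                # population attributable fraction
fec = 0.4                 # fraction of exposure in cases

nie = 1 / risk_difference  # ~370 smokers per coronary death attributable to tobacco
nic = 1 / paf              # ~6 coronary deaths in the population per death due to tobacco
niec = 1 / fec             # 2.5 coronary deaths among smokers per death due to tobacco

print(round(nie), round(nic, 2), round(niec, 1))  # 370 6.25 2.5
```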

And here we leave it for today. Do not forget that the data in the example are fictitious and I do not know how well they fit reality.

We have discussed only the point estimates of the impact numbers but, as always, it is preferable to calculate their confidence intervals. All three can be calculated from the confidence limits of the measures from which the impact numbers are obtained, but it is best to use a calculator that does it for us. The calculation of the intervals of some parameters, such as the PAF, can be complex. But that is another story…

The tribulations of an interval

The number needed to treat (NNT) is an impact measure that tells us in a simple way about the effectiveness of an intervention or its side effects. If the treatment tries to avoid unpleasant events, the NNT gives us an idea of how many patients we have to treat to avoid one of these events. In this case we talk about the NNTB, the number needed to treat to benefit.

In other cases, the intervention may produce adverse effects. Then we talk about the NNTH, the number needed to treat to harm (to produce one unpleasant event).

The calculation of the NNT is simple when we have a contingency table like the one in the first table. It is usually calculated as the inverse of the absolute risk reduction (1 / ARR) and is given as a point estimate. The problem is that this ignores the probabilistic nature of the NNT, so the most correct thing would be to specify its 95% confidence interval (95CI), as we do with the rest of the measures.

We already know that the 95CI of any measure responds to the following formula:

95CI (X) = X ± (1.96 x SE (X)), where SE is the standard error.

Thus the lower and upper limits of the interval would be the following:

X – 1.96 SE (X), X + 1.96 SE (X)

And here we have a problem with the NNT's 95CI. This interval cannot be calculated directly because the NNT does not follow a normal distribution. Therefore, some tricks have been devised to calculate it, such as computing the 95CI of the ARR and using its limits to obtain the NNT's, as follows:

95CI (ARR) = ARR – 1.96(SE(ARR)) , ARR + 1.96(SE(ARR))

CI(NNT) = 1 / upper limit of the 95CI (ARR), 1 / lower limit of the 95CI (ARR) (we use the upper limit of the ARR to calculate the lower limit of the NNT, and vice versa, because when the treatment is beneficial the risk difference [RT – RNT] is actually a negative value, although we usually speak of it in absolute value).

We just need to know how to calculate the ARR's SE, which turns out to be done with a slightly unfriendly formula that I show you just in case anyone is curious to see it:

SE(ARR) = \sqrt{\frac{R_{T}\times(1-R_{T})}{Treated}+\frac{R_{NT}\times(1-R_{NT})}{Non\ treated}}

In the second table you can see a numerical example of the calculation of the NNT and its interval. You can see that the NNT = 25, with a 95CI of 15 to 71. Note the asymmetry of the interval since, as we have said, the NNT does not follow a normal distribution. In addition, far from the fixed value of 25, the interval values tell us that in the best case we will have to treat 15 patients to avoid one adverse event, but in the worst case this value can rise to 71.
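To see the whole procedure in one place, here is a minimal sketch of the calculation; the counts are hypothetical (not necessarily those of the table in the post), chosen so that the NNT comes out at 25:

```python
import math

# Hypothetical trial counts (invented for illustration)
events_treated, n_treated = 80, 1000
events_control, n_control = 120, 1000

r_t = events_treated / n_treated    # risk in treated (RT)
r_nt = events_control / n_control   # risk in non-treated (RNT)
arr = r_nt - r_t                    # absolute risk reduction (in absolute value)

se_arr = math.sqrt(r_t * (1 - r_t) / n_treated + r_nt * (1 - r_nt) / n_control)
arr_low, arr_high = arr - 1.96 * se_arr, arr + 1.96 * se_arr

nnt = 1 / arr
# the upper limit of the ARR gives the lower limit of the NNT, and vice versa
nnt_low, nnt_high = 1 / arr_high, 1 / arr_low

print(f"ARR = {arr:.3f} (95CI {arr_low:.3f} to {arr_high:.3f})")
print(f"NNT = {nnt:.0f} (95CI {nnt_low:.0f} to {nnt_high:.0f})")
```

With these invented counts the NNT is 25 and its interval comes out close to, though not exactly, the one in the post's table, which was built from other numbers.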

On top of the difficulty of its calculation, another problem arises when the ARR's 95CI includes zero. In general, the smaller the effect of the treatment (the lower the ARR), the higher the NNT (more people have to be treated to avoid one unpleasant event), so in the extreme case in which the effect is zero, the NNT becomes infinite (an infinite number of patients would have to be treated to avoid a single unpleasant event).

So it is easy to imagine that if the 95CI of the ARR includes zero, the 95CI of the NNT will include infinity. It will be a discontinuous interval with a negative value limit and a positive one, which can pose problems for its interpretation.

For example, suppose we have a trial in which we calculated an ARR of 0.01 with a 95CI of -0.01 to 0.03. With the point estimate we have no problem, the NNT is 100 but, what about the interval? Well, it would go from -100 to 33, passing through infinity (actually, from minus infinity to -100 and from 33 to plus infinity).

How do we interpret a negative NNT? In this case, as we have already said, we are dealing with an NNTB, so its negative value can be interpreted as a positive value of its alter ego, the NNTH. In our example, -100 would mean that we cause one adverse event for every 100 treated. In short, our interval would tell us that, in the worst case, we could produce one event for every 100 treated and, in the best case, avoid one for every 33 treated. This way of reading it keeps the point estimate within the interpretation, but it has little practical use. Basically, it makes little sense to calculate the NNT when the ARR is not significant (when its 95CI includes zero).

At this point our heads are starting to smoke, so let's wrap up for today. Needless to say, everything I have explained about the calculation of the interval can be done with a click on any of the calculators available on the Internet, so we will not have to do any math.

In addition, although the NNT calculation is simple when we have a contingency table, we often have adjusted risk values obtained from regression models. Then, the maths for the calculation of the NNT and its interval gets a little complicated. But that is another story…

Regular customers

We saw in a previous post that sample size is very important. The sample should be the right size, neither more nor less. If it is too large we are wasting resources, something to keep in mind in modern times. If we use a small sample we will save money, but we will lose statistical power. This means that there may be a difference in effect between the two interventions tested in a clinical trial and we may be unable to recognize it, so we end up throwing good money away all the same.

The problem is that sometimes it can be very difficult to get an adequate sample size, needing excessively long periods of time to reach the desired number. Well, for these cases, someone with a commercial mindset has devised a method that consists of including the same participant many times in the trial. It's like in bars: it is always easier to have a regular clientele that comes to the establishment many times than to have a very crowded house (although that is also desirable).

There are times when the same patient needs the same treatment on repeated occasions. Consider, for example, asthmatics who need bronchodilator treatment repeatedly, or couples undergoing in vitro fertilization, which may require several cycles to succeed.

Although the usual standard in clinical trials is to randomize participants, in these cases we can randomize each participant independently whenever he needs treatment. For example, if we are testing two bronchodilators, we can randomize the same subject to one of the two every time he has an asthma attack and needs treatment. This procedure is known as re-randomization and consists, as we have seen, of randomizing situations rather than participants.

This trick is quite correct from a methodological point of view, provided that certain conditions discussed below are met.

The participant enters the trial the first time in the usual way, being randomly assigned to one of two arms of the trial. Subsequently he is followed-up during the appropriate period and the results of the study variables are collected. Once the follow-up period is finished, if the patient requires new treatment, and continues to meet the inclusion criteria of the trial, he is randomized again, repeating this cycle as necessary to achieve the desired sample size.

This way of recruiting situations instead of participants lets us reach the sample size with a smaller number of participants. For example, if we need 500 randomizations, we can randomize 500 participants once, 250 twice, or 200 once and 50 six times. The important thing is that the number of randomizations of each participant cannot be specified in advance, but must depend on the need for treatment each time.

To apply this method correctly you need to meet three requirements. First, patients can only be re-randomized when they have fully completed the follow-up period of the previous procedure. This is logical because, otherwise, the effects of the two treatments would overlap and a biased measure of the effect of the intervention would be obtained.

Second, each new randomization in the same participant should be done independently of the others. In other words, the probability of assignment to each intervention should not depend on previous assignments. Some authors are tempted to use reallocations to balance the two groups, but this can bias comparisons between the two groups.

Third, the participant should obtain the same benefit from the intervention each time it is applied. Otherwise, we will get a biased estimate of the treatment effect.

We see, then, that this is a good way to reach the sample size we want more easily. The problem with this type of design is that the analysis of the results is more complex than that of a conventional clinical trial.

Basically, and without going into details, there are two methods of analyzing the results. The simplest is the unadjusted analysis, in which all interventions, even if they belong to the same participant, are treated independently. This model, usually expressed as a linear regression model, does not take into account the effect that participants can have on the results.

The other method is adjusted for the effect of patients, which takes into account the correlation between observations of the same participants.
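To give an idea of the difference between the two approaches, here is a minimal sketch using statsmodels; the data frame, its column names and its values are invented for illustration, and this is not the exact model of any particular trial:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented data: one row per randomized episode, several episodes per patient
df = pd.DataFrame({
    "patient":   [1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 5, 5],
    "treatment": [0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
    "outcome":   [2.1, 2.8, 2.9, 2.2, 3.0, 3.1, 2.0, 2.9, 2.3, 2.7, 3.2, 2.4],
})

# Unadjusted analysis: every episode is treated as if it were independent
unadjusted = smf.ols("outcome ~ treatment", data=df).fit()

# Adjusted analysis: a random intercept per patient accounts for the
# correlation between episodes of the same participant
adjusted = smf.mixedlm("outcome ~ treatment", data=df, groups=df["patient"]).fit()

print(unadjusted.params["treatment"], adjusted.params["treatment"])
```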

And here we leave it for today. We have not said anything about the mathematical treatment of the adjusted method, to avoid burning the reader's neurons. Suffice it to say that there are several approaches, which have to do with using generalized linear models and mixed-effects models. But that is another story…

The fairground shotgun

A few days ago I was with my cousin at our neighborhood fair and, to pass the time, we did some shooting at one of the booths to see if we could win the teddy bear.

But nothing, not even by chance.

I shot many times, but not one of my pellets hit the target. They were all around it, but none in the center. My cousin, however, is a crack shot. The problem is that he got a shotgun with a crooked sight, so all his pellets went off to one side and none landed in the target either. In short, we were left with nothing. In the attached figure you can see the pattern of shots we both made.

Anyway, to make the most of the situation, it occurs to me, looking at the targets, that they bear some resemblance to the two types of errors that we can have in our epidemiological studies.

These are, in general, two: random error and systematic error.

Random error is due to our friend chance, from whom there is no escape. It can have two fundamental causes. The first is sampling error. When we draw a sample from a population we do so with the aim of estimating a population parameter through the study of an estimator of that parameter in the sample. However, due to sampling error, we can obtain a sample that is not representative of the population (if we draw several samples, they will all be slightly different from each other). This happens especially when sample sizes are small and when we use sampling techniques other than probabilistic sampling.

The other source of random error is variability in the measurement. If we take the blood pressure several times, the results will be different (though similar) because of, on the one hand, biological variability itself and, on the other, the imprecision of the measuring device we use.

This random error is related to the precision of the result. A measurement will be more precise the smaller its random component, so we can increase precision by increasing the sample size or by being more careful with the measurements.

In our shooting example, I represent the random error. My shots stray at random, so from the cloud of impacts you can imagine where the center is, even though no shot actually hits it. Logically, the more shots I fire, the more likely it is that one hits the center, albeit by chance.

The second error we mentioned is the systematic error, also called bias. This is due to an error in the design or analysis of the study, which produces an incorrect or invalid estimate of the effect we are studying. In our example, as you may have guessed, my cousin represents the systematic error. He shoots very well but, as the gun is poorly calibrated, his shots land off the target, all deviated systematically in the same direction. Looking only at his shots we cannot imagine where the center is, as we could with my shots on my target, because we would place the center somewhere it actually is not. Thus, random error affects precision, while systematic error compromises the validity of the results. And another thing: even if my cousin increases the number of shots, they will keep coming out crooked. Systematic error does not decrease by increasing the sample size.

And here we will leave it for today. We have not said anything about the types of systematic error, of which there are several. They can be divided into selection, information and analysis biases which, in turn, can be divided into many other biases. But that is another story…

Together but not in each other’s pockets

Observational studies are those in which, as their name suggests, the researcher merely observes what happens. Well, observes and analyzes, but plays no active role in the exposure or intervention under study. Within these observational studies, we all know the cohort studies and the case-control studies, the most commonly used.

In a cohort study, a group or cohort is subjected to an exposure and followed over time to compare the frequency of occurrence of the effect with that of an unexposed cohort, which acts as control. On the other hand, in a case-control study we start with two population groups, one of which suffers the effect or disease under study, and its exposure to a particular factor is compared with that of a group that does not have the disease and acts as control.

The cohort study is the sounder of the two from the methodological point of view. The problem is that it often requires longer periods of follow-up and large cohorts, especially when the frequency of the disease studied is low, leading to the need to manage all covariates of this entire large cohort, which increases the costs of the study.

Well, for those cases in which neither the case-control nor the cohort study fits the needs of the researchers well, epidemiologists have invented a series of designs that are halfway between the two and can mitigate their shortcomings. These hybrid designs are the nested case-control study and the case-cohort study.

Let's start with the nested case-control study. Suppose we have done a study in which we used a cohort with many participants. Well, we can reuse it in a nested case-control study. We take the cohort and follow it over time, selecting the subjects who develop the disease and assigning them as controls subjects from the same cohort who have not yet presented the disease (although they may do so later). Thus, cases and controls come from the same cohort. It is desirable to match them on variables that are confounders and time-dependent, for example, the time they have been enrolled in the cohort. In this way, the same subject can act as a control on several occasions and end up as a case, which must be taken into account in the statistical analysis of these studies.

As we are detecting the cases as they arise, we do the sampling based on incidence density, which allows us to estimate relative risks. This is an important difference from conventional case-control studies, which usually provide odds ratios, comparable to relative risks only when the frequency of the effect is very low.

Another difference is that all the information is collected on the cohort at baseline, so there is less risk of the information biases characteristic of classic case-control studies, which are retrospective in nature.

The other type of hybrid observational design that we will deal with is the case-cohort study. Here, we also start from a large initial cohort, from which we select a more manageable sub-cohort to be used as a comparison group. We then follow the whole cohort over time to detect the subjects who develop the disease (whether or not they belong to the sub-cohort), comparing them with the sub-cohort.

As in the previous example, detecting the cases over time allows us to estimate incidence densities and, from them, the relative risks. As you can imagine, this design is more economical than conventional studies because it greatly reduces the volume of information from healthy subjects that has to be handled, without losing efficiency when studying rare diseases. The problem that arises is that the sub-cohort has an overrepresentation of cases, so the analysis of the results cannot be done as with traditional cohorts; it has its own, rather more complicated, methodology.

And here we will leave this topic for today. To sum up a little, let us say that the nested case-control study is more like the classic case-control study, while the case-cohort study is more like the conventional cohort study. The fundamental difference between the two is that in the nested study we sample controls by incidence density and with matching, so we have to wait until all the cases have occurred before the entire control population has been selected. This is not so in the case-cohort study, which is much simpler, where the reference population is selected at the beginning of the study.

The drawback of these studies, as we have said, is that the analysis is a bit more complicated than that of conventional observational studies, because it is not enough to analyze the raw results. Instead, the results must be adjusted for the possibility that a participant acts both as a control and as a case (in nested studies) and for the overrepresentation of cases in the sub-cohort (in the case-cohort study). But that's another story…