Science without sense…double nonsense

Pills on evidence-based medicine


Little ado about too much


Yes, I know the saying goes just the other way around. But that is precisely the problem we have with so much new information technology. Nowadays anyone can write and publish whatever goes through his head and reach a lot of people, even if what he says is bullshit (and no, I am not taking this personally; not even my brother-in-law reads what I post!). The trouble is that much of what is written is not worth a bit, to avoid mentioning any other kind of excreta. There is a lot of smoke and very little fire, when we would all like it to be the other way around.

The same happens in medicine when we need information to make some of our clinical decisions. Whatever source we go to, the volume of information will not only overwhelm us but, above all, most of it will be of no use to us at all. Moreover, even if we find a well-done article, it may not be enough to answer our question completely. That is why we love so much the literature reviews that some generous souls publish in medical journals. They save us the task of reviewing a lot of articles and summarizing their conclusions. Great, isn't it? Well, sometimes it is and sometimes it is not. As with any other type of study in the medical literature, we should always make a critical appraisal and not rely solely on the good know-how of the authors.

Reviews, of which we already know there are two types, also have their limitations, which we must know how to weigh. The simplest form, our favorite when we are young and ignorant, is the so-called narrative or author's review. This type of review is usually written by an expert on the topic, who reviews the literature, analyzes what she finds as she sees fit (that is what being an expert is for) and summarizes it into a qualitative synthesis with her expert conclusions. These reviews are good for getting a general idea about a topic, but they do not usually serve to answer specific questions. In addition, since it is not specified how the search for information was done, we cannot reproduce it or verify that it includes everything important that has been written on the subject. With these reviews we can do little critical appraisal, since there is no precise systematization of how they have to be prepared, so we have to trust unreliable proxies such as the prestige of the author or the impact of the journal where they are published.

As our knowledge of the general aspects of science increases, our interest shifts towards other types of review that provide more specific information about aspects that escape our increasingly broad knowledge. This other type is the so-called systematic review (SR), which focuses on a specific question, follows a clearly specified methodology for searching for and selecting the information, and performs a rigorous and critical analysis of the results found. Moreover, when the primary studies are sufficiently homogeneous, the SR goes beyond the qualitative synthesis and also performs a quantitative synthesis, which goes by the nice name of meta-analysis. With these reviews we can do a critical appraisal following an orderly and pre-established methodology, much as we do with other types of study.

The prototype of the SR is the one made by the Cochrane Collaboration, which has developed a specific methodology that you can consult in the handbooks available on its website. But, if you want my advice, do not blindly trust even Cochrane: make a careful critical appraisal even if the review has been done by them, and do not take it for granted simply because of its origin. As one of my teachers in these disciplines says (I am sure he is smiling if he is reading these lines), there is life after Cochrane. And, I would add, there is a lot of it, and good.

Although SRs and meta-analyses are a bit intimidating at first, do not worry: they can be critically appraised in a simple way by considering the main aspects of their methodology. And to do so, nothing better than to systematically review our three pillars: validity, relevance and applicability.

Regarding VALIDITY, we will try to determine whether the review gives us unbiased results and responds correctly to the question posed. As always, we will first look at some primary validity criteria. If these are not fulfilled, we should consider whether it is time to go and walk the dog instead: we will probably make better use of our time.

Has the aim of the review been clearly stated? All SRs should try to answer a specific question that is relevant from the clinical point of view and that usually arises following the PICO scheme of a structured clinical question. It is preferable that the review tries to answer only one question, since if it tries to respond to several there is a risk of not answering any of them adequately. This question will also determine the type of studies that the review should include, so we must assess whether the appropriate type has been included. Although the most common is to find SRs of clinical trials, they can include other types of studies: observational studies, diagnostic test studies, etc. The authors of the review must specify the criteria for inclusion and exclusion of the studies, in addition to considering aspects such as setting, study groups, outcomes, etc. Differences among the included studies in terms of patients (P), intervention (I) or outcomes (O) can make two SRs that ask the same question reach different conclusions.

If the answer to the two previous questions is affirmative, we will consider the secondary criteria and leave the dog's walk for later. Have the important studies on the subject been included? We must verify that a global and unbiased search of the literature has been carried out. It is common to do an electronic search of the most important databases (generally PubMed, Embase and the Cochrane Library), but this must be completed with a search strategy in other media to look for further works (references of the articles found, contact with well-known researchers, the pharmaceutical industry, national and international registries, etc.), including the so-called gray literature (theses, reports, etc.), since there may be important unpublished works. And let no one be surprised by the latter: it has been proven that studies with negative conclusions are at higher risk of not being published, so they do not appear in the SR. We must verify that the authors have ruled out the possibility of this publication bias. This entire selection process is usually summarized in a flow diagram showing the fate of all the studies assessed for the SR.

It is very important that enough has been done to assess the quality of the studies, looking for the existence of possible biases. For this, the authors can use an ad hoc tool or, more usually, resort to one that is already recognized and validated, such as the bias detection tool of the Cochrane Collaboration in the case of reviews of clinical trials. This tool assesses five criteria of the primary studies to determine their risk of bias: adequate randomization sequence (prevents selection bias), adequate blinding (prevents performance and detection biases, both of them information biases), concealment of allocation (prevents selection bias), losses to follow-up (prevents attrition bias) and selective outcome reporting (prevents information bias). The studies are then classified as being at high, low or unclear risk of bias according to the most important aspects of the design's methodology (clinical trials in this case).

In addition, this assessment must be done independently by two authors and, ideally, without knowing the authors of the studies or the journals where the primary studies of the review were published. Finally, the degree of agreement between the two reviewers should be recorded, along with what was done when they did not agree (the most common is to resort to a third party, who will probably be the boss of both).

To conclude with the internal or methodological validity, in case the results of the studies have been combined to draw common conclusions with a meta-analysis, we must ask ourselves whether it was reasonable to combine the results of the primary studies. In order to draw conclusions from pooled data, it is fundamental that the studies are homogeneous and that the differences among them are due solely to chance. Although some variability among the studies increases the external validity of the conclusions, we cannot pool the data for the analysis if there is a lot of variability. There are numerous methods to assess homogeneity which we are not going to go into now, but we do insist on the need for the authors of the review to have studied it adequately.

In summary, the fundamental aspects that we will have to analyze to assess the validity of an SR are: 1) that the aims of the review are well defined in terms of population, intervention and measurement of the outcome; 2) that the bibliographic search has been exhaustive; 3) that the criteria for inclusion and exclusion of primary studies in the review have been adequate; and 4) that the internal or methodological validity of the included studies has also been verified. In addition, if the SR includes a meta-analysis, we will review the methodological aspects that we saw in a previous post: the suitability of combining the studies to make a quantitative synthesis, the adequate evaluation of the heterogeneity of the primary studies and the use of a suitable mathematical model to combine their results (you know, the fixed effect and random effects models).

Regarding the RELEVANCE of the results, we must consider what the overall result of the review is and whether the interpretation has been made in a judicious manner. The SR should provide a global estimate of the effect of the intervention based on a weighted average of the included quality studies. Most often, relative measures such as the risk ratio or the odds ratio are reported, although ideally they should be complemented with absolute measures such as the absolute risk reduction or the number needed to treat (NNT). In addition, we must assess the precision of the results, for which we will use our beloved confidence intervals, which give us an idea of the precision of the estimate of the true magnitude of the effect in the population. As you can see, the way of assessing the relevance of the results is practically the same as for the primary studies. Here we give examples for clinical trials, which is the type of study we will see most frequently, but remember that there may be other types of studies that express the relevance of their results better with other parameters. Of course, confidence intervals will always help us to assess the precision of the results.
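For those who like to check the arithmetic, here is a minimal Python sketch (with invented numbers, since no real review is quoted here) of how these relative and absolute measures relate to each other:

```python
# Invented pooled numbers for a treatment-versus-control comparison,
# just to show how RR, ARR and NNT relate to each other.
events_treated, total_treated = 30, 300    # 10% event rate with treatment
events_control, total_control = 45, 300    # 15% event rate with control

risk_treated = events_treated / total_treated
risk_control = events_control / total_control

rr = risk_treated / risk_control      # relative measure: risk ratio
arr = risk_control - risk_treated     # absolute measure: absolute risk reduction
nnt = 1 / arr                         # number needed to treat

print(f"RR = {rr:.2f}, ARR = {arr:.1%}, NNT = {round(nnt)}")
# -> RR = 0.67, ARR = 5.0%, NNT = 20
```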

The results of meta-analyses are usually represented in a standardized way, usually using the so-called forest plot. A graph is drawn with a vertical line of no effect (at one for relative risks and odds ratios, at zero for mean differences) and each study is represented as a mark (its point estimate) in the middle of a segment (its confidence interval). Studies whose confidence interval does not cross the vertical line have statistically significant results. Generally, the most powerful studies have narrower intervals and contribute more to the overall result, which is expressed as a diamond whose lateral tips represent its confidence interval. Only diamonds that do not cross the vertical line indicate statistical significance. Also, the narrower the interval, the more precise the result. And, finally, the further away from the line of no effect, the clearer the difference between the treatments or the exposures being compared.
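And if you are curious about how such a graph is put together, this is a minimal sketch with matplotlib using made-up studies (none of these figures come from any real review):

```python
import numpy as np
import matplotlib.pyplot as plt

# Four made-up primary studies plus a pooled estimate (the "diamond").
labels  = ["Study A", "Study B", "Study C", "Study D", "Pooled"]
rr      = np.array([0.80, 0.65, 1.10, 0.70, 0.78])   # point estimates (risk ratios)
ci_low  = np.array([0.55, 0.45, 0.70, 0.52, 0.66])   # lower 95% CI limits
ci_high = np.array([1.16, 0.94, 1.73, 0.94, 0.92])   # upper 95% CI limits

y = np.arange(len(labels))[::-1]                      # plot top to bottom
plt.errorbar(rr, y, xerr=[rr - ci_low, ci_high - rr], fmt="s", color="black")
plt.axvline(1.0, linestyle="--", color="grey")        # line of no effect for ratios
plt.xscale("log")                                     # ratio measures go on a log scale
plt.yticks(y, labels)
plt.xlabel("Risk ratio (95% CI)")
plt.title("A minimal forest plot")
plt.tight_layout()
plt.show()
```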

If you want a more detailed explanation of the elements that make up a forest plot, you can go to the previous post where we explained it or to the online handbooks of the Cochrane Collaboration.

We will conclude the critical appraisal of the SR by assessing the APPLICABILITY of the results to our environment. We will have to ask ourselves whether we can apply the results to our patients and how they will influence the care we give them. We will have to see whether the primary studies of the review describe the participants and whether they resemble our patients. In addition, although we have already said that it is preferable for the SR to be oriented to a specific question, it will be necessary to see whether all the results relevant to decision making in the problem under study have been considered, since sometimes it will be convenient to take into account some additional secondary variable. And, as always, we must assess the benefit-cost-risk ratio. The fact that the conclusion of the SR seems valid does not mean that we are obliged to apply it.

If you want to appraise an SR correctly without forgetting any important aspect, I recommend you use a checklist such as PRISMA or one of the tools available on the Internet, such as the templates that can be downloaded from the CASP page, which are the ones we have used for everything we have said so far.

The PRISMA statement (Preferred Reporting Items for Systematic reviews and Meta-Analyses) consists of 27 items, classified in 7 sections that refer to title, abstract, introduction, methods, results, discussion and funding:

  1. Title: it must be identified as an SR, a meta-analysis or both. If, in addition, it specifies that it deals with clinical trials, it will take priority over other types of reviews.
  2. Abstract: it should be a structured abstract including background, objectives, data sources, inclusion criteria, limitations, conclusions and implications. The registration number of the review must also be included.
  3. Introduction: includes two items, the justification of the study (what is known, controversies, etc.) and the objectives (what question it tries to answer, in PICO terms of the structured clinical question).
  4. Methods. This is the section with the largest number of items (12):

– Protocol and registration: indicate the registration number and its availability.

– Eligibility criteria: justification of the characteristics of the studies and the search criteria used.

– Sources of information: describe the sources used and the last search date.

– Search: complete electronic search strategy, so that it can be reproduced.

– Selection of studies: specify the selection process and the inclusion and exclusion criteria.

– Data extraction process: describe the methods used to extract the data from the primary studies.

– Data list: define the variables used.

– Risk of bias in primary studies: describe the method used and how it has been used in the synthesis of results.

– Summary measures: specify the main summary measures used.

– Results synthesis: describe the methods used to combine the results.

– Risk of bias between studies: describe biases that may affect cumulative evidence, such as publication bias.

– Additional analyses: if additional analyses are performed (sensitivity, meta-regression, etc.), specify which ones were pre-specified.

  5. Results. Includes 7 items:

– Selection of studies: it is expressed through a flow chart that shows the number of records at each stage (identification, screening, eligibility and inclusion).

– Characteristics of the studies: present the characteristics of the studies from which data were extracted and their bibliographic references.

– Risk of bias in the studies: communicate the risks in each study and any evaluation that is made about the bias in the results.

– Results of the individual studies: study data for each study or intervention group and estimation of the effect with their confidence interval. The ideal is to accompany it with a forest plot.

– Synthesis of the results: present the results of all the meta-analysis performed with the confidence intervals and the consistency measures.

– Risk of bias across studies: present any assessment made of the risk of bias across the included studies.

– Additional analyses: if they have been carried out, provide their results.

  6. Discussion. Includes 3 items:

– Summary of the evidence: summarize the main findings, with the strength of the evidence for each main result and its relevance from the clinical point of view or for the main interest groups (care providers, users, health decision-makers, etc.).

– Limitations: discuss the limitations of the results, the studies and the review.

– Conclusions: general interpretation of the results in the context of other evidence and their implications for future research.

  7. Funding: describe the sources of funding and the role they played in the conduct of the SR.

As a third option to these two tools, you can also use the aforementioned Cochrane Handbook for Systematic Reviews of Interventions, available on its website, whose purpose is to help the authors of Cochrane reviews to work explicitly and systematically.

As you can see, we have said practically nothing about meta-analysis itself, with all its statistical techniques to assess homogeneity and its fixed and random effects models. The thing is that meta-analysis is a beast that must be eaten separately, so we have already devoted two posts solely to it that you can check out whenever you want. But that is another story…

The guard’s dilemma


The world of medicine is a world of uncertainty. We can never be 100% sure of anything, however obvious a diagnosis may seem, but neither can we strike out right and left with ultramodern diagnostic techniques or treatments (which are never safe) when making the decisions that continually haunt us in our daily practice.

That is why we are always immersed in a world of probabilities, where certainties are almost as rare as the so-called common sense which, as almost everyone knows, is the least common of the senses.

Imagine you are in the clinic and a patient comes in because he has been kicked in the ass, pretty hard though. Being the good doctors we are, we ask the usual questions: what is wrong?, since when?, and what do you attribute it to? And we proceed to a complete physical examination, discovering with horror that he has a hematoma on the right buttock.

Here, my friends, the diagnostic possibilities are numerous, so the first thing we do is a comprehensive differential diagnosis. To do this, we can take four different approaches. The first is the possibilistic approach: listing all possible diagnoses and trying to rule them all out simultaneously by applying the relevant diagnostic tests. The second is the probabilistic approach: sorting the diagnoses by relative likelihood and then acting accordingly. It looks like a post-traumatic hematoma (known as the kick-in-the-ass syndrome), but someone might think that the kick was not so hard, so maybe the poor patient has a bleeding disorder, or a blood dyscrasia with secondary thrombocytopenia, or even an atypical inflammatory bowel disease with extraintestinal manifestations and gluteal vascular fragility. We could also use a prognostic approach and try to confirm or rule out the possible diagnoses with the worst prognosis, so the kick-in-the-ass syndrome loses interest and we go on to rule out chronic leukemia. Finally, a pragmatic approach could be used, with particular interest in first finding the diagnoses that have the most effective treatment (the kick would be, once more, number one).

It seems that the right thing to do is to use a judicious combination of the pragmatic, probabilistic and prognostic approaches. In our case we will investigate whether the intensity of the injury justifies the magnitude of the bruising and, if so, we will prescribe some hot towels and refrain from further diagnostic tests. This example may seem like bullshit, but I can assure you that I know people who make the complete list and order all the diagnostic tests whenever there is any symptom, regardless of expense or risk. And, besides, someone could always think of assessing the possibility of performing some exotic diagnostic test that I cannot even imagine, so the patient should be grateful if the diagnosis does not require a forced anal sphincterotomy. And that is so because, as we have already said, the waiting list to get some common sense is many times longer than the surgical waiting list.

Now imagine another patient with a less stupid and absurd symptom complex than in the previous example. For instance, let's think of a child with symptoms of celiac disease. Before we do any diagnostic test, our patient already has a certain probability of suffering from the disease. This probability is conditioned by the prevalence of the disease in the population she comes from and is called the pre-test probability. This probability will lie somewhere between two thresholds: the diagnostic threshold and the therapeutic threshold.

The usual thing is that the pre-test probability of our patient does not allow us to rule out the disease with reasonable certainty (it would have to be very low, below the diagnostic threshold) nor to confirm it with enough confidence to start treatment (it would have to be above the therapeutic threshold).

We then perform the indicated diagnostic test, obtaining a new probability of disease depending on the result of the test, the so-called post-test probability. If this probability is high enough to make the diagnosis and initiate treatment, we will have crossed our first threshold, the therapeutic one. There will be no need for additional tests, as we will have enough certainty to confirm the diagnosis and treat the patient, always within a range of uncertainty.

And what determines our therapeutic threshold? Well, several factors are involved. The greater the risk, cost or adverse effects of the treatment in question, the higher the threshold we will demand before treating. On the other hand, the more serious the consequences of missing the diagnosis, the lower the therapeutic threshold we will accept.

But it may be that the post-test probability is so low that it allows us to rule out the disease with reasonable confidence. We will then have crossed our second threshold, the diagnostic one, also called the no-test threshold. Clearly, in this situation, neither further diagnostic tests nor, of course, starting treatment are indicated.

However, very often moving from pre-test to post-test probability still leaves us in no man's land, without reaching either of the two thresholds, so we will have to perform additional tests until we reach one of the two limits.

And this is our everyday need: to know the post-test probability of our patients in order to decide whether we discard or confirm the diagnosis, whether we leave the patient alone or unleash our treatments on her. And this is so because the simplistic approach that a patient is sick if the diagnostic test is positive and healthy if it is negative is totally wrong, even if it is the general belief among those who order the tests. We will have to look, then, for some parameter that tells us how useful a specific diagnostic test can be for the purpose we need: to know the probability that the patient suffers from the disease.

And this reminds me of the enormous problem a brother-in-law asked me about the other day. The poor man is very concerned about a dilemma that has arisen. The thing is that he is going to start a small business and wants to hire a security guard to stand at the entrance and watch for those who take something without paying for it. And the problem is that there are two candidates and he does not know which of the two to choose. One of them stops nearly everyone, so no thief escapes. Of course, many honest people are offended when they are asked to open their bags before leaving, and so next time they will shop elsewhere. The other guard is the opposite: he stops almost no one, but whoever he does stop is certainly carrying something stolen. He offends few honest people, but too many thieves get away. A difficult decision…

Why does my brother-in-law come to me with this story? Because he knows that I face similar dilemmas every day, whenever I have to choose a diagnostic test to know whether a patient is sick and whether I have to treat her. We have already said that a positive test does not guarantee the diagnosis, just as a client's suspicious looks do not guarantee that the poor man has robbed us.

Let's see it with an example. When we want to know the usefulness of a diagnostic test, we usually compare its results with those of a reference or gold standard, a test that, ideally, is always positive in sick patients and negative in healthy people. Now let's suppose that I carry out a study in my hospital clinic with a new diagnostic test for a certain disease and I get the results shown in the attached table (the sick are those with a positive reference test and the healthy, those with a negative one).

Let's start with the easy part. We have 1598 subjects, 520 of them sick and 1078 healthy. The test gives us 446 positive results, 428 true (TP) and 18 false (FP). It also gives us 1152 negatives, 1060 true (TN) and 92 false (FN). The first thing we can determine is the ability of the test to distinguish between the healthy and the sick, which leads me to introduce the first two concepts: sensitivity (Se) and specificity (Sp). Se is the probability that the test correctly classifies a sick patient or, in other words, the probability that a sick patient gets a positive result. It is calculated by dividing TP by the total number of sick. In our case it equals 0.82 (if you prefer percentages, multiply by 100). Sp, in turn, is the probability that the test correctly classifies a healthy subject or, put another way, the probability that a healthy subject gets a negative result. It is calculated by dividing TN by the total number of healthy. In our example, it equals 0.98.
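If you want to check the arithmetic, a minimal Python sketch with the counts just described gives the same figures:

```python
# Counts from the 2x2 table of the hospital example.
TP, FP = 428, 18      # positive results: true and false positives
FN, TN = 92, 1060     # negative results: false and true negatives

sick = TP + FN        # 520 subjects with a positive gold standard
healthy = FP + TN     # 1078 subjects with a negative gold standard

se = TP / sick        # probability of a positive result in the sick
sp = TN / healthy     # probability of a negative result in the healthy

print(f"Se = {se:.2f}, Sp = {sp:.2f}")   # -> Se = 0.82, Sp = 0.98
```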

Someone may think that we have finished assessing the value of the new test, but we have only just begun. And this is because with Se and Sp we somehow measure the ability of the test to discriminate between the healthy and the sick, but what we really need to know is the probability that an individual with a positive result is actually sick and, although they may seem similar concepts, they are actually quite different.

The probability that a positive is sick is known as the positive predictive value (PPV) and is calculated by dividing the number of sick with a positive test by the total number of positives. In our case it is 0.96. This means that a positive has a 96% chance of being sick. Likewise, the probability that a negative is healthy is expressed by the negative predictive value (NPV), which is the quotient of healthy with a negative test divided by the total number of negatives. In our example it equals 0.92 (an individual with a negative result has a 92% chance of being healthy). This already looks much more like what we said at the beginning we needed: the post-test probability that the patient is really sick.

And from now on is when neurons begin to overheat. It turns out that Se and Sp are two intrinsic characteristics of the diagnostic test. Their results will be the same whenever we use the test under similar conditions, regardless of the subjects tested. But this is not so with the predictive values, which vary depending on the prevalence of the disease in the population in which we use the test. This means that the probability that a positive is sick depends on how common or rare the disease is in that population. Yes, you read that right: the same positive test expresses a different risk of being sick and, for unbelievers, I will give another example.

Suppose that this same study is repeated by one of my colleagues who works at a community health center, where the population is proportionally healthier than at my hospital (logical, they have not suffered the hospital yet). If you check the results in the table and take the trouble to calculate them, you will come up with a Se of 0.82 and a Sp of 0.98, the same as I found in my practice. However, if you calculate the predictive values, you will see that the PPV equals 0.9 and the NPV 0.95. And this is so because the prevalence of the disease (sick divided by total) is different in the two populations: 0.32 at my practice vs 0.19 at the health center. That is, with higher prevalence a positive result is more valuable to confirm the diagnosis, but a negative is less reliable to rule it out. And conversely, if the disease is very rare a negative result will reasonably rule out the disease, but a positive will be less reliable when it comes to confirming it.
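If you do not feel like repeating the hand calculations, this little sketch (using the rounded Se and Sp quoted above, so the figures come out within a point or two of those in the text) shows how the predictive values move with the prevalence while Se and Sp stay put:

```python
def predictive_values(se, sp, prevalence):
    """Post-test probabilities derived from Se, Sp and prevalence."""
    ppv = se * prevalence / (se * prevalence + (1 - sp) * (1 - prevalence))
    npv = sp * (1 - prevalence) / (sp * (1 - prevalence) + (1 - se) * prevalence)
    return ppv, npv

se, sp = 0.82, 0.98   # rounded values, hence the small differences from the text
for setting, prev in [("hospital", 0.32), ("health center", 0.19)]:
    ppv, npv = predictive_values(se, sp, prev)
    print(f"{setting}: prevalence {prev:.2f} -> PPV {ppv:.2f}, NPV {npv:.2f}")
```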

We see that, as almost always in medicine, we are moving on the shaky ground of probability, since all (absolutely all) diagnostic tests are imperfect and make mistakes when classifying the healthy and the sick. So when is a diagnostic test worth using? If you think about it, any given subject has a probability of being sick even before performing the test (the prevalence of the disease in her population) and we are only interested in using diagnostic tests that increase this probability enough to justify starting the appropriate treatment (otherwise we would have to do another test to reach the threshold probability that justifies treatment).

And here is where this issue begins to be a little unfriendly. The positive likelihood ratio (PLR) indicates how much more likely it is to get a positive result in a sick patient than in a healthy subject. The proportion of positives in the sick is represented by Se. The proportion of positives in the healthy are the FP, which are the healthy without a negative result or, what is the same, 1-Sp. Thus, PLR = Se / (1 - Sp). In our case (the hospital) it equals 41 (the value is the same whether or not we use percentages for Se and Sp). This can be interpreted as it being 41 times more likely to find a positive result in a sick patient than in a healthy one.

It is also possible to calculate the NLR (negative likelihood ratio), which expresses how much more likely it is to find a negative result in a sick patient than in a healthy one. The negatives among the sick are those who do not test positive (1-Se) and the negatives among the healthy are the same as the TN (the test's Sp). So, NLR = (1 - Se) / Sp. In our example, 0.18.

A ratio of 1 indicates that the result of the test does not change the probability of being sick. If it is greater than 1, the probability increases and, if less than 1, it decreases. This is the parameter used to determine the diagnostic power of the test. Values > 10 (or < 0.1 for the NLR) indicate a very powerful test that strongly supports (or contradicts) the diagnosis; values of 5-10 (or 0.1-0.2) indicate low power of the test to support (or rule out) the diagnosis; 2-5 (or 0.2-0.5) indicate that the contribution of the test is questionable; and, finally, 1-2 (0.5-1) indicate that the test has no diagnostic value.

The likelihood ratio does not express a direct probability, but it helps us to calculate the probability of being sick before and after a positive test by means of Bayes' rule, which says that the post-test odds equal the pre-test odds multiplied by the likelihood ratio. To transform the prevalence into pre-test odds we use the formula odds = p / (1-p). In our case, it would be 0.47. Now we can calculate the post-test odds by multiplying the pre-test odds by the likelihood ratio. In our case, the positive post-test odds are 19.27. And finally, we transform the post-test odds into post-test probability using the formula p = odds / (odds + 1). In our example it equals 0.95, which means that if our test is positive the probability of being sick goes from 0.32 (the pre-test probability) to 0.95 (the post-test probability).
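Again, for the arithmetically inclined, the whole chain from likelihood ratios to post-test probability fits in a few lines of Python (small differences from the figures above are just rounding):

```python
se, sp = 0.82, 0.98
prevalence = 0.32                        # pre-test probability at the hospital

plr = se / (1 - sp)                      # positive likelihood ratio -> about 41
nlr = (1 - se) / sp                      # negative likelihood ratio -> about 0.18

pre_odds = prevalence / (1 - prevalence)        # about 0.47
post_odds = pre_odds * plr                      # about 19.3
post_prob = post_odds / (post_odds + 1)         # about 0.95

print(f"PLR = {plr:.0f}, NLR = {nlr:.2f}")
print(f"pre-test odds = {pre_odds:.2f}, post-test odds = {post_odds:.1f}, "
      f"post-test probability = {post_prob:.2f}")
```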

If there is still anyone reading at this point, I will tell you that we do not need all this gibberish to get the post-test probability. There are multiple websites with online calculators for all these parameters, needing only the initial 2-by-2 table and a minimum of effort. In addition, the post-test probability can be easily calculated using a Fagan's nomogram (see the attached figure). This graph represents, in three vertical lines from left to right, the pre-test probability (shown inverted), the likelihood ratios and the resulting post-test probability.

To calculate the post-test probability after a positive result, we draw a line from the prevalence (pre-test probability) through the PLR and extend it to the post-test probability axis. Similarly, to calculate the post-test probability after a negative result, we would extend the line between the prevalence and the value of the NLR.

In this way, with this tool we can directly obtain the post-test probability just by knowing the likelihood ratios and the prevalence. In addition, we can use it in populations with different prevalences, simply by shifting the origin of the line on the pre-test probability axis.

So far we have defined the parameters that help us to quantify the power of a diagnostic test, we have seen the limitations of sensitivity, specificity and predictive values, and how the most useful in general are the likelihood ratios. But, you will ask, what makes a good test? A sensitive one? A specific one? Both?

Here we are going to return to the guard's dilemma that troubles my poor brother-in-law, because we have left him abandoned and have not yet answered which of the two guards we recommend he hire: the one who asks almost everyone to open their bags, thereby offending many honest people, or the one who almost never stops honest people but, stopping almost no one, lets many thieves get away.

And which do you think is the better choice? The simple answer is: it depends. Those of you who are still awake by now will have noticed that the first guard (the one who checks many people) is the sensitive one, while the second is the specific one. Which is better for us, the sensitive or the specific guard? It depends, for example, on where our shop is located. If the shop is in a well-heeled neighborhood, the first guard will not be the best choice because, in fact, few people will be thieves and we would rather not offend our customers so they do not fly away. But if our shop is located in front of the Cave of Ali Baba, we will be more interested in detecting the maximum number of clients carrying stolen goods. It can also depend on what we sell in the store. If we run a flea market we can hire the specific guard even though someone may escape (at the end of the day, we will lose only a small amount of money). But if we sell diamonds we will want no thief to escape and we will hire the sensitive guard (we would rather bother an honest customer than let anybody escape with a diamond).

The same happens in medicine with the choice of diagnostic tests: we have to decide in each case whether we are more interested in being sensitive or specific, because diagnostic tests do not always have both a high sensitivity (Se) and a high specificity (Sp).

In general, a sensitive test is preferred when the inconveniences of a false positive (FP) are smaller than those of a false negative (FN). For example, suppose that we are going to vaccinate a group of patients and we know that the vaccine is deadly in those with a particular metabolic error. It is clear that our interest is that no patient goes undiagnosed (to avoid FN), but nothing much happens if we wrongly label a healthy subject as having the metabolic error (FP): it is preferable not to vaccinate a healthy person thinking he has a metabolopathy (although he has not) than to kill a patient with our vaccine assuming he was healthy. Another less dramatic example: in the midst of an epidemic our interest will be to be very sensitive and isolate the largest number of patients. The problem here is for the unfortunate healthy people who test positive (FP) and get isolated with the rest of the sick. No doubt we would do them a disservice with that maneuver. Of course, we could give all the positives on the first test a second, very specific confirmatory test in order to spare the FP any bad consequences.

On the other hand, a specific test is preferred when it is better to have an FN than an FP, as when we want to be sure that someone is actually sick. Imagine that a positive test result implies surgical treatment: we will have to be quite sure about the diagnosis so that we do not operate on any healthy people.

Another example is a disease whose diagnosis can be very traumatic for the patient, or that is almost incurable, or that has no treatment. Here we will prefer specificity so as not to cause any unnecessary distress to a healthy person. Conversely, if the disease is serious but treatable we will probably prefer a sensitive test.

So far we have talked about tests with a dichotomous result: positive or negative. But what happens when the result is quantitative? Let's imagine that we measure fasting blood glucose. We must decide up to what level of glycemia we consider normal and above which it will seem pathological. And this is a crucial decision, because Se and Sp will depend on the cutoff point we choose.

To help us choose we have the receiver operating characteristic, known worldwide as the ROC curve. We plot Se on the ordinate (y axis) and the complement of Sp (1-Sp) on the abscissa, and draw a curve in which each point represents the probability that the test correctly classifies a healthy-sick pair taken at random. The diagonal of the graph would represent the "curve" of a test with no ability to discriminate the healthy from the sick.

As you can see in the figure, the curve usually has a segment of steep slope where the Se increases rapidly with hardly any change in Sp: if we move up along it we can increase Se with practically no increase in FP. But there comes a point where we reach the flat part. If we continue to move to the right, the Se will no longer increase but the FP will begin to do so. If we are interested in a sensitive test, we will stay on the first part of the curve. If we want specificity we will have to move further to the right. And, finally, if we do not have a predilection for either of the two (we are equally concerned about FP and FN), the best cutoff point will be the one closest to the upper left corner. For this, some people use the so-called Youden's index, which optimizes both parameters at once and is calculated by adding Se and Sp and subtracting 1. The higher the index, the fewer patients misclassified by the diagnostic test.
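If you prefer to let the computer look for that corner, here is a sketch with simulated glucose values (totally made up) that uses scikit-learn's roc_curve to pick the cutoff with the highest Youden's index:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(42)

# Simulated fasting glucose values: the sick tend to have higher figures.
glucose_healthy = rng.normal(90, 10, 200)
glucose_sick = rng.normal(115, 15, 100)

y_true = np.concatenate([np.zeros(200), np.ones(100)])   # 0 = healthy, 1 = sick
scores = np.concatenate([glucose_healthy, glucose_sick])

fpr, tpr, thresholds = roc_curve(y_true, scores)
youden = tpr - fpr                    # Se + Sp - 1, since Sp = 1 - FPR
best = np.argmax(youden)

print(f"Best cutoff ~ {thresholds[best]:.1f} mg/dL "
      f"(Se = {tpr[best]:.2f}, Sp = {1 - fpr[best]:.2f}, J = {youden[best]:.2f})")
```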

A parameter of interest is the area under the curve (AUC), which represents the probability that the diagnostic test correctly classifies the patient being tested (see the attached figure). An ideal test with Se and Sp of 100% has an area under the curve of 1: it always gets it right. In clinical practice, a test whose ROC curve has an AUC > 0.9 is considered highly accurate, between 0.7-0.9 moderately accurate and between 0.5-0.7 of low accuracy. On the diagonal, the AUC equals 0.5, which indicates that it does not matter whether we do the test or toss a coin to decide whether the patient is sick or not. Values below 0.5 indicate that the test is even worse than chance, since it will systematically classify the sick as healthy and vice versa.

Curious, these ROC curves, aren't they? Their usefulness is not limited to assessing the goodness of diagnostic tests with quantitative results. ROC curves also serve to determine the goodness of fit of a logistic regression model to predict dichotomous outcomes, but that is another story…

Fine-tuning


We already know what Pubmed MeSH terms are and how to do an advanced search with them. We saw that the search method based on selecting descriptors can be a bit laborious, but it allowed us to choose very precisely not only the descriptor, but also some of its subheadings, whether or not to include the terms that hang below it in the hierarchy, etc.

Today we are going to see another advanced search method that is a little faster when it comes to building the search string and that allows us to combine several different searches. We will use the Pubmed advanced search form.

To get started, click on the "Advanced" link under the search box on the Pubmed home page. This takes us to the advanced search page, which you can see in the first figure. Let's take a look.

First there is a box with the text "Use the builder below to create your search", on which, initially, we cannot write. This is where the search string that Pubmed will use when we press the "Search" button is going to be built. The string can be edited by clicking on the "Edit" link below and to the left of the box, which allows us to remove text from or add text to the search string elaborated so far, in natural or controlled language, so that we can then click the "Search" button and repeat the search with the new string. There is also a link below and to the right of the box that says "Clear", with which we can erase its contents.

Below this text box we have the search string builder ("Builder"), with several rows of fields. In each row we will enter a different descriptor, and we can add or remove rows as needed with the "+" and "-" buttons to the right of each row.

Within each row there are several boxes. The first, which is not shown in the first row, is a drop-down with the Boolean search operator. By default it shows the AND operator, but we can change it if we want. Next is a drop-down where we can select where we want the descriptor to be searched. By default it shows "All Fields", but we can select only the title, only the author, only the last author and many other possibilities. In the center is the text box where we will enter the descriptor. On its right are the "+" and "-" buttons we have already mentioned. And finally, at the far right there is a link that says "Show index list". This is a help from Pubmed: if we click on it, it gives us a list of possible descriptors that fit what we have written in the text box.

As we enter terms in the boxes, creating the rows we need and selecting the Boolean operators of each row, the search string takes shape. When we are finished, there are two options we can take.

The most common will be to press the "Search" button and run the search. But there is another possibility, which is to click on the link "Add to history", whereupon the search is stored at the bottom of the screen, where it says "History". This will be very useful, since saved searches can be entered as a block in the descriptor field when building a new search and combined with other searches or with series of descriptors. Does this sound a little messy? Let's make it clear with an example.

Suppose I treat my infants with otitis media with amoxicillin, but I want to know whether other drugs, specifically cefaclor and cefuroxime, could improve the prognosis. Here there are two structured clinical questions. The first would say "Does cefaclor treatment improve the prognosis of otitis media in infants?" The second would say the same but changing cefaclor to cefuroxime. So there would be two different searches, one with the terms infants, otitis media, amoxicillin, cefaclor and prognosis, and another with the terms infants, otitis media, amoxicillin, cefuroxime and prognosis.

What we are going to do is plan three searches: a first one on articles about the prognosis of otitis media in infants; a second one about cefaclor; and a third one about cefuroxime. Finally, we will combine the first with the second and the first with the third in two different searches, using the Boolean AND.

Let us begin. We write otitis in the text box of the first search row and click on the link "Show index list". A huge drop-down appears with the list of related descriptors (when we see a word followed by a slash and another word, it means the latter is a subheading of the descriptor). If we look down the list, there is an option that says "otitis/media infants" which fits what we are interested in, so we select it. We can now close the list of descriptors by clicking the "Hide index list" link. Now in the second box we write prognosis (following the same method: write part of it in the box and select the term from the index list). We need a third row of boxes (if it is not there, press the "+" button). In this third row we write amoxicillin. Finally, we will exclude from the search the articles dealing with the combination of amoxicillin and clavulanic acid. We write clavulanic and click on "Show index list", which shows us the descriptor "clavulanic acid", which we select. Since we want to exclude these articles from the search, we change the Boolean operator of that row to NOT.

In the second screenshot you can see what we have done so far. You see that the terms are in quotes. That is because we have chosen the MeSH terms from the index list. If we write the text directly in the box it will appear without quotes, meaning that the search will be done in natural language (losing the precision of the controlled language of MeSH terms). Note also that the search string we have built so far has been written in the first text box of the form, and it says (((“otitis/media infants”) AND prognosis) AND amoxicillin) NOT “clavulanic acid”. If we wanted to, we have already said that we could modify it, but we will leave it as it is.

Now we could click "Search" and run the search, or directly click on the "Add to history" link. To see how the number of articles retrieved narrows down, click on "Search". I get a list with 98 results (the number may depend on when you do the search). Very well, now click on the link "Advanced" (at the top of the screen) to return to the advanced search form.

At the bottom of the screen we can see the first search saved, numbered as #1 (you can see it in the third figure).

What remains to be done is simpler. We write cefaclor in the text box and click the link "Add to history". We repeat the process with the term cefuroxime. You can see the result of these actions in the fourth screenshot. You see how Pubmed has saved all three searches in the search history. If we now want to combine them, we just have to click on the number of each one (a window will open for us to choose the Boolean operator we want, in this case always AND).

First we click on #1 and #2, selecting AND. You can see the result in the fifth screenshot. Notice that the search string has become somewhat more complicated: (((((otitis/media infants) AND prognosis) AND amoxicillin) NOT clavulanic acid)) AND cefaclor. As a curiosity I will tell you that, if we wrote this string directly in the simple search box, the result would be the same. That is the method used by those who totally master the jargon of this search engine. But we had better do it with the help of the advanced search form. We click on "Search" and we obtain seven results that should (or so we expect and hope) compare amoxicillin with cefaclor for the treatment of otitis media in infants.
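As an aside (this is not part of Pubmed's web form, just a sketch for the curious), the same search string can be sent programmatically to the NCBI E-utilities esearch service; the number of results will depend, as always, on when you run it:

```python
import requests

# The search string assembled with the form, reused as-is.
query = ('(((((otitis/media infants) AND prognosis) AND amoxicillin) '
         'NOT clavulanic acid)) AND cefaclor')

response = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": query, "retmode": "json", "retmax": 20},
)
result = response.json()["esearchresult"]

print("Articles found:", result["count"])
print("PMIDs:", result["idlist"])
```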

We click again on the link "Advanced" and in the form we see that there is one more search, #4, which is the combination of #1 and #2. You can already get an idea of how complex searches can become by combining them with each other, adding or subtracting according to the Boolean operator we choose. Well, we click on #1 and #3 and press "Search", finding five articles that should deal with the problem we are looking for.

We are coming to the end of my comments for today. I think it has been amply demonstrated that using MeSH terms and the advanced search yields more specific results than the simple search. The usual thing with the simple natural language search is to obtain endless lists of articles, most of them of no interest for our clinical question. But we have to keep one thing in mind. We have already mentioned that a number of people are dedicated to assigning the MeSH descriptors to the articles that enter the Medline database. Of course, between the time an article enters the database and the time it is indexed (the MeSH terms are assigned), some time passes, and during that time we cannot find it using MeSH terms. For this reason, it might not be a bad idea to do a natural language search after the advanced one and see whether there are any articles at the top of the list that might interest us and are not yet indexed.

Finally, let me mention that searches can be saved by downloading them to your disk (by clicking the link "Download history") or, much better, by creating an account in Pubmed by clicking on the link at the top right of the screen that says "Sign in to NCBI". This is free and allows us to save our searches from one session to another, which can be very useful with other tools such as Clinical Queries or search filters. But that is another story…

An open relationship


We already know about relationships between variables. Who can doubt that smoking kills or that TV dries up our brains? The issue is that we have to try to quantify these relationships in an objective way because, otherwise, there will always be someone who can doubt them. To do this, we will have to use some parameter that studies whether our variables change in a related way.

When the two variables are dichotomous, the solution is simple: we can use the odds ratio. Regarding TV and brain damage, we could use it to calculate whether it is really more likely to end up with a dried-up brain from watching too much TV (although I would not waste my time). But what happens if the two variables are continuous? Then we cannot use the odds ratio and we have to use other tools. Let's see an example.

Suppose I take the blood pressure of a sample of 300 people and plot the values of systolic and diastolic pressure, as shown in the first scatter plot. At a glance, you realize that something smells fishy. If you look carefully, high systolic pressures are usually associated with high diastolic values and, conversely, low systolic values go with low diastolic values. I would say that they vary in a similar way: higher values of one with higher values of the other, and vice versa. For a better view, look at the following two graphs.

The first graph shows the standardized pressure values (each value minus the mean). We see that most of the points lie in the lower left and upper right quadrants. You can see it even better in the second chart, in which I have omitted the values within ±10 mmHg of systolic and ±5 mmHg of diastolic around zero, which are the standardized means. Let's see if we can quantify this somehow.

Remember that the variance measures how spread out the values of a distribution are with respect to the mean. We subtract the mean from each value, square the result so that it is always positive (to prevent positive and negative differences from cancelling each other out), add all these squared differences and divide the sum by the sample size (actually, by the sample size minus one, and do not ask why, only mathematicians know). You already know that the square root of the variance is the standard deviation, the queen of the measures of dispersion.

variance = \frac{1}{n-1}\sum_{i=1}^{n}(x_{i}-\bar{x})^{2}

Well, with a pair of variables we can do something similar. For every pair, we calculate the difference of each variable from its mean and multiply these differences together (the equivalent of squaring the difference, as we did with the variance). Finally, we add all these products and divide the result by the sample size minus one, thus obtaining this version of the variance for pairs which is called, how could it be otherwise, the covariance.

covariance = \frac{1}{n-1}\sum_{i=1}^{n}(x_{i}-\bar{x})(y_{i}-\bar{y})

And what does the value of the covariance tell us? Well, not much, since it depends on the magnitudes of the variables, which can be very different depending on what we are measuring. To get around this little problem we resort to a very handy solution in these situations: standardizing.

Thus, we divide the differences from the mean by their standard deviations, obtaining the world-famous Pearson's linear correlation coefficient.

Pearson's\ correlation\ coefficient = \frac{1}{n-1}\sum_{i=1}^{n}\left(\frac{x_{i}-\bar{x}}{\sigma_{x}}\right)\left(\frac{y_{i}-\bar{y}}{\sigma_{y}}\right)
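If you want to play with it, this is a minimal Python sketch (with simulated pressures, not the real 300 people of the example) that computes the covariance and r from their definitions and checks them against numpy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated systolic and diastolic pressures with a positive linear relationship
# (invented figures standing in for the 300 real people of the example).
systolic = rng.normal(130, 15, 300)
diastolic = 0.5 * systolic + rng.normal(15, 5, 300)

# Covariance and Pearson's r straight from their definitions...
cov = np.sum((systolic - systolic.mean()) * (diastolic - diastolic.mean())) / (len(systolic) - 1)
r = cov / (systolic.std(ddof=1) * diastolic.std(ddof=1))

# ...checked against numpy's built-in function.
print(f"covariance = {cov:.1f}")
print(f"r (by hand) = {r:.3f}, r (numpy) = {np.corrcoef(systolic, diastolic)[0, 1]:.3f}")
```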

It is good to know that, actually, Pearson only made the initial development of this coefficient and that the real father of the creature was Francis Galton. The poor man spent his whole life trying to do something important because he was jealous of his much more famous cousin, one Charles Darwin, who, I believe, wrote something about species eating each other and how the secret of survival is to procreate as much as possible.

Pearson's correlation coefficient, r for friends, can take any value from -1 to 1. A value of zero means that the variables are uncorrelated, but do not confuse this with whether or not they are independent; as the title of this post says, Pearson's coefficient does not commit the variables to anything serious. Correlation and independence are different concepts that have nothing to do with each other. If you look at the two graphs of the example you will see that r equals zero in both. However, although the variables in the first one are independent, this is not true of the second, which represents the function y = |x|.

If r is greater than zero, the correlation is positive, so the two variables vary in the same direction: when one increases, so does the other and, conversely, when one decreases, so does the other. This positive correlation is said to be perfect when r equals 1. On the other hand, when r is negative, the variables vary in opposite directions: when one increases, the other decreases, and vice versa. Again, the negative correlation is perfect when r equals -1.

It is crucial to understand that correlation does not always mean causality. As Stephen J. Gould said in his book "The Mismeasure of Man", taking one for the other is one of the two or three most serious and frequent errors of human reasoning. And it must be true because, even though I searched, I have not found any cousin he wanted to outshine, which makes me think he said it because he was convinced. So now you know: even though where there is causality there is correlation, the opposite is not always true.

Another mistake we can make is to use this coefficient without a series of preflight checks. The first is that the relationship between the two variables must be linear. This is easy to check by plotting the points and seeing that they do not look like a parabola, a hyperbola or any other curved shape. The second is that at least one of the variables should follow a normal frequency distribution. For this we can use statistical tests such as the Kolmogorov-Smirnov or Shapiro-Wilk tests, but it is often enough to plot histograms with frequency curves and see whether they fit the normal. In our case, the diastolic pressure may fit a normal curve, but I would not hold my breath for the systolic. The shape of the cloud of points in the scatter plot gives us another clue: an elliptical or rugby-ball shape suggests that the variables probably follow a normal distribution. Finally, the third check is to ensure that the samples are random. In addition, we can only use r within the range of the data: if we extrapolated outside this range, we would make an error.
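As an example of the second check, a couple of lines with scipy are enough to run a Shapiro-Wilk test on one of the variables (simulated data again):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
diastolic = rng.normal(80, 8, 300)   # simulated sample

# Shapiro-Wilk test: a p-value above 0.05 gives no evidence against normality.
statistic, p_value = stats.shapiro(diastolic)
print(f"Shapiro-Wilk: W = {statistic:.3f}, p = {p_value:.3f}")
```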

A final warning: do not confuse correlation with regression. Correlation investigates the strength of the linear relationship between two continuous variables and is not useful for estimating the value of one variable based on the value of the other. Regression (linear, in this case), on the other hand, investigates the nature of the linear relationship between two continuous variables. Regression does serve to predict the value of one variable (the dependent) based on the other (the independent variable). This technique gives us the equation of the line that best fits the point cloud, with two coefficients that indicate the intercept with the vertical axis and the slope of the line.

And what if the variables are not normally distributed? Well, then we cannot use Pearson's coefficient. But do not despair: we have Spearman's coefficient and a battery of tests based on data ranks. But that is another story…

Another about coins


Few things in the world are immutable. Everything changes and everything is relative. Even the probability of an event can change over time. Let me explain.

We usually look at probability from the frequentist point of view. If we have a six-sided die, we assume that each side has one chance in six of appearing every time we throw it (provided it is a fair die and all its sides are equally likely).

If we have doubts about whether the die is fair, what we do is throw it a huge number of times until we can estimate how often each side appears, thus calculating its probability. But in both cases, once we have that figure, we keep it forever. Whatever happens, we will go on saying that the probability of getting a five is one in six.

But sometimes the probability can change and become different from the one set at first. An initial probability can change if we feed the system with new information, and it may depend on events that happen over time. This gives rise to the Bayesian statistical viewpoint, based largely on Bayes’ rule, in which the probability of an event can be updated over time. Let’s see an example.

Suppose, for instance, that we have three coins. But they are three very special coins, as only one of them is fair (heads and tails, HT). Of the other two, one has two heads (HH) and the other has two tails (TT). Now we put the three coins in a bag and draw one of them without looking. The question is: what is the probability of drawing the coin with two heads?

How easy, most of you will think! It’s a simple case of favorable events divided by possible events. As there is one favorable event (HH) and three possible ones (HH, TT and HT), the probability is one third. We have a 33% chance of drawing the coin with two heads.

But what happens if I tell you that I have tossed the coin and gotten heads? Would I still have the same one-third chance of holding the two-heads coin? The answer is obviously no. So what is now the probability of having drawn the two-heads coin? To calculate it we cannot use the quotient between favorable and possible events; we have to use Bayes’ rule. Let’s deduce it.

The probability that two independent events A and B both occur is equal to the probability of A times the probability of B. If the two events are dependent, the probability of A and B is equal to the probability of A times the probability of B given A:

P(A and B) = P(A) x P(B|A)

Taking it to the example of our coins, the probability of getting heads and having the two-heads coin can be expressed as

P(H and HH) = P(H) x P(HH|H) (probability of getting heads multiplied by the probability of having HH given that we have gotten heads).

But we can also express it in the opposite way:

P(H and HH) = P(HH) x P(H|HH) (probability of having the HH coin multiplied by the probability of getting heads with the HH coin).

So, we can equate the two expressions and obtain our Bayes’ rule:

P(H) x P(HH|H) = P(HH) x P(H|HH)

P(HH|H) = [P(HH) x P(H|HH)] / P(H)

We will now calculate our chances of having drawn the HH coin if we have gotten heads. We know that P(HH) = 1/3. P(H|HH) = 1: if you have a coin with two heads you have a 100% chance of getting heads. What is P(H)?

The chance of getting heads is equal to the probability of having drawn the TT coin times the probability of getting heads with the TT coin, plus the probability of having drawn the HH coin times the probability of getting heads with the HH coin, plus the probability of having drawn the fair coin times the probability of getting heads with it:

P(H) = (1/3 x 0) + (1/3 x 1/2) + (1/3 x 1) = 1/2

So, P(HH|H) = (1 x 1/3) / (1/2) = 2/3 ≈ 0.66

That means that if we have tossed the coin and gotten heads, the probability that we drew the two-heads coin from the bag rises from 33% to 66% (and the probability of having the two-tails coin drops from 33% to 0).
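If you prefer to let a computer do the arithmetic, here is a minimal sketch of this single update in Python. The coin labels and the bayes_update helper are just made up for illustration, not any standard implementation.

```python
# Prior probability of having drawn each coin from the bag.
prior = {"HH": 1/3, "HT": 1/3, "TT": 1/3}

# Probability of getting heads with each coin (the likelihood of what we observed).
p_heads = {"HH": 1.0, "HT": 0.5, "TT": 0.0}

def bayes_update(prior, likelihood):
    """Apply Bayes' rule: posterior is proportional to prior x likelihood, then normalize."""
    unnormalized = {coin: prior[coin] * likelihood[coin] for coin in prior}
    total = sum(unnormalized.values())  # this is P(H), by total probability
    return {coin: value / total for coin, value in unnormalized.items()}

posterior = bayes_update(prior, p_heads)
print(posterior)  # {'HH': 0.666..., 'HT': 0.333..., 'TT': 0.0}
```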

Do you see how the probability is updated? What if we toss the coin again and get heads? What would then be the probability of having drawn the two-heads coin? Let’s calculate it following the same reasoning:

P(HH|H) = [P(HH) x P(H|HH)] / P(H)

In this case, P(HH) is not equal to 1/3, but to 2/3. P(H|HH) is still 1. Finally, P(H) has also changed: we have already ruled out the possibility of having drawn the two-tails coin, so the chance of getting heads in the second toss is the probability of having HH times the probability of getting heads with HH, plus the probability of having the fair coin times the probability of getting heads with it:

P(H) = (2/3 x 1) + (1/3 x 1/2) = 5/6

So, P(HH|H) = (2/3 x 1) / (5/6) = 4/5 = 0.8

If we get heads with the second toss, the probability of having the two-heads coin rises from 66% to 80%. Logically, if we keep repeating the experiment, every time we get heads we will be surer of having the two-heads coin, but we will never have total certainty. Of course, the experiment ends the moment we get tails, at which point the probability of having HH drops automatically to zero (and the chance of having the fair coin goes to 100%).
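And the same update applied sequentially, toss after toss, in a self-contained toy sketch (the third figure is not in the text; it simply follows from repeating the rule):

```python
# Likelihood of heads and of tails for each coin.
likelihood = {
    "heads": {"HH": 1.0, "HT": 0.5, "TT": 0.0},
    "tails": {"HH": 0.0, "HT": 0.5, "TT": 1.0},
}

belief = {"HH": 1/3, "HT": 1/3, "TT": 1/3}  # the prior, before any toss

for toss in ["heads", "heads", "heads"]:
    # Bayes' rule: posterior is proportional to prior x likelihood of what we observed.
    unnormalized = {c: belief[c] * likelihood[toss][c] for c in belief}
    total = sum(unnormalized.values())
    belief = {c: v / total for c, v in unnormalized.items()}
    print(f"after {toss}: P(HH) = {belief['HH']:.3f}")

# after heads: P(HH) = 0.667
# after heads: P(HH) = 0.800
# after heads: P(HH) = 0.889
```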

As you can see, probability is not as immutable as it seems.

And here we stop playing with coins for today. Let’s just say that, even though it is less well known than the frequentist approach, Bayesian statistics is of great use. There are textbooks, dedicated software and specific methods for the analysis of results incorporating the information derived from the study. But that’s another story…

The cook and his cake


Knowing how to cook is a plus. How good you look when you have guests and you know your way around a kitchen! It takes you two or three hours to buy the ingredients, you spend a fortune on them, it takes another two or three hours of work in the kitchen… and, in the end, the great dish you were preparing may turn out to be a wreck.

And this may happen even to the best cooks. We can never be sure that a dish will turn out well, even if we have prepared it many times before. So you will understand my cousin’s problem.

As it happens, he is going to throw a party and the dessert has fallen to him. He knows how to make a pretty and tasty cake, but it only turns out really well half of the times he tries. So, understandably, he is very worried about making a fool of himself at the party. Of course, my cousin is very clever and has reasoned that, if he makes more than one cake, at least one of them should turn out well. But how many cakes does he have to make to get at least one good one?

The problem with this question is that it doesn’t have an exact answer. The more cakes we make, the more likely it is that one of them turns out well. But, of course, you can make two hundred cakes and have the bad luck that all of them turn out bad. Do not despair, though: although we cannot give a number with absolute certainty, we can measure the probability of succeeding with a given number of cakes. Let’s see it.

We are going to imagine the probability distribution, which is just the set of all the possible situations that may occur. For example, if my cousin makes one cake, it can turn out good (G) or bad (B), both with a probability of 0.5. You can see it represented in Figure A. He will have a 50% chance of success.

If he makes two cakes, he may get one good cake, two, or none. The possible combinations are GG, GB, BG and BB. The chance of getting exactly one good cake is 0.5 and the chance of getting two good ones is 0.25, so the probability of getting at least one good cake is 0.75, or 75% (3/4). It is represented in Figure B. We see that the options have improved, but there is still plenty of room for failure.

If he makes three cakes, the options are GGG, GGB, GBG, GBB, BGB, BGG, BBG and BBB. The situation keeps improving: we have an 87.5% (7/8) probability of getting at least one good cake. We represent it in Figure C.
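You don’t have to write out the combinations by hand; a little Python sketch with itertools (just one possible way to do it) can enumerate them and confirm the 50%, 75% and 87.5% figures:

```python
from itertools import product

# Enumerate every possible sequence of good (G) / bad (B) cakes and count
# how many contain at least one good one (each sequence is equally likely
# because p = 0.5).
for n in (1, 2, 3):
    outcomes = list(product("GB", repeat=n))
    at_least_one_good = sum("G" in outcome for outcome in outcomes)
    print(f"{n} cakes: {at_least_one_good}/{len(outcomes)} = "
          f"{at_least_one_good / len(outcomes):.3f}")

# 1 cakes: 1/2 = 0.500
# 2 cakes: 3/4 = 0.750
# 3 cakes: 7/8 = 0.875
```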

And what if he makes four cakes, or five, or…? The issue becomes a pain in the ass. It gets harder and harder to imagine all the possible combinations. What can we do? Well, we can think a little.

If we look at the graphs, the bars represent the probability of each of the possible outcomes. As the number of possibilities, and thus of vertical bars, increases, their distribution begins to take a bell shape, conforming to a well-known probability distribution: the binomial distribution.

People who know about this stuff call Bernoulli trials those experiments that have only two possible outcomes (they are dichotomous), like flipping a coin (heads or tails) or making our cakes (good or bad). The binomial distribution measures the number of successes (k) in a series of n Bernoulli trials, each with a certain probability of success (p).

In our case the probability of success is p = 0.5, and we can calculate the probability of k successes when repeating the experiment (baking cakes) n times using the following formula:

P(k \text{ successes in } n \text{ trials}) = \binom{n}{k} p^{k} (1-p)^{n-k}

If we replace p with 0.5 (the probability that a cake comes out good), we can play with different values of n to obtain the probability of getting at least one good cake (k ≥ 1).

If we make four cakes, the probability of having at least one good one is 93.75%, and if we make five it rises to 96.87%, a reasonable probability for what we are dealing with. I believe that if my cousin makes five cakes it will be very difficult for him to ruin his party.
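If you want to check these figures without cranking the formula by hand, here is a minimal sketch using scipy’s binom distribution (P(at least one good cake) is simply 1 minus P(zero good cakes)):

```python
from scipy.stats import binom

p = 0.5  # probability that a single cake turns out good

for n in (1, 2, 3, 4, 5):
    # P(at least one good cake) = 1 - P(zero good cakes)
    p_at_least_one = 1 - binom.pmf(0, n, p)
    print(f"{n} cakes: P(at least one good) = {p_at_least_one:.4f}")

# With 4 cakes the result is 0.9375 and with 5 cakes 0.9688.
```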

We could also turn the formula around and calculate the reverse: given a desired probability of getting at least one good cake, work out the number of attempts needed. You can also calculate all these things without the formula, using any of the probability calculators available online.
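Going the other way round, a small sketch of how to find the minimum number of cakes for a target probability (the 95% target is an arbitrary figure chosen just for illustration):

```python
import math

p = 0.5        # probability that one cake turns out good
target = 0.95  # desired probability of at least one good cake (arbitrary choice)

# We need 1 - (1 - p)^n >= target, i.e. n >= log(1 - target) / log(1 - p).
n = math.ceil(math.log(1 - target) / math.log(1 - p))
print(n)  # 5 cakes are enough to reach a 95% chance
```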

And this is the end of this tasty post. There are, as you can imagine, more types of probability distributions, both discrete, like the binomial, and continuous, like the normal distribution, the most famous of them all. But that’s another story…