The concept of conditional probability and the development of Bayes’ rule, the basic form of Bayes’ theorem, are described.
Sometimes statistical concepts are useful in other facets of life. For example, imagine that a burglary occurs at a bank and the thief has entered through a small hole in the wall. Now imagine further that five years ago a tiny thief, who was released from prison two months ago, committed a similar theft. Who do you think the police will interrogate first?
All of you will agree with me that the tiny thief will be the main suspect, but you are probably wondering what all this has to do with statistics. The answer is very simple: the police are using the concept of conditional probability when thinking about their little suspect. Let’s see what conditional probability is, and you’ll see that I am right.
Two events may be dependent or independent. They are independent when the probability of one occurring has nothing to do with the probability of the other. For instance, if you throw a die ten times, each of these throws will be independent of the preceding (and following) ones.
If we get a six in one throw, the probability of getting another six in the following throw won’t be any lower: it will still be one sixth. By the same reasoning, if we throw ten times and don’t get a six in any of them, the probability of getting one the next time we throw will still be one sixth, and not higher. The probability of getting a six twice in a row is the product of the probabilities of getting each one: 1/6 x 1/6 = 1/36.
Mathematically expressed, the probability that two independent events both occur is:
P(A and B) = P(A) x P(B)
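We can check the multiplication rule for independent events with a quick simulation. This is a minimal Python sketch (the simulation itself is my addition, not part of the original argument): we throw two dice many times and compare the observed frequency of a double six with the theoretical 1/36.

```python
import random

random.seed(42)
trials = 100_000
double_sixes = 0

for _ in range(trials):
    first = random.randint(1, 6)   # first throw
    second = random.randint(1, 6)  # second, independent throw
    if first == 6 and second == 6:
        double_sixes += 1

estimated = double_sixes / trials
theoretical = (1 / 6) * (1 / 6)  # P(A and B) = P(A) x P(B) = 1/36

print(f"Simulated:   {estimated:.4f}")
print(f"Theoretical: {theoretical:.4f}")
```

With 100,000 trials the simulated frequency lands very close to the theoretical 0.0278, as the multiplication rule predicts.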
In other cases the events may be dependent, which means that the occurrence of one of them changes the probability of occurrence of the other. We then speak of conditional probability. Let’s see an example.
The first example that comes to a physician’s mind may be that of the positive and negative predictive values of diagnostic tests. The probability that a patient has a positive test is not the same as the probability of being sick once he has tested positive. The latter, in turn, will be greater than if he gets a negative result. As you can see, the result of the test changes the probability of disease.
On the other hand, suppose we’re studying a population of children to see how many of them have anemia and malnutrition. Logically, the likelihood of malnutrition will be greater in anemic children. Once we determine that a child is anemic, the probability that he’s malnourished increases. The good thing about all this is that, if we know the different probabilities, we can calculate the probability of having anemia once we have found that the child is malnourished. Let’s see it mathematically.
The probability that two dependent events both occur can be expressed as follows:
P(A and B) = P(A) x P(B|A), where B|A is read as B given A.
We may also write the equation swapping A and B, as follows:
P(A and B) = P(B) x P(A|B)
and since the left sides of the two equations are the same, we can equate them and get another expression:
P(A) x P(B|A) = P(B) x P(A|B)

P(B|A) = [P(B) x P(A|B)] / P(A)
which is known as Bayes’ rule. Bayes was an eighteenth-century clergyman who was very fond of conditional events.
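The rule is simple enough to express as a one-line function. A minimal Python sketch (the function name and argument names are mine, chosen for illustration):

```python
def bayes_rule(p_b, p_a_given_b, p_a):
    """Return P(B|A) = P(B) x P(A|B) / P(A)."""
    return p_b * p_a_given_b / p_a

# Quick consistency check: if A and B are independent,
# then P(A|B) = P(A), and Bayes' rule returns P(B|A) = P(B).
p_a, p_b = 0.5, 0.25
p_b_given_a = bayes_rule(p_b, p_a, p_a)  # pass P(A|B) = P(A)
print(p_b_given_a)  # 0.25, i.e. exactly P(B)
```

The check confirms a reassuring property: for independent events, conditioning on A tells us nothing new about B.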
Let’s see an example.
To understand its utility, we’ll apply it to the case of the positive predictive value. Suppose a disease whose prevalence (the probability of occurring in the population) is 0.2 and a test to diagnose it with a sensitivity of 0.8. If we test a population and get 30% positive results (probability 0.3), what is the probability that an individual is sick once he has obtained a positive test result? Let’s solve the problem:
P(sick | positive) = [P(sick) x P(positive | sick)] / P(positive)

P(sick | positive) = (prevalence x sensitivity) / P(positive test)

P(sick | positive) = (0.2 x 0.8) / 0.3 = 0.53
In summary, if an individual tests positive he has a 53% chance of being sick.
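The same arithmetic can be written as a short Python sketch (the variable names are mine) to reproduce the result:

```python
prevalence = 0.2    # P(sick)
sensitivity = 0.8   # P(positive | sick)
p_positive = 0.3    # P(positive) observed in the population

# Bayes' rule: P(sick | positive) = P(sick) x P(positive | sick) / P(positive)
ppv = prevalence * sensitivity / p_positive

print(f"P(sick | positive) = {ppv:.2f}")  # 0.53
```

Swapping in different prevalences or sensitivities makes it easy to see how strongly the positive predictive value depends on how common the disease is in the population tested.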
And here we end with Bayes for today. Note that Bayes’ contribution to the science of statistics was much broader. In fact, this type of reasoning leads to another way of looking at statistics, based on updating probabilities as events occur, in contrast with the classical frequentist approach we use most of the time. But that’s another story…