All of you will know the Chinese tale about the poor lone grain of rice that falls to the ground and no one hears it. Of course, if instead of a single grain it's a whole sack of rice that falls, that's another matter. There are many examples showing that unity creates strength. A lone red ant is harmless, unless it bites you in some soft and noble zone, which is usually among the most sensitive. But what about a swarm of millions of red ants? That scares the crap out of you, because if they all come at you together there's little you can do to stop them. Yes, the sum of many "fews" makes a "lot".
And the same is true of statistics. With the aid of a relatively small sample of well-chosen voters, we can estimate who will win an election in which millions of people vote. So imagine what we could do with many such samples. Surely the estimate would be more reliable and more generalizable.
Well, this is precisely one of the purposes of meta-analysis, which uses statistical techniques to produce a quantitative synthesis of the results of a series of studies that try to answer the same question but do not obtain exactly the same result.
We know we must check for heterogeneity among the studies before combining them because, otherwise, it would make little sense to do so and the results we would get would be neither valid nor generalizable. For this purpose there are a number of methods, both numerical and graphical, to verify that we have the homogeneity we need.
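To make the numerical side of that check concrete, here is a minimal sketch of two of the usual statistics, Cochran's Q and I², computed by hand. The effect sizes (log odds ratios) and within-study variances are invented numbers, purely for illustration:

```python
import math

# Hypothetical effect sizes (log odds ratios) and within-study variances
# from five primary studies -- invented numbers for illustration only.
effects = [0.10, 0.80, 0.25, 0.95, 0.40]
variances = [0.04, 0.09, 0.05, 0.12, 0.06]

weights = [1 / v for v in variances]            # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted sum of squared deviations from the pooled effect
Q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

# I^2: percentage of total variability due to heterogeneity rather than chance
I2 = max(0.0, (Q - df) / Q) * 100

print(f"Q = {Q:.2f} on {df} df, I^2 = {I2:.1f}%")
```

With these made-up numbers, I² comes out around 40%, a moderate degree of heterogeneity: not zero, but not necessarily enough to forbid combining the studies.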
The next step is to analyze the effect size estimates of the studies, weighting each one according to its contribution to the pooled result. The most common approach is to weight the effect size estimates by the inverse of their variance and then run the analysis to obtain an average effect. There are various ways to do this, but the most commonly used are the fixed effects model and the random effects model. The two models differ in their assumptions about the original population from which the primary studies come.
The fixed effects model assumes that there is no heterogeneity and that all studies estimate the same effect size in the same population. It therefore assumes that the variability observed among the individual studies is due solely to the error that occurs when performing random sampling in each study. This error is measured by estimating the intra-study variance, assuming that the differences among the effect size estimates are due only to the use of different samples of subjects.
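A sketch of fixed-effects pooling, using the same kind of invented data (log odds ratios and within-study variances), shows how short the recipe really is: weight by the inverse of each study's own variance, average, and build a confidence interval from the pooled variance.

```python
import math

# Hypothetical log odds ratios and within-study variances (invented numbers).
effects = [0.10, 0.80, 0.25, 0.95, 0.40]
variances = [0.04, 0.09, 0.05, 0.12, 0.06]

# Fixed effects: weight each study by the inverse of its within-study variance
weights = [1 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Variance of the pooled estimate is the inverse of the summed weights
se = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)    # 95% confidence interval

print(f"pooled effect = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Note how the only source of uncertainty here is the within-study variance: the model has nowhere to put any variability among the studies themselves.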
In the random effects model, on the other hand, it is assumed that the effect size follows a normal distribution in the population, so each study estimates a different effect size. Therefore, in addition to the intra-study variance due to random sampling, this model also includes the variability among studies, which represents the deviation of each study from the average effect size. These two errors are mutually independent, and both contribute to the variance of the estimates.
In summary, the fixed effects model incorporates only one error term for the variability within each study, while the random effects model adds a further error term for the variability among studies.
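That extra error term can be sketched with one common way of estimating it, the DerSimonian-Laird estimate of the between-study variance (usually called tau²), which is simply added to each study's own variance before weighting. The numbers are again invented for illustration:

```python
import math

# Hypothetical log odds ratios and within-study variances (invented numbers).
effects = [0.10, 0.80, 0.25, 0.95, 0.40]
variances = [0.04, 0.09, 0.05, 0.12, 0.06]

w = [1 / v for v in variances]                  # fixed-effect weights
pooled_fe = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
se_fe = math.sqrt(1 / sum(w))

# DerSimonian-Laird estimate of tau^2, the between-study variance
Q = sum(wi * (e - pooled_fe) ** 2 for wi, e in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / c)

# Random effects: add tau^2 to each study's own variance before weighting
w_re = [1 / (v + tau2) for v in variances]
pooled_re = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"tau^2 = {tau2:.3f}")
print(f"fixed effects:  pooled = {pooled_fe:.3f}, SE = {se_fe:.3f}")
print(f"random effects: pooled = {pooled_re:.3f}, SE = {se_re:.3f}")
```

Because every weight now carries the extra tau² term, the random-effects standard error comes out larger than the fixed-effects one, which is exactly the behavior described above: more variability in the model, wider intervals out of it.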
You can see I have not written a single formula. Actually, we don't need them, and they're quite unfriendly, full of Greek letters that no one understands. But don't worry. As always, statistical software such as the Cochrane Collaboration's RevMan lets you calculate the results easily, add and remove studies from the model, and switch between models as you wish.
The choice of model matters. If there is no heterogeneity, we can use the fixed effects model. But if we find that our studies are heterogeneous, though not so much as to advise against combining them, it is preferable to use the random effects model.
Another aspect to keep in mind is the applicability, or external validity, of the meta-analysis result. If we use the fixed effects model, it will not be safe to generalize the results to populations different from those of the included studies. This does not happen with the random effects model, whose external validity is greater because it takes into account the different populations of the different studies.
In any case, we'll come up with an average effect measure along with its confidence interval. The effect won't be statistically significant if the interval crosses the line of no effect, which, as we already know, is zero for mean differences and one for odds ratios and relative risks. In addition, the width of the interval tells us about the precision of the estimated effect in the population: the wider the interval, the less precise the estimate, and vice versa.
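As a small worked example, with an assumed pooled log odds ratio and standard error (invented values), checking significance amounts to back-transforming the interval to the odds ratio scale and seeing whether it crosses 1:

```python
import math

# Assumed pooled log odds ratio and its standard error (invented values).
pooled, se = 0.38, 0.17

lo, hi = pooled - 1.96 * se, pooled + 1.96 * se   # 95% CI on the log scale
or_lo, or_hi = math.exp(lo), math.exp(hi)         # back to the OR scale

# On the OR scale the line of no effect is 1: significant if the CI excludes it
significant = or_lo > 1 or or_hi < 1
print(f"OR = {math.exp(pooled):.2f}, 95% CI ({or_lo:.2f}, {or_hi:.2f}), "
      f"significant: {significant}")
```

Here the whole interval sits above 1, so the effect would be statistically significant; had the lower limit dipped below 1, it would not.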
If you think about it, you will understand why the random effects model is more conservative than the fixed effects model: since it incorporates more variability into its analysis, the confidence intervals it produces are wider and less precise. In some cases the estimate may be significant with the fixed effects model and not with the random effects model. But that should not drive the choice of model. We must always decide on the basis of our previous heterogeneity assessment and, if we have doubts, we can use both methods and compare the results.
And now, it only remains to present the results in a proper way. Meta-analysis results are usually represented with a specific chart called the forest plot. But that's another story…