Critical appraisal overview
Following the usual practice, and trying to save time and effort, we have surely asked our closest colleagues first, hoping they would solve the problem and spare us the need to deal with the dreaded PubMed (who said Google!?). As a last resort, we may have consulted a medical textbook in a desperate attempt to get answers, but not even the fattest book can free us from having to search a database every now and then.
Steps in Evidence-Based Medicine
And in order to do it well, we should follow the five steps of Evidence-Based Medicine: formulating our question in a structured way (first step), doing our bibliographic search (second step), and critically appraising the articles we find and consider relevant to the topic (third step), ending with the last two steps, which are to combine what we have found with our experience and the preferences of the patient (fourth step) and to evaluate how it influences our practice (fifth step).
So we roll up our sleeves, formulate our structured clinical question, and enter PubMed, Embase or TRIP looking for answers. Covered in a cold sweat, we face an initial figure of 15,234 results and finally get the desired article that we hope will enlighten our ignorance. But even if our search has been impeccable, are we really sure we have found what we need? Here begins the arduous task of critically appraising the article to assess its actual usefulness for solving our problem.
This step, the third of the five we have seen and perhaps the most feared of all, is indispensable within the methodological flow of Evidence-Based Medicine. And this is so because all that glitters is not gold: even articles published in prestigious journals by well-known authors may be of poor quality, contain methodological errors, have nothing to do with our problem, or present and analyze their results incorrectly, often in a suspiciously self-interested way.
And this is not true just because I say so: there are even people who think that the best place to send 90% of what is published is the trash can, regardless of whether the journal has an impact factor or the authors are more famous than Julio Iglesias (or his son Enrique, for that matter). Our poor excuse for our lack of knowledge about how to produce and publish scientific papers is that we are clinicians rather than researchers; and, of course, the same often goes for journal reviewers, who overlook the mistakes that we clinicians make.
Thus, it is easy to understand that critical appraisal is a fundamental step if we want to take full advantage of the scientific literature, especially in an era in which information abounds but the time available to evaluate it does not.
Critical appraisal overview
Before going into the systematics of critical appraisal, we will glance over the document and its abstract to see whether the article in question can meet our expectations. The first step must always be to assess whether the paper answers our question. This is usually the case if we have correctly built the structured clinical question and made a good search of the available evidence, but it is always wise to check that the type of population, study, intervention, etc. match what we are looking for.
Once we are convinced that the article is what we need, we will carry out the critical appraisal itself. Although the details depend on the type of study design, it always rests on three basic pillars: validity, clinical importance and applicability.
Appraising validity consists of checking the scientific rigor of the paper to find out how close to the truth it is. There are a number of criteria common to all studies, such as a correct design, an adequate population, the existence of intervention and control groups that are homogeneous at the start of the study, proper follow-up, and so on. This concept is also known as internal validity, so we may find it under that name.
The second pillar is clinical importance, which measures the magnitude of the effect found. Imagine that a new antihypertensive drug is better than the usual one with a p-value full of zeroes, but that it lowers blood pressure by an average of only 5 mmHg. No matter how many zeroes the p-value has (it is statistically significant, we cannot deny it), we have to admit that the clinical effect is rather ridiculous.
The last pillar is the clinical applicability, which consist in assessing whether the context, patients and intervention of the study are sufficiently similar to our environment as to generalize the results. The applicability is also known as external validity.
Not all scientific papers can be described favorably in these three aspects. It may happen that a valid study (internal validity) finds a significant effect that cannot be applied to our patients. And we must not forget that we are using just a working tool. Even the most suitable study must be appraise in terms of benefits, harms and costs, and patient preferences, the latter aspect one that we forget more often than it would be desirable.
For those with a fish memory, there are some templates by group CASP that are recommended to use as a guide to make critical reading without forgetting any important aspect. Logically, the specifics measures of association and impact and the requirements to meet internal validity criteria depend specifically on the type of the study design that we are dealing with. But that’s another story…