There is no doubt that the randomized clinical trial is the King of epidemiological designs when we want to show, for instance, the effectiveness of a treatment. The problem is that clinical trials are difficult and expensive to perform, so before embarking on a trial it is usual to carry out other preliminary studies.
These preliminary studies may be observational. With a cohort or a case-control study we can gather enough information about the effect of an intervention to justify the subsequent performance of a clinical trial.
However, observational studies are also expensive and complex, so we often resort to another solution: doing a clinical trial on a smaller scale to obtain evidence on whether or not to do a large-scale trial, whose results would be definitive. These preliminary studies are generally known as pilot studies, and they have a number of characteristics that should be taken into account.
For example, the aim of a pilot study is to provide some assurance that the effort of carrying out the final trial will yield something useful, so it seeks more to observe the nature of the intervention’s effect than to demonstrate its effectiveness.
Being relatively small studies, pilot studies often lack sufficient power to achieve statistical significance at the usual level of 0.05, so some authors recommend setting the value of alpha at 0.2. This alpha value is the chance we have of making a type I error, which is rejecting the null hypothesis of no effect when it is true or, in other words, accepting the existence of an effect that does not really exist.
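A quick calculation makes the point concrete. This is a minimal sketch (the effect size, sample size, and the use of a two-sample z-test approximation are my assumptions, not figures from the article) showing how relaxing alpha from 0.05 to 0.2 raises the power of a small pilot study:

```python
# Sketch: approximate power of a two-sided, two-sample z-test at
# different alpha levels. All numbers below are hypothetical.
from math import sqrt, erf

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def z_quantile(p, lo=-10.0, hi=10.0):
    """Inverse of phi by bisection (precise enough for a sketch)."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def power_two_sample(d, n_per_group, alpha):
    """Approximate power for standardized effect size d with
    n_per_group subjects per arm, two-sided test at level alpha."""
    z_crit = z_quantile(1.0 - alpha / 2.0)
    noncentrality = d * sqrt(n_per_group / 2.0)
    return 1.0 - phi(z_crit - noncentrality)

# Hypothetical pilot: 20 patients per arm, moderate effect (d = 0.5).
for alpha in (0.05, 0.2):
    print(f"alpha = {alpha}: power = {power_two_sample(0.5, 20, alpha):.2f}")
```

With these assumed numbers, power climbs from roughly 0.35 at alpha = 0.05 to roughly 0.62 at alpha = 0.2, which is the trade-off the recommendation is exploiting.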
But what is going on here? Don’t we mind having a 20% chance of being wrong? For any other trial the acceptable limit is 5%. Well, the truth is not that we don’t mind; it’s that the point of view in a pilot study is different from the one in a conventional clinical trial.
If we commit a type I error in a conventional clinical trial, we will accept a treatment as effective when it is not. It is easy to understand that this can have bad consequences and harm patients who, in the future, undergo the allegedly beneficial intervention. However, if we make a type I error in a pilot study, all that will happen is that we will spend time and money on a definitive trial that will finally prove that the treatment is not effective.
In a definitive clinical trial it is preferable not to take an ineffective or unsafe treatment for effective, while in a pilot study it is preferable to perform a bigger clinical trial with an ineffective treatment than not to test one that could be effective. This is why the threshold for type I error is increased to 0.2.
Better to use confidence intervals
Anyway, if we are interested in studying the direction of the intervention’s effect, it may be advisable to use confidence intervals instead of classical hypothesis testing with its p-values.
These confidence intervals have to be compared with the minimal clinically important difference, which must be defined a priori. If the interval does not include the null value and does include the minimal important difference, we will have arguments for conducting a large-scale trial to show the effect definitively. Suffice it to say that, just as we can increase the alpha value, we can use confidence intervals with levels below 95%.
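The decision rule above can be sketched in a few lines. All the numbers here are hypothetical (an assumed pilot result and an assumed minimal clinically important difference), and the interval is a simple normal approximation around a difference in means:

```python
# Sketch of the decision rule: compare a confidence interval for the
# treatment effect with a pre-specified minimal clinically important
# difference (MCID). All figures are hypothetical.

def ci_difference(mean_diff, se, z):
    """Confidence interval for a difference, given its standard error
    and the normal quantile z matching the chosen confidence level."""
    return (mean_diff - z * se, mean_diff + z * se)

def supports_large_trial(ci, mcid):
    """True when the interval excludes the null value (0)
    and includes the minimal clinically important difference."""
    lo, hi = ci
    return not (lo <= 0.0 <= hi) and lo <= mcid <= hi

# Hypothetical pilot result: observed difference 4 units, SE 1.8.
# z = 1.28 gives roughly an 80% two-sided interval (level below 95%,
# as the text allows for pilot studies).
ci = ci_difference(4.0, 1.8, 1.28)
print(ci)                                   # about (1.7, 6.3)
print(supports_large_trial(ci, mcid=3.0))   # True: argues for the big trial
```

Note that the interval is judged against the MCID, not just against zero: a result that excludes zero but sits entirely below the MCID would still not justify the large-scale trial.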
Another peculiarity of pilot studies is the choice of outcome variables. Considering that a pilot study seeks to test how the components of the trial will work together in the future trial, we can understand that sometimes it is impractical to use the final outcome variable and we have to use a surrogate variable, that is, one that provides an indirect measure of the effect when direct measurement is impractical or impossible. For example, if we are studying an antitumor treatment, the outcome variable may be five-year survival, but in a pilot study it may be more useful to use an indirect variable that indicates the decrease in tumor size. It will indicate the direction of the treatment’s effect without prolonging the pilot study for too long.
So, you can see that pilot studies should be interpreted taking into account their peculiarities. Moreover, they also help us to predict how the definitive trial will function, anticipating problems that could ruin an expensive and complex clinical trial. This is the case of missing data and losses to follow-up, which are usually larger in pilot studies than in conventional trials. Although they matter less here, losses in pilot studies should be evaluated with a view to preventing future losses in the final trial because, although there are many ways to manage losses and missing data, the best way is always to prevent their occurrence. But that’s another story…