E-value

The cursed roundabout

This post reviews how unmeasured confounding can distort associations in observational studies and presents parameters to quantify this effect. It explains how the E-value quantifies the minimum strength that an unmeasured confounder would need in order to fully explain an observed effect, that is, to make it compatible with the absence of association. Finally, it briefly discusses extensions of the E-value to different effect measures and its role as a complement to p-values in critical appraisal.
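For a risk ratio, the E-value has a closed form: E = RR + sqrt(RR × (RR − 1)) for RR ≥ 1 (protective effects are first inverted). A minimal sketch of that calculation, with a hypothetical helper name `e_value`:

```python
import math

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio: the minimum strength of
    association (on the risk-ratio scale) that an unmeasured confounder
    would need with both exposure and outcome to fully explain the effect."""
    if rr < 1:               # protective effect: invert the risk ratio first
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

# An observed risk ratio of 2.0 yields an E-value of about 3.41:
# only a confounder at least that strongly associated with both
# exposure and outcome could explain the association away.
print(round(e_value(2.0), 2))
```

The same formula applied to the confidence limit closest to the null gives the E-value for the confidence interval.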


Cooper’s bookshelf

Principal component analysis (PCA) is a statistical dimensionality reduction technique that transforms correlated variables into a set of uncorrelated, orthogonal components. Its purpose is to simplify complex data structures by concentrating the explained variance in the first few components and eliminating informational redundancy, typically computed through the singular value decomposition.
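The SVD route can be sketched in a few lines of NumPy: center the data, decompose, and read the components off the right singular vectors (the toy dataset below is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: three variables, the second strongly correlated with the first
x = rng.normal(size=200)
X = np.column_stack([x, 0.8 * x + 0.2 * rng.normal(size=200),
                     rng.normal(size=200)])

# Center the columns, then take the SVD; the rows of Vt are the components
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

explained_variance = s**2 / (len(X) - 1)
ratio = explained_variance / explained_variance.sum()
scores = Xc @ Vt.T           # data projected onto the orthogonal components
print(ratio)                 # variance concentrates in the first component
```

The projected scores are mutually uncorrelated, which is exactly the redundancy-elimination property the technique is built on.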


We’re definitely going extinct

The central limit theorem states that if we take a sufficiently large number of random samples from the same population and calculate the mean for each sample, the distribution of those means will tend to follow a normal distribution, regardless of the original distribution of the data. This allows for the safe application of many statistical analyses, such as estimating confidence intervals and hypothesis testing.
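The theorem is easy to check by simulation. A minimal sketch, sampling from a clearly non-normal population (an exponential distribution, chosen here as an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, n = 10_000, 50
# 10,000 samples of size 50 from a skewed exponential population (mean 1)
means = rng.exponential(scale=1.0, size=(n_samples, n)).mean(axis=1)

# The sample means cluster around the population mean, with spread
# close to the theoretical standard error sigma / sqrt(n) = 1 / sqrt(50)
print(means.mean(), means.std(ddof=1))
```

A histogram of `means` would look approximately normal even though the underlying exponential data are heavily skewed.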


The doctor who diagnosed vampires

The post analyzes the problem of class imbalance in biomedical models and how overall accuracy can become useless when the minority class is the clinically relevant one. It explains which evaluation metrics are most appropriate and outlines the main strategies to handle imbalance, such as oversampling (SMOTE, ADASYN), selective undersampling (Tomek links), and ensemble methods that stabilize performance in low-prevalence scenarios.
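The failure mode of overall accuracy is easy to demonstrate: a degenerate model that never predicts the minority class still scores high. A minimal sketch with simulated low-prevalence labels (the 2% prevalence is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# 1,000 cases with ~2% prevalence of the clinically relevant class
y_true = (rng.random(1000) < 0.02).astype(int)
y_pred = np.zeros(1000, dtype=int)   # a "model" that always predicts negative

accuracy = (y_true == y_pred).mean()
tp = ((y_true == 1) & (y_pred == 1)).sum()
recall = tp / max((y_true == 1).sum(), 1)   # sensitivity for the minority class

# Accuracy looks excellent, yet recall is zero:
# the minority class is never detected.
print(f"accuracy={accuracy:.2f}, recall={recall:.2f}")
```

This is why class-aware metrics (recall, precision, F1, balanced accuracy, AUPRC) are preferred before reaching for resampling strategies such as SMOTE or Tomek links.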


The art of stylish data filling

The multiple imputation by chained equations (MICE) technique is based on a predictive algorithm that iteratively imputes the missing data of a variable from the values present in the other variables of the dataset. For it to be valid, the probability that a value is missing must not depend on the unobserved value itself, but only on chance or on its relationship with the other observed variables (the missing-at-random assumption).
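The chained-equations cycle can be sketched with plain NumPy: fill gaps with column means, then repeatedly regress each incomplete variable on the rest and refill its gaps from the fitted model. The helper name `mice_sketch` is hypothetical, and this is a simplified single-imputation pass; real MICE draws imputations from predictive distributions and produces several completed datasets.

```python
import numpy as np

def mice_sketch(X, n_iter=10):
    """Simplified chained-equations pass: each variable with missing
    values is regressed (OLS) on the others, and its gaps are refilled,
    cycling through the columns until the values stabilise."""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])  # initial fill: column means
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            rows = miss[:, j]
            if not rows.any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])  # intercept + predictors
            beta, *_ = np.linalg.lstsq(A[~rows], X[~rows, j], rcond=None)
            X[rows, j] = A[rows] @ beta              # re-impute from the regression
    return X
```

On data where the incomplete variable is well predicted by the others, the imputed values land close to the truth, which is the intuition behind the full multiple-imputation procedure.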
