
Publication

Size matters! Assessing effect sizes in single-case experiments using randomization tests with applications for the treatment of chronic pain.

Book - Dissertation

This dissertation concerns the use of randomization tests (RTs) for the analysis of experimental data from single-case experimental designs (SCEDs). Data of this type often exhibit characteristics that violate the statistical assumptions made by parametric significance tests. For this reason, the RT has been proposed as a viable technique for single-case data analysis, as it requires neither distributional assumptions nor random sampling to produce valid statistical inferences. The RT produces a non-parametric p-value, which can be used to evaluate the statistical significance of a treatment effect. The practice of using p-values to test treatment effectiveness has been severely criticized because a p-value conveys nothing about the size of a treatment effect, and this practice is seen as one of the major contributing factors to the ongoing replicability crisis in psychological science. Current consensus in the scientific community dictates that the reporting of experimental results should, in addition to p-values, rely more on the so-called 'new statistics', which refer to practices such as effect size (ES) estimation, construction of confidence intervals (CIs), and meta-analysis. In light of these developments within the field of single-case data analysis, this dissertation focuses on the use of ES measures and CIs within the RT framework for the analysis and meta-analysis of SCEDs.

In Chapter 1, we propose to use ES measures as test statistics in the RT to determine their statistical significance, introduce the immediate treatment effect index (ITEI) as an ES measure for AB phase designs, and evaluate its statistical power in a simulation study. In addition, we demonstrate that the RT can produce valid statistical inferences for this design even when a substantial data trend is present.

Chapter 2 extends the proposal to use ES measures as test statistics in the RT to single-case alternation designs and proposes two single-case non-overlap ES measures for use in these types of designs. The Type I error rate and power of these ES measures are evaluated for three different alternation designs in a simulation study.

Chapter 3 presents a technique to construct non-parametric CIs for single-case ES measures based on randomization test inversion (RTI). We discuss the rationale behind this technique and illustrate it with worked examples for various types of phase designs and alternation designs.

Chapter 4 expands on the RTI technique developed in Chapter 3 and proposes an extension to the meta-analysis of multiple SCEDs by calculating confidence intervals for combined ESs (CICES).

In Chapter 5, we propose the use of a randomization test wrapper for multilevel models (MLMs) as an alternative meta-analytical technique to CICES. MLMs are widely used to meta-analyze data from SCEDs, but they also make parametric assumptions about the data that are often violated. We discuss how the MLM parameters can serve as test statistics in the RT, so that their statistical significance can be evaluated in a non-parametric way. Furthermore, we evaluate and compare the Type I error rate and power of this technique to those of traditional MLMs by means of a simulation study.

Finally, in Chapter 6 we apply the developed data-analytical methods (the use of ES measures as test statistics in the RT, RTI, CICES, and the randomization test wrapper) to a convenience sample of studies evaluating graded exposure therapy for the treatment of chronic pain.
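To make the central idea of using an ES measure as the test statistic in an RT concrete, the following minimal R sketch is illustrative only, not the dissertation's published code: the function name, the hypothetical data, and the choice of the absolute difference between phase means as the ES are all assumptions. It assumes an AB phase design in which the intervention start point was randomly selected from a set of admissible start points.

## Minimal illustrative sketch (not the dissertation's published code): a
## randomization test for an AB phase design in which the intervention start
## point was randomly selected from a set of admissible start points. The ES
## used as the test statistic is the absolute difference between the B-phase
## and A-phase means.
ab_randomization_test <- function(y, observed_start, admissible_starts) {
  # ES for a given hypothetical start point
  es <- function(start) abs(mean(y[start:length(y)]) - mean(y[1:(start - 1)]))
  observed_es <- es(observed_start)
  # Reference distribution: the ES recomputed under every admissible start point
  randomization_distribution <- sapply(admissible_starts, es)
  # Non-parametric p-value: proportion of the randomization distribution at
  # least as extreme as the observed ES (the observed start point is included)
  p_value <- mean(randomization_distribution >= observed_es)
  list(observed_es = observed_es, p_value = p_value)
}

## Hypothetical example: 25 measurements, the intervention allowed to start at
## any of occasions 6 to 21, and actually started at occasion 14
set.seed(1)
y <- c(rnorm(13, mean = 5), rnorm(12, mean = 8))
ab_randomization_test(y, observed_start = 14, admissible_starts = 6:21)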
We developed easy-to-use R code that enables single-case researchers to apply RTI, CICES, and the randomization test wrapper to their own data.
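The RTI technique of Chapter 3 can be sketched along the same lines. The grid-search implementation below is an assumption for illustration rather than the dissertation's published code: a candidate additive treatment effect delta0 is retained in the confidence interval whenever neither one-sided randomization test, applied to the data after subtracting delta0 from the B-phase observations, rejects it.

## Minimal illustrative sketch (assumed grid-search implementation, not the
## dissertation's published code) of randomization test inversion (RTI): a
## non-parametric confidence interval for an additive treatment effect in an
## AB phase design.
rti_confidence_interval <- function(y, observed_start, admissible_starts,
                                    grid, alpha = 0.05) {
  one_sided_p <- function(y_adj, upper) {
    es <- function(start) mean(y_adj[start:length(y_adj)]) - mean(y_adj[1:(start - 1)])
    dist <- sapply(admissible_starts, es)
    obs <- es(observed_start)
    if (upper) mean(dist >= obs) else mean(dist <= obs)
  }
  retained <- sapply(grid, function(delta0) {
    y_adj <- y
    b_phase <- observed_start:length(y)
    y_adj[b_phase] <- y_adj[b_phase] - delta0   # remove the hypothesized effect
    # keep delta0 if neither one-sided test rejects at level alpha / 2
    one_sided_p(y_adj, upper = TRUE) > alpha / 2 &&
      one_sided_p(y_adj, upper = FALSE) > alpha / 2
  })
  # The smallest attainable one-sided p-value is 1 / length(admissible_starts),
  # so with few admissible start points the interval can span the entire grid.
  range(grid[retained])
}

## Hypothetical example: a longer series (40 measurements) with 31 admissible
## start points, so that one-sided p-values as small as 1/31 are attainable
set.seed(1)
y <- c(rnorm(13, mean = 5), rnorm(27, mean = 8))
rti_confidence_interval(y, observed_start = 14, admissible_starts = 6:36,
                        grid = seq(-5, 10, by = 0.1), alpha = 0.10)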
Publication year: 2018