
Publication

The story of ‘the data’ : on validity of data and performativity of research participation in psychotherapy research

Book - Dissertation

Subtitle: Het verhaal achter ‘de data’ : over validiteit van data en performativiteit van onderzoeksparticipatie in psychotherapieonderzoek [The story behind ‘the data’: on validity of data and performativity of research participation in psychotherapy research]
This dissertation focuses on the validity of “the data” that are collected in psychotherapy research for the purpose of evidencing treatment efficacy. In the ‘Evidence Based Treatment’ (EBT) paradigm, researchers rely on the so-called ‘gold standard methodology’ to gather sound and trustworthy evidence, which increasingly influences the organization of mental health care worldwide (Kazdin & Sternberg, 2006). In the gold standard, data are collected through quantified self-report measures that assess the presence and severity of symptoms before and after treatment. When the pre-post difference is larger in a group that received the treatment of interest than in a group that received no treatment or an alternative treatment (Chambless & Ollendick, 2001), the treatment of interest is called effective. In this methodological procedure, researchers tend to assume that when the gold standard methodology is conducted properly, “the data” will speak for themselves. However, when evidence is based on data, that evidence is principally limited by those data: output depends on input. When input is flawed, output will be flawed, even if the very best methods of analysis are used. So what if “the data” yield validity issues despite (or because of) being collected with validated measures? When “the data” do not straightforwardly evidence treatment effects, will the subsequent steps in the analysis of these data be enough to ensure that the final evidence does evidence treatment efficacy? In this dissertation, the focus is turned to the validity of “the data” that patient-participants concretely provide by scoring their own experienced symptoms in self-report questionnaires, in function of evidencing treatment efficacy.
A series of empirical case studies was conducted to scrutinize how patient-participants in a randomized controlled psychotherapy study (‘The Ghent Psychotherapy Study’; Meganck et al., 2017) experienced the process of data collection, and how these experiences affected the data they provided. Each of the studied patient-participants experienced a substantial effect of the questionnaire administration on their complaints. This impacted the level (presence and severity) of complaints, but also changed how the complaints were understood in the first place, and how patient-participants perceived themselves. Thus, rather than neutrally measuring symptoms, questionnaire administration changed the experienced complaints ‘performatively’ (Cavanaugh, 2015), which turned measurement into a clinical intervention in its own right. Consequently, what is measured cannot straightforwardly be called ‘treatment efficacy’, as it may be entangled with, enabled by, or even obstructed by effects of measurement and research. The act of measurement itself can therefore pose a serious threat to the validity of “the data”. In this way, the empirical case studies showed that data can yield validity problems despite (or because of) the use of validated measures; the validity of a measure as such is no guarantee of the validity of data. Nonetheless, in gold standard research, the data are straightforwardly taken as input for analyses of general treatment efficacy. The question is what happens to those validity issues at the level of individual data as they pursue their journey towards becoming evidence. This dissertation argues that the validity issues are not sufficiently resolved in the methodological steps after data collection: when data are invalid at the start, these validity issues simply become part of the data set that forms the input for the analysis producing the final evidence.
This means that validity issues must be addressed at the level of individually provided data, as they will otherwise become inherent to “the data”. Put formally: valid data are a precondition for evidence in EBT. In conclusion, a sound evidence base requires scrutinizing the validity of data in function of the overall goal and utility of the research. To this end, it is important not to take “the data” as speaking for themselves, but to regard them as clinical narratives, framed in a specific format to be communicated between researcher and respondent in a research context. This emphasizes that the choice of a certain format determines what can be evidenced, so it is vital that these choices indeed allow for obtaining evidence that is useful and valid in serving the clinical goal of EBT.
Pages: 329 p.
ISBN: 9789090322261
Publication year: 2019