DOI

https://doi.org/10.25772/QJ5E-AQ86

Defense Date

2017

Document Type

Dissertation

Degree Name

Doctor of Philosophy

Department

Psychology

First Advisor

Bryce D. McLeod

Abstract

Brief, easy-to-use, psychometrically strong (i.e., pragmatic) instruments are needed to support implementation research. The current study assessed whether it was possible to develop a pragmatic observational treatment integrity instrument that reduces the time coders spend making treatment integrity ratings, while maintaining score validity, for therapists delivering two protocols of individual cognitive-behavioral treatment (ICBT) for youth anxiety in research and practice settings. The 12-item instrument was derived from four observational treatment integrity instruments with promising score reliability and validity that assess adherence, competence, differentiation, and alliance. A sample of 106 youths (M age = 10.12, SD = 1.81, ages 7-14; 42.50% female; 69.80% Caucasian) received one of three treatments to address anxiety: standard ICBT in a research setting (n = 51), or standard ICBT (n = 22), modular ICBT (n = 16), or usual care (UC; n = 17) in practice settings. Four coders independently coded five- and 15-minute segments sampled from four sessions from each client (N = 756 sessions). Ten percent of sessions were double-coded for reliability purposes. Reliability, sensitivity to change, construct validity, and predictive validity from the two segment lengths were compared to full-session treatment integrity scores independently archived in a study assessing the same clients. Across the five- and 15-minute segments, the instrument produced promising score reliability and convergent validity evidence for the adherence, competence, and alliance items (items designed to assess interventions prescribed in ICBT for youth anxiety; M ICCs = .62, SD = .17; M rs = .58, SD = .12) and poor score reliability and validity evidence for the differentiation items (items designed to assess interventions from other treatment domains; M ICCs = .21, SD = .28; M rs = .27, SD = .25). The study thus met its primary aim: to develop an instrument that can be coded in less than 20 minutes while maintaining evidence of score validity. Researchers interested in developing such instruments can use this study design as a roadmap. Future research should investigate whether these psychometric findings replicate across samples, why certain items (e.g., client-centered interventions) did not evidence score validity, and how this type of instrument can inform evidence-based treatment (EBT) training.
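
For readers who want to see how segment-level statistics like those reported above are typically computed, here is a minimal Python sketch. It assumes double-coded segment ratings arranged as a (segments x coders) array and uses simulated data; the function and variable names are illustrative, not drawn from the dissertation, and the study's actual analyses (e.g., the specific ICC model) may differ.

    import numpy as np
    from scipy.stats import pearsonr

    def icc_2_1(ratings):
        """ICC(2,1): two-way random effects, absolute agreement, single rater.

        ratings: (n_segments, k_coders) array of scores on one item.
        """
        n, k = ratings.shape
        grand = ratings.mean()
        row_means = ratings.mean(axis=1)   # per-segment means
        col_means = ratings.mean(axis=0)   # per-coder means

        # Two-way ANOVA mean squares
        msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between segments
        msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between coders
        sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
        mse = sse / ((n - 1) * (k - 1))

        # Shrout & Fleiss (1979) ICC(2,1)
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Simulated example: two coders double-code 10 segments on one item
    rng = np.random.default_rng(0)
    true_scores = rng.uniform(1, 7, size=10)
    double_coded = np.column_stack([
        true_scores + rng.normal(0, 0.5, size=10),  # coder A
        true_scores + rng.normal(0, 0.5, size=10),  # coder B
    ])
    print(f"ICC(2,1) = {icc_2_1(double_coded):.2f}")

    # Convergent validity: correlate segment scores with archived full-session scores
    full_session = true_scores + rng.normal(0, 0.7, size=10)
    r, p = pearsonr(double_coded.mean(axis=1), full_session)
    print(f"convergent r = {r:.2f} (p = {p:.3f})")

The ICC here indexes agreement between coders on the double-coded segments, while the Pearson correlation indexes how well brief-segment scores track the independently archived full-session scores.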

Rights

© Meghan Smith

Is Part Of

VCU University Archives

Is Part Of

VCU Theses and Dissertations

Date of Submission

6-10-2017
