Author ORCID Identifier

Defense Date


Document Type


Degree Name

Doctor of Philosophy



First Advisor

Lisa M. Abrams, Ph.D.


Colleges and schools of pharmacy (C/SOP) use direct measures of assessment to provide evidence of student learning, with multiple-choice questions (MCQs) being one of the most common formats used in health sciences education to assess students’ knowledge, skills, and abilities (Pate & Caldwell, 2014). This study examined the occurrence of item-writing flaws (IWFs) in the Clinical Therapeutics Module (CTM) sequence of courses at a college of pharmacy at an academic health center in the southeastern United States. The goals of the study were to: (1) identify the most common IWFs on examinations in the CTM sequence of courses, (2) determine what percentage of items included on the CTM examinations contain one or more IWFs, and (3) examine the relationship between the most frequently occurring IWFs and test item psychometric parameters, including item difficulty, item discrimination, and average item answer time.

A total of 1,373 test items from 34 locally developed summative examinations of the second- and third-year CTM sequence of courses during the 2017-2018 academic year comprised the item pool. A stratified random sample of 313 items was used to ensure proportionate representation from each course. Eight criteria from the Item-Writing Flaws Evaluation Instrument (IWFEI) were used to identify IWFs in each of the 313 items.

Spearman’s rho correlations were conducted to examine the strength and direction of the relationship between the most common IWFs and the psychometric indices (item difficulty, item discrimination, and average answer time) in order to determine the influence of these flaws on student performance.
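The correlation analysis described above can be sketched in plain Python. The flaw indicators and difficulty values below are purely illustrative placeholders, not the study's data; the code simply shows how a rank-based (Spearman) correlation between a binary flaw indicator and a psychometric index is computed.

```python
def rank(xs):
    """Return average ranks (1-based), assigning tied values their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks of x and y."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical items: 1 = item contains a given flaw, 0 = it does not,
# paired with each item's difficulty (proportion answering correctly).
has_flaw =   [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
difficulty = [0.55, 0.82, 0.48, 0.60, 0.91, 0.77, 0.52, 0.88, 0.70, 0.45]
print(f"rho = {spearman_rho(has_flaw, difficulty):.2f}")
```

In practice a statistics package (e.g. `scipy.stats.spearmanr`) would also report the p-value used to judge whether each correlation is significant.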

Findings of the current study suggest that item-writing flaws are common within the clinical therapeutics module examinations, with 37% of items having at least one item-writing flaw. Given the use of exam results for program accreditation, the results point to a clear need to examine and improve locally developed measures in pharmacy education programs to ensure the validity of inferences and decisions made on the basis of test scores. This study provides additional guidance for pharmacy educators to support needed improvements of multiple-choice question writing and test design.


© The Author

Is Part Of

VCU University Archives

Is Part Of

VCU Theses and Dissertations

Date of Submission