Article

A Practical Comparison of Decision Consistency Estimates

Authors
  • Amanda Wolkowitz
  • Russell Smith

Abstract

A decision consistency (DC) index is an estimate of the consistency of a classification decision on an exam. More specifically, DC estimates the percentage of examinees who would receive the same classification decision if they were to retake the same or a parallel form of the exam, with no memory of the first administration. This study compares three classical test theory DC estimates in the context of high-stakes pass/fail exams. The three methods compared are those developed by Livingston and Lewis (1995), Peng and Subkoviak (1980), and Wolkowitz (2021). This study compares the computationally and conceptually simpler DC methods proposed by Peng-Subkoviak and Wolkowitz to the more widely used and accepted, but more complex, method proposed by Livingston and Lewis. Based on a comparison across two simulated datasets and three operational datasets, the results suggest that the Livingston-Lewis and Wolkowitz methods produce relatively similar results for datasets with skewed distributions, and that all three methods produce reasonably similar results for normally distributed datasets. Following these results, this study provides guidelines for deciding which method to apply, as well as industry guidelines for acceptable DC values.
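To make the definition above concrete, the following is a minimal simulation sketch of decision consistency as the proportion of examinees receiving the same pass/fail decision on two parallel forms. This is illustrative only and does not implement any of the three methods compared in the study; the cut score, error standard deviation, and score distribution are hypothetical assumptions.

```python
import random

# Hypothetical parameters, for illustration only
random.seed(42)
N = 10_000          # simulated examinees
CUT = 70.0          # assumed pass/fail cut score (percent correct)
ERROR_SD = 5.0      # assumed measurement error per administration

def dc_two_forms(n=N, cut=CUT, error_sd=ERROR_SD):
    """Proportion of simulated examinees classified the same way
    (pass/pass or fail/fail) on two parallel forms."""
    consistent = 0
    for _ in range(n):
        true_score = random.gauss(72.0, 10.0)            # latent ability
        form_a = true_score + random.gauss(0, error_sd)  # first administration
        form_b = true_score + random.gauss(0, error_sd)  # parallel form
        if (form_a >= cut) == (form_b >= cut):
            consistent += 1
    return consistent / n

dc = dc_two_forms()
print(f"Simulated decision consistency: {dc:.3f}")
```

In practice, a second administration is rarely available, which is why the classical test theory estimators compared in this article infer DC from a single administration's score distribution and reliability.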

Keywords: decision consistency, reliability

How to Cite:

Wolkowitz, A., & Smith, R. (2024). "A Practical Comparison of Decision Consistency Estimates". Practical Assessment, Research, and Evaluation, 29(1): 6. doi: https://doi.org/10.7275/pare.2023


Published on
12 Mar 2024
Peer Reviewed