Article

Text Complexity Versus Task Complexity: Item Difficulty Modeling for Reading Items

Authors
  • Christina Schneider (Cambium Assessment, Inc)
  • Jing Chen (Cambium Assessment, Inc)
  • Jeremy Heneger (ACT)

Abstract

This study investigates item features to improve understanding of what makes reading comprehension items easy or difficult. In this item difficulty modeling (IDM) study, item and passage features were included as predictors representing text-task interactions and stimulus demands. The passage-level features included two common quantitative metrics of text complexity: the Lexile Framework® for Reading and Flesch-Kincaid. Passage word count, item type, Depth of Knowledge (DOK), and item-to-Range Achievement-Level Descriptor (RALD) match were held constant across conditions. Two IDM models were examined: one included passage-level text complexity features but not grade level; the other included grade level but not passage-level text complexity features. We found that quantitative metrics of text complexity added 3% to the IDM compared with when grade was substituted for those features. Text-task interactions, as represented by RALDs and DOK levels, were found to provide unique and significant information to the IDM model, as did item type and particular standard topics. Implications for RALD construction and additional research related to RALDs for reading are discussed.

Keywords: achievement level descriptors, reading comprehension, item difficulty modeling, range achievement level descriptors, test score interpretation and use

How to Cite:

Schneider, C., Chen, J., & Heneger, J. (2026) “Text Complexity Versus Task Complexity: Item Difficulty Modeling for Reading Items”, Practical Assessment, Research, and Evaluation 31(1): 5. doi: https://doi.org/10.7275/pare.2928


Published on 2026-02-03

Peer Reviewed