Article

Increasing the Generalizability of Open-Ended Survey Item Responses

Authors
  • Emily Diaz
  • John H. Hitchcock
  • Greg Norman

Abstract

This article demonstrates a procedure for applying weights to data gathered from open-ended survey items. Open-ended survey items can provide detailed insights into phenomena in ways evaluators may not have anticipated when conceptualizing evaluands and research questions. However, these item types produce text data, which complicates analysts’ ability to generalize related findings to a target population of interest. Such items also tend to increase respondent burden relative to closed-ended items because they require written responses instead of simply selecting from a list of options (e.g., using a Likert scale). Such increased burden can lead to higher rates of item non-response, which can further hinder evaluators' ability to generalize findings from a sample to a target population. These challenges may be partially addressed by applying survey weights to numerically coded open-ended responses; however, there is limited guidance on how to do so. This article therefore demonstrates how to efficiently code text data and apply survey weights to subsequent numerical codes. This demonstration is presented within the context of a study that employed stratified random sampling and experienced survey non-response. This article should help evaluators and researchers better generalize findings from numerically coded open-ended survey items to their target populations.
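The weighting idea summarized in the abstract can be illustrated with a minimal sketch. This is not the authors' code, and all stratum names, counts, and codes below are invented for illustration: under stratified random sampling, each respondent in stratum h receives a non-response-adjusted design weight w_h = N_h / r_h (population size over completed responses), and the weighted proportion of a numerically coded theme is then a weight-adjusted mean of the 0/1 codes.

```python
# Hypothetical sketch (not the authors' procedure as published): weighting
# numerically coded open-ended responses under stratified random sampling
# with non-response. All names and numbers are illustrative assumptions.

# Assumed stratum population sizes (N_h) and completed responses (r_h).
population = {"urban": 6000, "rural": 4000}
completes = {"urban": 150, "rural": 50}

# Non-response-adjusted design weight: w_h = N_h / r_h, so each stratum's
# respondents stand in for that stratum's full population.
weights = {h: population[h] / completes[h] for h in population}

# Open-ended responses coded numerically: (stratum, code) pairs, where
# code 1 = theme present in the text, 0 = absent (illustrative coding).
responses = (
    [("urban", 1)] * 90 + [("urban", 0)] * 60
    + [("rural", 1)] * 10 + [("rural", 0)] * 40
)

# Weighted estimate of the proportion endorsing the theme.
num = sum(weights[h] * code for h, code in responses)
den = sum(weights[h] for h, _ in responses)
weighted_prop = num / den

# Unweighted proportion, for comparison: over-represents the larger
# responding stratum and so can mislead about the target population.
unweighted_prop = sum(code for _, code in responses) / len(responses)
print(round(weighted_prop, 3), round(unweighted_prop, 3))  # → 0.44 0.5
```

In this toy example the urban stratum responded at a much higher rate, so the unweighted theme prevalence (0.50) overstates the population estimate once design weights are applied (0.44); this is the generalizability gap the article's procedure addresses.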

Keywords: Survey; Qualitative Coding; Generalizability; Stratified Random Sampling

How to Cite:

Diaz, E., Hitchcock, J. H., & Norman, G. (2026). "Increasing the Generalizability of Open-Ended Survey Item Responses". Practical Assessment, Research, and Evaluation, 31(1). doi: https://doi.org/10.7275/pare.3068


Published on
2026-04-22

Peer Reviewed