Article

Examining Design and Inter-Rater Reliability of a Rubric Measuring Research Quality across Multiple Disciplines

Authors
  • Marilee J. Bresciani
  • Megan Oakleaf
  • Fred Kolkhorst
  • Camille Nebeker
  • Jessica Barlow
  • Kristin Duncan
  • Jessica Hickmott

Abstract

The paper presents a rubric designed to evaluate the quality of research projects. The rubric was applied in a competition across a variety of disciplines during a two-day research symposium at one institution in the southwestern United States. It was collaboratively designed by a faculty committee at the institution and was administered to 204 undergraduate, master's, and doctoral oral presentations by approximately 167 different evaluators. No training or norming on the rubric was provided to 147 of the evaluators prior to the competition. The findings of the inter-rater reliability analysis reveal substantial agreement among the judges, contradicting literature indicating that formal norming must occur before substantial levels of inter-rater reliability can be achieved. The rubric is presented along with the methodology used in its design and evaluation, in the hope that others will find it a useful tool for evaluating documents and for teaching research methods.
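
As one illustration of how agreement among multiple judges might be quantified, the sketch below computes Fleiss' kappa for rubric scores, assuming each presentation is rated by the same number of judges. The abstract does not name the specific agreement statistic the authors used, and the ratings and category labels below are hypothetical. On the Landis and Koch (1977) scale, kappa values between 0.61 and 0.80 are conventionally labeled "substantial" agreement.

    # Minimal sketch: Fleiss' kappa for multi-judge rubric scores.
    # Illustrative only; the article does not specify its agreement statistic.
    import numpy as np

    def fleiss_kappa(counts: np.ndarray) -> float:
        """counts[i, j] = number of judges who placed presentation i in score category j.
        Every row must sum to the same number of judges."""
        counts = np.asarray(counts, dtype=float)
        n_subjects, _ = counts.shape
        n_raters = counts[0].sum()

        # Observed agreement per presentation, averaged across presentations.
        p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
        p_bar = p_i.mean()

        # Chance agreement from the marginal category proportions.
        p_j = counts.sum(axis=0) / (n_subjects * n_raters)
        p_e = np.square(p_j).sum()

        return (p_bar - p_e) / (1 - p_e)

    # Hypothetical data: 5 presentations, 3 judges each, 4 rubric score levels.
    ratings = np.array([
        [0, 0, 0, 3],
        [0, 0, 3, 0],
        [0, 0, 1, 2],
        [0, 3, 0, 0],
        [0, 0, 0, 3],
    ])
    print(f"Fleiss' kappa = {fleiss_kappa(ratings):.2f}")  # ~0.78, i.e., "substantial" on the Landis & Koch scale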

Keywords: Educational Research, Research Methodology, Teaching Methods

How to Cite:

Bresciani, M. J., Oakleaf, M., Kolkhorst, F., Nebeker, C., Barlow, J., Duncan, K., & Hickmott, J. (2009). “Examining Design and Inter-Rater Reliability of a Rubric Measuring Research Quality across Multiple Disciplines”, Practical Assessment, Research, and Evaluation 14(1): 12. doi: https://doi.org/10.7275/1w3h-7k62
