Do more online instructional ratings lead to better prediction of instructor quality?
- Shane Sanders
- Bhavneet Walia
- Joel Potter
- Kenneth W. Linna
Abstract
Online instructional ratings are taken by many with a grain of salt. This study analyzes the ability of these ratings to estimate the official (university-administered) instructional ratings of the same university instructors. Given self-selection among raters, we further test whether a larger number of online ratings of an instructor leads to better prediction of official ratings in terms of both R-squared value and root mean squared error. Lastly, we test and correct for heteroskedastic error terms in the regression analysis to allow for the first robust estimations on the topic. Despite having a starkly different distribution of values, online ratings explain much of the variation in official ratings. This conclusion strengthens, and root mean squared error typically falls, as one considers regression subsets over which instructors have a larger number of online ratings. Though (public) online ratings do not mimic the results of (semi-private) official ratings, they provide a reliable source of information for predicting official ratings, and there is strong evidence that this reliability increases with the number of online ratings.
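As a concrete illustration of the analysis the abstract describes, the sketch below regresses official ratings on online ratings with heteroskedasticity-robust standard errors and reports R-squared and root mean squared error over subsets of instructors with at least a given number of online ratings. This is a minimal sketch under stated assumptions, not the authors' code: the column names (official, online, n_online), the HC1 covariance choice, the data file name, and the subset thresholds are all illustrative.

```python
# Minimal sketch of the abstract's analysis (hypothetical column names):
# OLS of official ratings on online ratings, White-robust (HC1) standard
# errors, with R-squared and RMSE computed over instructor subsets that
# have at least `min_ratings` online ratings.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_subset(df: pd.DataFrame, min_ratings: int):
    """Fit official ~ online on instructors with >= min_ratings online
    ratings; return (R-squared, RMSE, subset size)."""
    sub = df[df["n_online"] >= min_ratings]
    X = sm.add_constant(sub["online"])          # intercept + online rating
    model = sm.OLS(sub["official"], X).fit(cov_type="HC1")  # robust SEs
    rmse = np.sqrt(np.mean(model.resid ** 2))   # in-sample fit error
    return model.rsquared, rmse, len(sub)

# Example usage (hypothetical data file and thresholds):
# df = pd.read_csv("ratings.csv")
# for k in (1, 5, 10, 20):
#     r2, rmse, n = fit_subset(df, k)
#     print(f"min ratings {k:>2}: R^2 = {r2:.3f}, RMSE = {rmse:.3f}, n = {n}")
```

If the paper's pattern holds, R-squared should rise and RMSE should typically fall as the minimum-rating threshold increases.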
Keywords: Teacher Evaluation
How to Cite:
Sanders, S., Walia, B., Potter, J., & Linna, K. W. (2011). "Do more online instructional ratings lead to better prediction of instructor quality?", Practical Assessment, Research, and Evaluation 16(1): 2. doi: https://doi.org/10.7275/nhnn-1n13