Indirect Measures In Evaluation: On Not Knowing What We Don’t Know
- Linda Heath
- Adam DeHoek
- Sara House Locatelli
Abstract
Evaluators frequently make use of indirect measures of participant learning or skill mastery, with participants either asked whether they have learned material or mastered a skill, or asked to indicate how confident they are that they know the material or can perform the task in question. Unfortunately, a large body of research in social psychology has demonstrated that people are very poor judges of their own levels of accomplishment. In this paper, we review the social psychological dynamics that contribute to biased self-assessments, including the self-serving bias (e.g., Miller & Ross, 1975), the better-than-average effect (e.g., Alicke et al., 1995; Brown, 1986), and the overconfidence phenomenon (Kahneman & Tversky, 1979). Methods of correcting these biased reports are generally ineffective, as illustrated by Kruger and Dunning's (1999) finding that people lowest in mastery generally lack the metacognition even to recognize what mastery looks like. As such people learn the skill in question, they often realize the extent of their ignorance and lower their self-reported knowledge and skill levels. Although indirect measures of participant learning or mastery might tell us something about participants' confidence, they probably tell us little about actual ability or knowledge. Implications for applied research are discussed.
Keywords: Evaluation Methods
How to Cite:
Heath, L., DeHoek, A., & Locatelli, S. H. (2012). "Indirect Measures In Evaluation: On Not Knowing What We Don't Know", Practical Assessment, Research, and Evaluation, 17(1), 6. doi: https://doi.org/10.7275/00h8-7p49