Rater calibration when observational assessment occurs at large scale: Degree of calibration and characteristics of raters associated with calibration
Authors: Anne H. Cash, Bridget K. Hamre, Robert C. Pianta, Sonya S. Myers
Institutions:
1. Curry School of Education, University of Virginia, 350 Old Ivy Way, Suite 100, Charlottesville, VA 22903, United States
2. Curry School of Education, University of Virginia, PO Box 400260, Charlottesville, VA 22904-4260, United States
3. Center for Research on Rural Families & Communities, Vanderbilt University, Peabody #90, 230 Appleton Place, Nashville, TN 37203-5721, United States
Abstract: Observational assessment is used to study program and teacher effectiveness across large numbers of classrooms, but training a workforce of raters who can assign reliable scores when observations are used in large-scale contexts can be challenging and expensive. Limited data are available on the feasibility of training large numbers of raters to calibrate to an observation tool, or on the rater characteristics associated with calibration. This study reports on the success of rater calibration across 2093 raters trained by the Office of Head Start (OHS) in 2008–2009 on the Classroom Assessment Scoring System (CLASS) and, for a subsample of 704 raters, on the characteristics that predicted their calibration. Findings indicate that it is possible to train large numbers of raters to calibrate to an observation tool, and that rater beliefs about teachers and children predicted the degree of calibration. Implications for large-scale observational assessments are discussed.
Keywords: Observation; Rater calibration; Teacher quality
This article is indexed in ScienceDirect and other databases.
|