Title: | Evaluation of construct-irrelevant variance yielded by machine and human scoring of a science teacher PCK constructed-response assessment |
Affiliation: | 1. Department of Mathematics and Science Education, University of Georgia, United States; 2. CREATE for STEM Institute, Michigan State University, United States; 3. Department of Biochemistry and Molecular Biology, Michigan State University, United States; 4. BSCS Science Learning, United States |
Abstract: | Machine learning has frequently been employed to automatically score constructed-response assessments. However, there is little evidence of how this predictive scoring approach might be compromised by construct-irrelevant variance (CIV), a threat to test validity. In this study, we evaluated machine scores and human scores with regard to potential CIV. We developed two assessment tasks targeting science teacher pedagogical content knowledge (PCK); each task contains three video-based constructed-response questions. 187 in-service science teachers watched videos, each presenting a given classroom teaching scenario, and then responded to the constructed-response items. Three human experts rated the responses, and the human-consensus scores were used to develop machine learning algorithms to predict ratings of the responses. Treating the machine as a fourth independent rater alongside the three human raters, we employed the many-facet Rasch measurement model to examine CIV from three sources: variability of scenarios, rater severity, and rater sensitivity to the scenarios. Results indicate that the variability of scenarios impacts teachers' performance, but the impact depends significantly on the construct of interest. For each assessment task, the machine was the most severe rater compared with the three human raters; however, the machine was less sensitive than the human raters to the task scenarios. That is, machine scoring was more consistent and stable across scenarios within each of the two tasks. |
Keywords: | Construct-irrelevant variance; Machine learning; Pedagogical content knowledge; Science teacher; Constructed-response assessment |
Indexed in ScienceDirect and other databases.
|