The Generalizability of Scores for a Performance Assessment Scored with a Computer-Automated Scoring System
Authors: Brian E. Clauser, Polina Harik, Stephen G. Clyman
Institution: National Board of Medical Examiners
Abstract: When performance assessments are delivered and scored by computer, the costs of scoring may be substantially lower than those of scoring the same assessment based on expert review of the individual performances. Computerized scoring algorithms also ensure that the scoring rules are implemented precisely and uniformly. Such computerized algorithms represent an effort to encode the scoring policies of experts. This raises the question of whether a different group of experts would have produced a meaningfully different algorithm. The research reported in this paper uses generalizability theory to assess the impact of using independent, randomly equivalent groups of experts to develop the scoring algorithms for a set of computer-simulation tasks designed to measure physicians' patient management skills. The results suggest that the impact of this "expert group" effect may be significant but that it can be controlled with appropriate test development strategies. The appendix presents a multivariate generalizability analysis examining the stability of the assessed proficiency across scores representing the scoring policies of different groups of experts.
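The generalizability analysis described in the abstract decomposes score variance into components attributable to examinees, to the expert group whose policies were encoded, and to their interaction. A minimal sketch of such a decomposition for a hypothetical persons × expert-group design (invented illustrative scores, not the paper's data; the study's actual design also includes tasks and other facets):

```python
import numpy as np

# Hypothetical scores: rows = examinees, columns = the score each
# expert group's encoded scoring algorithm would assign.
scores = np.array([
    [7.0, 6.8, 7.5],
    [5.2, 5.0, 5.9],
    [8.0, 7.7, 8.6],
    [6.1, 5.9, 6.6],
])

n_p, n_g = scores.shape
grand = scores.mean()
person_means = scores.mean(axis=1)
group_means = scores.mean(axis=0)

# ANOVA sums of squares for a fully crossed p x g random-effects design
ss_p = n_g * ((person_means - grand) ** 2).sum()
ss_g = n_p * ((group_means - grand) ** 2).sum()
ss_pg = ((scores - person_means[:, None]
          - group_means[None, :] + grand) ** 2).sum()

ms_p = ss_p / (n_p - 1)
ms_g = ss_g / (n_g - 1)
ms_pg = ss_pg / ((n_p - 1) * (n_g - 1))

# Expected-mean-square solutions for the variance components
var_pg = ms_pg                    # p x g interaction confounded with error
var_p = (ms_p - ms_pg) / n_g      # examinees (universe-score variance)
var_g = (ms_g - ms_pg) / n_p      # expert-group effect

# Generalizability coefficient for scores from a single expert group
g_coef = var_p / (var_p + var_pg)
print(var_p, var_g, g_coef)
```

A small expert-group variance component relative to examinee variance would indicate that scores generalize well across the scoring policies of different expert groups.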