Gauging Item Alignment Through Online Systems While Controlling for Rater Effects
Authors: Daniel Anderson, Shawn Irvin, Julie Alonzo, Gerald A. Tindal
Institution: University of Oregon
Abstract: The alignment of test items to content standards is critical to the validity of decisions made from standards‐based tests. Generally, alignment is determined based on judgments made by a panel of content experts, with ratings either averaged or settled by consensus through discussion. When the pool of items to be reviewed is large, or the content experts are broadly distributed geographically, panel methods present significant challenges. This article illustrates the use of an online methodology for gauging item alignment that does not require raters to convene in person, reduces the overall cost of the study, increases time flexibility, and offers an efficient means for reviewing large item banks. Latent trait methods are applied to the data to control for between‐rater severity, evaluate intrarater consistency, and provide item‐level diagnostic statistics. Use of this methodology is illustrated with a large pool (1,345) of interim‐formative mathematics test items. Implications for the field and limitations of this approach are discussed.
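The latent trait approach the abstract describes is commonly operationalized with a many-facet Rasch rating scale model, in which each observed rating depends on an item's alignment measure, the rater's severity, and rating-scale thresholds. As a hedged illustration (the paper does not publish its exact parameterization; the function name and parameters below are hypothetical), the category probabilities for one item-rater encounter can be sketched as:

```python
import math

def rating_probabilities(item_measure, rater_severity, thresholds):
    """Category probabilities under a many-facet Rasch rating scale model.

    item_measure:   item alignment measure in logits (hypothetical scale)
    rater_severity: rater severity in logits; higher = harsher rater
    thresholds:     rating-scale step thresholds tau_1..tau_m

    Returns a list of m+1 probabilities, one per rating category 0..m.
    """
    # Log-numerator for category k is the cumulative sum of
    # (item_measure - rater_severity - tau_j) for j = 1..k;
    # category 0 corresponds to the empty sum, i.e. 0.
    log_numerators = [0.0]
    cumulative = 0.0
    for tau in thresholds:
        cumulative += item_measure - rater_severity - tau
        log_numerators.append(cumulative)
    denominator = sum(math.exp(v) for v in log_numerators)
    return [math.exp(v) / denominator for v in log_numerators]
```

Under this sketch, raising `rater_severity` shifts probability mass toward lower rating categories, which is precisely the between-rater effect the latent trait analysis adjusts for before item-level alignment statistics are reported.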
Keywords: interim‐formative assessment; item‐standard alignment; rater effects; Rasch modeling; test development; large‐scale testing