Investigation of a Nonparametric Procedure for Assessing Goodness-of-Fit in Item Response Theory
Authors: Craig S. Wells, Daniel M. Bolt
Affiliation: Department of Educational Policy, Research and Administration, University of Massachusetts Amherst (cswells@educ.umass.edu); Department of Educational Psychology, University of Wisconsin—Madison
Abstract: There have been many studies of the comparability of computer-administered and paper-administered tests. Not surprisingly (given the variety of measurement and statistical sampling issues that can affect any one study), the results of such studies have not always been consistent. Moreover, the quality of computer-based test administration systems has changed considerably over recent years, as has the computer experience of students. This study synthesizes the results of 81 studies performed between 1997 and 2007. The estimated effect size across all studies was very small (–.01 weighted, .00 unweighted). Meta-analytic methods were used to ascertain whether grade (elementary, middle, or high school) or subject (English Language Arts, Mathematics, Reading, Science, or Social Studies) had an impact on comparability. Grade appeared to have no effect on comparability. Subject did appear to affect comparability, with computer administration appearing to provide a small advantage for English Language Arts and Social Studies tests (effect sizes of .11 and .15, respectively), and paper administration appearing to provide a small advantage for Mathematics tests (effect size of –.06).
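The abstract reports both weighted and unweighted mean effect sizes across studies. As a rough illustration of the distinction, the sketch below computes both for a set of invented study-level effect sizes; inverse-variance weighting is one standard meta-analytic choice and is assumed here, since the paper's exact weighting scheme is not given.

```python
# Hypothetical illustration: weighted vs. unweighted mean effect size.
# All study-level values below are invented, not taken from the meta-analysis.
effect_sizes = [0.11, 0.15, -0.06, 0.02, -0.05]   # invented effect sizes (d)
variances    = [0.01, 0.04, 0.02, 0.05, 0.03]     # invented sampling variances

# Unweighted mean: every study counts equally.
unweighted = sum(effect_sizes) / len(effect_sizes)

# Inverse-variance weighted mean: more precise studies count more.
weights = [1.0 / v for v in variances]
weighted = sum(w * d for w, d in zip(weights, effect_sizes)) / sum(weights)

print(f"unweighted mean d = {unweighted:+.3f}")
print(f"weighted mean d   = {weighted:+.3f}")
```

With unequal study precisions, the two summaries can differ noticeably, which is why the abstract reports both.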