Assessment & Evaluation in Higher Education
Title: Using the margin of error statistic to examine the effects of aggregating student evaluations of teaching
Authors: David James, Gregory Schraw
Institution: 1. Civil and Environmental Engineering and Construction, University of Nevada, Las Vegas, Las Vegas, Nevada, USA; 2. Educational Psychology and Higher Education, University of Nevada, Las Vegas, Las Vegas, Nevada, USA
Abstract: We proposed an extended form of the Govindarajulu and Barnett margin of error (MOE) equation and used it within an analysis of variance experimental design to examine how aggregating student evaluations of teaching (SET) ratings affects the MOE statistic. The interpretative validity of SET ratings is questionable when course enrolment is low or when the response rate is low. One possible way to improve interpretative validity is to aggregate SET ratings from two or more courses taught by the same instructor. Based on non-parametric comparisons of the resulting MOE values, we found that aggregating evaluation data from two courses reduced the MOE in most cases. However, a significant improvement was achieved only when combining evaluation data from sections of the same course taught by the same instructor; the effect was not significant when data from different courses were combined. We discuss the implications of our findings and provide recommendations for practice.
Keywords: Margin of error; student evaluations of teaching; interpretative validity
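
The abstract does not reproduce the extended Govindarajulu and Barnett equation itself. As an illustration only, the sketch below computes a conventional finite-population-corrected margin of error for a course-mean rating and shows how pooling responses from two sections changes it; the function name, the specific formula, and the example ratings are assumptions for illustration, not the authors' extended form or their data.

```python
import math
from statistics import stdev

def moe_fpc(ratings, enrolled, z=1.96):
    """Margin of error for a course-mean rating with a finite population
    correction (FPC). `ratings` are the responses received; `enrolled` is the
    number of students in the course. This is a conventional FPC-based MOE,
    shown only as an illustration, not the article's extended equation."""
    n = len(ratings)
    if n < 2 or enrolled <= 1:
        return float("nan")
    s = stdev(ratings)                               # sample standard deviation
    fpc = math.sqrt((enrolled - n) / (enrolled - 1)) # finite population correction
    return z * (s / math.sqrt(n)) * fpc

# Illustrative (made-up) ratings on a 1-5 scale for two sections
# taught by the same instructor.
section_a = [4, 5, 3, 4, 4, 5, 2, 4]        # 8 of 20 enrolled responded
section_b = [4, 4, 5, 3, 4, 5, 4, 3, 4, 5]  # 10 of 25 enrolled responded

print("Section A alone:", round(moe_fpc(section_a, enrolled=20), 3))
print("Section B alone:", round(moe_fpc(section_b, enrolled=25), 3))

# Aggregation as described in the abstract: pool the responses and the
# enrolments of the two sections, then recompute the MOE.
pooled = section_a + section_b
print("Aggregated:     ", round(moe_fpc(pooled, enrolled=45), 3))
```

Under these assumed inputs, the pooled MOE is smaller than either single-section MOE because the respondent count rises relative to the pooled enrolment, which mirrors the direction of the effect the abstract reports for same-course aggregation.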