Similar Articles
20 similar articles found (search time: 31 ms)
1.
Based on student evaluation of teaching (SET) ratings from 1,432 units of study over a period of a year, representing 74,490 individual sets of ratings, and including a significant number of units offered in wholly online mode, we confirm the significant influence of class size, year level, and discipline area on at least some SET ratings. We also find that online mode of offer significantly influences at least some SET ratings. We reveal both the statistical significance and effect sizes of these influences, and find that the magnitudes of the effect sizes of all factors are small, but potentially cumulative. We also show that the influence of online mode of offer is of the same magnitude as that of the other three factors. These results support and extend the rating interpretation guides (RIGs) model proposed by Neumann and colleagues, and we present a general method for the development of a RIGs system.

2.
Student evaluation of teaching (SET) ratings are used to evaluate faculty's teaching effectiveness, based on a widespread belief that students learn more from highly rated professors. The key evidence cited in support of this belief comes from meta-analyses of multisection studies showing small-to-moderate correlations between SET ratings and student achievement (e.g., Cohen, 1980, 1981; Feldman, 1989). We re-analyzed previously published meta-analyses of the multisection studies and found that their findings were an artifact of small-sample studies and publication bias. Whereas the small-sample studies showed large and moderate correlations, the large-sample studies showed no or only minimal correlation between SET ratings and learning. Our up-to-date meta-analysis of all multisection studies revealed no significant correlations between SET ratings and learning. These findings suggest that institutions focused on student learning and career success may want to abandon SET ratings as a measure of faculty's teaching effectiveness.
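The small-sample artifact described in this abstract can be illustrated with a minimal sketch of fixed-effect correlation pooling via Fisher's z transform, where each study is weighted by its inverse variance (n − 3). The study counts and correlations below are hypothetical, chosen only to show the mechanism; they are not data from the meta-analyses discussed above.

```python
import math

def fisher_z(r):
    """Fisher r-to-z transform, the usual basis for pooling correlations."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Back-transform a pooled z to the correlation scale."""
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

def pooled_r(studies):
    """Fixed-effect pooled correlation over (n, r) pairs, weighting each
    study by n - 3 (the inverse variance of Fisher's z)."""
    num = sum((n - 3) * fisher_z(r) for n, r in studies)
    den = sum(n - 3 for n, _ in studies)
    return inverse_fisher_z(num / den)

# Many small studies with large r, a few large studies with r near zero:
studies = [(10, 0.60)] * 20 + [(200, 0.05)] * 5
naive_mean_r = sum(r for _, r in studies) / len(studies)
print(round(naive_mean_r, 2))      # unweighted average, dominated by small studies
print(round(pooled_r(studies), 2))  # n-weighted estimate, much smaller
```

The unweighted average sits near the small studies' inflated correlations, while the sample-size-weighted estimate collapses toward the large studies' near-zero values, which is the pattern the re-analysis reports.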

3.
4.
Standardised module evaluation surveys have recently been implemented or extensively redesigned at many different HEIs across the UK in response to an evolving national context, notwithstanding a body of scholarship that has called student evaluation of teaching (SET) into question. Through a focussed single-institution study, this mixed-methods research fills a notable gap in the current literature in establishing students' perspectives on standardised module evaluation by means of a paper-based questionnaire. Its participants (N=40) recognised some general advantages of a university-wide system, such as facilitating comparison between different modules; but they also acknowledged several shortcomings relating to its lack of sensitivity to individual module contexts and schedules, yielding the overall view that standardised surveys are only partially effective as a means of teaching evaluation. The conclusion considers the wider implications of these distinctive findings, and suggests that the perceived limitations of SET point to the need to triangulate its results with data obtained through alternative evaluation mechanisms.

5.
This study examines the effect of instructor age on student ratings of teaching performance after individual consultation. Instructors presented themselves over an 11-year period at a large Canadian university teaching service, where each received one of three interventions. End-of-term student ratings of the teaching of younger and older instructors are compared before consultation, immediately post consultation, and 1–3 years after the year of consultation. Younger faculty obtained significantly improved ratings immediately after consultation, while older faculty achieved significant rating increases 1–3 years post consultation. Results from an earlier study on the impact of individual consultation on teacher ratings are re-evaluated using this larger sample of faculty. Generally, results in this analysis parallel the original research. Consultation produced changes in student ratings both immediately after consultation and longitudinally, confirming the utility of intervention in producing enduring pedagogical improvements. Control analyses ensured that improvements were a result of the interventions and not an artefact of time.

6.
Using nine years of student evaluation of teaching (SET) data from a large US research university, we examine whether changes to the SET instrument have a substantial impact on overall instructor scores. Our study exploits four distinct natural experiments that arose when the SET instrument was changed. To maximise power, we compare the same course/instructor before and after each of the four changes occurred. We find that switching from in-class, paper course evaluations to online evaluations generates an average change of −0.14 points on a five-point scale, or 0.25 standard deviations (SDs), in the overall instructor ratings. Changing the labelling of the scale and the wording of the overall instructor question generates a further decrease in the average rating: −0.15 of a point (0.27 SDs). In contrast, extending the evaluation period to include the final examination and offering an incentive (early grade release) for completing the evaluations do not have a statistically significant effect on the overall instructor rating. The cumulative impact of these individual changes is −0.29 points (0.52 SDs). This large decrease shows that SET scores are not comparable over time when instruments change. Therefore, administrators should measure and account for such changes when using historical benchmarks for evaluative purposes (e.g. appointments and compensation).
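The abstract above reports raw rating changes alongside standardized ones (e.g. a −0.14-point change described as 0.25 SDs). A minimal sketch of that conversion follows; the SD value used here is an assumption chosen so the numbers line up, not a figure taken from the study.

```python
def change_in_sd_units(delta_points, sd_of_ratings):
    """Convert a raw change on the rating scale into standard-deviation
    units (a standardized effect size)."""
    return delta_points / sd_of_ratings

# Assumed SD of overall instructor ratings on the 5-point scale
# (hypothetical; picked so that -0.14 points maps to -0.25 SDs):
sd = 0.56
print(round(change_in_sd_units(-0.14, sd), 2))  # -0.25
```

Dividing by the SD makes changes comparable across instruments with different scales, which is why the paper can express the cumulative impact of several instrument changes in a common unit.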

7.
This paper examines the stability and validity of a student evaluations of teaching (SET) instrument used by the administration at a university in the PR China. The SET scores for two semesters of courses taught by 435 teachers were collected. A total of 388 teachers (170 males and 218 females) were also invited to fill out the 60-item NEO Five-Factor Inventory together with a demographic information questionnaire. The SET responses were found to have very high internal consistency, and confirmatory factor analysis supported a one-factor solution. The SET re-test correlations were .62 both for the teachers who taught the same course (n = 234) and for those who taught a different course in the second semester (n = 201). Linguistics teachers received higher SET scores than teachers in either the social sciences and humanities or science and technology. Student ratings were significantly related to Neuroticism and Extraversion. Regression results showed that the Big-Five personality traits as a group explained only 2.6% of the total variance of student ratings, whereas academic discipline explained 12.7%. Overall, the stability and validity of SET were supported, and future uses of SET scores in the PR China are discussed.

8.
In the context of increased emphasis on quality assurance of teaching, it is crucial that student evaluation of teaching (SET) methods be both reliable and workable in practice. Online SETs in particular tend to draw criticism from those most reactive to mechanisms of teaching accountability. However, most studies of SET processes have been conducted with convenience, small, and cross-sectional samples. Longitudinal studies are rare, as comparison studies on SET methodological approaches are generally pilot studies followed shortly after by implementation. The investigation presented here contributes significantly to the debate by examining the impact of the online administration method of SET on a very large longitudinal sample at the course level rather than at the level of the individual student, thus compensating for the inter-dependency of students' responses according to the instructor variable. It explores the impact of the administration method of SET (paper-based in-class vs. out-of-class online collection) on scores, with a longitudinal sample of over 63,000 student responses collected over a total period of 10 years. Having adjusted for the confounding effects of class size, faculty, year of evaluation, years of teaching experience, and student performance, it is observed that an effect of the administration method does exist, but is negligible.

9.
10.
Using multilevel models, this study examined the effects of student- and course-level variables on monotonic response patterns in student evaluation of teaching (SET). In total, 11,203 ratings taken from 343 general education courses at a Korean four-year private university in 2011 were analyzed. The results indicated that 96% of the variance in monotonic response patterns could be explained by student characteristics, such as gender, academic year, major, grade point average, SET score, and perceptions of course difficulty, while controlling for course-level variables. A further 4% of the variance in monotonic response patterns derived from course characteristics, including faculty age and class size, while controlling for student-level variables. The findings suggest that Korean higher education institutions need to take proper measures to encourage students to participate more actively and sincerely in SET so that the evaluation's outcomes can be put to their best and proper use.
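The student-level versus course-level variance split reported in this abstract corresponds to an intraclass correlation coefficient (ICC) computed from the variance components of a two-level model. A minimal sketch follows; the variance components below are illustrative values matching the reported 96%/4% split, not the study's actual estimates.

```python
def icc(between_group_var, within_group_var):
    """Share of total variance attributable to the group (course) level:
    ICC = var_between / (var_between + var_within)."""
    return between_group_var / (between_group_var + within_group_var)

# Hypothetical variance components: 4% course-level, 96% student-level.
course_var, student_var = 0.04, 0.96
print(round(icc(course_var, student_var), 2))  # 0.04
```

An ICC this small means responses within the same course are barely more alike than responses across courses, so almost all of the variation in monotonic responding is driven by who the student is rather than which course is being rated.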

11.
This study uses decision tree analysis to determine the most important variables that predict high overall teaching and course scores on a student evaluation of teaching (SET) instrument at a large public research university in the United States. Decision tree analysis is a more robust and intuitive approach for analysing and interpreting SET scores compared to more common parametric statistical approaches. Variables in this analysis included individual items on the SET instrument, self-reported student characteristics, course characteristics and instructor characteristics. The results show that items on the SET instrument that most directly address fundamental issues of teaching and learning, such as helping the student to better understand the course material, are most predictive of high overall teaching and course scores. SET items less directly related to student learning, such as those related to course grading policies, have little importance in predicting high overall teaching and course scores. Variables irrelevant to the construct, such as an instructor's gender and race/ethnicity, were not predictive of high overall teaching and course scores. These findings provide evidence of criterion and discriminant validity, and show that high SET scores do not reflect student biases against an instructor's gender or race/ethnicity.

12.
Students' ratings of teacher personality and teaching competence
The validity of student ratings of teaching is discussed in terms of the effect that students' perceptions of teacher personality might have on those ratings. A procedure for using student feedback to evaluate teaching, designed to minimise the effect of teacher personality on students' ratings of teaching quality, was trialled. A total of fifteen rating exercises, using ten teachers over a two-year period, were carried out. Results indicate that teacher personality, as perceived by students, is still very significantly related to their ratings of teaching quality. It is argued that this is a proper state of affairs which does not undermine the validity of student ratings.

13.
This paper provides new evidence on the disparity between student evaluation of teaching (SET) ratings when evaluations are conducted online versus in-class. Using a multiple regression analysis, we show that after controlling for many of the class and student characteristics not under the direct control of the instructor, average SET ratings from evaluations conducted online are significantly lower than average SET ratings conducted in-class. Further, we demonstrate the importance of controlling for the factors not under the instructor's control when using SET ratings to evaluate faculty performance in the classroom. We do not suggest that moving to online evaluation is overly problematic, only that it is difficult to compare evaluations done online with evaluations done in-class. While we do not suppose that one method is 'more accurate' than another, we do believe that institutions would benefit from either moving all evaluations online or by continuing to do all evaluations in-class.

14.
The student evaluation of teaching (SET) tool is widely used to measure student satisfaction in institutions of higher education. A SET typically includes several criteria, which are assigned equal weights. The motivation for this research is to examine student and lecturer perceptions of the various criteria on a SET, together with students' actual behaviour (i.e. the ratings they give lecturers). To this end, an analytic hierarchy process methodology was used to capture the importance (weights) of SET criteria from the points of view of students and lecturers; the students' actual ratings on the SET were then analysed. Results revealed statistically significant differences in the weights of the SET criteria; those weights differ between students and lecturers. However, analysis of 1,436 SET forms from the same population revealed that, although students typically rate instructors very similarly on all criteria, they rate instructors higher on the criteria that are more important to them. The practical implication of this research is the reduction of the number of criteria on SETs used for personnel decisions, while identifying for instructors and administrators those criteria that students perceive to be more important.
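The analytic hierarchy process (AHP) step this abstract describes derives criterion weights from a pairwise comparison matrix via its principal eigenvector. A hedged sketch follows, using power iteration in plain Python; the 3×3 reciprocal matrix below is a made-up example, not the study's SET criteria.

```python
def ahp_weights(matrix, iters=100):
    """Approximate the principal eigenvector of a reciprocal pairwise
    comparison matrix by power iteration, normalised to sum to 1.
    Entry matrix[i][j] says how many times more important criterion i
    is than criterion j."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w_new)
        w = [x / total for x in w_new]
    return w

# Hypothetical judgements: criterion A is 3x as important as B and
# 5x as important as C (reciprocals fill the lower triangle):
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(pairwise)
print([round(w, 2) for w in weights])
```

Collecting such matrices separately from students and lecturers, as the study does, yields two weight vectors whose differences can then be tested statistically.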

15.
The validity of student evaluation of teaching (SET) scores depends on a minimal effect of extraneous response processes, or biases. A bias may increase or decrease scores and change the relationship with other variables. In contrast, the SET literature defines bias as an irrelevant variable correlated with SET scores, and among many, a relevant biasing factor in the literature is the instructor's gender. The study examines the extent to which acquiescence, the tendency to endorse the highest response option across items (a bias in the first sense), affects students' responses to a SET rating scale. The study also explores how acquiescence affects the difference in teaching quality (TQ) by instructor's gender, a bias in the latter sense. SET data collected at a faculty of education in Ontario, Canada were analysed using the Rasch rating scale model. Findings provide empirical support for acquiescence affecting students' responses. Latent regression analyses show how acquiescence reduces the difference in TQ by instructor's gender. Findings encourage greater attention to the quality of the response process as a way to better defend the utility of SET and prevent potentially misleading conclusions from the analysis of SET data.

16.
A diversity of student questionnaires is used by colleges and universities to provide data on faculty teaching performance. Yet the purposes for collecting these data are frequently unclear, and at times superficial. Rarely are student rating data used as a tool to improve faculty teaching. A more relevant approach incorporates a variety of types of student ratings into a model for improving university teaching. One type of student rating data is used to identify broad instructional problem areas. Another type pinpoints probable causes and solutions for the instructional problems. Instructional improvement procedures are designed on the basis of these data. A third type of student rating data evaluates the instructional improvement procedures and indicates when modifications are needed. In addition to these three types of student ratings, and the generation of appropriate questionnaires, this paper presents an overview of the teaching improvement model and discusses its effectiveness.

17.
Students in part-time courses were interviewed about their perceptions of good teaching and tutoring. The perceptions differed markedly between those with reproductive conceptions of learning and students holding self-determining ones. The former preferred didactic teaching but disliked interaction, whereas the latter had almost diametrically opposite perspectives by finding student-centred approaches consistent with their conceptions of learning. The findings have implications for the evaluation of teaching, as ratings are likely to be influenced by the predominant conceptions of learning of a class. It is common for individual instructors to be regularly evaluated by teacher evaluation questionnaires, which often have a teacher-centred bias, and for the ratings to be used for appraisal. It is argued that this leads to conservatism as teachers fear that students with reproductive conceptions of learning will reduce their ratings if they innovate in their teaching. As the degree of bias from this ratings-lowering phenomenon may be quite large, the findings are a caution against the common practice of using absolute rating values from both teacher evaluation questionnaires and programme-level evaluation by instruments such as the Course Experience Questionnaire. Results need to be interpreted together with other evidence and take into account contextual factors including students' conceptions of learning.

18.
Teacher professional development requires listening to students' voices, but current student evaluation-of-teaching systems in Chinese primary and secondary schools are mostly satisfaction surveys or student questionnaires adapted from expert evaluation instruments. Because their design rationale is experience-based, their construction perspective is adult-centred, and their purpose is performance appraisal, the results of such evaluations fail to inform teacher professional development, teachers are reluctant to be evaluated, and teacher-student relationships become distorted. A sound system of student evaluation instruments should therefore be built on evidence-based theory, constructed from a student-centred perspective, and used for pragmatic purposes. Guided by these principles, this paper attempts to construct a student evaluation instrument system comprising three modules and nine dimensions.

19.
As research on student evaluations of teaching (SET) has dominated our understanding of teaching evaluation, debate over SET implementation has turned attention away from basic principles for appraising employee performance as established in the human resources literature. Considering SET research and its practical significance in isolation from the relevant human resources literature not only risks unlawful remedies for issues such as bias in SET but also risks replacing one form of bias with another. Meanwhile, the full potential of human resources tools to support consistent evaluation of teaching remains unrealized. To address this gap, this article clarifies how teaching evaluation can be conducted as sound performance appraisal by deploying SET and peer review of teaching within a larger framework of established human resources techniques. A review of recent literature articulates prominent themes in research on SET and peer review of teaching and outlines key principles for performance appraisal and performance management. Those principles are used to analyze representative faculty evaluation policies and procedures and clarify the weaknesses of both traditional and recently revised approaches to teaching evaluation. The final section elaborates performance appraisal techniques relevant to teaching evaluation. These include deliberate use of results and/or behavior approaches to performance appraisal, robust rating instruments for behavioral performance appraisal, targeted collection of information from multiple stakeholders, and job analysis. Efforts to de-emphasize quantitative SET data to address issues such as bias can be strengthened through the incorporation of performance appraisal tools that clearly articulate performance criteria and standards and that gather both qualitative and quantitative data on employee performance.

20.