Similar Documents
Found 20 similar documents (search time: 31 ms)
1.

This study uses decision tree analysis to determine the most important variables that predict high overall teaching and course scores on a student evaluation of teaching (SET) instrument at a large public research university in the United States. Decision tree analysis is a more robust and intuitive approach for analysing and interpreting SET scores compared to more common parametric statistical approaches. Variables in this analysis included individual items on the SET instrument, self-reported student characteristics, course characteristics and instructor characteristics. The results show that items on the SET instrument that most directly address fundamental issues of teaching and learning, such as helping the student to better understand the course material, are most predictive of high overall teaching and course scores. SET items less directly related to student learning, such as those related to course grading policies, have little importance in predicting high overall teaching and course scores. Variables irrelevant to the construct, such as an instructor’s gender and race/ethnicity, were not predictive of high overall teaching and course scores. These findings provide evidence of criterion and discriminant validity, and show that high SET scores do not reflect student biases against an instructor’s gender or race/ethnicity.
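To make the method concrete, below is a minimal decision-tree sketch in the spirit of this study, using scikit-learn on synthetic data. The column names (understanding_item, grading_item, instructor_gender) are illustrative stand-ins, not the paper's actual instrument items.

```python
# Sketch: which variables predict a high overall SET score?
# Synthetic data; the "high overall" label is driven by the learning-related item.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "understanding_item": rng.integers(1, 6, n),  # "helped me understand the material"
    "grading_item": rng.integers(1, 6, n),        # "grading policies were clear"
    "instructor_gender": rng.integers(0, 2, n),   # construct-irrelevant variable
})
high_overall = (X["understanding_item"] + rng.normal(0, 0.5, n) >= 4).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, high_overall)
print(dict(zip(X.columns, tree.feature_importances_.round(3))))
# Expected pattern: the learning-related item dominates; gender importance ~ 0.
```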

2.
A continuing decline in an institution’s response rates for student evaluations of teaching (SET) raised faculty concerns about non-response bias in summary statistics. In response, this institution’s SET stakeholders partnered with a Marketing Methods class section to create strategies for increasing response rates. The project was also an exercise in organisational citizenship behaviour (OCB) training because students in that class section received intensive training on how SET feedback is valued by instructors and its role in improving their academic organisation. Within the context of OCB theory, this article finds student exposure to OCB training increases SET response rates because knowing how SET benefits their organisation increases unit-level response propensity for member surveys intended to improve their institution. In the year of the OCB training, SET response rates increased by 26%, though the increases did not persist into later academic years. The response rate increases are realised across all demographic groups with disproportionate increases among low response rate groups, including low performing students, men and ethnic minorities.

3.
We proposed an extended form of the Govindarajulu and Barnett margin of error (MOE) equation and used it with an analysis of variance experimental design to examine the effects of aggregating student evaluations of teaching (SET) ratings on the MOE statistic. The interpretative validity of SET ratings can be questioned when the number of students enrolled in a course is low or when the response rate is low. A possible method of improving interpretative validity is to aggregate SET ratings data from two or more courses taught by the same instructor. Based on non-parametric comparisons of the generated MOE, we found that aggregating course evaluation data from two courses reduced the MOE in most cases. However, significant improvement was only achieved when combining course evaluation data for the same instructor for the same course. Significance did not hold when combining data from different courses. We discuss the implications of our findings and provide recommendations for practice.
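For illustration only, the sketch below computes a generic finite-population margin of error for a class-mean rating and shows how pooling two offerings narrows it. The paper's extended Govindarajulu and Barnett equation is not reproduced here, so treat this formula as an assumption about the general mechanism.

```python
# Generic MOE for a class-mean rating with a finite population correction (FPC).
# Illustrative only; not the paper's extended equation.
import math

def moe_mean_rating(sd: float, n_respondents: int, n_enrolled: int, z: float = 1.96) -> float:
    """Margin of error for a mean rating from n_respondents out of n_enrolled students."""
    fpc = math.sqrt((n_enrolled - n_respondents) / (n_enrolled - 1))
    return z * (sd / math.sqrt(n_respondents)) * fpc

# One small course vs. two pooled offerings of the same course:
print(round(moe_mean_rating(sd=1.0, n_respondents=8, n_enrolled=15), 3))   # wide MOE
print(round(moe_mean_rating(sd=1.0, n_respondents=16, n_enrolled=30), 3))  # narrower after pooling
```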

4.
Student evaluation of teaching (SET) is now common practice across higher education, with the results used for both course improvement and quality assurance purposes. While much research has examined the validity of SETs for measuring teaching quality, few studies have investigated the factors that influence student participation in the SET process. This study aimed to address this deficit through the analysis of an SET respondent pool at a large Canadian research-intensive university. The findings were largely consistent with available research (showing influence of student gender, age, specialisation area and final grade on SET completion). However, the study also identified additional influential course-specific factors such as term of study, course year level and course type as statistically significant. Collectively, such findings point to substantively significant patterns of bias in the characteristics of the respondent pool. Further research is needed to specify and quantify the impact (if any) on SET scores. We conclude, however, by recommending that such bias does not invalidate SET implementation, but instead should be embraced and reported within standard institutional practice, allowing better understanding of feedback received, and driving future efforts at recruiting student respondents.

5.
As research on student evaluations of teaching (SET) has dominated our understanding of teaching evaluation, debate over SET implementation has turned attention away from basic principles for appraising employee performance as established in human resources literature. Considering SET research and its practical significance in isolation from relevant human resources literature not only risks unlawful remedies for issues such as bias in SET but also risks replacing one form of bias with another. Meanwhile, the full potential of human resources tools to support consistent evaluation of teaching remains unrealized. To address this gap, this article clarifies how teaching evaluation can be conducted as sound performance appraisal by deploying SET and peer review of teaching within a larger framework of established human resources techniques. A review of recent literature articulates prominent themes in research on SET and peer review of teaching and outlines key principles for performance appraisal and performance management. Those principles are used to analyze representative faculty evaluation policies and procedures and clarify the weaknesses of both traditional and recently revised approaches to teaching evaluation. The final section elaborates performance appraisal techniques relevant to teaching evaluation. These include planful use of results and/or behavior approaches to performance appraisal, robust rating instruments for behavioral performance appraisal, targeted collection of information from multiple stakeholders, and job analysis. Efforts to de-emphasize quantitative SET data to address issues such as bias can be strengthened through the incorporation of performance appraisal tools that clearly articulate performance criteria and standards and that gather both qualitative and quantitative data on employee performance.

6.
Student evaluation of teaching (SET) only becomes an effective tool for improving teaching and learning when the relevant stakeholders seriously consider and plan appropriate actions according to student feedback. It is common practice in medical education to provide clinical teachers with student feedback. However, there is limited evidence about how teachers in higher education, and medical education in particular, systematically apply student feedback to improve the quality of their teaching practice. The focus of this case study was to examine clinical teachers’ perceptions of and responses to SET with respect to its purposes and uses for enhancing their teaching. An explanatory sequential mixed methods approach was employed to collect both quantitative and qualitative data from the clinical coaches. These clinical coaches perceived the main purpose of student evaluation as quality assurance, and were moderately receptive to student feedback. Four key factors enabling or inhibiting their responses were revealed: institutional requirements, operational practices, personal biases and provision of support. Future research should further explore the interrelationships among the above factors as the core mechanism in influencing clinical teachers’ perceptions of and responses to student evaluation.

7.
The use of student evaluation of teaching (SET) to evaluate and improve teaching is widespread amongst institutions of higher education. Many authors have searched for a conclusive understanding about the influence of student, course, and teacher characteristics on SET. One hotly debated discussion concerns the interpretation of the positive and statistically significant relationship that has been found between course grades and SET scores. In addition to reviewing the literature, the main purpose of the present study is to examine the influence of course grades and other characteristics of students, courses, and teachers on SET. Data from 1244 evaluations were collected using the SET-37 instrument and analyzed by means of cross-classified multilevel models. The results show positive significant relationships between course grades, class attendance, the examination period in which students receive their highest course grades, and the SET score. These relationships, however, are subject to different interpretations. Future research should focus on providing a definitive and empirically supported interpretation for these relationships. In the absence of such an interpretation, it will remain unclear whether these relationships offer proof of the validity of SET or whether they are a biasing factor.

8.
We compare three control charts for monitoring data from student evaluations of teaching (SET) with the goal of improving student satisfaction with teaching performance. The two charts that we propose are a modified p chart and a z-score chart. We show that these charts overcome some of the shortcomings of the more traditional charts for analyzing SET data. A comparison of three charts (an individuals chart, the modified p chart, and the z-score chart) reveals that the modified p chart is the best approach for analyzing SET data because it utilizes distributions that are appropriate for categorical data, and its interpretation is more straightforward. We conclude that administrators and faculty alike can benefit by using the modified p chart to monitor and improve teaching performance as measured by student evaluations.
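A minimal p-chart sketch follows, monitoring the per-term proportion of "satisfied" ratings (e.g. 4 or 5 on a five-point scale) with sample-size-dependent limits. This is standard p-chart construction and does not reproduce the authors' specific modification; all numbers are invented.

```python
# p chart for SET data: flag terms whose satisfaction proportion falls
# outside 3-sigma limits around the pooled proportion.
import numpy as np

satisfied = np.array([38, 41, 29, 44, 35])   # "satisfied" ratings per term
responses = np.array([50, 52, 48, 55, 47])   # total SET responses per term

p_bar = satisfied.sum() / responses.sum()    # pooled centre line
for i, (x, n) in enumerate(zip(satisfied, responses), start=1):
    sigma = np.sqrt(p_bar * (1 - p_bar) / n)  # limits widen for smaller classes
    lcl, ucl = max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)
    p = x / n
    flag = "OUT OF CONTROL" if not (lcl <= p <= ucl) else "in control"
    print(f"term {i}: p={p:.2f}, limits=({lcl:.2f}, {ucl:.2f}) -> {flag}")
```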

9.
Using multilevel models, this study examined the effects of student- and course-level variables on monotonic response patterns in student evaluation of teaching (SET). In total, 11,203 ratings taken from 343 general education courses in a Korean four-year private university in 2011 were analyzed. The results indicated that 96% of the variance of monotonic response patterns could be explained by student characteristics, such as gender, academic year, major, grade point average, SET score, and perceptions about course difficulty, while controlling for course-level variables. Furthermore, 4% of the variance of monotonic response patterns was derived from course characteristics, including faculty age and class size, while controlling for student-level variables. The findings suggest that Korean higher education institutions need to take proper measures to encourage students to participate more actively and sincerely in SET for the best and proper use of the evaluation’s outcomes.
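As a hedged sketch of the modelling approach, the snippet below fits a two-level model (students nested in courses) with statsmodels on synthetic data. The outcome `monotonic` and predictor `gpa` are illustrative stand-ins, not the study's actual variables or model specification.

```python
# Two-level (student-in-course) mixed model on synthetic data.
# Most variance is generated at the student level, mirroring the 96%/4% split.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
courses = np.repeat(np.arange(50), 30)              # 50 courses x 30 students
course_effect = rng.normal(0, 0.2, 50)[courses]     # small course-level variance
df = pd.DataFrame({
    "monotonic": course_effect + rng.normal(0, 1.0, courses.size),
    "gpa": rng.normal(3.0, 0.4, courses.size),
    "course": courses,
})

m = smf.mixedlm("monotonic ~ gpa", df, groups=df["course"]).fit()
print(m.summary())  # compare the group variance component with the residual variance
```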

10.
Quantitative student evaluations of teaching (SET) and assessments are widely used in higher education as a proxy for teaching quality. However, SET are a function of individual rating behaviours resulting from student background, knowledge and personalities, as well as the learning experience being rated. SET from three years of data from a science department at a Russell Group university in the UK were analysed to highlight issues of sample size in relation to variable perceptions of modules, and to develop a statistical model of feedback incorporating individual rating behaviours across modules. Key results are that sample size and individual rating behaviours have the potential to significantly affect summary module ratings, especially with fewer than 20 respondents or when individuals hold heterogeneous views. A new approach is suggested to interpret and compare quantitative module ratings while acknowledging uncertainty, variability and individual rating behaviours. This has implications for the interpretation of SET in many aspects of academic life, including university league table positions, the identification of good teaching practice with respect to student satisfaction, and the weight given to SET in individual academics’ promotion applications.
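The sample-size point can be illustrated with a small simulation: draw many modules' ratings from the same underlying rating behaviour and watch the spread of module means shrink as the number of respondents grows. The rating distribution below is invented.

```python
# How sample size alone drives variability in summary module ratings.
import numpy as np

rng = np.random.default_rng(11)
true_probs = [0.05, 0.10, 0.20, 0.35, 0.30]  # one shared rating behaviour

for n in (10, 20, 50, 200):
    # 5000 simulated modules, each rated by n students from the same distribution
    means = rng.choice([1, 2, 3, 4, 5], size=(5000, n), p=true_probs).mean(axis=1)
    print(f"n={n:3d}: module means spread roughly +/- {2 * means.std():.2f} "
          f"around {means.mean():.2f}")
# With n < 20 the spread is wide enough to reorder modules by chance alone.
```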

11.
12.
This paper examines the effects of two background variables on students' ratings of teaching effectiveness (SETs): class size and student motivation (proxied by students' likelihood of responding randomly). A resampling simulation methodology was employed to test the sensitivity of the SET scale for three hypothetical instructors (excellent, average, and poor). In an ideal scenario without confounding factors, SET statistics unmistakably distinguish the instructors. However, at different class sizes and levels of random responses, SET class averages are significantly biased. The results suggest that evaluations based on SET statistics should look at more than class averages. Resampling methodology (bootstrap simulation) is useful in SET research for scale sensitivity studies, validation of research results, and analyses of actual SET scores. Examples are given of how bootstrap simulation can be applied to real-life SET data comparison.
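A minimal bootstrap sketch of the idea: resample one class's ratings with replacement to gauge the stability of its mean. The paper's simulation design (hypothetical excellent/average/poor instructors with injected random responses) is richer; this shows only the resampling mechanics, on invented data.

```python
# Bootstrap the mean of one class's SET ratings to quantify its stability.
import numpy as np

rng = np.random.default_rng(42)
# One class of 25 ratings on a 1-5 scale (invented distribution)
ratings = rng.choice([1, 2, 3, 4, 5], size=25, p=[0.05, 0.10, 0.20, 0.35, 0.30])

boot_means = np.array([
    rng.choice(ratings, size=ratings.size, replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"observed mean {ratings.mean():.2f}, "
      f"95% bootstrap interval ({lo:.2f}, {hi:.2f})")
# A wide interval warns against comparing instructors on class averages alone.
```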

13.
Institutes of higher learning are tending to reduce the amount of face-to-face teaching that they offer, particularly through the traditional pedagogical method of lecturing. There is ongoing debate about the educational value of lectures as a teaching approach, in terms of both whether they facilitate understanding of subject material and whether they augment the student educational experience. In this study, student evaluation of teaching scores and academic outcomes (percentage of students who fail) were assessed for 236 course units offered by a science faculty at an Australian university over the course of one year. These measures were related to the degree to which lectures and other face-to-face teaching were used in these units, controlling for factors such as class size, school and year level. An information-theoretic model selection approach was employed to identify the best models and predictors of student assessments and fail rates. All the top models of student feedback included a measure reflecting the amount of face-to-face teaching, with the evaluation of quality of teaching being higher in units with higher proportions of lectures. However, these models explained only 12–20% of the variation in student evaluation scores, suggesting that many other factors come into play. By contrast, units with fewer lectures had lower failure rates. These results suggest that moving away from lectures and face-to-face teaching may not harm, and may indeed improve, the number of students who pass a subject, but that this may come at the expense of greater dissatisfaction with students' learning experience.
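To illustrate the information-theoretic approach, the sketch below compares candidate OLS models of a unit's mean SET score by AIC on synthetic data. The predictor names (prop_lectures, class_size, year_level) are assumptions for illustration, not the study's actual variables.

```python
# AIC-based model selection over candidate predictors of unit-level SET scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 236  # mirroring the number of course units in the study
df = pd.DataFrame({
    "prop_lectures": rng.uniform(0, 1, n),
    "class_size": rng.integers(20, 400, n),
    "year_level": rng.integers(1, 4, n),
})
# Synthetic truth: lecture proportion weakly raises the SET score
df["set_score"] = 3.5 + 0.4 * df["prop_lectures"] + rng.normal(0, 0.5, n)

models = {
    "lectures only":   "set_score ~ prop_lectures",
    "lectures + size": "set_score ~ prop_lectures + class_size",
    "full":            "set_score ~ prop_lectures + class_size + year_level",
}
for name, formula in models.items():
    fit = smf.ols(formula, df).fit()
    print(f"{name:16s} AIC={fit.aic:7.1f}  R2={fit.rsquared:.3f}")
# Low AIC with modest R2 echoes the paper: lectures matter, but explain little variance.
```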

14.
Student evaluation of teaching is an important component of the teaching quality assurance system in higher education. At present, students exhibit biased behaviours in the evaluation process, chiefly induced evaluation, malicious evaluation, careless evaluation and abandonment of evaluation. The immediate causes lie in certain compulsory rules universities impose on the management of student evaluations and in inappropriate guidance from schools and departments; the root causes are the mispositioning of student evaluation and the fact that its operation bears no relation to students' interests. Universities therefore need to position student evaluation scientifically, refine its institutional design, and establish a firm link between student evaluation and students' interests, so as to achieve its fundamental purpose: prompting teachers to improve their teaching in response to student evaluations, serve students, and meet students' learning needs.

15.
This paper examines the stability and validity of a student evaluations of teaching (SET) instrument used by the administration at a university in the PR China. The SET scores for two semesters of courses taught by 435 teachers were collected. A total of 388 teachers (170 males and 218 females) were also invited to fill out the 60-item NEO Five-Factor Inventory together with a demographic information questionnaire. The SET responses were found to have very high internal consistency, and confirmatory factor analysis supported a one-factor solution. The SET re-test correlations were .62 both for the teachers who taught the same course (n = 234) and for those who taught a different course in the second semester (n = 201). Linguistics teachers received higher SET scores than social science, humanities, or science and technology teachers. Student ratings were significantly related to Neuroticism and Extraversion. Regression results showed that the Big Five personality traits as a group explained only 2.6% of the total variance of student ratings, while academic discipline explained 12.7%. Overall, the stability and validity of the SET instrument were supported, and future uses of SET scores in the PR China are discussed.

16.
Medical education research is becoming increasingly concerned with the value (defined as “educational outcomes per dollar spent”) of different teaching approaches. However, the financial costs of various approaches to teaching anatomy are under-researched, making evidence-based comparisons of the value of different teaching approaches impossible. Therefore, the aims of this study were to report the cost of six popular anatomy teaching methods through a specific, yet generalizable approach, and to demonstrate a process in which these results can be used in conjunction with existing effectiveness data to undertake an economic evaluation. A cost analysis was conducted to report the direct and indirect costs of six anatomy teaching methods, using an established approach to cost-reporting. The financial information was then combined with previously published information about the effectiveness of these six teaching methods in increasing anatomy knowledge, thereby demonstrating how estimations of value can be made. Dissection was reported as the most expensive teaching approach and computer aided instruction/learning (CAI/L) was the least, based on an estimation of total cost per student per year and assuming a student cohort size of just over 1,000 (the United Kingdom average). The demonstrated approach to economic evaluation suggested computer aided instruction/learning as the approach that provided the most value, in terms of education outcomes per dollar spent. The study concludes by suggesting that future medical education research should incorporate substantially greater consideration of cost, in order to draw important conclusions about value for learners.
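The cost-per-student arithmetic behind such value comparisons is simple, as the sketch below shows; the cost figures here are invented placeholders, not the study's reported values.

```python
# Cost per student per year for two teaching methods (placeholder figures).
total_annual_cost = {
    "dissection": 300_000.0,  # direct + indirect costs per year (invented)
    "CAI/L": 40_000.0,        # computer aided instruction/learning (invented)
}
cohort_size = 1_000  # roughly the UK-average cohort size assumed in the study

for method, cost in total_annual_cost.items():
    print(f"{method}: {cost / cohort_size:.0f} per student per year")
# Dividing effectiveness gains by these figures yields "outcomes per dollar spent".
```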

17.
Using nine years of student evaluation of teaching (SET) data from a large US research university, we examine whether changes to the SET instrument have a substantial impact on overall instructor scores. Our study exploits four distinct natural experiments that arose when the SET instrument was changed. To maximise power, we compare the same course/instructor before and after each of the four changes occurred. We find that switching from in-class, paper course evaluations to online evaluations generates an average change of −0.14 points on a five-point scale, or 0.25 standard deviations (SDs), in the overall instructor ratings. Changing the labelling of the scale and the wording of the overall instructor question generates another decrease in the average rating: −0.15 points (0.27 SDs). In contrast, extending the evaluation period to include the final examination and offering an incentive (early grade release) for completing the evaluations do not have a statistically significant effect on the overall instructor rating. The cumulative impact of these individual changes is −0.29 points (0.52 SDs). This large decrease shows that SET scores are not comparable over time when instruments change. Therefore, administrators should measure and account for such changes when using historical benchmarks for evaluative purposes (e.g. appointments and compensation).
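The before/after comparison logic can be sketched as follows: compare the same sections across an instrument change and express the mean shift in pooled-SD units. The data are synthetic, seeded to mimic the reported −0.14-point shift; this is not the authors' estimation procedure.

```python
# Before/after mean shift around an instrument change, in SD units.
import numpy as np

rng = np.random.default_rng(3)
before = rng.normal(4.2, 0.55, 500)               # paper-based ratings (same sections)
after = before - 0.14 + rng.normal(0, 0.1, 500)   # online ratings, small downward shift

diff = after.mean() - before.mean()
pooled_sd = np.sqrt((before.var(ddof=1) + after.var(ddof=1)) / 2)
print(f"mean change {diff:+.2f} points = {diff / pooled_sd:+.2f} SDs")
# Benchmarks spanning the change must be adjusted by shifts like this one.
```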

18.
In the context of increased emphasis on quality assurance of teaching, it is crucial that student evaluation of teaching (SET) methods be both reliable and workable in practice. Online SETs in particular tend to draw criticism from those most reactive to mechanisms of teaching accountability. However, most studies of SET processes have been conducted with small, cross-sectional convenience samples. Longitudinal studies are rare, as comparison studies of SET methodological approaches are generally pilot studies followed shortly afterwards by implementation. The investigation presented here contributes significantly to the debate by examining the impact of the online administration method of SET on a very large longitudinal sample at the course level rather than at the student level, thus compensating for the inter-dependency of students’ responses according to the instructor variable. It explores the impact of the administration method of SET (paper-based in-class vs. out-of-class online collection) on scores, with a longitudinal sample of over 63,000 student responses collected over a period of 10 years. Having adjusted for the confounding effects of class size, faculty, year of evaluation, years of teaching experience and student performance, it is observed that an actual effect of the administration method exists, but is insignificant.

19.
In recent years many universities switched from paper- to online-based student evaluation of teaching (SET) without knowing the consequences for data quality. Based on a series of three consecutive field experiments—a split-half design, twin courses, and pre–post-measurements—this paper examines the effects of survey mode on SET. First, all three studies reveal marked differences in non-response between online- and paper-based SET and systematic, but small differences in the overall course ratings. On average, online SET reveal a slightly less optimistic picture of teaching quality in students’ perception. Similarly, a web survey mode does not impair the reliability of student ratings. Second, we highlight the importance of taking selection and class absenteeism into account when studying survey mode effects and also show that it is necessary and informative to survey the subgroup of no-shows when evaluating teaching. Third, we empirically demonstrate the need to account for contextual setting of the survey (in class vs. after class) and the specific type of the online survey mode (TAN vs. email). Previous research either confounded contextual setting with variation in survey mode or generalized results for a specific online mode to web surveys in general. Our findings suggest that higher response rates in email surveys can be achieved if students are given the opportunity and time to evaluate directly in class.

20.
Book Reviews     
Good teaching should be inclusive of all students. There are very strong arguments for making courses/units/modules as inclusive as possible, based on issues of equity and access. Inclusive teaching has been a catch cry in recent times and most universities have policies related to this issue. However, research into the effectiveness of measures taken to ensure that teaching caters to all students is rare. The scarcity of such information may be due to the difficulty of finding an appropriate method of evaluation. One means of evaluating the teaching of a unit or course is to obtain feedback from students. Although information collected through this method may be subject to biases, student perceptions can still provide useful data that can be incorporated into a broader evaluation system. This paper discusses an investigation into the inclusive nature of a large number of units offered at The University of Western Australia over three years. Student ratings in relation to the issue of inclusivity were also explored for possible influences of the year level of courses, broad discipline areas and student gender. The results of this study indicate that these three factors could affect how students view the inclusive nature of particular units.
