Similar Articles
 20 similar articles found (search time: 15 ms)
1.
Abstract

Evaluation of college instructors often centers on course ratings; however, there is little evidence that these ratings only reflect teaching. The purpose of this study was to assess the relative importance of three facets of course ratings: instructor, course and occasion. We sampled 2,459 fully-crossed dyads from a large university where two instructors taught the same two courses at least twice in a 3-year period. Generalizability theory was used to estimate unconfounded variance components for instructor, course and occasion, as well as their interactions. Meta-analysis was used to summarize those estimates. Results indicated that a three-way interaction between instructor, course and occasion that includes measurement error accounted for the most variance in student ratings (24%), with instructor accounting for the second largest amount (22%). While instructor - and presumably teaching - accounted for substantial variance in student course ratings, factors other than instructor quality had a larger influence on student ratings.  相似文献   

2.
Research has demonstrated that student evaluations of instruction are influenced by variables extraneous to the instructional procedures being evaluated. One of the most important of these is the student's motivation to take the course. The Instructional Development and Effectiveness Assessment (IDEA) system controls this variable by comparing a course evaluation to a norm group of courses having students with similar motivation. The present study examined the possibility that the IDEA procedure of having students rate their precourse motivation at the end of a course might be unacceptable, because the rating would be influenced by experiences within the course itself. The data indicated that postcourse ratings of precourse motivation do deviate somewhat from actual precourse ratings, but the deviation is not of an order of magnitude which would seriously distort the interpretation of the ratings.

3.
A two-dimensional analysis of student ratings of instruction   Total citations: 2 (self: 1, others: 2)
Student ratings of instruction were analyzed in terms of two global factors. One factor, which includes items on advanced planning, presentation clarity, and increased student knowledge, was named pedagogical skill. The other factor taps information about class discussion, grading, and the availability of help and was named rapport. Ratings on the skill factor did not covary with class size or the leniency of the instructor's grading but did correlate with a reasonable external criterion of student learning. Ratings of rapport correlated inversely with class size and directly with average class grade and showed only a weak relationship to the external criterion of student learning. The skill factor showed more interclass stability than the rapport factor. Previous research studies which have examined the reliability and validity of instructional ratings and their relationship to student grades and class size have reported inconsistent findings. These inconsistencies appear to result from an inappropriate unidimensional analysis of ratings which should be examined in terms of two or more separate attitude dimensions.

4.
Expectancy violation and student rating of instruction   Total citations: 1 (self: 0, others: 1)
This study examined whether expectancy violations that are incongruent with student expectations differ significantly from congruent violations in relation to student ratings of instruction. Analysis using the Scheffé post-ANOVA test revealed that college students having high expectations/high experiences evaluated teachers more favorably than students with low expectations/high experiences, low expectations/low experiences and high expectations/low experiences. Reasons why these findings did not coincide with the expectancy violation model are offered.

5.
6.
This study compares student evaluations of instruction that were collected in-class with those gathered through an online survey. The two modes of administration were compared with respect to response rate, psychometric characteristics and mean ratings through different statistical analyses. Findings indicated that in-class evaluations produced a significantly higher response rate than online evaluations. In addition, Rasch analysis showed that mean ratings obtained in in-class evaluation were significantly higher than those obtained in online evaluation. Finally, the distributions of student attendance and expected grade in both modes were compared via chi-square tests, and were found to differ in the two modes of administration.
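The chi-square comparison of distributions across the two administration modes can be sketched as follows; the counts in the table are invented for illustration, standing in for, e.g., expected-grade bands by mode.

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: administration mode (in-class, online); columns: two expected-grade bands.
table = [[10, 20], [20, 10]]
print(round(chi_square_statistic(table), 3))  # 6.667
```

The statistic is compared against a chi-square distribution with (rows − 1) × (columns − 1) degrees of freedom to decide whether the two distributions differ.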

7.
This study aimed at exploring faculty and student perspectives on student evaluations, as well as identifying their perceptions of the usefulness and appropriateness of the ratings for evaluating teaching effectiveness. More specifically, the study aimed at identifying the consequences, both intended and unintended, of using the evaluations, in addition to better understanding the process students used in responding to evaluations and what use faculty members made of them. Two surveys were developed and placed on the website of the Office of Institutional Research and Assessment. Emails were sent to all students and faculty participating in evaluations soliciting their cooperation and requesting their input. Faculty and student perceptions were compared qualitatively and statistically. Results revealed that students and faculty believe in the effectiveness and usefulness of the system with the need to overcome some negative consequences and biases inherent in its application at the University. Recommendations for system improvement are provided.

8.
This research addressed the validity of student ratings of instruction, describing and comparing the factor patterns obtained under three different sets of directions. Graduate education students (n=414) were randomly assigned to one of three conditions: administrator use, instructor use, and student use. Subjects in each condition received a different cover letter which explained the purported use to which their ratings were to be put and which asked them to rate their course and instructor on a 33-item rating scale. Data were factor analyzed using the principal-axes factor method followed by an oblique transformation. The factor patterns obtained were then compared using the coefficient of congruence. While two clusters (organization/structure and rapport/interaction) emerged across all three conditions, a third cluster appeared which was unique to each condition. The coefficients of congruence obtained generally indicated that the factors could not be considered invariant across the three conditions.
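The coefficient of congruence used to compare factor patterns is Tucker's congruence coefficient, a cosine-like index over factor loadings. A minimal sketch; the loading vectors below are invented for illustration, not taken from the study.

```python
import math

def congruence(x, y):
    """Tucker's congruence coefficient between two factor-loading vectors."""
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den

# Hypothetical loadings for the same five items under two conditions.
loadings_admin = [0.71, 0.65, 0.58, 0.12, 0.08]
loadings_student = [0.69, 0.61, 0.55, 0.35, 0.30]
print(round(congruence(loadings_admin, loadings_student), 3))
```

Values near 1.0 are conventionally read as factor invariance; the study's conclusion is that the obtained coefficients fell short of that threshold across conditions.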

9.
In this paper, we have developed a classification model for online learning environments that relates the Instructor's Overall Performance (IOP) rating (according to students' perceptions) with the course characteristics, students' demographics and the effectiveness of the instructor in his/her teaching roles. To that end, a comprehensive Student Evaluation of Teaching (SET) instrument is proposed, which includes not only conventional teaching elements, but also items that encourage twenty-first century skills. The goal of the study is twofold: (i) to quantify the extent to which the selected variables explain the IOP rating, and (ii) to determine which teaching and non-teaching variables most affect the IOP rating. The best performing classifier achieved a competitive accuracy, highlighting that the selected variables mainly determine the IOP values. Other important findings include: (i) the IOP value is mainly influenced by the effectiveness of the instructor in his/her teaching roles; (ii) teaching strategies that involve the cooperation between the technical and pedagogical roles should be promoted; (iii) the pedagogical role has the highest impact on the final IOP value; and (iv) the most influential demographic variable is the student's status (working commitments and family responsibilities).

10.
Using nine years of student evaluation of teaching (SET) data from a large US research university, we examine whether changes to the SET instrument have a substantial impact on overall instructor scores. Our study exploits four distinct natural experiments that arose when the SET instrument was changed. To maximise power, we compare the same course/instructor before and after each of the four changes occurred. We find that switching from in-class, paper course evaluations to online evaluations generates an average change of −0.14 points on a five-point scale, or 0.25 standard deviations (SDs) in the overall instructor ratings. Changing labelling of the scale and the wording of the overall instructor question generates another decrease in the average rating: −0.15 of a point (0.27 SDs). In contrast, extending the evaluation period to include the final examination and offering an incentive (early grade release) for completing the evaluations do not have a statistically significant effect on the overall instructor rating. The cumulative impact of these individual changes is −0.29 points (0.52 SDs). This large decrease shows that SET scores are not comparable over time when instruments change. Therefore, administrators should measure and account for such changes when using historical benchmarks for evaluative purposes (e.g. appointments and compensation).
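The abstract reports each change both in raw points on the five-point scale and in SD units; the conversion is a simple division by the ratings' standard deviation. A minimal sketch; the SD of 0.56 below is inferred from the reported figures (0.14 points corresponding to 0.25 SDs), not quoted directly from the paper.

```python
def standardized_effect(raw_change, sd):
    """Express a raw mean change in standard-deviation units."""
    return raw_change / sd

# SD implied by the reported pair (-0.14 points, 0.25 SDs).
sd_overall = 0.56

print(round(standardized_effect(-0.14, sd_overall), 2))  # -0.25
print(round(standardized_effect(-0.29, sd_overall), 2))  # -0.52
```

The same SD reproduces the cumulative figure (−0.29 points as roughly 0.52 SDs), which is why a single pooled SD is a plausible assumption here.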

11.
The present study, employing a 2×2 design, examines the independent and interactive effects of two experimentally manipulated variables, Anonymity (A) and Retaliatory Potential (RP), on students' ratings of faculty performance. Data were obtained from 188 students at a large midwestern university using an instrument that had been specially developed for the study. A hypothesized main effect was found for A: The professor was rated more positively by students completing signed ratings than those completing ratings anonymously. The data showed no support, however, for the hypothesized RP main effect or the A×RP interactive effect. Practical implications of the study's findings are discussed.

12.
As student evaluation of teaching (SET) instruments are increasingly administered online, research has found that the response rates have dropped significantly. Validity concerns have necessitated research that explores student motivation for completing SETs. This study uses Vroom's [(1964). Work and motivation (3rd ed.). New York, NY: John Wiley & Sons] expectancy theory to frame student focus group responses regarding their motivations for completing and not completing paper and online SETs. Results show that students consider the following outcomes when deciding whether to complete SETs: (a) improving the course, (b) supporting appropriate instructor tenure and promotion, (c) making accurate instructor ratings available to students, (d) spending a reasonable amount of time on SETs, (e) retaining anonymity, (f) avoiding social scrutiny, (g) earning points and releasing grades, and (h) being a good university citizen. Results show that the lower online response rate is largely due to students' differing feelings of obligation in the two formats. Students also noted that in certain situations, students often answer SETs insincerely.

13.
A manipulation of the instructions students received prior to completing the 7-item Endeavor Instructional Rating card differentially affected their ratings on two types of items. Specifically, when students were led to believe their ratings would have a strong impact on the instructor's career, they tended to be more lenient on items measuring rapport (i.e., the affective domain); this same effect was not observed for items measuring pedagogical skill (i.e., the cognitive domain). The different items on our instructional rating instrument appear to be measuring different things. One implication of this observation is that the inconsistent findings reported in past research on student ratings of instruction may be due to the differential mix of items from one instrument to another. When instructors are compared on ratings given them by students, unbiased interpretation requires that the multidimensional nature of teaching (and of the rating instrument) be considered.

14.
The present article examined the validity of public web‐based teaching evaluations by comparing the ratings on RateMyProfessors.com for 126 professors at Lander University to the institutionally administered student evaluations of teaching and actual average assigned GPAs for these same professors. Easiness website ratings were significantly positively correlated with actual assigned grades. Further, clarity and helpfulness website ratings were significantly positively correlated with student ratings of overall instructor excellence and overall course excellence on the institutionally administered IDEA forms. The results of this study offer preliminary support for the validity of the evaluations on RateMyProfessors.com.
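The validity comparisons above rest on Pearson product-moment correlations between paired per-professor measures. A minimal sketch; the paired values below are invented for illustration, standing in for, e.g., easiness ratings and average assigned GPAs.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-professor pairs: website easiness rating vs. mean assigned GPA.
easiness = [2.1, 3.4, 4.0, 2.8, 3.9]
mean_gpa = [2.6, 3.1, 3.6, 2.9, 3.4]
print(round(pearson_r(easiness, mean_gpa), 3))
```

A significant positive r between easiness and assigned grades is exactly the pattern the abstract reports for the 126-professor sample.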

15.
Course evaluations (often termed student evaluations of teaching or SETs) are pervasive in higher education. As SETs increasingly shift from pencil-and-paper to online, concerns grow over the lower response rates that typically accompany online SETs. This study of online SET response rates examined data from 678 faculty respondents and student response rates from an entire semester. The analysis focused on those tactics that faculty employ to raise response rates for their courses, and explored instructor and course characteristics as contributing factors. A comprehensive regression model was evaluated to determine the most effective tactics and characteristics. Using incentives had the most impact on response rates. Other effective tactics that increase response rates include reminding students to take the evaluation, explaining how the evaluations would be used to improve instruction, sending personal emails and posting reminders on Blackboard®. Incentives are not widely used; however, findings suggest that non-point incentives work as well as point-based ones, as do simple-to-administer minimum class-wide response rate expectations (compared to individual completion).

16.
Online self-testing as part of the online learning environment (OLE) provides practice questions on key concepts with immediate feedback – in a 'no-risk' environment. OLE activity was analysed for 471 on-site and distance students enrolled in health science courses to determine total activity on the OLE and usage of online self-tests. The study also aimed to determine whether utilisation of self-tests differed by final grade, particularly between students who just pass (C grades and Restricted pass) and those who fail (D and E grades). Results indicated that on-site students were significantly more active on the OLE compared to distance students. However, these groups engaged similarly with self-tests and achieved a similar distribution of grades. A significant positive relationship was found between final grade achieved and percentage of self-tests attempted. This relationship was significant regardless of study status (on-site or distance), course studied or total activity logged. A more targeted analysis of C + R vs. D + E students showed that although these two groups were similar on overall usage of the OLE, C + R students utilised self-tests to a significantly greater extent. Recommendations from this study are that students (particularly those struggling to achieve academic success) should be directed towards online self-tests.

17.
Based on student evaluation of teaching (SET) ratings from 1,432 units of study over a period of a year, representing 74,490 individual sets of ratings, and including a significant number of units offered in wholly online mode, we confirm the significant influence of class size, year level, and discipline area on at least some SET ratings. We also find online mode of offer to significantly influence at least some SET ratings. We reveal both the statistical significance and effect sizes of these influences, and find that the magnitudes of the effect sizes of all factors are small, but potentially cumulative. We also show that the influence of online mode of offer is of the same magnitude as the other 3 factors. These results support and extend the rating interpretation guides (RIGs) model proposed by Neumann and colleagues, and we present a general method for the development of a RIGs system.

18.
The use of aggregated student evaluations of their courses and course elements (e.g., subject functionality, affect, difficulty, graded assignments) is suggested as an efficient and useful means of obtaining program and department assessments. Given that the instruments used to collect student evaluations are valid (if they are not, they should not be used for any purpose), then averaging class data is likely to provide a valid and reliable index of program and department effectiveness as evaluated by students. Program and department assessment data are presented and discussed for a large northeastern professional school. Large and significant differences in the ratings of program elements were found. Although many of the elements designed into the program by the administration and faculty were perceived as operational by the students, some discrepancies between the design and student perceptions existed. Substantial departmental differences were also found which indicated areas of strength and weakness both within and across departments. The potential usefulness of the assessment for internal change and development is discussed.

19.
The relationships among several variables outside of the instructor's classroom control and student ratings of teaching effectiveness are investigated in a causal network. Student ratings are relatively independent of external variables. Students may be able to take into account more factors than generally assumed when they rate their instructors.

20.
Motivation theory suggests that autonomy supportiveness in instruction often leads to many positive outcomes in the classroom, such as higher levels of intrinsic motivation and engagement. The purpose of this study was to determine whether perceived autonomy support and course-related intrinsic motivation in college classrooms positively predict student ratings of instruction. Data were collected from 47 undergraduate education courses and 914 students. Consistent with expectations, the results indicated that both intrinsic motivation and autonomy support were positively associated with multiple dimensions of student ratings of instruction. Results also showed that intrinsic motivation moderated the association between autonomy support and instructional ratings: the higher the intrinsic motivation, the less predictive autonomy support was; the lower the intrinsic motivation, the more predictive it was. These results suggest that incorporating classroom activities that engender autonomy support may lead to improved student perceptions of classroom instruction and may also enhance both student motivation and learning.
