1.
Wainer and Steinberg (1992) showed that within broad categories of first-year college mathematics courses (e.g., calculus) men had substantially higher average scores on the mathematics section of the SAT (SAT-M) than women who earned the same letter grade. However, three aspects of their analyses may lead to unwarranted conclusions. First, they focused primarily on differences in SAT-M scores given course grades when the more important question for admissions officers is the difference in course grades given scores on the predictor. Second, they failed to account for differences among calculus courses (e.g., calculus for engineers versus calculus for liberal arts students). Most importantly, Wainer and Steinberg focused on the use of SAT-M as a single predictor. A reanalysis presented here indicated that a more appropriate composite indicator made up of both SAT-M and high school grade point average demonstrated minuscule gender differences for both calculus and precalculus courses.
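The conditional-gap logic described here can be made concrete with simulated data. The sketch below is purely illustrative: the measurement model, coefficients, and sample are invented. It shows how a gender gap that appears when conditioning on SAT-M alone can nearly vanish when an SAT-M-plus-HSGPA composite is used as the predictor.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
ability = rng.normal(0.0, 1.0, n)            # grade-relevant ability
female = rng.integers(0, 2, n).astype(bool)

# Invented measurement model: at equal ability, SAT-M reads slightly
# higher for men and HSGPA slightly higher for women; course grades
# track ability alone.
sat_m = ability + 0.3 * (~female) + rng.normal(0.0, 0.6, n)
hsgpa = ability + 0.2 * female + rng.normal(0.0, 0.6, n)
grade = ability + rng.normal(0.0, 0.7, n)

def gap_given_predictor(x):
    """Male minus female mean grade residual after regressing
    grade on predictor x (i.e., the grade gap *given* the predictor)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, grade, rcond=None)
    resid = grade - X @ beta
    return resid[~female].mean() - resid[female].mean()

composite = (sat_m + hsgpa) / 2
print(f"gap given SAT-M alone: {gap_given_predictor(sat_m):+.3f}")
print(f"gap given composite:   {gap_given_predictor(composite):+.3f}")
```

Because the two invented biases point in opposite directions, the composite largely cancels them, which is the qualitative pattern the reanalysis reports.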
2.
3.
A sample of college-bound juniors from 275 high schools took a test consisting of 70 math questions from the SAT. A random half of the sample was allowed to use calculators on the test. Both genders and three ethnic groups (White, African American, and Asian American) benefited about equally from being allowed to use calculators; Latinos benefited slightly more than the other groups. Students who routinely used calculators on classroom mathematics tests were relatively advantaged on the calculator test. Test speededness was about the same whether or not students used calculators. Calculator effects on individual items ranged from positive through neutral to negative and could either increase or decrease the validity of an item as a measure of mathematical reasoning skills. Calculator effects could be either present or absent in both difficult and easy items.
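A minimal sketch of the kind of subgroup comparison the study reports. All scores, group sizes, and effect sizes below are invented; the point is only that the per-group calculator effect reduces to a difference of condition means between the two random halves.

```python
import numpy as np

rng = np.random.default_rng(1)

def calculator_effect(calc_scores, no_calc_scores):
    """Mean score benefit of the calculator condition for one subgroup."""
    return calc_scores.mean() - no_calc_scores.mean()

# Invented number-right scores; the Latino group gets a slightly
# larger bump, mirroring the direction of the reported result.
for group, bump in [("White", 2.0), ("African American", 2.1),
                    ("Asian American", 1.9), ("Latino", 2.8)]:
    no_calc = rng.normal(40.0, 8.0, 500)      # random half without calculators
    calc = rng.normal(40.0 + bump, 8.0, 500)  # random half with calculators
    print(f"{group:17s} effect: {calculator_effect(calc, no_calc):+.2f}")
```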
4.
Using data from a sample of 10 colleges at which most students had taken both SAT I: Reasoning tests and SAT II: Subject tests, we simulated the effects of making selection decisions using SAT II scores in place of SAT I scores. Specifically, we treated the students in each college as forming the applicant pool for a more select college, and then selected the top two thirds (and top one third) of the students using high school grade point average combined with either SAT I scores or the average of SAT II scores. Success rates, in terms of first-year grade point averages, were virtually identical for students selected by the different models. The percentage of African American, Asian American, and White students selected varied only slightly across models. Appreciably more Mexican American and Other Latino students were selected with the model that used SAT II scores in place of SAT I scores because these students submitted subject test scores for the Spanish test on which they had high scores.
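The selection simulation lends itself to a short sketch. The code below collapses the design to a single hypothetical applicant pool with invented score correlations (the study pooled 10 colleges separately and averaged SAT II subject scores), and compares admits' mean first-year GPA under the two composites.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3_000  # hypothetical pooled applicant group

hsgpa = rng.normal(0.0, 1.0, n)                    # invented, standardized
sat1 = 0.7 * hsgpa + rng.normal(0.0, 0.7, n)       # correlates with HSGPA
sat2 = 0.7 * hsgpa + rng.normal(0.0, 0.7, n)
fygpa = 0.5 * hsgpa + 0.3 * sat1 + rng.normal(0.0, 0.8, n)  # first-year GPA

def admit_top(composite, frac=2 / 3):
    """Admit the top `frac` of the pool ranked on `composite`."""
    return composite >= np.quantile(composite, 1.0 - frac)

for label, scores in [("HSGPA + SAT I ", sat1), ("HSGPA + SAT II", sat2)]:
    admitted = admit_top(hsgpa + scores)
    print(f"{label}: admits' mean first-year GPA = "
          f"{fygpa[admitted].mean():.3f}")
```

When the two tests are interchangeable measures, as assumed here, the admits' success rates come out virtually identical, matching the reported finding; the real study's interesting divergence was in which students were admitted, not how they performed.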
5.
Scores on essay-based assessments that are part of standardized admissions tests are typically given relatively little weight in admissions decisions compared to the weight given to scores from multiple-choice assessments. Evidence is presented to suggest that more weight should be given to these assessments. The reliability of the writing scores from two large-volume admissions tests, the GRE General Test (GRE) and the Test of English as a Foreign Language Internet-based test (TOEFL iBT), based on retesting with a parallel form, is comparable to the reliability of the multiple-choice Verbal or Reading scores from those tests. Furthermore, and even more importantly, the writing scores from both tests are as effective as the multiple-choice scores in predicting academic success and could contribute to fairer admissions decisions.
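Parallel-form reliability, the statistic this comparison rests on, is simply the correlation between scores on two parallel forms of the same test. A tiny simulated illustration, with invented true-score and error variances:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000
true_score = rng.normal(0.0, 1.0, n)   # examinees' true writing ability

# Two parallel essay forms: same true score, independent errors.
form_a = true_score + rng.normal(0.0, 0.5, n)
form_b = true_score + rng.normal(0.0, 0.5, n)

# Parallel-form reliability is the correlation between the forms.
# Expected value here: 1 / (1 + 0.5**2) = 0.80.
print(f"estimated reliability: {np.corrcoef(form_a, form_b)[0, 1]:.3f}")
```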
6.
In this study data were examined from several national testing programs to determine whether the change from paper-based administration to computer-based tests (CBTs) influences group differences in performance. Performances by gender, racial, and ethnic groups on the Graduate Record Examination General Test, Graduate Management Admission Test, SAT I: Reasoning Test, and Praxis: Professional Assessment for Beginning Teachers were analyzed to determine whether the shift in testing format from paper-and-pencil tests to CBTs posed a disadvantage to any of these subgroups, beyond that already identified for paper-based tests. Although all differences were quite small, some consistent patterns were found for some racial-ethnic and gender groups. African-American examinees and, to a lesser degree, Hispanic examinees appear to benefit from the CBT format. On some tests, female examinees' performance was relatively lower on the CBT version.
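One way to frame the comparison is as a standardized group difference computed separately under each format. The sketch below is illustrative only: the distributions are invented, with the focal group's gap made slightly smaller on the CBT to mirror the direction reported for African-American examinees.

```python
import numpy as np

rng = np.random.default_rng(4)

def cohens_d(a, b):
    """Standardized mean difference with a pooled standard deviation."""
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    return (a.mean() - b.mean()) / pooled

# Invented score distributions for a reference and a focal group
# under each administration format.
ref_paper, foc_paper = rng.normal(0.0, 1, 4000), rng.normal(-0.60, 1, 1000)
ref_cbt, foc_cbt = rng.normal(0.0, 1, 4000), rng.normal(-0.50, 1, 1000)

print(f"gap on paper: d = {cohens_d(ref_paper, foc_paper):.2f}")
print(f"gap on CBT:   d = {cohens_d(ref_cbt, foc_cbt):.2f}")
```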
7.
Exploratory and confirmatory factor analyses were used to explore relationships among existing item types and three new computer-administered item types for the analytical scale of the Graduate Record Examination General Test. One new item type was an open-ended version of the current multiple-choice analytical reasoning item type. The other new item types had no counterparts on the existing test. The computer tests were administered at four sites to a sample of students who had previously taken the GRE General Test. Scores from the regular GRE and the special computer administration were matched for a sample of 349 students. Factor analyses suggested that the new item types with no counterparts in the existing GRE were reliably assessing unique constructs, but the open-ended analytical reasoning items were not measuring anything beyond what is measured by the current multiple-choice version of these items.
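For readers unfamiliar with the exploratory step, a toy version fits in a few lines. Everything below is invented apart from the matched-sample size of 349: two latent traits are planted so that the novel item types load on a factor of their own, the pattern the study's analyses detected. (The study also ran confirmatory models, which this sketch does not attempt.)

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
n = 349  # matched-sample size reported above

# Invented two-factor structure: the multiple-choice and open-ended
# analytical-reasoning scores share one trait; the two item types
# without existing counterparts share another.
reasoning = rng.normal(0.0, 1.0, n)
novel = rng.normal(0.0, 1.0, n)
scores = np.column_stack([
    reasoning + rng.normal(0.0, 0.5, n),   # multiple-choice AR
    reasoning + rng.normal(0.0, 0.5, n),   # open-ended AR
    novel + rng.normal(0.0, 0.5, n),       # new item type 1
    novel + rng.normal(0.0, 0.5, n),       # new item type 2
])

fa = FactorAnalysis(n_components=2, random_state=0).fit(scores)
print(np.round(fa.components_, 2))  # loadings: factors x measures
```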
8.
The incremental validity of a short holistically scored expository essay for predicting freshman grade point average was explored in two samples. In one of the samples the essay was administered to incoming freshmen at state colleges as part of a basic skills assessment battery. In the second sample the essay was part of an achievement test that is one of the admissions tests used by highly selective colleges. In both samples, the essay added essentially nothing to what could be predicted from high school grade point average, Scholastic Aptitude Test scores, and a multiple-choice test of writing-related skills.
This research was funded in part by the College Board. The conclusions and opinions expressed are those of the author. A version of this paper was presented at the annual meeting of the American Educational Research Association, Boston, April 1990.
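Incremental validity is usually quantified as the change in R^2 when the essay score joins the other predictors in the regression. The simulation below is illustrative only: all variables are invented, but if the short essay is a noisy measure of the same underlying skill as the other predictors, its increment to R^2 comes out near zero, as the study found.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5_000
skill = rng.normal(0.0, 1.0, n)   # latent academic/writing skill

hsgpa = skill + rng.normal(0.0, 0.8, n)
sat = skill + rng.normal(0.0, 0.8, n)
mc_writing = skill + rng.normal(0.0, 0.8, n)
essay = skill + rng.normal(0.0, 1.5, n)   # short essay: much noisier
fgpa = skill + rng.normal(0.0, 1.0, n)    # freshman GPA criterion

def r_squared(*predictors):
    """R^2 from an OLS regression of fgpa on the given predictors."""
    X = np.column_stack([np.ones(n), *predictors])
    beta, *_ = np.linalg.lstsq(X, fgpa, rcond=None)
    resid = fgpa - X @ beta
    return 1.0 - resid.var() / fgpa.var()

base = r_squared(hsgpa, sat, mc_writing)
full = r_squared(hsgpa, sat, mc_writing, essay)
print(f"R^2 without essay: {base:.4f}")
print(f"R^2 with essay:    {full:.4f}   increment: {full - base:.4f}")
```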
9.
Time limits on some computer-adaptive tests (CATs) are such that many examinees have difficulty finishing, and some examinees may be administered tests with more time-consuming items than others. Results from over 100,000 examinees suggested that about half of the examinees had to guess on the final six questions of the analytical section of the Graduate Record Examination if they were to finish before time expired. At higher ability levels, even more guessing was required because the questions administered to higher-ability examinees were typically more time consuming. Because the scoring model is not designed to cope with extended strings of guesses, substantial errors in ability estimates can be introduced when CATs have strict time limits. Furthermore, examinees who are administered tests with a disproportionate number of time-consuming items appear to get lower scores than examinees of comparable ability who are administered tests containing items that can be answered more quickly, though the issue is complicated by the relationship between time and difficulty and by the multidimensionality of the test.
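The scoring distortion can be illustrated with a deliberately simplified model. The GRE CAT's actual scoring was more elaborate than the Rasch model used here, so treat this strictly as a sketch: one simulated examinee is scored twice, once answering all 30 items normally and once forced to guess at random on the final six.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)

def rasch_mle(responses, difficulties):
    """Maximum-likelihood ability estimate under a Rasch model."""
    def neg_log_lik(theta):
        p = 1.0 / (1.0 + np.exp(-(theta - difficulties)))
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    return minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x

theta = 1.5                                   # true ability
b = rng.normal(theta, 0.4, 30)                # CAT-style targeted items
answered = (rng.random(30) < 1 / (1 + np.exp(-(theta - b)))).astype(float)

# Same examinee, but time pressure forces random guesses (five-option
# items, p = .2) on the final six questions.
rushed = answered.copy()
rushed[-6:] = (rng.random(6) < 0.2).astype(float)

print(f"ability estimate, finished normally: {rasch_mle(answered, b):+.2f}")
print(f"ability estimate, guessed last six:  {rasch_mle(rushed, b):+.2f}")
```

Because a CAT targets items near the examinee's ability, mostly-wrong guesses at the end look like strong evidence of lower ability, which is the bias the abstract describes.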
10.
This study assessed the ability of history students to choose the essay topic on which they can get the highest score. A second, equally important question was whether the score on the chosen topic was more highly related to other indicators of proficiency in history than the score on the unchosen topic. Overall, for both U.S. and European history, scores were about one third of a standard deviation higher for the preferred topic than for the other topic. For U.S. history, about 32% of the students made the wrong choice; that is, 32% got a higher score on the other topic than on the preferred topic. In European history, 29% made the wrong choice. In the U.S. history sample, the preferred essay correlated .40 with an external criterion score, compared to .34 for the other essay; in the European history sample, the preferred essay correlated .52 with the external criterion, compared to .44 for the other topic.
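The three quantities reported here (the choice benefit in standard-deviation units, the wrong-choice rate, and the criterion correlations) are all straightforward to compute once chosen- and unchosen-topic scores are in hand. The sketch below uses invented data in which the preferred topic is both somewhat easier for the student and a less noisy measure of proficiency.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 2_000
proficiency = rng.normal(0.0, 1.0, n)  # external criterion of history skill

# Invented essay scores: students do somewhat better on the topic they
# prefer, and the unchosen topic is a noisier measure.
chosen = proficiency + 0.5 + rng.normal(0.0, 1.0, n)
unchosen = proficiency + rng.normal(0.0, 1.3, n)

pooled_sd = np.sqrt((chosen.var(ddof=1) + unchosen.var(ddof=1)) / 2.0)
print(f"benefit of choosing:    {(chosen - unchosen).mean() / pooled_sd:.2f} SD")
print(f"'wrong' choices:        {(unchosen > chosen).mean():.0%}")
print(f"r(chosen, criterion):   {np.corrcoef(chosen, proficiency)[0, 1]:.2f}")
print(f"r(unchosen, criterion): {np.corrcoef(unchosen, proficiency)[0, 1]:.2f}")
```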