2 results found (search time: 15 ms)
1.
Hunter Gehlbach & Carly D. Robinson, Journal of Research on Educational Effectiveness, 2018, 11(2): 296-315
Like performance-enhancing drugs inflating apparent athletic achievements, several common social science practices contribute to the production of illusory results. In this article, we examine the processes that lead to illusory findings and describe their consequences. We borrow from an approach used increasingly by other disciplines: the norm of preregistering studies. Specifically, we examine how this practice of publicly posting documentation of one's prespecified hypotheses and other key decisions of a study prior to study implementation or data analysis could improve scientific integrity within education. In an attempt to develop initial guidelines to facilitate preregistrations in education, we discuss the types of studies that ought to be preregistered and the logistics of how educational researchers might execute preregistrations. We conclude with ideas for how researchers, reviewers, and the field of education more broadly might speed the adoption of this new norm.
2.
Rebecca Wolf, Jennifer Morrison, Amanda Inns, Robert Slavin & Kelsey Risman, Journal of Research on Educational Effectiveness, 2020, 13(2): 428-447
Rigorous evidence of program effectiveness has become increasingly important with the 2015 passage of the Every Student Succeeds Act (ESSA). One question that has not yet been fully explored is whether program evaluations carried out or commissioned by developers produce larger effect sizes than evaluations conducted by independent third parties. Using study data from the What Works Clearinghouse, we find evidence of a "developer effect," where program evaluations carried out or commissioned by developers produced average effect sizes that were substantially larger than those identified in evaluations conducted by independent parties. We explore potential reasons for the existence of a "developer effect" and provide evidence that interventions evaluated by developers were not simply more effective than those evaluated by independent parties. We conclude by discussing plausible explanations for this phenomenon as well as providing suggestions for researchers to mitigate potential bias in evaluations moving forward.