Average Effect Sizes in Developer-Commissioned and Independent Evaluations

Authors: Rebecca Wolf, Jennifer Morrison, Amanda Inns, Robert Slavin, Kelsey Risman

Affiliation: Center for Research and Reform in Education, Johns Hopkins University, Baltimore, Maryland, USA (betsywolf@jhu.edu)

Abstract: Rigorous evidence of program effectiveness has become increasingly important with the 2015 passage of the Every Student Succeeds Act (ESSA). One question that has not yet been fully explored is whether program evaluations carried out or commissioned by developers produce larger effect sizes than evaluations conducted by independent third parties. Using study data from the What Works Clearinghouse, we find evidence of a “developer effect,” where program evaluations carried out or commissioned by developers produced average effect sizes that were substantially larger than those identified in evaluations conducted by independent parties. We explore potential reasons for the existence of a “developer effect” and provide evidence that interventions evaluated by developers were not simply more effective than those evaluated by independent parties. We conclude by discussing plausible explanations for this phenomenon and offering suggestions for researchers to mitigate potential bias in future evaluations.

Keywords: Program evaluation; What Works Clearinghouse; meta-analysis; preregistration; publication bias