1.
ABSTRACT

As a systemic approach to improving educational practice through research, ‘What Works’ has come under repeated challenge from alternative approaches, most recently that of improvement science. While ‘What Works’ remains a dominant paradigm for centralized knowledge-building efforts, there is a need to understand why this alternative has gained support and what it can contribute. I set out how the core elements of experimental and improvement science can be combined into a strategy to raise educational achievement with the support of evidence from randomized experiments. Central to this combined effort is a focus on identifying and testing mechanisms for improving teaching and learning, as applications of principles from the learning sciences. This article builds on current efforts to strengthen approaches to evidence-based practice and policy in a range of international contexts. It provides a foundation for those who aim to avoid another paradigm war and to accelerate international discussions on the design of systemic education research infrastructure and funding.
2.
3.
ABSTRACT

Across the evidence-based policy and practice (EBPP) community, including education, randomised controlled trials (RCTs) rank as the most “rigorous” evidence for causal conclusions. This paper argues that this ranking is misleading. Only narrow conclusions about study populations can be warranted with the kind of “rigour” at which RCTs excel. Educators need a great deal more information to predict whether a programme will work for their pupils, and it is unlikely that such information can be obtained with EBPP-style rigour. Educators should therefore not be overly optimistic about success with programmes that have been “rigorously” tested. I close with a plea to the EBPP community to take on the job of identifying and vetting the information educators need in practice.
4.
ABSTRACT

Within evidence-based education, results from randomised controlled trials (RCTs), and meta-analyses of them, are taken as reliable evidence for effectiveness – they speak to “what works”. Extending RCT results requires establishing that study samples and settings are representative of the intended target. Although representativeness is widely recognised as important for drawing causal inferences from RCTs, claims about it tend to be poorly evidenced. Strategies for demonstrating it typically involve comparing observable characteristics (e.g., race, gender, location) of study samples with those of the population of interest to decision makers. This paper argues that these strategies provide insufficient evidence for establishing representativeness: the characteristics typically used for comparison are unlikely to be causally relevant to all educational interventions, and treating them as evidence that supports extending RCT results, without demonstrating their relevance, undermines the inference. Determining which factors are causally relevant requires studying the causal mechanisms underlying the interventions in question.