1.
In this ITEMS module, we introduce the generalized deterministic inputs, noisy "and" gate (G‐DINA) model, which is a general framework for specifying, estimating, and evaluating a wide variety of cognitive diagnosis models. The module contains a nontechnical introduction to diagnostic measurement, an introductory overview of the G‐DINA model and its common special cases, and a review of model‐data fit evaluation practices within this framework. We use the flexible GDINA R package, which is available for free within the R environment and provides a user‐friendly graphical interface in addition to the code‐driven layer. The digital module also contains videos of worked examples, solutions to data activity questions, curated resources, a glossary, and quizzes with diagnostic feedback.
2.
Diagnostic classification models (aka cognitive or skills diagnosis models) have shown great promise for evaluating mastery on a multidimensional profile of skills as assessed through examinee responses, but continued development and application of these models have been hindered by a lack of readily available software. In this article, we demonstrate how diagnostic classification models may be estimated as confirmatory latent class models using Mplus, thus bridging the gap between the technical presentation of these models and their practical use for assessment in research and applied settings. Using a sample English test of three grammatical skills, we describe how diagnostic classification models can be phrased as latent class models within Mplus and how to obtain the syntax and output needed for estimation and interpretation of the model parameters. We have also written a freely available SAS program that can be used to automatically generate the Mplus syntax. We hope this work will ultimately result in greater access to diagnostic classification models throughout the testing community, from researchers to practitioners.
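The core idea in the abstract above — phrasing a diagnostic classification model as a latent class model, where each attribute profile is one latent class — can be illustrated outside Mplus. The following is a minimal Python/NumPy sketch of the DINA special case; the Q-matrix, slip, and guessing parameters are invented for illustration and are not from the article's English-grammar example.

```python
import itertools
import numpy as np

# Hypothetical Q-matrix (assumption): 3 items measuring 2 skills.
# Row j lists the skills item j requires.
Q = np.array([[1, 0],
              [0, 1],
              [1, 1]])
slip = np.array([0.1, 0.1, 0.1])   # assumed slip parameters s_j
guess = np.array([0.2, 0.2, 0.2])  # assumed guessing parameters g_j

# All 2^K attribute profiles act as the latent classes.
profiles = np.array(list(itertools.product([0, 1], repeat=Q.shape[1])))

# DINA rule: eta = 1 iff the profile masters every skill the item requires.
eta = (profiles @ Q.T == Q.sum(axis=1)).astype(int)   # (classes, items)
p_correct = eta * (1 - slip) + (1 - eta) * guess      # P(X_j = 1 | class)

def class_likelihoods(response):
    """Likelihood of one observed 0/1 response vector under each class."""
    response = np.asarray(response)
    return np.prod(p_correct**response * (1 - p_correct)**(1 - response),
                   axis=1)

lik = class_likelihoods([1, 0, 0])
best = profiles[np.argmax(lik)]  # MAP profile under a uniform class prior
```

For the response pattern [1, 0, 0], the most likely class is the profile that masters only the first skill, which is exactly the kind of classification a confirmatory latent class estimation in Mplus would return (with class-conditional item probabilities constrained by the Q-matrix).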
3.
Cognitive diagnostic testing is a form of assessment that has emerged in recent years and is attracting growing attention from researchers. What most distinguishes cognitive diagnostic tests from traditional paper-and-pencil tests and computerized adaptive tests is that they can provide a detailed picture of an examinee's mastery of knowledge: examinees can learn their own strengths and weaknesses and undertake targeted remedial learning, and instructors can see where students are strong or weak and deliver targeted remedial teaching. Cognitive diagnostic testing arose in response to the development of modern education. Although it is powerful, implementing it on a truly large scale is not easy, because implementation involves many key techniques, some of which are not yet mature. This article systematically introduces the key techniques for implementing cognitive diagnostic tests, with the aim of drawing more researchers into diagnostic testing research and promoting its development.
4.
Although much research has been conducted on the psychometric properties of cognitive diagnostic models, they are only recently being used in operational settings to provide results to examinees and other stakeholders. Using this newer class of models in practice comes with a fresh challenge for diagnostic assessment developers: effectively reporting results and supporting end users to accurately interpret results. Achieving the goal of communicating results in a way that leads users of the assessment to make accurate interpretations requires a prerequisite step that cannot be taken for granted. The assessment developers must first accurately interpret results from a psychometric, or measurement, standpoint. Through this article, we seek to begin a discussion about reasonable interpretations of the results that classification‐based models provide about examinees. Interpretations from published research and ongoing practice show different—and sometimes conflicting—ways to interpret these results. This article seeks to formalize a comparison, critique, and discussion among the interpretations. Before beginning this discussion, we first present background on the results provided by classification‐based models regarding the examinees. We then structure our discussion around key questions an assessment development team needs to answer themselves prior to constructing reports and interpretative guides for end users of the assessment.
5.
Drawing valid inferences from modern measurement models is contingent upon a good fit of the data to the model. Violations of model‐data fit have numerous consequences, limiting the usefulness and applicability of the model. As Bayesian estimation is becoming more common, understanding the Bayesian approaches for evaluating model‐data fit is critical. In this instructional module, Allison Ames and Aaron Myers provide an overview of Posterior Predictive Model Checking (PPMC), the most common Bayesian model‐data fit approach. Specifically, they review the conceptual foundation of Bayesian inference as well as PPMC and walk through the computational steps of PPMC using real‐life data examples from simple linear regression and item response theory analysis. They provide guidance for how to interpret PPMC results and discuss how to implement PPMC for other models and data. The digital module contains sample data, SAS code, diagnostic quiz questions, data‐based activities, curated resources, and a glossary.
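The computational steps of PPMC that the module walks through (draw from the posterior, simulate replicated data, compare a discrepancy measure, summarize with a posterior predictive p-value) can be sketched compactly. Below is a minimal Python illustration for simple linear regression; the data are simulated, the posterior draws are taken from the OLS sampling distribution (which matches the posterior under a flat prior), and sigma is fixed at its estimate for brevity — all simplifications relative to a full Bayesian fit, not the module's SAS implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data from a correctly specified linear model (assumption).
n = 100
x = rng.uniform(0, 10, n)
y = 2.0 + 1.5 * x + rng.normal(0, 1.0, n)

# Step 1: approximate posterior draws for (intercept, slope).
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma_hat = np.sqrt(resid @ resid / (n - 2))
cov = sigma_hat**2 * np.linalg.inv(X.T @ X)

S = 2000
betas = rng.multivariate_normal(beta_hat, cov, size=S)

# Steps 2-3: for each draw, simulate replicated data y_rep and compare
# a discrepancy measure T (here the sample maximum) with the observed T.
T_obs = y.max()
T_rep = np.empty(S)
for s in range(S):
    y_rep = X @ betas[s] + rng.normal(0, sigma_hat, n)
    T_rep[s] = y_rep.max()

# Step 4: posterior predictive p-value; values near 0 or 1 flag misfit.
ppp = (T_rep >= T_obs).mean()
```

Because the data here really were generated by the fitted model, the p-value should fall in a moderate range; rerunning the check with a discrepancy sensitive to, say, nonlinearity or outliers is how PPMC is used to probe specific suspected misfits.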