16 similar documents were found.
1.
Applying Markov Chain Monte Carlo (MCMC) Methods to Estimate IRT Model Parameters
This paper introduces and explains how to apply Markov chain Monte Carlo (MCMC) techniques, combined with Bayesian methods, to estimate the parameters of IRT models. It first outlines the basic principles of MCMC parameter estimation; it then presents general MCMC estimation methods, covering concepts such as Gibbs sampling, rejection sampling, and the Metropolis-Hastings algorithm; finally, taking the two-parameter logistic (2PL) IRT model as an example, it focuses on the Metropolis-Hastings-within-Gibbs algorithm for estimating the item parameters (β1j, β2j). The paper closes by discussing the characteristics of MCMC methods. Reading it requires basic knowledge of probability theory, including stochastic processes, Markov chains, and Bayesian methods.
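The following is a minimal Python sketch of a Metropolis-Hastings-within-Gibbs sampler for a 2PL model, offered only as an illustration of the kind of algorithm described above: the simulated data, the slope-difficulty parameterization (a, b) rather than the paper's (β1j, β2j), the proposal scales, and the normal and half-normal priors are all assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_2pl(theta, a, b):
    """2PL probability of a correct response, P(X=1 | theta, a, b)."""
    return 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))

# simulate a small data set (all values hypothetical)
N, J = 500, 10
theta_true = rng.normal(size=N)
a_true = rng.uniform(0.8, 2.0, size=J)
b_true = rng.normal(size=J)
X = rng.binomial(1, p_2pl(theta_true, a_true, b_true))

def loglik(theta, a, b, axis):
    """Bernoulli log-likelihood summed over persons (axis=0) or items (axis=1)."""
    p = p_2pl(theta, a, b)
    return (X * np.log(p) + (1 - X) * np.log(1 - p)).sum(axis=axis)

theta, a, b = np.zeros(N), np.ones(J), np.zeros(J)
for it in range(2000):
    # 1) one random-walk M-H step per person's ability, N(0,1) prior
    theta_prop = theta + rng.normal(scale=0.5, size=N)
    log_r = (loglik(theta_prop, a, b, axis=1) - loglik(theta, a, b, axis=1)
             + 0.5 * (theta**2 - theta_prop**2))
    keep = np.log(rng.uniform(size=N)) < log_r
    theta[keep] = theta_prop[keep]

    # 2) one M-H step per item; half-normal prior on a, N(0,1) prior on b
    a_prop = np.abs(a + rng.normal(scale=0.1, size=J))   # reflected (still symmetric) proposal
    b_prop = b + rng.normal(scale=0.1, size=J)
    log_r = (loglik(theta, a_prop, b_prop, axis=0) - loglik(theta, a, b, axis=0)
             + 0.5 * (a**2 - a_prop**2) + 0.5 * (b**2 - b_prop**2))
    keep = np.log(rng.uniform(size=J)) < log_r
    a[keep], b[keep] = a_prop[keep], b_prop[keep]

# crude check: the final draws should correlate with the generating values
print(np.corrcoef(a, a_true)[0, 1], np.corrcoef(b, b_true)[0, 1])
```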
2.
Drawing valid inferences from item response theory (IRT) models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. This instructional module provides an overview of methods used for evaluating the fit of IRT models. Upon completing this module, the reader will have an understanding of traditional and Bayesian approaches for evaluating model-data fit of IRT models, the relative advantages of each approach, and the software available to implement each method.
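As a concrete illustration of one traditional (non-Bayesian) fit check of the kind such a module covers, the sketch below compares observed and 2PL-implied proportions correct across ability groups for a single item; the function name, grouping scheme, and simulated data are hypothetical and are not taken from the module.

```python
import numpy as np

def item_fit_table(x, theta_hat, a, b, n_groups=10):
    """Observed vs. 2PL-expected proportion correct for one item,
    with examinees grouped by estimated ability (a residual-style check)."""
    order = np.argsort(theta_hat)
    groups = np.array_split(order, n_groups)
    rows = []
    for g in groups:
        p_exp = 1.0 / (1.0 + np.exp(-a * (theta_hat[g] - b)))
        rows.append((theta_hat[g].mean(), x[g].mean(), p_exp.mean()))
    return np.array(rows)   # columns: mean theta, observed proportion, expected proportion

# hypothetical example: one item that actually follows the 2PL model
rng = np.random.default_rng(1)
theta = rng.normal(size=2000)
a_j, b_j = 1.2, 0.3
x_j = rng.binomial(1, 1.0 / (1.0 + np.exp(-a_j * (theta - b_j))))
print(item_fit_table(x_j, theta, a_j, b_j))
```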
3.
Research on Test Construction Methods Based on Item Response Theory
Building on a brief introduction to item response theory, this paper examines from a psychometric perspective the general steps for constructing various kinds of tests with IRT, discusses methods for building IRT item banks and for assembling tests from such banks, and addresses how to set the passing score for criterion-referenced tests.
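As a rough illustration of bank-based assembly, the sketch below greedily selects the items whose 2PL information is largest at a passing score on the ability scale; the simulated bank, the cut point, and the greedy rule are assumptions for illustration, not the procedure advocated in the paper.

```python
import numpy as np

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

def assemble_test(bank_a, bank_b, theta_cut, n_items):
    """Greedily pick the items carrying the most information at the cut score."""
    info = info_2pl(theta_cut, bank_a, bank_b)
    chosen = np.argsort(info)[::-1][:n_items]
    return chosen, info[chosen].sum()

# hypothetical 200-item bank, passing score at theta = 0.5
rng = np.random.default_rng(2)
bank_a = rng.uniform(0.5, 2.5, size=200)
bank_b = rng.normal(size=200)
items, test_info = assemble_test(bank_a, bank_b, theta_cut=0.5, n_items=30)
print(items, round(test_info, 2))
```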
4.
苏婕 《天津职业院校联合学报》2007,9(5):106-109
With the spread of computers, the growth of networks, and advances in teaching and assessment theory, computerized adaptive testing based on item response theory has become increasingly common. Because its items automatically adapt to examinees of different ability levels, it has been adopted by a growing number of testing programs. This paper discusses item response theory and the issues involved in implementing computerized adaptive testing.
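A minimal sketch of the adaptive loop is given below, assuming 2PL items, maximum-information item selection, and an EAP ability update on a quadrature grid; these choices and the simulated item bank are illustrative assumptions, not the design described in the article.

```python
import numpy as np

rng = np.random.default_rng(3)
quad = np.linspace(-4, 4, 81)                     # quadrature grid for EAP
prior = np.exp(-quad**2 / 2); prior /= prior.sum()

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def run_cat(bank_a, bank_b, true_theta, test_length=20):
    posterior = prior.copy()
    available = np.ones(len(bank_a), dtype=bool)
    theta_hat = 0.0
    for _ in range(test_length):
        # choose the unused item with maximum information at the current estimate
        p = p_2pl(theta_hat, bank_a, bank_b)
        info = bank_a**2 * p * (1 - p)
        info[~available] = -np.inf
        j = int(np.argmax(info))
        available[j] = False
        # simulate the response, then update the posterior over the grid (EAP)
        x = rng.binomial(1, p_2pl(true_theta, bank_a[j], bank_b[j]))
        like = p_2pl(quad, bank_a[j], bank_b[j])
        posterior *= like if x == 1 else (1 - like)
        posterior /= posterior.sum()
        theta_hat = float((quad * posterior).sum())
    return theta_hat

bank_a = rng.uniform(0.5, 2.5, size=300)
bank_b = rng.normal(size=300)
print(run_cat(bank_a, bank_b, true_theta=1.0))    # estimate for one simulated examinee
```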
5.
Roy Levy 《Educational Measurement》2020,39(1):94-95
In this digital ITEMS module, Dr. Roy Levy describes Bayesian approaches to psychometric modeling. He discusses how Bayesian inference is a mechanism for reasoning in a probability-modeling framework and is well-suited to core problems in educational measurement: reasoning from student performances on an assessment to make inferences about their capabilities more broadly conceived, as well as fitting models to characterize the psychometric properties of tasks. The approach is first developed in the context of estimating a mean and variance of a normal distribution before turning to the context of unidimensional item response theory (IRT) models for dichotomously scored data. Dr. Levy illustrates the process of fitting Bayesian models using the JAGS software facilitated through the R statistical environment. The module is designed to be relevant for students, researchers, and data scientists in various disciplines such as education, psychology, sociology, political science, business, health, and other social sciences. It contains audio-narrated slides, diagnostic quiz questions, and data-based activities with video solutions as well as curated resources and a glossary.
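The module itself works through JAGS via R; purely as a hedged Python analogue of its opening example (estimating the mean and variance of a normal distribution), here is a small hand-coded Gibbs sampler with conjugate priors. The priors and data are hypothetical, not the module's.

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.normal(loc=2.0, scale=1.5, size=100)      # hypothetical data
n, ybar = len(y), y.mean()

# conjugate priors: mu ~ N(mu0, tau0^2), sigma^2 ~ Inverse-Gamma(a0, b0)
mu0, tau0_sq, a0, b0 = 0.0, 100.0, 0.01, 0.01

mu, sigma_sq = 0.0, 1.0
draws = []
for it in range(5000):
    # full conditional for mu given sigma^2 (normal)
    post_var = 1.0 / (1.0 / tau0_sq + n / sigma_sq)
    post_mean = post_var * (mu0 / tau0_sq + n * ybar / sigma_sq)
    mu = rng.normal(post_mean, np.sqrt(post_var))
    # full conditional for sigma^2 given mu (inverse gamma)
    a_n = a0 + n / 2
    b_n = b0 + 0.5 * np.sum((y - mu) ** 2)
    sigma_sq = 1.0 / rng.gamma(a_n, 1.0 / b_n)
    if it >= 1000:                                # discard burn-in
        draws.append((mu, sigma_sq))

draws = np.array(draws)
print(draws.mean(axis=0))                         # posterior means of mu and sigma^2
```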
6.
7.
8.
Students' mathematical literacy has a multidimensional structure, so competency-oriented assessment of mathematics achievement should report examinees' performance on each dimension rather than a single total score. Taking the PISA mathematical literacy framework as the theoretical model and multidimensional item response theory (MIRT) as the measurement model, this study used the mirt package for R to process and analyze item data from a grade 8 mathematical literacy assessment in one region, in order to investigate multidimensional measurement of mathematical literacy. The results show that MIRT combines the strengths of unidimensional item response theory and factor analysis: it can be used to examine the structural validity of the test and the quality of its items, and to provide multidimensional cognitive diagnosis of examinees' abilities.
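For reference, the compensatory multidimensional 2PL response function underlying such MIRT analyses can be written as P(X=1|θ) = logistic(a'θ + d); the Python sketch below evaluates it for a hypothetical two-dimensional item and is not the study's mirt-based analysis.

```python
import numpy as np

def m2pl_prob(theta, a, d):
    """Compensatory multidimensional 2PL: P(X=1 | theta) = logistic(a . theta + d),
    where theta is an (N, K) ability matrix, a an item's (K,) slope vector,
    and d its scalar intercept."""
    return 1.0 / (1.0 + np.exp(-(theta @ a + d)))

# hypothetical two-dimensional example with correlated abilities
rng = np.random.default_rng(5)
theta = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=1000)
a = np.array([1.2, 0.4])     # item loads mainly on the first dimension
d = -0.3
p = m2pl_prob(theta, a, d)
print(p.mean())              # expected proportion correct under the model
```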
9.
A random cluster sample of 505 primary and secondary school teachers (189 men, 271 women, aged 25 to 55) completed a teaching efficacy questionnaire. The responses were analyzed with item response theory to obtain each item's discrimination, difficulty, and peak item information, and the teaching efficacy scale was revised with reference to these values. The revised scale was then evaluated with structural equation modeling, facet theory, and smallest space analysis; the results show that it has a clearer factorial structure, higher reliability, and more precise measurement. Data were managed with SPSS 15.0 and analyzed with Hudap 6.0 and MULTILOG 7.03. Five conclusions were drawn: (1) the teaching efficacy scale is unidimensional and therefore suitable for item response theory analysis; (2) the revised items have more reasonable discrimination and difficulty; (3) the peak test information of the revised scale is slightly lower than that of the original; (4) corresponding facet elements before and after revision are highly correlated; and (5) the scale's three content facets are confirmed: moral and behavioral education of students, classroom organization and management, and knowledge transmission.
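The peak of an item information function, which the revision relied on, can be located numerically; the sketch below does so for a hypothetical five-category graded response item (a common model for Likert-type efficacy items), not for the actual scale items.

```python
import numpy as np

def grm_cat_probs(theta, a, b):
    """Graded response model category probabilities for one item;
    b is the sorted vector of between-category thresholds."""
    star = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))   # P(X >= k)
    star = np.hstack([np.ones((len(theta), 1)), star, np.zeros((len(theta), 1))])
    return star[:, :-1] - star[:, 1:]

def grm_item_information(theta, a, b, eps=1e-4):
    """Item information I(theta) = sum_k P_k'(theta)^2 / P_k(theta),
    with the derivative taken numerically."""
    p = grm_cat_probs(theta, a, b)
    dp = (grm_cat_probs(theta + eps, a, b) - grm_cat_probs(theta - eps, a, b)) / (2 * eps)
    return (dp**2 / p).sum(axis=1)

# hypothetical 5-category Likert item
theta = np.linspace(-4, 4, 401)
info = grm_item_information(theta, a=1.8, b=np.array([-1.5, -0.5, 0.5, 1.5]))
print(round(theta[np.argmax(info)], 2), round(info.max(), 2))   # location and height of the peak
```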
10.
Most of the software that is available to implement Bayesian approaches uses Markov chain Monte Carlo (MCMC) methods. It is our impression that many researchers are primarily concerned with convergence as assessed by the Potential Scale Reduction (PSR) and that other aspects of MCMC are largely ignored. In this article, we argue that the precision with which the Bayesian estimates are approximated by summary statistics for the MCMC chain is essential to ensure good statistical properties. We discuss the Effective Sample Size (ESS), which indicates how well an estimate is approximated, and present evidence from two simulation studies and an example from organizational research to support our claim that researchers should be concerned not only with convergence but also with precision, particularly when a multilevel model is estimated. In addition, we demonstrate how Mplus can be modified so that users can control the ESS, and we conclude with recommendations.
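For concreteness, a minimal sketch of both quantities is given below: the Gelman-Rubin potential scale reduction and a crude autocorrelation-based effective sample size, applied to two simulated autocorrelated chains. The truncation rule and the simulated chains are illustrative assumptions, not the estimators used by Mplus.

```python
import numpy as np

def potential_scale_reduction(chains):
    """Gelman-Rubin PSR (R-hat) for an (m, n) array of m chains of length n."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    within = chains.var(axis=1, ddof=1).mean()
    between = n * chain_means.var(ddof=1)
    var_hat = (n - 1) / n * within + between / n
    return np.sqrt(var_hat / within)

def effective_sample_size(chains, max_lag=200):
    """Crude ESS: m*n / (1 + 2 * sum of positive autocorrelations),
    averaging lag-k autocorrelations across chains."""
    m, n = chains.shape
    rho_sum = 0.0
    for lag in range(1, max_lag):
        acs = []
        for c in chains:
            c = c - c.mean()
            acs.append(np.sum(c[:-lag] * c[lag:]) / np.sum(c * c))
        rho = np.mean(acs)
        if rho < 0.05:            # truncate once the autocorrelation dies out
            break
        rho_sum += rho
    return m * n / (1 + 2 * rho_sum)

# hypothetical: two autocorrelated (AR(1)) chains for one parameter
rng = np.random.default_rng(6)
chains = np.zeros((2, 2000))
for c in range(2):
    for t in range(1, 2000):
        chains[c, t] = 0.9 * chains[c, t - 1] + rng.normal()
print(potential_scale_reduction(chains), effective_sample_size(chains))
```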
11.
Chueh-An Hsieh, Alexander von Eye, Kimberly Maier, Hsin-Jung Hsieh, Shi-Hsiung Chen 《Structural equation modeling》2013,20(4):592-615
Applying item response theory models to repeated observations has shown great promise in developmental research. By allowing the researcher to take account of the characteristics of both the item responses and the measurement error in longitudinal trajectory analysis, it improves the reliability and validity of latent growth curve analysis. To weight individual items differentially and to examine developmental stability and change over time, this study proposes a comprehensive modeling framework that combines a measurement model with a structural model. Although many components require attention, the study focuses on model formulation, evaluates the performance of the estimators of the model parameters, incorporates prior knowledge through Bayesian analysis, and applies the model to an illustrative example. It is hoped that this foundational study demonstrates the breadth of this unified latent growth curve model.
12.
Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be compared. When applied to a binary data set, our experience suggests that IRT and FA models yield similar fits. However, when the data are polytomous ordinal, IRT models yield a better fit because they involve a higher number of parameters. But when fit is assessed using the root mean square error of approximation (RMSEA), similar fits are obtained again. We explain why. These test statistics have little power to distinguish between FA and IRT models; they are unable to detect that linear FA is misspecified when applied to ordinal data generated under an IRT model.
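The RMSEA computation at the heart of that comparison is simple; the sketch below uses one common formula, RMSEA = sqrt(max(χ²/df − 1, 0)/(N − 1)), with hypothetical fit statistics chosen to mimic the pattern described: the IRT model spends more parameters, has fewer degrees of freedom, and ends up with an RMSEA similar to the FA model's. Note that some software uses N rather than N − 1 in the denominator.

```python
import numpy as np

def rmsea(chi2, df, n):
    """Root mean square error of approximation from a chi-square test statistic."""
    return np.sqrt(max(chi2 / df - 1.0, 0.0) / (n - 1))

# hypothetical fit results for the same ordinal data set
n = 1000
chi2_fa, df_fa = 95.0, 35     # linear factor analysis
chi2_irt, df_irt = 60.0, 20   # IRT (graded) model: better chi-square, fewer df
print(round(rmsea(chi2_fa, df_fa, n), 4), round(rmsea(chi2_irt, df_irt, n), 4))
```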
13.
To compare structural equation modeling and the IRT graded response model for screening items on a personality scale, data from 7,229 administrations of the Chinese College Student Personality Scale were analyzed. For the second factor, "Straightforwardness", parameters were estimated and model fit assessed with Lisrel 8.70 (structural equation model) and Multilog 7.03 (graded response model), and the item-screening results of the two methods were compared. Both analyses identified items 5, 6, 7, and 8 as poorly fitting: in the structural equation model they showed low factor loadings and unsatisfactory overall fit indices, while in the graded response model they showed unsatisfactory discrimination and location parameters and poorly shaped item characteristic and information curves. However, the structural equation model suggested that items 6 and 8 were the worst, whereas the graded response model pointed to items 5 and 6. Overall, the statistical conclusions of the two approaches about the personality scale items are consistent, with slight differences for individual items. Each has its own strengths, and the two can be used in combination.
14.
The high school grade point average (GPA) is often adjusted to account for nominal indicators of course rigor, such as "honors" or "advanced placement." Adjusted GPAs—also known as weighted GPAs—are frequently used for computing students' rank in class and in the college admission process. Despite the high stakes attached to GPA, weighting policies vary considerably across states and high schools. Previous methods of estimating weighting parameters have used regression models with college course performance as the dependent variable. We discuss and demonstrate the suitability of the graded response model for estimating GPA weighting parameters and evaluating traditional weighting schemes. In our sample, which was limited to self-reported performance in high school mathematics courses, we found that commonly used policies award more than twice the bonus points necessary to create parity for standard and advanced courses.
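To illustrate the idea of weighting under a graded response model, the sketch below compares the expected grade in a "standard" and an "honors" course modeled as graded-response items on a common achievement scale; the discrimination, the thresholds, and the 0.8 shift for the honors course are hypothetical values for illustration, not the article's estimates.

```python
import numpy as np

def grm_expected_grade(theta, a, b):
    """Expected grade (0-4 points) under a graded response model with
    discrimination a and thresholds b (one threshold per grade boundary).
    Uses E[grade] = sum_k P(grade >= k)."""
    star = 1.0 / (1.0 + np.exp(-a * (theta - b[:, None])))   # P(grade >= k), shape (K-1, T)
    return star.sum(axis=0)

theta = np.linspace(-2, 2, 9)
b_standard = np.array([-2.0, -1.0, 0.0, 1.0])   # D/C/B/A boundaries, standard course
b_honors = b_standard + 0.8                     # honors course is uniformly harder
bonus_needed = (grm_expected_grade(theta, 1.5, b_standard)
                - grm_expected_grade(theta, 1.5, b_honors))
print(np.round(bonus_needed, 2))                # implied bonus points at each ability level
```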
15.
To improve traffic crash models that analyze a large number of contributing factors, a conditional autoregressive negative binomial model estimated with Markov chain Monte Carlo and Gibbs sampling was used to capture overdispersion (through the negative binomial process), unobserved heterogeneity, and spatial correlation (through the conditional autoregressive process). Statistical tests showed that, with smaller prediction errors and more reliable parameter estimates, the conditional autoregressive negative binomial model outperformed the conditional autoregressive Poisson model, the negative binomial model, the zero-inflated Poisson model, and the zero-inflated negative binomial model. The results indicate that crash rates and fatalities increase with the number of lanes, curve length, annual average daily traffic per lane, and rainfall. The maximum speed limit and the distance to the nearest hospital are negatively related to the number of crashes but positively related to the number of fatal crashes, possibly because excessive speed produces more severe crashes and because rescuing the injured takes longer.
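A full conditional autoregressive specification is beyond a short sketch, so the Python code below shows only the non-spatial core: a random-walk Metropolis sampler for the coefficients of a negative binomial regression with a log link, on simulated data with a fixed dispersion parameter. The covariates, priors, and proposal scale are illustrative assumptions, not the study's model.

```python
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(7)

# hypothetical crash data: counts with two covariates (e.g., traffic volume, lanes)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.integers(2, 7, size=n)])
beta_true, r = np.array([-1.0, 0.6, 0.3]), 2.0            # r = NB dispersion
mu = np.exp(X @ beta_true)
y = nbinom.rvs(r, r / (r + mu), random_state=0)           # mean of NB(r, p) equals mu

def log_post(beta):
    """NB log-likelihood with log link plus a vague N(0, 10^2) prior on beta."""
    mu = np.exp(X @ beta)
    return nbinom.logpmf(y, r, r / (r + mu)).sum() - np.sum(beta**2) / 200.0

# random-walk Metropolis on the regression coefficients (dispersion held fixed)
beta = np.zeros(3)
cur = log_post(beta)
draws = []
for it in range(10000):
    prop = beta + rng.normal(scale=0.05, size=3)
    cand = log_post(prop)
    if np.log(rng.uniform()) < cand - cur:
        beta, cur = prop, cand
    if it >= 2000:                                         # discard burn-in
        draws.append(beta.copy())
print(np.mean(draws, axis=0))                              # compare with beta_true
```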
16.
In this digital ITEMS module, Dr. Brian Leventhal and Dr. Allison Ames provide an overview of Monte Carlo simulation studies (MCSS) in item response theory (IRT). MCSS are utilized for a variety of reasons, one of the most compelling being that they can be used when analytic solutions are impractical or nonexistent because they allow researchers to specify and manipulate an array of parameter values and experimental conditions (e.g., sample size, test length, and test characteristics). Dr. Leventhal and Dr. Ames review the conceptual foundation of MCSS in IRT and walk through the processes of simulating total scores as well as item responses using the two-parameter logistic, graded response, and bifactor models. They provide guidance for how to implement MCSS using other item response models and best practices for efficient syntax and executing an MCSS. The digital module contains sample SAS code, diagnostic quiz questions, activities, curated resources, and a glossary.
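The module's worked examples use SAS; as a hedged Python analogue of the item-response-generation step, the sketch below simulates dichotomous responses under a 2PL model for one hypothetical condition (the sample size, test length, and parameter distributions are assumptions, not the module's design).

```python
import numpy as np

def simulate_2pl(n_persons, n_items, seed=0):
    """Generate dichotomous item responses under a 2PL model:
    theta ~ N(0,1), a ~ U(0.5, 2), b ~ N(0,1), P = logistic(a * (theta - b))."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=n_persons)
    a = rng.uniform(0.5, 2.0, size=n_items)
    b = rng.normal(size=n_items)
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))
    responses = rng.binomial(1, p)
    return responses, theta, a, b

# one replication of a hypothetical condition: 1,000 examinees, 40 items
X, theta, a, b = simulate_2pl(1000, 40)
print(X.shape, X.mean())        # data matrix dimensions and overall proportion correct
```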