Similar Literature
 20 similar documents retrieved.
1.
The purpose of this study was to compare and evaluate three on-line pretest item calibration-scaling methods (the marginal maximum likelihood estimate with one expectation maximization [EM] cycle [OEM] method, the marginal maximum likelihood estimate with multiple EM cycles [MEM] method, and Stocking's Method B) in terms of item parameter recovery when the item responses to the pretest items in the pool are sparse. Simulations of computerized adaptive tests were used to evaluate the results yielded by the three methods. The MEM method produced the smallest average total error in parameter estimation, and the OEM method yielded the largest total error.

2.
3.
This paper reviews methods for handling missing data in a research study. Many researchers use ad hoc methods such as complete case analysis, available case analysis (pairwise deletion), or single-value imputation. Though these methods are easily implemented, they require assumptions about the data that rarely hold in practice. Model-based methods such as maximum likelihood using the EM algorithm and multiple imputation hold more promise for dealing with difficulties caused by missing data. While model-based methods require specialized computer programs and assumptions about the nature of the missing data, these methods are appropriate for a wider range of situations than the more commonly used ad hoc methods. The paper provides an illustration of the methods using data from an intervention study designed to increase students’ ability to control their asthma symptoms.
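
As a minimal, hedged illustration of the model-based approach the abstract favors (toy data and scikit-learn's chained-equations imputer, not the study's asthma data or software), multiple imputation followed by pooling of a regression slope might look like this:

# Multiple imputation via chained equations on hypothetical toy data.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=0.8, size=n)
data = np.column_stack([x, y])
data[rng.random(n) < 0.3, 1] = np.nan          # y is missing for roughly 30% of cases

slopes = []
for m in range(20):                            # 20 imputed data sets
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    completed = imp.fit_transform(data)
    b = np.polyfit(completed[:, 0], completed[:, 1], 1)[0]   # slope of y on x
    slopes.append(b)

print("pooled slope estimate:", np.mean(slopes))   # Rubin's rules would also pool variances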

4.
This paper applies the EM algorithm to parameter estimation for a three-component mixed exponential distribution, under both normal stress conditions and constant-stress accelerated life testing, in the complete-data case and under Type-I and Type-II censoring.
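
As a rough sketch of the kind of computation involved (complete-data case only, generic number of components, and not the paper's derivation or its censored-data extensions), an EM loop for an exponential mixture could be written as:

# EM for a k-component exponential mixture on complete data (illustrative only).
import numpy as np

def em_exp_mixture(x, k=3, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    w = np.full(k, 1.0 / k)                          # mixing weights
    lam = rng.uniform(0.5, 2.0, size=k) / x.mean()   # rate parameters
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation
        dens = w * lam * np.exp(-np.outer(x, lam))   # shape (n, k)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: closed-form updates for weights and rates
        w = r.mean(axis=0)
        lam = r.sum(axis=0) / (r * x[:, None]).sum(axis=0)
    return w, lam

x = np.concatenate([np.random.default_rng(1).exponential(1.0, 300),
                    np.random.default_rng(2).exponential(5.0, 300)])
print(em_exp_mixture(x, k=2))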

5.
An improved tampering-detection algorithm based on the CFA (color filter array) model is presented. The detection procedure is as follows: first, an interpolation algorithm is used to obtain the prediction error at each pixel position, and CFA-unit features are computed from these prediction errors; next, the EM (expectation-maximization) algorithm is used to estimate the parameters of the feature model, without fixing the mean of the tampered locations in advance (experiments indicate that this modification works well); finally, Bayesian theory is used to compute a likelihood ratio for each pixel, and tampered regions are localized according to differences in the likelihood ratios. Although detection assumes a single CFA pattern, images with various CFA patterns were also analyzed, and the experimental results show that the algorithm can accurately localize tampered regions in images with multiple CFA patterns.
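
A heavily simplified sketch of the per-pixel part of such a pipeline (an assumed interpolation kernel, a synthetic image, and a two-component Gaussian mixture rather than the paper's exact feature model) might be:

# Prediction error from a fixed interpolation kernel, a two-component Gaussian
# mixture fitted by EM, and a per-pixel posterior map. Kernel and data are assumptions.
import numpy as np
from scipy.ndimage import convolve
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
img = rng.normal(size=(128, 128))                             # stand-in for one color channel
kernel = np.array([[0, .25, 0], [.25, 0, .25], [0, .25, 0]])  # assumed interpolation weights
pred_error = img - convolve(img, kernel, mode="reflect")      # prediction error per pixel

gm = GaussianMixture(n_components=2, random_state=0)          # EM under the hood
feat = pred_error.reshape(-1, 1)
gm.fit(feat)
posterior = gm.predict_proba(feat)[:, 0].reshape(img.shape)   # posterior of one component
tampered_mask = posterior > 0.5                               # crude localization map
print(tampered_mask.mean())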

6.
The purpose of this study was to compare and evaluate five on-line pretest item-calibration/scaling methods in computerized adaptive testing (CAT): marginal maximum likelihood estimate with one EM cycle (OEM), marginal maximum likelihood estimate with multiple EM cycles (MEM), Stocking's Method A, Stocking's Method B, and BILOG/Prior. The five methods were evaluated in terms of item-parameter recovery, using three different sample sizes (300, 1000 and 3000). The MEM method appeared to be the best choice among these, because it produced the smallest parameter-estimation errors for all sample size conditions. MEM and OEM are mathematically similar, although the OEM method produced larger errors. MEM also was preferable to OEM, unless the amount of time involved in iterative computation is a concern. Stocking's Method B also worked very well, but it required anchor items that either would increase test lengths or require larger sample sizes depending on test administration design. Until more appropriate ways of handling sparse data are devised, the BILOG/Prior method may not be a reasonable choice for small sample sizes. Stocking's Method A had the largest weighted total error, as well as a theoretical weakness (i.e., treating estimated ability as true ability); thus, there appeared to be little reason to use it.

7.
Structural equation modeling (SEM) is now a generic modeling framework for many multivariate techniques applied in the social and behavioral sciences. Many statistical models can be considered either as special cases of SEM or as part of the latent variable modeling framework. One popular extension is the use of SEM to conduct linear mixed-effects modeling (LMM) such as cross-sectional multilevel modeling and latent growth modeling. It is well known that LMM can be formulated as structural equation models. However, one main difference between the implementations in SEM and LMM is that maximum likelihood (ML) estimation is usually used in SEM, whereas restricted (or residual) maximum likelihood (REML) estimation is the default method in most LMM packages. This article shows how REML estimation can be implemented in SEM. Two empirical examples, a latent growth model and a meta-analysis, are used to illustrate the procedures implemented in OpenMx. Issues related to implementing REML in SEM are discussed.

8.
The ECM algorithm, a variant of the EM algorithm, is applied to parameter estimation for mixture distributions with censored data, and explicit computational formulas are given, showing that the EM algorithm is a practical and effective method.

9.
Although structural equation modeling software packages use maximum likelihood estimation by default, there are situations where one might prefer to use multiple imputation to handle missing data rather than maximum likelihood estimation (e.g., when incorporating auxiliary variables). The selection of variables is one of the nuances associated with implementing multiple imputation, because the imputer must take special care to preserve any associations or special features of the data that will be modeled in the subsequent analysis. For example, this article deals with multiple group models that are commonly used to examine moderation effects in psychology and the behavioral sciences. Special care must be exercised when using multiple imputation with multiple group models, as failing to preserve the interactive effects during the imputation phase can produce biased parameter estimates in the subsequent analysis phase, even when the data are missing completely at random or missing at random. This study investigates two imputation strategies that have been proposed in the literature, product term imputation and separate group imputation. A series of simulation studies shows that separate group imputation adequately preserves the multiple group data structure and produces accurate parameter estimates.
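
To make the separate group imputation idea concrete, a hedged sketch (hypothetical two-group data and scikit-learn's imputer rather than the software and models used in the study) is:

# Separate-group imputation: run the imputation model within each group so that
# group-specific associations (the moderation of interest) are preserved.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
rows = []
for g, slope in [(0, 0.2), (1, 0.9)]:            # the x-y slope differs by group
    x = rng.normal(size=300)
    y = slope * x + rng.normal(scale=0.5, size=300)
    rows.append(pd.DataFrame({"group": g, "x": x, "y": y}))
df = pd.concat(rows, ignore_index=True)
df.loc[rng.random(len(df)) < 0.25, "y"] = np.nan  # introduce missingness on y

imputed = []
for g, sub in df.groupby("group"):               # impute each group separately
    filled = IterativeImputer(random_state=0).fit_transform(sub[["x", "y"]])
    out = sub.copy()
    out[["x", "y"]] = filled
    imputed.append(out)
df_imp = pd.concat(imputed)

for g, sub in df_imp.groupby("group"):           # group-specific slopes survive imputation
    print(g, np.polyfit(sub["x"], sub["y"], 1)[0])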

10.
Text mining in a networked environment mainly involves feature extraction, feature selection, choice of mining method, evaluation of results, and a knowledge module. A recent direction is EM-based text mining. A comparative mining model based on this algorithm works as follows: the known data set is first partitioned arbitrarily into several classes; the likelihood of each word in the document collection is then computed with respect to each class set and a background set, and summing these gives the likelihood of the whole data set; the process is repeated until convergence, so that the common topic of the texts and the topic of each class can be derived from the larger probability values in the class and background results.
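
One possible reading of that description is EM for a multinomial mixture over documents; the bare-bones sketch below uses toy term counts and omits the background distribution, so it is only an assumed simplification of the model described above:

# Minimal EM for a multinomial mixture of documents (toy counts, no background component).
import numpy as np

rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(30, 50))         # 30 documents, 50-term vocabulary
k = 3                                            # number of classes

pi = np.full(k, 1.0 / k)                         # class priors
theta = rng.dirichlet(np.ones(counts.shape[1]), size=k)   # per-class word distributions

for _ in range(50):
    # E-step: log-likelihood of each document under each class, then responsibilities
    loglik = counts @ np.log(theta).T + np.log(pi)         # shape (docs, k)
    loglik -= loglik.max(axis=1, keepdims=True)
    resp = np.exp(loglik)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate priors and word distributions from soft counts
    pi = resp.mean(axis=0)
    theta = (resp.T @ counts) + 1e-6
    theta /= theta.sum(axis=1, keepdims=True)

print(pi)                                        # converged class proportions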

11.
In this paper, a prediction model is developed that combines a Gaussian mixture model (GMM) and a Kalman filter for online forecasting of traffic safety on expressways. Raw time-to-collision (TTC) samples are divided into two categories: those representing vehicles in risky situations and those in safe situations. Then, the GMM is used to model the bimodal distribution of the TTC samples, and maximum likelihood (ML) estimates of the TTC distribution parameters are obtained using the expectation-maximization (EM) algorithm. We propose a new traffic safety indicator, named the proportion of exposure to traffic conflicts (PETTC), for assessing the risk and predicting the safety of expressway traffic. A Kalman filter is applied to forecast the short-term safety indicator, PETTC, solving the online safety prediction problem. A dataset collected from four different expressway locations is used for performance estimation. The test results demonstrate the precision and robustness of the prediction model under different traffic conditions and using different datasets. These results could help decision-makers to improve their online traffic safety forecasting and enable the optimal operation of expressway traffic management systems.
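
A hedged sketch of the two building blocks (a two-component GMM fitted to synthetic TTC samples by EM, and a scalar random-walk Kalman filter smoothing a risk indicator) is shown below; the indicator computed here is only an assumed proxy for PETTC, not the paper's exact definition.

# Fit a bimodal GMM to synthetic TTC data, derive a per-window risk proportion,
# then smooth/forecast it with a scalar Kalman filter.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
risky = rng.normal(1.5, 0.4, 400)                 # low TTC: conflict-prone situations
safe = rng.normal(6.0, 1.5, 1600)                 # high TTC: safe situations
ttc = np.concatenate([risky, safe])

gm = GaussianMixture(n_components=2, random_state=0).fit(ttc.reshape(-1, 1))
risky_comp = int(np.argmin(gm.means_.ravel()))    # component with the smaller mean TTC

# per-window proportion of samples assigned to the risky component (proxy indicator)
windows = np.array_split(rng.permutation(ttc), 20)
z = np.array([(gm.predict(w.reshape(-1, 1)) == risky_comp).mean() for w in windows])

# scalar Kalman filter with a random-walk state model (assumed noise levels)
x_hat, p = z[0], 1.0
q, r = 1e-3, 1e-2                                 # process / measurement noise
filtered = []
for obs in z:
    p += q                                        # predict
    k = p / (p + r)                               # Kalman gain
    x_hat += k * (obs - x_hat)                    # update
    p *= (1 - k)
    filtered.append(x_hat)
print(filtered[-1])                               # smoothed estimate of the indicator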

12.
Multilevel modeling is a statistical approach for analyzing hierarchical data that consist of individual observations nested within clusters. The Bayesian method is a well-known, and sometimes better, alternative to maximum likelihood for fitting multilevel models. A lack of user-friendly and computationally efficient software packages was long a main obstacle to applying Bayesian multilevel modeling. In recent years, software packages for multilevel modeling with improved Bayesian algorithms and faster speed have been developed. This article aims to update the knowledge of software packages for Bayesian multilevel modeling and thereby promote their use. Three categories of software packages capable of Bayesian multilevel modeling, including brms, MCMCglmm, glmmBUGS, Bambi, R2BayesX, BayesReg, R2MLwiN, and others, are introduced and compared in terms of computational efficiency, modeling capability and flexibility, and user-friendliness. Recommendations for practical users and suggestions for future development are also discussed.

13.
A morphology-based edge-detection algorithm for mitochondria in electron-microscope images is proposed. The image is first cropped as needed; the grayscale image is then binarized according to the gray-level differences between pixels; a sequence of combined morphological operations is applied to extract the background, remove noise in the target region, and smooth the edges; finally, the mitochondrial edges are obtained by removing all interior points. Experimental results show that, for detecting mitochondrial edges in electron-microscope images, the algorithm is more effective than existing algorithms and its results are closer to manual detection.
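
A hedged sketch of that morphological pipeline (a synthetic binary blob and scikit-image operators; the paper's exact cropping, binarization rule, and operator sequence are not reproduced) is:

# Morphological edge extraction: threshold, clean up with closing/opening,
# then take the boundary as mask minus its erosion ("remove all interior points").
import numpy as np
from skimage.morphology import binary_closing, binary_opening, binary_erosion, disk

# synthetic grayscale "mitochondrion": a bright ellipse on a noisy background
yy, xx = np.mgrid[0:128, 0:128]
gray = ((yy - 64) ** 2 / 900 + (xx - 64) ** 2 / 2500 < 1).astype(float)
gray += np.random.default_rng(0).normal(scale=0.2, size=gray.shape)

binary = gray > 0.5                                # simple threshold binarization
clean = binary_opening(binary_closing(binary, disk(2)), disk(2))  # denoise + smooth
edge = clean & ~binary_erosion(clean, disk(1))     # interior removed -> boundary pixels
print(edge.sum(), "edge pixels")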

14.
Maximum likelihood is commonly used for estimation of model parameters in analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in maximum likelihood analysis. Nonlinear constraints could be encountered in complicated applications. In this paper we develop an EM-type algorithm for estimating model parameters with both linear and nonlinear constraints. The empirical performance of the algorithm is demonstrated by a Monte Carlo study. Application of the algorithm for linear constraints is illustrated by setting up a two-level mean and covariance structure model for a real two-level data set and running an EQS program.

15.
In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find. Also this approach can raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM, and the resulting method is here denoted as skew-normal IM. This flexible model that includes the usual symmetric normal distribution as a special case is important, allowing continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of the skew-normal IM is assessed via stochastic simulation. The results indicate that the skew-normal IM has higher power for QTL detection and better precision of QTL location as compared to standard IM and nonparametric IM.
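
As a small, hedged illustration of why the skew-normal family helps with skewed phenotypes (direct maximum likelihood via scipy's built-in fit on synthetic data, rather than the EM scheme or mixture model used in the paper):

# Fit a skew-normal vs. a normal to skewed data by direct ML and compare log-likelihoods.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pheno = stats.skewnorm.rvs(a=5, loc=10, scale=2, size=500, random_state=rng)

a, loc, scale = stats.skewnorm.fit(pheno)
mu, sigma = stats.norm.fit(pheno)

ll_skew = stats.skewnorm.logpdf(pheno, a, loc, scale).sum()
ll_norm = stats.norm.logpdf(pheno, mu, sigma).sum()
print("skew-normal log-likelihood:", ll_skew)     # typically clearly higher on skewed data
print("normal log-likelihood:     ", ll_norm)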

16.
Recently, analysis of structural equation models with polytomous and continuous variables has received a lot of attention. However, contributions to the selection of good models are limited. The main objective of this article is to investigate the maximum likelihood estimation of unknown parameters in a general LISREL-type model with mixed polytomous and continuous data and propose a model selection procedure for obtaining good models for the underlying substantive theory. The maximum likelihood estimate is obtained by a Monte Carlo Expectation Maximization algorithm, in which the E step is evaluated via the Gibbs sampler and the M step is completed via the method of conditional maximization. The convergence of the Monte Carlo Expectation Maximization algorithm is monitored by bridge sampling. A model selection procedure based on Bayes factor and Occam's window search strategy is proposed. The effectiveness of the procedure in accounting for model uncertainty and in picking good models is discussed. The proposed methodology is illustrated with a real example.

17.
A bivariate polynomial model is used to model the time-frequency response of a time-varying OFDM system. Building on the polynomial model and the idea of the expectation-maximization (EM) method, an algorithm (PEMTO) is proposed that uses two-dimensional data on the time-frequency plane to obtain maximum likelihood (ML) estimates of the model parameters. To reduce computational complexity and avoid the risks introduced by matrix inversion, an iterative computation method for PEMTO (RPEMTO) is given. After mathematical simplification, the PEMTO algorithm can also be used for one-dimensional sequential channel estimation. Simulation results show that the bit error rate of the proposed algorithms is lower than that of other blind estimation algorithms.

18.
Though the common default maximum likelihood estimator used in structural equation modeling is predicated on the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to utilize distribution-free estimation methods. Fortunately, promising alternatives are being integrated into popular software packages. Bootstrap resampling, which is offered in AMOS (Arbuckle, 1997), is one potential solution for estimating model test statistic p values and parameter standard errors under nonnormal data conditions. This study is an evaluation of the bootstrap method under varied conditions of nonnormality, sample size, model specification, and number of bootstrap samples drawn from the resampling space. Accuracy of the test statistic p values is evaluated in terms of model rejection rates, whereas accuracy of bootstrap standard error estimates takes the form of bias and variability of the standard error estimates themselves.
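
A minimal, hedged sketch of the underlying bootstrap idea (a naive case-resampling bootstrap of a regression slope's standard error under heavy-tailed errors; AMOS's Bollen-Stine bootstrap for model test statistics involves an additional data transformation not shown here):

# Nonparametric bootstrap: resample cases with replacement and use the spread of
# re-estimated slopes as a standard error.
import numpy as np

rng = np.random.default_rng(0)
n = 150
x = rng.normal(size=n)
y = 0.6 * x + rng.standard_t(df=3, size=n)        # heavy-tailed (nonnormal) errors

boot_slopes = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)              # resample rows with replacement
    boot_slopes.append(np.polyfit(x[idx], y[idx], 1)[0])

print("bootstrap SE of slope:", np.std(boot_slopes, ddof=1))
print("95% percentile interval:", np.percentile(boot_slopes, [2.5, 97.5]))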

19.
To weaken the influence of non-line-of-sight propagation and other factors and improve positioning accuracy, an improved time difference of arrival (TDOA) algorithm is proposed, together with a flowchart of the implementation scheme and comparative simulation results. Building on the TDOA algorithm, it incorporates velocity and region constraints to judge whether the target has exceeded a movement-distance threshold or the region boundary, and optimizes the positioning result accordingly. After modeling, the TDOA algorithm, the velocity-constrained positioning algorithm, the region-constrained algorithm, and the improved TDOA algorithm were...

20.
Pattern matching algorithms are widely used in many fields. To reduce the number of comparisons and improve efficiency, two improved QS fast matching algorithms are proposed. The first checks whether the last character of the matching window appears in the pattern and slides the pattern accordingly. The second builds the two bad-character shift tables of the BM and QS algorithms and, after table lookup and comparison, chooses the shift distance at each step so that the pattern's shift is maximized, greatly reducing the number of attempts. Experimental results show that the numbers of attempted matches of the UCD and MSD algorithms are clearly lower than those of QS and other algorithms, giving higher efficiency.
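
For reference, a sketch of the baseline Quick Search (Sunday) bad-character rule that both improved algorithms build on (textbook QS only, not the UCD/MSD variants from the paper):

# Baseline Quick Search matcher: on each attempt, shift by the table entry for the
# character just past the current window (or by m+1 if that character is absent).
def quick_search(text, pattern):
    m, n = len(pattern), len(text)
    shift = {c: m - i for i, c in enumerate(pattern)}   # rightmost occurrence wins
    matches, pos = [], 0
    while pos + m <= n:
        if text[pos:pos + m] == pattern:
            matches.append(pos)
        if pos + m >= n:                        # nothing to the right of the window
            break
        pos += shift.get(text[pos + m], m + 1)  # bad-character shift
    return matches

print(quick_search("abracadabra", "abra"))      # -> [0, 7]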
