Similar Documents
19 similar documents found.
1.
To check whether data conform to a given model, statistical diagnostics are required. This paper studies statistical diagnostics for generalized nonlinear models based on maximum Lq-likelihood estimation. Three diagnostic statistics are used to test whether outliers are present in the data. Simulation results show that when the sample size is small, the diagnostic statistics obtained with maximum Lq-likelihood estimation are larger than those obtained with maximum likelihood estimation (MLE); as the sample size increases, the difference between them gradually diminishes. Consequently, maximum Lq-likelihood estimation makes it easier to identify outliers in the data than MLE.
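As a hedged illustration of the estimator discussed above (not the paper's own diagnostics), the following Python sketch computes a maximum Lq-likelihood estimate of a normal mean by maximizing the Lq-transformed density, where L_q(u) = (u^(1-q) - 1)/(1 - q) replaces log u; the value q = 0.9, the known variance, and the simulated data are assumptions made only for this example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def lq(u, q):
    """Deformed logarithm L_q(u) = (u^(1-q) - 1)/(1 - q); reduces to log(u) as q -> 1."""
    return np.log(u) if np.isclose(q, 1.0) else (u**(1.0 - q) - 1.0) / (1.0 - q)

def mlqe_normal_mean(x, q, sigma=1.0):
    """Maximize sum_i L_q(f(x_i; mu, sigma)) over mu, with sigma treated as known."""
    objective = lambda mu: -np.sum(lq(norm.pdf(x, loc=mu, scale=sigma), q))
    return minimize(objective, x0=np.median(x)).x[0]

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 30), [8.0]])  # small sample with one outlier
print("MLE (sample mean):", x.mean())
print("MLqE (q = 0.9):   ", mlqe_normal_mean(x, q=0.9))
```

With q < 1, observations in low-density regions are downweighted in the estimating equation, which is what makes the Lq approach attractive for outlier diagnostics.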

2.
This paper explores several important applications of sufficient statistics, focusing on how sufficient statistics are used to construct uniformly minimum-variance unbiased estimators and maximum likelihood estimators in parameter estimation, and to construct test functions in hypothesis testing.

3.
Progressively Type-II censored data are widely used in survival analysis. Under progressive Type-II censoring, this paper derives the maximum likelihood estimator and the Bayes estimator of the scale parameter of the generalized Pareto distribution and, under squared error loss, obtains Bayes estimates of the scale parameter based on the MCMC method and the Lindley approximation. The results show that the Bayes estimators outperform the maximum likelihood estimator, and that the Bayes estimator based on MCMC posterior sampling outperforms the Lindley approximation.
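As a rough illustration of MCMC-based Bayes estimation for a generalized Pareto scale parameter, the following Python sketch draws posterior samples with a random-walk Metropolis sampler and reports the posterior mean, which is the Bayes estimate under squared error loss. The known shape parameter, the vague 1/sigma prior, and complete (uncensored) data are simplifying assumptions for this example; the paper's progressive Type-II censoring scheme is not reproduced here.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
shape, true_scale = 0.2, 2.0                    # shape assumed known for this sketch
x = genpareto.rvs(shape, scale=true_scale, size=50, random_state=rng)

def log_posterior(sigma):
    """Log-likelihood plus a vague log-prior proportional to 1/sigma."""
    if sigma <= 0:
        return -np.inf
    return genpareto.logpdf(x, shape, scale=sigma).sum() - np.log(sigma)

# Random-walk Metropolis sampler for the scale parameter.
sigma, samples = x.mean(), []
for _ in range(20000):
    proposal = sigma + rng.normal(0, 0.2)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(sigma):
        sigma = proposal
    samples.append(sigma)

posterior_draws = np.array(samples[5000:])      # discard burn-in
print("Bayes estimate (posterior mean) of the scale parameter:", posterior_draws.mean())
```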

4.
Empirical likelihood, proposed by Owen (1988), is an important statistical method. This paper studies one of its foundations: the choice of unbiased estimating functions. To this end, an optimal empirical likelihood based on optimal unbiased estimating functions is defined, properties of optimal unbiased estimating functions are obtained, and methods for constructing them are studied. Finally, the approach is extended to quasi-likelihood.
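For readers unfamiliar with the setup, the profile empirical likelihood ratio built from an unbiased estimating function g (one satisfying E[g(X, \theta_0)] = 0) takes the standard form

\[
R(\theta) = \max\left\{ \prod_{i=1}^{n} n w_i \;:\; w_i \ge 0,\ \sum_{i=1}^{n} w_i = 1,\ \sum_{i=1}^{n} w_i\, g(X_i,\theta) = 0 \right\}.
\]

Which unbiased estimating function g to use when several are available is precisely the selection problem studied in this item.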

5.
Maximum likelihood estimation is an important method of point estimation, and its estimates enjoy many desirable statistical properties. In teaching, however, the method's relatively involved calculations make it difficult for students to learn. This paper presents an easy-to-understand classroom approach to teaching maximum likelihood estimation.

6.
This paper considers the likelihood ratio test that the ratio of the two normal populations' mean-to-standard-deviation quotients equals a given constant. The maximum likelihood estimators under the hypothesis, the likelihood ratio statistic, and its asymptotic distribution are given.
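As a hedged sketch of the general machinery behind this item (not the paper's specific derivation), the likelihood ratio statistic and its usual large-sample reference distribution are

\[
\Lambda = \frac{\sup_{\theta \in \Theta_0} L(\theta)}{\sup_{\theta \in \Theta} L(\theta)}, \qquad
-2\ln\Lambda \;\xrightarrow{d}\; \chi^2_{r} \quad \text{under } H_0,
\]

where r is the number of constraints imposed by the null hypothesis; here a single constraint, since the ratio of the two mean-to-standard-deviation quotients is fixed at one value.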

7.
Chapter 4: Mathematical Statistics
1. Understand the concepts of population, sample, and statistic.
2. Be proficient in the point estimation methods for parameters: maximum likelihood estimation and least squares estimation.
3. Understand the meaning and methods of interval estimation and hypothesis testing, especially the u-test.
4. Be proficient in fitting a regression line by least squares and in the associated significance test.
5. Be familiar with sampling inspection and process control problems.
A population consists of all observable individuals. A sample is a sequence of individuals drawn at random from the population that do not influence one another; it is a sequence of mutually independent random variables with the same distribution as the population. A statistic is a quantity introduced by processing the sample; it is a function of the sample that contains no unknown parameters. Steps of maximum likelihood estimation: write down the likelihood function of the sample, take its logarithm, set the derivative(s) with respect to the unknown parameter(s) to zero, solve the resulting likelihood equation(s), and check that the solution maximizes the likelihood (a worked sketch follows below).
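As a small worked sketch of those steps (an illustration added here, not part of the original chapter summary), the following Python code estimates an exponential rate parameter by maximum likelihood both in closed form and by numerically maximizing the log-likelihood, so the two answers can be compared; the simulated data and true rate are assumptions for the example.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
x = rng.exponential(scale=1 / 3.0, size=200)   # sample from Exp(rate = 3)

# Steps 1-2: likelihood, then log-likelihood l(lam) = n*log(lam) - lam*sum(x).
def neg_log_lik(lam):
    return -(len(x) * np.log(lam) - lam * x.sum())

# Steps 3-4: solve dl/dlam = n/lam - sum(x) = 0, giving the closed-form MLE.
closed_form = len(x) / x.sum()                 # equivalently 1 / x.mean()

# Numerical check of the same maximization.
numerical = minimize_scalar(neg_log_lik, bounds=(1e-6, 100.0), method="bounded").x

print("closed-form MLE:", closed_form)
print("numerical MLE:  ", numerical)
```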

8.
This paper discusses maximum likelihood estimation of the parameters of two 0-1 (Bernoulli) populations when both have partially missing data, proves the strong consistency and asymptotic normality of the estimators, and gives the limiting distribution of the likelihood ratio test statistic in the large-sample case.

9.
Some notes on teaching maximum likelihood estimation
This paper summarizes some issues that are easily overlooked when teaching maximum likelihood estimation, illustrates with examples several special cases that arise, and points out in particular that not every parameter has a maximum likelihood estimator.
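A standard example of the phenomenon mentioned above (the example is ours, not necessarily the one used in the cited note): for a two-component normal mixture with a free component variance, the likelihood is unbounded, so no maximum likelihood estimator exists. Setting one component's mean equal to an observation and letting its variance shrink to zero drives the likelihood to infinity:

\[
L(\mu_1,\sigma_1,\mu_2,\sigma_2,\pi)
= \prod_{i=1}^{n}\left[\frac{\pi}{\sigma_1}\,\phi\!\left(\frac{x_i-\mu_1}{\sigma_1}\right)
+ \frac{1-\pi}{\sigma_2}\,\phi\!\left(\frac{x_i-\mu_2}{\sigma_2}\right)\right]
\;\longrightarrow\; \infty
\quad \text{as } \mu_1 = x_1,\ \sigma_1 \to 0^{+}.
\]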

10.
This paper studies maximum likelihood estimation of the unknown parameters of a general linear regression model when the observation vector is subject to a form of contamination, and discusses the invariance under contamination of several statistics used for statistical inference. The study of invariance indicates in which kinds of statistical inference the effect of contamination can be ignored.

11.
Classical accounts of maximum likelihood (ML) estimation of structural equation models for continuous outcomes involve normality assumptions: standard errors (SEs) are obtained using the expected information matrix and the goodness of fit of the model is tested using the likelihood ratio (LR) statistic. Satorra and Bentler (1994) introduced SEs and mean adjustments or mean and variance adjustments to the LR statistic (involving also the expected information matrix) that are robust to nonnormality. However, in recent years, SEs obtained using the observed information matrix and alternative test statistics have become available. We investigate what choice of SE and test statistic yields better results using an extensive simulation study. We found that robust SEs computed using the expected information matrix coupled with a mean- and variance-adjusted LR test statistic (i.e., MLMV) is the optimal choice, even with normally distributed data, as it yielded the best combination of accurate SEs and Type I errors.

12.
A 2-stage robust procedure as well as an R package, rsem, were recently developed for structural equation modeling with nonnormal missing data by Yuan and Zhang (2012). Several test statistics that have been used for complete data analysis are employed to evaluate model fit in the 2-stage robust method. However, properties of these statistics under robust procedures for incomplete nonnormal data analysis have never been studied. This study aims to systematically evaluate and compare 5 test statistics, including a test statistic derived from normal-distribution-based maximum likelihood, a rescaled chi-square statistic, an adjusted chi-square statistic, a corrected residual-based asymptotically distribution-free chi-square statistic, and a residual-based F statistic. These statistics are evaluated under a linear growth curve model by varying 8 factors: population distribution, missing data mechanism, missing data rate, sample size, number of measurement occasions, covariance between the latent intercept and slope, variance of measurement errors, and downweighting rate of the 2-stage robust method. The performance of the test statistics varies and the one derived from the 2-stage normal-distribution-based maximum likelihood performs much worse than the other four. Application of the 2-stage robust method and of the test statistics is illustrated through growth curve analysis of mathematical ability development, using data on the Peabody Individual Achievement Test mathematics assessment from the National Longitudinal Survey of Youth 1997 Cohort.

13.
Fitting a large structural equation modeling (SEM) model with moderate to small sample sizes results in an inflated Type I error rate for the likelihood ratio test statistic under the chi-square reference distribution, known as the model size effect. In this article, we show that the number of observed variables (p) and the number of free parameters (q) have unique effects on the Type I error rate of the likelihood ratio test statistic. In addition, the effects of p and q cannot be fully explained using degrees of freedom (df). We also evaluated the performance of 4 correctional methods for the model size effect, including Bartlett’s (1950), Swain’s (1975), and Yuan’s (2005) corrected statistics, and Yuan, Tian, and Yanagihara’s (2015) empirically corrected statistic. We found that Yuan et al.’s (2015) empirically corrected statistic generally yields the best performance in controlling the Type I error rate when fitting large SEM models.

14.
Assessing the correspondence between model predictions and observed data is a recommended procedure for justifying the application of an IRT model. However, with shorter tests, current goodness-of-fit procedures, which assume precise point estimates of ability, are inappropriate. The present paper describes a goodness-of-fit statistic that considers the imprecision with which ability is estimated and involves constructing item fit tables based on each examinee's posterior distribution of ability, given the likelihood of their response pattern and an assumed marginal ability distribution. However, the posterior expectations that are computed are dependent and the distribution of the goodness-of-fit statistic is unknown. The present paper also describes a Monte Carlo resampling procedure that can be used to assess the significance of the fit statistic and compares this method with a previously used method. The results indicate that the method described herein is an effective and reasonably simple procedure for assessing the validity of applying IRT models when ability estimates are imprecise.
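As a hedged sketch of the key ingredient in this approach, the following Python code computes one examinee's posterior distribution of ability on a quadrature grid under a 2PL model, combining the likelihood of the observed response pattern with a standard normal marginal ability distribution. The item parameters and responses are made-up values for illustration, and the paper's item fit tables and Monte Carlo resampling step are not reproduced.

```python
import numpy as np
from scipy.stats import norm

def p_correct(theta, a, b):
    """2PL item response function P(X = 1 | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Illustrative item parameters (discrimination a, difficulty b) and one response pattern.
a = np.array([1.2, 0.8, 1.5, 1.0, 0.6])
b = np.array([-1.0, 0.0, 0.5, 1.0, 1.5])
responses = np.array([1, 1, 0, 0, 1])

# Quadrature grid over ability and the assumed N(0, 1) marginal ability distribution.
theta_grid = np.linspace(-4, 4, 81)
prior = norm.pdf(theta_grid)

# Likelihood of the response pattern at each grid point.
p = p_correct(theta_grid[:, None], a, b)                  # shape (81, 5)
likelihood = np.prod(p**responses * (1 - p)**(1 - responses), axis=1)

# Posterior over ability, normalized on the grid.
posterior = likelihood * prior
posterior /= posterior.sum()

print("posterior mean ability (EAP):", np.sum(theta_grid * posterior))
```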

15.
Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and 2 well-known robust test statistics. A modification to the Satorra–Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the 4 test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies 7 sample sizes and 3 distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ2 test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra–Bentler scaled test statistic performed best overall, whereas the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.

16.
The problem of testing whether a normal covariance matrix equals a specified matrix is considered. A new chi-square test statistic is derived for the multivariate normal population. Unlike the likelihood ratio test, the new test is an exact one.
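For context, the classical likelihood ratio statistic for testing H_0: \Sigma = \Sigma_0 in a p-variate normal population with unknown mean, based on the sample covariance matrix S with divisor n, is only asymptotically chi-square, which is what the proposed exact test is contrasted with:

\[
-2\ln\Lambda = n\left[\operatorname{tr}\!\left(\Sigma_0^{-1} S\right) - \ln\!\left|\Sigma_0^{-1} S\right| - p\right]
\;\xrightarrow{d}\; \chi^2_{p(p+1)/2} \quad \text{under } H_0 .
\]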

17.
The accuracy of structural model parameter estimates in latent variable mixture modeling was explored with a 3 (sample size) × 3 (exogenous latent mean difference) × 3 (endogenous latent mean difference) × 3 (correlation between factors) × 3 (mixture proportions) factorial design. In addition, the efficacy of several likelihood-based statistics (Akaike's Information Criterion [AIC], Bayesian Information Criterion [BIC], the sample-size adjusted BIC [ssBIC], the consistent AIC [CAIC], the Vuong-Lo-Mendell-Rubin adjusted likelihood ratio test [aVLMR]), classification-based statistics (CLC [classification likelihood information criterion], ICL-BIC [integrated classification likelihood], normalized entropy criterion [NEC], entropy), and distributional statistics (multivariate skew and kurtosis test) were examined to determine which statistics best recover the correct number of components. Results indicate that the structural parameters were recovered, but the model fit statistics were not exceedingly accurate. The ssBIC statistic was the most accurate statistic, and the CLC, ICL-BIC, and aVLMR showed limited utility. However, none of these statistics were accurate for small samples (n = 500).
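For reference, the likelihood-based information criteria compared in this item are typically defined as follows, with \hat{L} the maximized likelihood, k the number of free parameters, and n the sample size; the sample-size adjusted BIC replaces n in the penalty with (n+2)/24:

\[
\mathrm{AIC} = -2\ln\hat{L} + 2k, \qquad
\mathrm{BIC} = -2\ln\hat{L} + k\ln n, \qquad
\mathrm{ssBIC} = -2\ln\hat{L} + k\ln\!\frac{n+2}{24}.
\]

Smaller values indicate a better trade-off between fit and complexity when comparing models with different numbers of components.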

18.
As noted by Fremer and Olson, analysis of answer changes is often used to investigate testing irregularities because the analysis is readily performed and has proven its value in practice. Researchers such as Belov, Sinharay and Johnson, van der Linden and Jeon, van der Linden and Lewis, and Wollack, Cohen, and Eckerly have suggested several statistics for detection of aberrant answer changes. This article suggests a new statistic that is based on the likelihood ratio test. An advantage of the new statistic is that it follows the standard normal distribution under the null hypothesis of no aberrant answer changes. It is demonstrated in a detailed simulation study that the Type I error rate of the new statistic is very close to the nominal level and the power of the new statistic is satisfactory in comparison to those of several existing statistics for detecting aberrant answer changes. The new statistic and several existing statistics were shown to provide useful information for a real data set. Given the increasing interest in analysis of answer changes, the new statistic promises to be useful to measurement practitioners.

19.
In exploratory factor analysis, when the number of factors exceeds the true number of factors, the likelihood ratio test statistic no longer follows the chi-square distribution due to a problem of rank deficiency and nonidentifiability of model parameters. As a result, decisions regarding the number of factors may be incorrect. Several researchers have pointed out this phenomenon, but it is not well known among applied researchers who use exploratory factor analysis. We demonstrate that overfactoring is one cause for the well-known fact that the likelihood ratio test tends to find too many factors.
