Similar Documents
20 similar documents found (search time: 62 ms)
1.
Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and 2 well-known robust test statistics. A modification to the Satorra–Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the 4 test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies 7 sample sizes and 3 distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performed badly in most conditions except under the normal distribution. The goodness-of-fit χ2 test based on maximum likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra–Bentler scaled test statistic performed best overall, whereas the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.
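As a sketch of the general idea behind mean-scaled statistics such as the Satorra–Bentler correction: the ML chi-square is divided by a scaling factor c (normally estimated from the data's multivariate kurtosis) and the scaled value is referred to the usual chi-square reference distribution. The numeric inputs below (T_ML, df, c) are invented for illustration, not taken from the study:

```python
from scipy.stats import chi2

def sb_scaled_statistic(t_ml, scaling_factor):
    """Mean-scaled test statistic: T_SB = T_ML / c.

    The scaling factor c would normally be estimated from fourth-order
    moments of the data; here it is simply passed in."""
    return t_ml / scaling_factor

# Illustrative values: T_ML = 52.3 on df = 24, with an assumed
# scaling factor c = 1.6 reflecting excess kurtosis.
t_ml, df, c = 52.3, 24, 1.6
t_sb = sb_scaled_statistic(t_ml, c)
p_ml = chi2.sf(t_ml, df)   # unscaled p value
p_sb = chi2.sf(t_sb, df)   # scaled p value (larger: less overrejection)
print(f"T_ML = {t_ml:.1f}, p = {p_ml:.4f}")
print(f"T_SB = {t_sb:.2f}, p = {p_sb:.4f}")
```

With heavy-tailed data c exceeds 1, so the scaled statistic is smaller and the test rejects less often than the uncorrected ML chi-square.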

2.
A 2-stage robust procedure as well as an R package, rsem, were recently developed for structural equation modeling with nonnormal missing data by Yuan and Zhang (2012). Several test statistics that have been used for complete data analysis are employed to evaluate model fit in the 2-stage robust method. However, properties of these statistics under robust procedures for incomplete nonnormal data analysis have never been studied. This study aims to systematically evaluate and compare 5 test statistics, including a test statistic derived from normal-distribution-based maximum likelihood, a rescaled chi-square statistic, an adjusted chi-square statistic, a corrected residual-based asymptotically distribution-free chi-square statistic, and a residual-based F statistic. These statistics are evaluated under a linear growth curve model by varying 8 factors: population distribution, missing data mechanism, missing data rate, sample size, number of measurement occasions, covariance between the latent intercept and slope, variance of measurement errors, and downweighting rate of the 2-stage robust method. The performance of the test statistics varies, and the one derived from the 2-stage normal-distribution-based maximum likelihood performs much worse than the other 4. Application of the 2-stage robust method and of the test statistics is illustrated through growth curve analysis of mathematical ability development, using data on the Peabody Individual Achievement Test mathematics assessment from the National Longitudinal Survey of Youth 1997 Cohort.

3.
Bootstrapping approximate fit indexes in structural equation modeling (SEM) is of great importance because most fit indexes do not have tractable analytic distributions. Model-based bootstrap, which has been proposed to obtain the distribution of the model chi-square statistic under the null hypothesis (Bollen & Stine, 1992), is not theoretically appropriate for obtaining confidence intervals (CIs) for fit indexes because it assumes the null is exactly true. On the other hand, naive bootstrap is not expected to work well for those fit indexes that are based on the chi-square statistic, such as the root mean square error of approximation (RMSEA) and the comparative fit index (CFI), because sample noncentrality is a biased estimate of the population noncentrality. In this article we argue that a recently proposed bootstrap approach due to Yuan, Hayashi, and Yanagihara (YHY; 2007) is ideal for bootstrapping fit indexes that are based on the chi-square. This method transforms the data so that the “parent” population has the population noncentrality parameter equal to the estimated noncentrality in the original sample. We conducted a simulation study to evaluate the performance of the YHY bootstrap and the naive bootstrap for 4 indexes: RMSEA, CFI, goodness-of-fit index (GFI), and standardized root mean square residual (SRMR). We found that for RMSEA and CFI, the CIs under the YHY bootstrap had relatively good coverage rates for all conditions, whereas the CIs under the naive bootstrap had very low coverage rates when the fitted model had large degrees of freedom. However, for GFI and SRMR, the CIs under both bootstrap methods had poor coverage rates in most conditions.
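The naive percentile bootstrap for a chi-square-based index can be sketched as follows. The RMSEA formula is the standard one; the "bootstrap" chi-squares are faked with noncentral chi-square draws, an assumption standing in for actually refitting the model to each resampled data set (the YHY approach would first transform the data so the parent population matches the sample noncentrality):

```python
import numpy as np

def rmsea(chisq, df, n):
    """Point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return np.sqrt(np.maximum(chisq - df, 0.0) / (df * (n - 1)))

rng = np.random.default_rng(0)
df, n = 24, 200  # illustrative model df and sample size

# Stand-in for refitting the model to each bootstrap sample: draw
# chi-squares from a noncentral chi-square (ncp chosen arbitrarily).
boot_chisq = rng.noncentral_chisquare(df, nonc=12.0, size=2000)
boot_rmsea = rmsea(boot_chisq, df, n)

# Naive 95% percentile interval over the bootstrap RMSEA values.
lo, hi = np.percentile(boot_rmsea, [2.5, 97.5])
print(f"95% percentile CI for RMSEA: [{lo:.3f}, {hi:.3f}]")
```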

4.
This article introduces a bootstrap generalization to the Modified Parallel Analysis (MPA) method of test dimensionality assessment using factor analysis. This methodology, based on the use of Marginal Maximum Likelihood nonlinear factor analysis, provides for the calculation of a test statistic based on a parametric bootstrap using the MPA methodology for generation of synthetic datasets. Performance of the bootstrap test was compared with the likelihood ratio difference test and the DIMTEST procedure using a Monte Carlo simulation. The bootstrap test was found to exhibit much better control of the Type I error rate than the likelihood ratio difference test, and comparable power to DIMTEST under most conditions. A major conclusion to be taken from this research is that under many real-world conditions, the bootstrap MPA test presents a useful alternative for practitioners using Marginal Maximum Likelihood factor analysis to test for multidimensionality in test data.

5.
The purpose of this study is to investigate the effects of missing data techniques in longitudinal studies under diverse conditions. A Monte Carlo simulation examined the performance of 3 missing data methods in latent growth modeling: listwise deletion (LD), maximum likelihood estimation using the expectation-maximization (EM) algorithm with a nonnormality correction (robust ML), and the pairwise asymptotically distribution-free method (pairwise ADF). The effects of 3 independent variables (sample size, missing data mechanism, and distribution shape) were investigated on convergence rate, parameter and standard error estimation, and model fit. The results favored robust ML over LD and pairwise ADF in almost all respects. The exceptions included convergence rates under the most severe nonnormality in the missing not at random (MNAR) condition and recovery of standard error estimates across sample sizes. The results also indicate that nonnormality, small sample size, MNAR, and multicollinearity might adversely affect convergence rate and the validity of statistical inferences concerning parameter estimates and model fit statistics.

6.
The asymptotically distribution free (ADF) method is often used to estimate parameters or test models without a normal distribution assumption on variables, both in covariance structure analysis and in correlation structure analysis. However, little has been done to study the differences in behaviors of the ADF method in covariance versus correlation structure analysis. The behaviors of 3 test statistics frequently used to evaluate structural equation models with nonnormally distributed variables were compared: the χ2 test T_AGLS and its small-sample variants T_YB and T_F(AGLS). Results showed that the ADF method in correlation structure analysis with test statistic T_AGLS performs much better at small sample sizes than the corresponding test for covariance structures. In contrast, test statistics T_YB and T_F(AGLS) under the same conditions generally perform better with covariance structures than with correlation structures. It is proposed that excessively large and variable condition numbers of weight matrices are a cause of poor behavior of ADF test statistics in small samples, and results showed that these condition numbers increase systematically, and vary more widely, as sample size decreases. Implications for research and practice are discussed.
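The condition-number diagnosis can be illustrated directly: build the ADF weight matrix (the sample covariance matrix of the nonduplicated cross-products of centered variables) at several sample sizes and inspect its condition number. The lognormal population and the sample sizes are illustrative choices, not the article's design:

```python
import numpy as np

def adf_weight_condition(x):
    """Condition number of the sample fourth-moment matrix used as
    the ADF weight matrix: the covariance matrix of the p(p+1)/2
    nonduplicated cross-products of the centered variables."""
    d = x - x.mean(axis=0)
    p = d.shape[1]
    idx = [(i, j) for i in range(p) for j in range(i, p)]
    prods = np.column_stack([d[:, i] * d[:, j] for i, j in idx])
    gamma = np.cov(prods, rowvar=False)
    return np.linalg.cond(gamma)

rng = np.random.default_rng(1)
pop = rng.lognormal(size=(100_000, 4))  # a skewed population, p = 4
for n in (50, 200, 1000):
    sample = pop[rng.choice(pop.shape[0], size=n, replace=False)]
    print(f"n = {n:4d}: condition number = {adf_weight_condition(sample):.1f}")
```

Rerunning with different seeds shows both the size and the run-to-run variability of the condition number growing as n shrinks, which is the instability the abstract points to.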

7.
The authors performed a Monte Carlo simulation to empirically investigate the robustness and power of 4 methods in testing mean differences for 2 independent groups under conditions in which 2 populations may not demonstrate the same pattern of nonnormality. The approaches considered were the t test, Wilcoxon rank-sum test, Welch-James test with trimmed means and Winsorized variances, and a nonparametric bootstrap test. Results showed that the Wilcoxon rank-sum test and Welch-James test with trimmed means and Winsorized variances were not robust in terms of Type I error control when the 2 populations showed different patterns of nonnormality. The nonparametric bootstrap test provided power advantages over the t test. The authors discuss other results from the simulation study and provide recommendations.
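Three of the four comparisons can be sketched with SciPy stand-ins: Student's t, the Wilcoxon rank-sum test, and a Yuen-type trimmed-means t test (`ttest_ind` with `trim=0.2`, which uses Winsorized variances internally). The nonparametric bootstrap test is omitted for brevity, and the two populations below (skewed vs. heavy-tailed, both centered at 0) are illustrative assumptions, not the article's simulation design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Two groups with *different* patterns of nonnormality:
g1 = rng.exponential(1.0, size=60) - 1.0            # skewed, mean 0
g2 = stats.t.rvs(df=3, size=60, random_state=rng)   # heavy-tailed, mean 0

t_stat, t_p = stats.ttest_ind(g1, g2)        # classical t test
w_stat, w_p = stats.ranksums(g1, g2)         # Wilcoxon rank-sum test
y_stat, y_p = stats.ttest_ind(g1, g2, trim=0.2)  # Yuen trimmed t (SciPy >= 1.7)

print(f"t test    p = {t_p:.3f}")
print(f"rank-sum  p = {w_p:.3f}")
print(f"trimmed t p = {y_p:.3f}")
```

Repeating this many times and counting rejections at α = .05 would reproduce the kind of Type I error comparison the study reports.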

8.
Classical accounts of maximum likelihood (ML) estimation of structural equation models for continuous outcomes involve normality assumptions: standard errors (SEs) are obtained using the expected information matrix and the goodness of fit of the model is tested using the likelihood ratio (LR) statistic. Satorra and Bentler (1994) introduced SEs and mean adjustments or mean and variance adjustments to the LR statistic (involving also the expected information matrix) that are robust to nonnormality. However, in recent years, SEs obtained using the observed information matrix and alternative test statistics have become available. We investigate what choice of SE and test statistic yields better results using an extensive simulation study. We found that robust SEs computed using the expected information matrix coupled with a mean- and variance-adjusted LR test statistic (i.e., MLMV) is the optimal choice, even with normally distributed data, as it yielded the best combination of accurate SEs and Type I errors.

9.
Response accuracy and response time data can be analyzed with a joint model to measure ability and speed of working, while accounting for relationships between item and person characteristics. In this study, person‐fit statistics are proposed for joint models to detect aberrant response accuracy and/or response time patterns. The person‐fit tests take the correlation between ability and speed into account, as well as the correlation between item characteristics. They are posited as Bayesian significance tests, which have the advantage that the extremeness of a test statistic value is quantified by a posterior probability. The person‐fit tests can be computed as by‐products of a Markov chain Monte Carlo algorithm. Simulation studies were conducted in order to evaluate their performance. For all person‐fit tests, the simulation studies showed good detection rates in identifying aberrant patterns. A real data example is given to illustrate the person‐fit statistics for the evaluation of the joint model.

10.
This paper examines the effects of two background variables in students' ratings of teaching effectiveness (SETs): class size and students' motivation (proxied by students' likelihood of responding randomly). Resampling simulation methodology has been employed to test the sensitivity of the SET scale for three hypothetical instructors (excellent, average, and poor). In an ideal scenario without confounding factors, SET statistics unmistakably distinguish the instructors. However, at different class sizes and levels of random responses, SET class averages are significantly biased. Results suggest that evaluations based on SET statistics should look at more than class averages. Resampling methodology (bootstrap simulation) is useful for SET research for scale sensitivity study, research results validation, and actual SET score analyses. Examples are given of how bootstrap simulation can be applied to real-life SET data comparison.
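The class-size effect on SET averages can be sketched with a small bootstrap simulation: resample ratings for a hypothetical instructor at several class sizes and watch the spread of the class average grow as classes shrink. The rating distribution below is invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical "excellent" instructor: ratings on a 1-5 scale,
# mostly 4s and 5s (assumed distribution, not from the paper).
ratings_pool = rng.choice([3, 4, 5], size=10_000, p=[0.1, 0.3, 0.6])

def boot_class_averages(pool, class_size, n_boot=2000):
    """Bootstrap distribution of the class-average SET score for a
    given class size: resample `class_size` ratings n_boot times."""
    draws = rng.choice(pool, size=(n_boot, class_size), replace=True)
    return draws.mean(axis=1)

for size in (10, 40, 150):
    avgs = boot_class_averages(ratings_pool, size)
    print(f"class size {size:3d}: SD of class average = {avgs.std():.3f}")
```

The shrinking spread with larger classes illustrates why a single small-class average is a noisy basis for comparing instructors.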

11.
This study examined the effect of model size on the chi-square test statistics obtained from ordinal factor analysis models. The performance of six robust chi-square test statistics was compared across various conditions, including number of observed variables (p), number of factors, sample size, model (mis)specification, number of categories, and threshold distribution. Results showed that the unweighted least squares (ULS) robust chi-square statistics generally outperform the diagonally weighted least squares (DWLS) robust chi-square statistics. The ULSM estimator performed the best overall. However, when fitting ordinal factor analysis models with a large number of observed variables and small sample size, the ULSM-based chi-square tests may yield empirical variances that are noticeably larger than the theoretical values and inflated Type I error rates. On the other hand, when the number of observed variables is very large, the mean- and variance-corrected chi-square test statistics (e.g., based on ULSMV and WLSMV) could produce empirical variances conspicuously smaller than the theoretical values and Type I error rates lower than the nominal level, and demonstrate lower power rates to reject misspecified models. Recommendations for applied researchers and future empirical studies involving large models are provided.

12.
The asymptotic performance of structural equation modeling tests and standard errors is influenced by two factors: the model and the asymptotic covariance matrix Γ of the sample covariances. Although most simulation studies clearly specify model conditions, specification of Γ is usually limited to values of univariate skewness and kurtosis. We illustrate that marginal skewness and kurtosis are not sufficient to adequately specify a nonnormal simulation condition by showing that asymptotic standard errors and test statistics vary substantially among distributions with skewness and kurtosis that are identical. We argue therefore that Γ should be reported when presenting the design of simulation studies. We show how Γ can be exactly calculated under the widely used Vale–Maurelli transform. We suggest plotting the elements of Γ and reporting the eigenvalues associated with the test statistic. R code is provided.
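A sketch of the Vale–Maurelli idea: correlated standard normals are pushed through a cubic (Fleishman-type) polynomial whose coefficients determine the marginal skewness and kurtosis. The coefficients below are arbitrary illustrations (not solved for specific target moments), and the step of adjusting an intermediate correlation matrix so the transformed data hit a target Σ is skipped:

```python
import numpy as np

def vale_maurelli(z, b, c, d):
    """Cubic transform Y = a + b*Z + c*Z^2 + d*Z^3 applied to standard
    normal Z, with a = -c so that E[Y] = 0. The (b, c, d) coefficients
    control the marginal skewness and kurtosis of Y."""
    a = -c
    return a + b * z + c * z ** 2 + d * z ** 3

rng = np.random.default_rng(3)
# Correlated standard normals; a full Vale-Maurelli implementation
# would first solve for an intermediate correlation matrix.
rho = np.array([[1.0, 0.5], [0.5, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], rho, size=100_000)

# Illustrative coefficients yielding positive skew (c > 0).
y = vale_maurelli(z, b=0.9, c=0.2, d=0.03)
skew = ((y - y.mean(0)) ** 3).mean(0) / y.std(0) ** 3
print("marginal skewness of the two variables:", skew)
```

Two different (b, c, d) choices can match the same marginal skewness and kurtosis yet imply different Γ matrices, which is exactly why the abstract argues that Γ itself should be reported.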

13.
Two types of answer-copying statistics for detecting copiers in small-scale examinations are proposed. One statistic identifies the "copier-source" pair, and the other in addition suggests who is the copier and who is the source. Both types of statistics can be used when the examination has alternate test forms. A simulation study shows that the statistics do not depend on the total-test score. Another simulation study compares the statistics with two known statistics, and shows that they have substantial power. The new statistics are applied to data from a small-scale examination (N = 230) with two alternate test forms. Auxiliary information on the seat location of the examinees and the test scores of the examinees was used to determine whether or not examinees could be suspected of copying.

14.
Though the common default maximum likelihood estimator used in structural equation modeling is predicated on the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to utilize distribution-free estimation methods. Fortunately, promising alternatives are being integrated into popular software packages. Bootstrap resampling, which is offered in AMOS (Arbuckle, 1997), is one potential solution for estimating model test statistic p values and parameter standard errors under nonnormal data conditions. This study is an evaluation of the bootstrap method under varied conditions of nonnormality, sample size, model specification, and number of bootstrap samples drawn from the resampling space. Accuracy of the test statistic p values is evaluated in terms of model rejection rates, whereas accuracy of bootstrap standard error estimates takes the form of bias and variability of the standard error estimates themselves.
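The resampling logic for standard errors can be sketched generically. As a stand-in for refitting a structural equation model to each bootstrap sample, the example below bootstraps the standard error of a regression slope on skewed data; the data and estimator are illustrative assumptions, not the study's design:

```python
import numpy as np

def bootstrap_se(data, estimator, n_boot=1000, rng=None):
    """Naive bootstrap standard error: resample rows with replacement,
    re-estimate, and take the SD of the bootstrap estimates."""
    if rng is None:
        rng = np.random.default_rng()
    n = data.shape[0]
    est = np.array([estimator(data[rng.integers(0, n, n)])
                    for _ in range(n_boot)])
    return est.std(ddof=1)

rng = np.random.default_rng(5)
# Skewed bivariate data as a stand-in for a nonnormal SEM sample.
x = rng.lognormal(size=(300, 2))
slope = lambda d: np.polyfit(d[:, 0], d[:, 1], 1)[0]  # stand-in parameter
se_hat = bootstrap_se(x, slope, n_boot=300, rng=rng)
print(f"bootstrap SE of slope: {se_hat:.4f}")
```

Increasing `n_boot` reduces Monte Carlo noise in the SE estimate itself, which is why the number of bootstrap samples is one of the conditions the study varies.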

15.
This article investigates likelihood-based difference statistics for testing nonlinear effects in structural equation modeling using the latent moderated structural equations (LMS) approach. In addition to the standard difference statistic T_D, 2 robust statistics have been developed in the literature to ensure valid results under the conditions of nonnormality or small sample sizes: the robust T_DR and the “strictly positive” T_DRP. These robust statistics have not yet been examined in combination with LMS. In 2 Monte Carlo studies we investigate the performance of these methods for testing quadratic or interaction effects subject to different sources of nonnormality: nonnormality due to the nonlinear terms, and nonnormality due to the distribution of the predictor variables. The results indicate that T_D is preferable to both T_DR and T_DRP. Under the condition of strong nonlinear effects and nonnormal predictors, T_DR often produced negative differences and T_DRP showed no desirable power.
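A sketch of the robust difference statistic T_DR in the Satorra–Bentler style: the difference of the two models' unscaled chi-squares is divided by a pooled scaling factor, which can come out negative in small samples (the problem the "strictly positive" variant addresses). All numeric inputs below are invented for illustration:

```python
from scipy.stats import chi2

def scaled_difference(t0, df0, c0, t1, df1, c1):
    """Robust (scaled) difference test T_DR: the unscaled difference
    T0 - T1 divided by the pooled scaling factor
    cd = (df0*c0 - df1*c1) / (df0 - df1), where c0 and c1 are the
    two models' scaling corrections. cd (and hence T_DR) can be
    negative in small samples."""
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)
    return (t0 - t1) / cd

# Illustrative values: restricted model M0 (nonlinear effect fixed
# at zero, df0 = 26) vs. the full model M1 (df1 = 25).
t_dr = scaled_difference(t0=61.0, df0=26, c0=1.35, t1=48.2, df1=25, c1=1.30)
p = chi2.sf(t_dr, df=26 - 25)
print(f"T_DR = {t_dr:.2f}, p = {p:.4f}")
```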

16.
Smoothing is designed to yield smoother equating results that can reduce random equating error without introducing very much systematic error. The main objective of this study is to propose a new statistic and to compare its performance to the performance of the Akaike information criterion and likelihood ratio chi-square difference statistics in selecting the smoothing parameter for polynomial loglinear equating under the random groups design. These model selection statistics were compared for four sample sizes (500, 1,000, 2,000, and 3,000) and eight simulated equating conditions, including both conditions where equating is not needed and conditions where equating is needed. The results suggest that all model selection statistics tend to improve the equating accuracy by reducing the total equating error. The new statistic tended to have less overall error than the other two methods.

17.
As noted by Fremer and Olson, analysis of answer changes is often used to investigate testing irregularities because the analysis is readily performed and has proven its value in practice. Researchers such as Belov, Sinharay and Johnson, van der Linden and Jeon, van der Linden and Lewis, and Wollack, Cohen, and Eckerly have suggested several statistics for detection of aberrant answer changes. This article suggests a new statistic that is based on the likelihood ratio test. An advantage of the new statistic is that it follows the standard normal distribution under the null hypothesis of no aberrant answer changes. It is demonstrated in a detailed simulation study that the Type I error rate of the new statistic is very close to the nominal level and the power of the new statistic is satisfactory in comparison to those of several existing statistics for detecting aberrant answer changes. The new statistic and several existing statistics were shown to provide useful information for a real data set. Given the increasing interest in analysis of answer changes, the new statistic promises to be useful to measurement practitioners.

18.
A hypothesis-testing problem based on the relative risk is proposed. Under a matched-pair design, a Wald test statistic is constructed using the delta method, and a continuity correction is then applied to this Wald statistic. Monte Carlo simulation shows that the uncorrected Wald statistic controls the probability of a Type I error poorly, whereas the corrected Wald statistic controls the Type I error rate well, keeping it close to the given significance level, and is therefore an ideal test.
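The abstract above describes a delta-method Wald test for the relative risk under a matched-pair design. One common formulation is sketched below; the 0.5-per-cell continuity correction is an illustrative assumption, since the abstract does not spell out the exact correction used, and the cell counts are invented:

```python
import math
from scipy.stats import norm

def paired_rr_wald(a, b, c, cc=False):
    """Wald test of H0: relative risk = 1 for paired binary data,
    where a = pairs positive on both, b = positive on the first
    member only, c = positive on the second member only.
    RR is estimated by (a + b) / (a + c); the delta-method variance
    of log(RR_hat) is (b + c) / ((a + b) * (a + c)).
    cc=True adds 0.5 to each cell as a simple continuity correction
    (the article's exact correction may differ)."""
    if cc:
        a, b, c = a + 0.5, b + 0.5, c + 0.5
    log_rr = math.log((a + b) / (a + c))
    se = math.sqrt((b + c) / ((a + b) * (a + c)))
    z = log_rr / se
    return z, 2 * norm.sf(abs(z))

z, p = paired_rr_wald(40, 18, 7)
z_cc, p_cc = paired_rr_wald(40, 18, 7, cc=True)
print(f"Wald z = {z:.3f}, p = {p:.4f}")
print(f"corrected z = {z_cc:.3f}, p = {p_cc:.4f}")
```

The correction pulls the statistic slightly toward zero, which is how it tempers the anti-conservative behavior of the uncorrected Wald test in small samples.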

19.
This article demonstrates the use of a new class of model‐free cumulative sum (CUSUM) statistics to detect person fit given the responses to a linear test. The fundamental statistic being accumulated is the likelihood ratio of two probabilities. The detection performance of this CUSUM scheme is compared to other model‐free person‐fit statistics found in the literature as well as an adaptation of another CUSUM approach. The study used both simulated responses and real response data from a large‐scale standardized admission test.
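The accumulated likelihood-ratio idea can be sketched for binary responses: each item contributes the log likelihood ratio of an "aberrant" versus a "normal" response probability, and a one-sided CUSUM resets at zero. This is a generic CUSUM sketch, not the article's exact model-free statistics, and the probabilities are illustrative:

```python
import numpy as np

def cusum_lr(responses, p_null, p_alt):
    """One-sided CUSUM over log likelihood ratios of two response
    probabilities: C_t = max(0, C_{t-1} + log LR_t). Large values
    flag a run of responses more consistent with the alternative."""
    c = 0.0
    path = []
    for x, p0, p1 in zip(responses, p_null, p_alt):
        lr = (p1 if x else 1 - p1) / (p0 if x else 1 - p0)
        c = max(0.0, c + np.log(lr))
        path.append(c)
    return np.array(path)

# Illustrative: an examinee who answers consistently for 10 items,
# then drops to chance level on a 20-item test. p_null is the
# ability-consistent probability, p_alt a guessing probability.
resp = [1] * 10 + [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
p_null = [0.8] * 20
p_alt = [0.25] * 20
path = cusum_lr(resp, p_null, p_alt)
print("max CUSUM:", round(float(path.max()), 2),
      "at item", int(path.argmax()) + 1)
```

A decision threshold on the CUSUM path (chosen to fix the false-alarm rate) would turn this into a detection rule.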

20.
A well-known ad hoc approach to conducting structural equation modeling with missing data is to obtain a saturated maximum likelihood (ML) estimate of the population covariance matrix and then to use this estimate in the complete data ML fitting function to obtain parameter estimates. This 2-stage (TS) approach is appealing because it minimizes a familiar function while being only marginally less efficient than the full information ML (FIML) approach. Additional advantages of the TS approach include that it allows for easy incorporation of auxiliary variables and that it is more stable in smaller samples. The main disadvantage is that the standard errors and test statistics provided by the complete data routine will not be correct. Empirical approaches to finding the right corrections for the TS approach have failed to provide unequivocal solutions. In this article, correct standard errors and test statistics for the TS approach with missing completely at random and missing at random normally distributed data are developed and studied. The new TS approach performs well in all conditions, is only marginally less efficient than the FIML approach (and is sometimes more efficient), and has good coverage. Additionally, the residual-based TS statistic outperforms the FIML test statistic in smaller samples. The TS method is thus a viable alternative to FIML, especially in small samples, and its further study is encouraged.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号