Similar Articles
20 similar articles found (search time: 31 ms).
1.
When the multivariate normality assumption is violated in structural equation modeling, a leading remedy involves estimation via normal theory maximum likelihood with robust corrections to standard errors. We propose that this approach might not be best for forming confidence intervals for quantities with sampling distributions that are slow to approach normality, or for functions of model parameters. We implement and study a robust analog to likelihood-based confidence intervals based on inverting the robust chi-square difference test of Satorra (2000). We compare robust standard errors and the robust likelihood-based approach versus resampling methods in confirmatory factor analysis (Studies 1 & 2) and mediation analysis models (Study 3) for both single parameters and functions of model parameters, and under a variety of nonnormal data generation conditions. The percentile bootstrap emerged as the method with the best-calibrated coverage rates and should be preferred if resampling is possible, followed by the robust likelihood-based approach.
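A minimal sketch of the percentile bootstrap for a function of model parameters, here the indirect effect a·b in a toy mediation setup. The data, sample size, and path values are illustrative assumptions, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic mediation data (illustrative only): X -> M -> Y with heavy-tailed errors.
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.standard_t(df=5, size=n)
y = 0.4 * m + rng.standard_t(df=5, size=n)

def indirect_effect(x, m, y):
    # a-path: slope of M on X; b-path: partial slope of Y on M given X.
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return a * b

boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, size=n)          # resample cases with replacement
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])     # percentile 95% CI
print(f"ab = {indirect_effect(x, m, y):.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

Resampling cases rather than residuals keeps the interval distribution-free, which is why the percentile bootstrap stays well calibrated under nonnormal data.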

2.
In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the normality assumption, would be preferred, because it is asymptotically accurate regardless of the distribution of the data. In this article, an analytical formula for the standard error of linear observed-score equating, which characterizes the effect of nonnormality, is obtained under elliptical distributions. Using three large-scale real data sets as the populations, resampling studies are conducted to empirically evaluate the normal and general estimators of the standard error of linear observed-score equating. The effects of sample size (50, 100, 250, or 500) and equating method (chained linear, Tucker, or Levine observed-score equating) are examined. Results suggest that the general estimator has smaller bias than the normal estimator in all 36 conditions; it has larger standard error when the sample size is at least 100; and it has smaller root mean squared error in all but one condition. An R program is also provided to facilitate the use of the general estimator.
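The article's analytical general estimator is not reproduced here, but the same quantity can be checked empirically by bootstrapping the chained linear equating function. A sketch on assumed synthetic NEAT data (all means, SDs, and the score point are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def chained_linear(x0, x1, v1, y2, v2):
    # Chain X -> anchor V (group 1), then V -> Y (group 2).
    v_of_x = v1.mean() + v1.std(ddof=1) / x1.std(ddof=1) * (x0 - x1.mean())
    return y2.mean() + y2.std(ddof=1) / v2.std(ddof=1) * (v_of_x - v2.mean())

# Synthetic NEAT data (illustrative): two groups sharing a common anchor V.
n = 250
x1 = rng.normal(50, 10, n); v1 = 0.8 * x1 + rng.normal(0, 5, n)
y2 = rng.normal(52, 11, n); v2 = 0.8 * y2 + rng.normal(0, 5, n)

x0 = 55.0                                   # score point to be equated
boot = np.empty(1000)
for i in range(boot.size):
    i1 = rng.integers(0, n, n); i2 = rng.integers(0, n, n)
    boot[i] = chained_linear(x0, x1[i1], v1[i1], y2[i2], v2[i2])

print(f"equated score: {chained_linear(x0, x1, v1, y2, v2):.2f}, "
      f"bootstrap SE: {boot.std(ddof=1):.3f}")
```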

3.
The asymptotically distribution-free (ADF) test statistic depends on very mild distributional assumptions and is theoretically superior to many other so-called robust tests available in structural equation modeling. The ADF test, however, often leads to model overrejection even at modest sample sizes. To overcome its poor small-sample performance, a family of robust test statistics obtained by modifying the ADF statistics was recently proposed. This study investigates by simulation the performance of the new modified test statistics. The results revealed that although a few of the test statistics adequately controlled Type I error rates in each of the examined conditions, most performed quite poorly. This result underscores the importance of choosing a modified test statistic that performs well for specific examined conditions. A parametric bootstrap method is proposed for identifying such a best-performing modified test statistic. Through further simulation it is shown that the proposed bootstrap approach performs well.
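The parametric bootstrap logic is generic: simulate from the fitted null model and calibrate the statistic against its empirical distribution rather than an asymptotic one. A sketch using Mardia's multivariate kurtosis as a stand-in statistic (the modified ADF statistics themselves are not implemented here):

```python
import numpy as np

rng = np.random.default_rng(2)

def mardia_kurtosis(data):
    # Mardia's multivariate kurtosis: mean squared Mahalanobis distance.
    centered = data - data.mean(axis=0)
    s_inv = np.linalg.inv(np.cov(data, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", centered, s_inv, centered)
    return np.mean(d2 ** 2)

x = rng.normal(size=(80, 4))                # small sample, p = 4
t_obs = mardia_kurtosis(x)

# Parametric bootstrap: simulate from the fitted normal model and use the
# empirical distribution of the statistic instead of its asymptotic one.
mu, sigma = x.mean(axis=0), np.cov(x, rowvar=False)
t_boot = np.array([mardia_kurtosis(rng.multivariate_normal(mu, sigma, size=80))
                   for _ in range(1000)])

crit = np.percentile(t_boot, 95)            # bootstrap 5% critical value
p_val = np.mean(t_boot >= t_obs)
print(f"T = {t_obs:.2f}, bootstrap critical value = {crit:.2f}, p = {p_val:.3f}")
```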

4.
Robust maximum likelihood (ML) and categorical diagonally weighted least squares (cat-DWLS) estimation have both been proposed for use with categorized and nonnormally distributed data. This study compares results from the 2 methods in terms of parameter estimate and standard error bias, power, and Type I error control, with unadjusted ML and WLS estimation methods included for purposes of comparison. Conditions manipulated include model misspecification, level of asymmetry, level of categorization, sample size, and type and size of the model. Results indicate that the cat-DWLS estimation method results in the least parameter estimate and standard error bias under the majority of conditions studied. Of the estimation methods studied, cat-DWLS parameter estimates and standard errors were generally the least affected by model misspecification. Robust ML also performed well, yielding relatively unbiased parameter estimates and standard errors. However, both cat-DWLS and robust ML resulted in low power under conditions of high data asymmetry, small sample sizes, and mild model misspecification. Under more optimal conditions, power for these estimators was adequate.

5.
Bootstrapping approximate fit indexes in structural equation modeling (SEM) is of great importance because most fit indexes do not have tractable analytic distributions. Model-based bootstrap, which has been proposed to obtain the distribution of the model chi-square statistic under the null hypothesis (Bollen & Stine, 1992), is not theoretically appropriate for obtaining confidence intervals (CIs) for fit indexes because it assumes the null is exactly true. On the other hand, naive bootstrap is not expected to work well for those fit indexes that are based on the chi-square statistic, such as the root mean square error of approximation (RMSEA) and the comparative fit index (CFI), because sample noncentrality is a biased estimate of the population noncentrality. In this article we argue that a recently proposed bootstrap approach due to Yuan, Hayashi, and Yanagihara (YHY; 2007) is ideal for bootstrapping fit indexes that are based on the chi-square. This method transforms the data so that the “parent” population has the population noncentrality parameter equal to the estimated noncentrality in the original sample. We conducted a simulation study to evaluate the performance of the YHY bootstrap and the naive bootstrap for 4 indexes: RMSEA, CFI, goodness-of-fit index (GFI), and standardized root mean square residual (SRMR). We found that for RMSEA and CFI, the CIs under the YHY bootstrap had relatively good coverage rates for all conditions, whereas the CIs under the naive bootstrap had very low coverage rates when the fitted model had large degrees of freedom. However, for GFI and SRMR, the CIs under both bootstrap methods had poor coverage rates in most conditions.
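For reference, these are the standard chi-square-based definitions of RMSEA and CFI that make the noncentrality bias at issue visible (the YHY data transformation itself is not shown, and the example values are hypothetical):

```python
import numpy as np

def rmsea(chi2, df, n):
    # Point estimate based on the sample noncentrality (chi2 - df).
    return np.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2, df, chi2_base, df_base):
    # One minus the ratio of model to baseline noncentrality, truncated at zero.
    d_model = max(chi2 - df, 0.0)
    d_base = max(chi2_base - df_base, 0.0)
    return 1.0 - d_model / max(d_base, d_model)

# Illustrative values: model chi-square 52.3 on 24 df, baseline 480.1 on 36 df.
print(f"RMSEA = {rmsea(52.3, 24, n=300):.3f}, "
      f"CFI = {cfi(52.3, 24, 480.1, 36):.3f}")
```

Because both indexes are functions of chi2 - df, any bias in the sample noncentrality propagates directly into naive bootstrap intervals for them.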

6.
This article introduces a bootstrap generalization to the Modified Parallel Analysis (MPA) method of test dimensionality assessment using factor analysis. This methodology, based on Marginal Maximum Likelihood nonlinear factor analysis, provides a test statistic based on a parametric bootstrap, using the MPA methodology to generate synthetic datasets. Performance of the bootstrap test was compared with the likelihood ratio difference test and the DIMTEST procedure using a Monte Carlo simulation. The bootstrap test was found to exhibit much better control of the Type I error rate than the likelihood ratio difference test, and comparable power to DIMTEST under most conditions. A major conclusion to be taken from this research is that under many real-world conditions, the bootstrap MPA test presents a useful alternative for practitioners using Marginal Maximum Likelihood factor analysis to test for multidimensionality in test data.
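MPA generates synthetic datasets from a fitted unidimensional model; the sketch below shows the simpler classic (Horn-type) parallel analysis, which uses random reference data but illustrates the same compare-against-synthetic-eigenvalues logic. All data are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def eigvals_of_corr(data):
    return np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]

# Illustrative two-factor data: 10 items, 500 respondents.
n, p = 500, 10
f = rng.normal(size=(n, 2))
loadings = np.zeros((2, p)); loadings[0, :5] = 0.7; loadings[1, 5:] = 0.7
x = f @ loadings + rng.normal(scale=0.7, size=(n, p))

obs = eigvals_of_corr(x)
# Reference distribution: eigenvalues of uncorrelated data of the same shape.
ref = np.array([eigvals_of_corr(rng.normal(size=(n, p))) for _ in range(200)])
threshold = np.percentile(ref, 95, axis=0)

n_factors = int(np.sum(obs > threshold))
print(f"retained dimensions: {n_factors}")   # expected: 2
```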

7.
When the assumption of multivariate normality is violated and the sample sizes are relatively small, existing test statistics such as the likelihood ratio statistic and Satorra–Bentler’s rescaled and adjusted statistics often fail to provide reliable assessment of overall model fit. This article proposes four new corrected statistics, aiming for better model evaluation with nonnormally distributed data at small sample sizes. A Monte Carlo study is conducted to compare the performances of the four corrected statistics against those of existing statistics regarding Type I error rate. Results show that the performances of the four new statistics are relatively stable compared with those of existing statistics. In particular, Type I error rates of a new statistic are close to the nominal level across all sample sizes under a condition of asymptotic robustness. Other new statistics also exhibit improved Type I error control, especially with nonnormally distributed data at small sample sizes.

8.
The purpose of this study is to investigate the effects of missing data techniques in longitudinal studies under diverse conditions. A Monte Carlo simulation examined the performance of 3 missing data methods in latent growth modeling: listwise deletion (LD), maximum likelihood estimation using the expectation-maximization (EM) algorithm with a nonnormality correction (robust ML), and the pairwise asymptotically distribution-free method (pairwise ADF). The effects of 3 independent variables (sample size, missing data mechanism, and distribution shape) were investigated on convergence rate, parameter and standard error estimation, and model fit. The results favored robust ML over LD and pairwise ADF in almost all respects. The exceptions included convergence rates under the most severe nonnormality in the missing not at random (MNAR) condition and recovery of standard error estimates across sample sizes. The results also indicate that nonnormality, small sample size, MNAR, and multicollinearity might adversely affect convergence rate and the validity of statistical inferences concerning parameter estimates and model fit statistics.

9.
A Monte Carlo approach was used to examine bias in the estimation of indirect effects and their associated standard errors. In the simulation design, (a) sample size, (b) the level of nonnormality characterizing the data, (c) the population values of the model parameters, and (d) the type of estimator were systematically varied. Estimates of model parameters were generally unaffected by either nonnormality or small sample size. Under severely nonnormal conditions, normal theory maximum likelihood estimates of the standard error of the mediated effect exhibited less bias (approximately 10% to 20% too small) compared to the standard errors of the structural regression coefficients (20% to 45% too small). Asymptotically distribution free standard errors of both the mediated effect and the structural parameters were substantially affected by sample size, but not nonnormality. Robust standard errors consistently yielded the most accurate estimates of sampling variability.
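A compact way to see the kind of SE bias the study measures: compare the average normal-theory (Sobel, first-order delta method) SE of the mediated effect with the empirical SD of the estimate across Monte Carlo replications. Everything below (sample size, path values, lognormal errors) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(4)

def ols(X, y):
    # OLS coefficients and conventional (normal-theory) standard errors.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

def ab_and_sobel(x, m, y):
    ones = np.ones_like(x)
    (_, a), (_, se_a) = ols(np.column_stack([ones, x]), m)
    (_, b, _), (_, se_b, _) = ols(np.column_stack([ones, m, x]), y)
    sobel_se = np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)  # first-order delta method
    return a * b, sobel_se

# Monte Carlo under skewed (lognormal) errors: the mean Sobel SE can be
# compared against the empirical SD of ab across replications.
n, reps = 150, 2000
ab, se = np.empty(reps), np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    m = 0.4 * x + rng.lognormal(size=n)
    y = 0.4 * m + rng.lognormal(size=n)
    ab[r], se[r] = ab_and_sobel(x, m, y)

print(f"empirical SD of ab: {ab.std(ddof=1):.4f}, mean Sobel SE: {se.mean():.4f}")
```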

10.
This Monte Carlo simulation study investigated the impact of nonnormality on estimating and testing mediated effects with the parallel process latent growth model and 3 popular methods for testing the mediated effect (i.e., Sobel’s test, the asymmetric confidence limits, and the bias-corrected bootstrap). It was found that nonnormality had little effect on the estimates of the mediated effect, standard errors, empirical Type I error, and power rates in most conditions. In terms of empirical Type I error and power rates, the bias-corrected bootstrap performed best. Sobel’s test produced very conservative Type I error rates when the estimated mediated effect and its standard error were related; when the relationship was weak or nonexistent, the Type I error rate was closer to the nominal .05 value.
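A sketch of the bias-corrected bootstrap CI (without the acceleration constant). The b path here is a simple rather than partial slope, a deliberate simplification, and all data are synthetic:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

def bc_ci(boot, est, level=0.95):
    # Bias-corrected bootstrap CI: shift the percentile levels by z0, the
    # normal quantile of the proportion of bootstrap draws below the estimate.
    z0 = norm.ppf(np.mean(boot < est))
    z = norm.ppf([(1 - level) / 2, (1 + level) / 2])
    q = norm.cdf(2 * z0 + z)                 # adjusted percentile levels
    return np.percentile(boot, 100 * q)

# Toy mediated effect a*b with a skewed sampling distribution.
n = 120
x = rng.normal(size=n)
m = 0.3 * x + rng.exponential(size=n)
y = 0.3 * m + rng.exponential(size=n)
ab = lambda x, m, y: np.polyfit(x, m, 1)[0] * np.polyfit(m, y, 1)[0]

est = ab(x, m, y)
boot = np.array([ab(x[i], m[i], y[i])
                 for i in (rng.integers(0, n, n) for _ in range(2000))])
lo, hi = bc_ci(boot, est)
print(f"ab = {est:.3f}, BC 95% CI = [{lo:.3f}, {hi:.3f}]")
```

The bias correction matters precisely because the sampling distribution of a product of coefficients is skewed, which is where the symmetric Sobel interval breaks down.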

11.
Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and 2 well-known robust test statistics. A modification to the Satorra–Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the 4 test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies 7 sample sizes and 3 distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ2 test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra–Bentler scaled test statistic performed best overall, whereas the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.

12.
Recent advances in testing mediation have found that certain resampling methods and tests based on the mathematical distribution of the product of 2 normal random variables substantially outperform the traditional z test. However, these studies have primarily focused only on models with a single mediator and 2 component paths. To address this limitation, a simulation was conducted to evaluate these alternative methods in a more complex path model with multiple mediators and indirect paths with 2 and 3 paths. Methods for testing contrasts of 2 effects were evaluated also. The simulation included 1 exogenous independent variable, 3 mediators, and 2 outcomes, and varied sample size, number of paths in the mediated effects, test used to evaluate effects, effect sizes for each path, and the value of the contrast. Confidence intervals were used to evaluate the power and Type I error rate of each method, and were examined for coverage and bias. The bias-corrected bootstrap had the least biased confidence intervals, greatest power to detect nonzero effects and contrasts, and the most accurate overall Type I error. All tests had less power to detect 3-path effects and more inaccurate Type I error compared to 2-path effects. Confidence intervals were biased for mediated effects, as found in previous studies. Results for contrasts did not vary greatly by test, although resampling approaches had somewhat greater power and might be preferable because of ease of use and flexibility.

13.
DIMTEST is a nonparametric statistical test procedure for assessing unidimensionality of binary item response data. The development of Stout's statistic, T, used in the DIMTEST procedure, does not require the assumption of a particular parametric form for the ability distributions or the item response functions. The purpose of the present study was to empirically investigate the performance of the statistic T with respect to different shapes of ability distributions. Several nonnormal distributions, both symmetric and nonsymmetric, were considered for this purpose. Other factors varied in the study were test length, sample size, and the level of correlation between abilities. The results of Type I error and power studies showed that the test statistic T exhibited consistently similar performance for all different shapes of ability distributions investigated in the study, which confirmed the nonparametric nature of the statistic T.

14.
In this study, the authors investigated incorporating adjusted model fit information into the root mean square error of approximation (RMSEA) fit index. Through Monte Carlo simulation, the usefulness of this adjusted index was evaluated for assessing model adequacy in structural equation modeling when the multivariate normality assumption underlying maximum likelihood estimation is violated. Adjustment to the RMSEA was considered in 2 forms: a rescaling adjustment via the Satorra-Bentler rescaled goodness-of-fit statistic and a bootstrap adjustment via the Bollen and Stine adjusted model p value. Both properly specified and misspecified models were examined. The adjusted RMSEA was evaluated in terms of the average index value across study conditions and with respect to model rejection rates under tests of exact fit, close fit, and not-close fit.
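Once an adjusted chi-square is in hand, the exact-fit and close-fit tests follow the standard noncentral chi-square framing of MacCallum, Browne, and Sugawara. A sketch with hypothetical values; the Satorra-Bentler rescaling and Bollen-Stine adjustment themselves are not computed here:

```python
from scipy.stats import chi2, ncx2

def fit_tests(t, df, n, eps0=0.05):
    # Exact fit: central chi-square. Close fit: noncentral chi-square with
    # the noncentrality implied by a population RMSEA of eps0.
    p_exact = chi2.sf(t, df)
    lam = (n - 1) * df * eps0 ** 2           # noncentrality at RMSEA = eps0
    p_close = ncx2.sf(t, df, lam)
    return p_exact, p_close

# Illustrative: a rescaled chi-square of 52.3 on 24 df, N = 300.
print("exact fit p = %.3f, close fit p = %.3f" % fit_tests(52.3, 24, 300))
```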

15.
The present study evaluated the multiple imputation method, a procedure that is similar to the one suggested by Li and Lissitz (2004), and compared the performance of this method with that of the bootstrap method and the delta method in obtaining the standard errors for the estimates of the parameter scale transformation coefficients in item response theory (IRT) equating in the context of the common-item nonequivalent groups design. Two different estimation procedures for the variance-covariance matrix of the IRT item parameter estimates, which were used in both the delta method and the multiple imputation method, were considered: empirical cross-product (XPD) and supplemented expectation maximization (SEM). The results of the analyses with simulated and real data indicate that the multiple imputation method generally produced very similar results to the bootstrap method and the delta method in most of the conditions. The differences between the estimated standard errors obtained by the methods using the XPD matrices and the SEM matrices were very small when the sample size was reasonably large. When the sample size was small, the methods using the XPD matrices appeared to yield slight upward bias for the standard errors of the IRT parameter scale transformation coefficients.
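A generic delta-method sketch with a numerically approximated gradient. The function g (a ratio of two discrimination parameters) and the covariance matrix are hypothetical stand-ins for the actual scale transformation coefficients and their XPD or SEM variance-covariance matrices:

```python
import numpy as np

def delta_method_se(g, theta, vcov, h=1e-6):
    # SE of g(theta) via the delta method: sqrt(grad' V grad),
    # with the gradient taken by central finite differences.
    theta = np.asarray(theta, dtype=float)
    grad = np.empty_like(theta)
    for j in range(theta.size):
        step = np.zeros_like(theta); step[j] = h
        grad[j] = (g(theta + step) - g(theta - step)) / (2 * h)
    return float(np.sqrt(grad @ vcov @ grad))

# Illustrative: SE of a slope-like coefficient A = a_ref / a_new formed
# from two item discriminations with an assumed 2x2 covariance matrix.
theta = np.array([1.2, 0.9])
vcov = np.array([[0.010, 0.002],
                 [0.002, 0.008]])
se = delta_method_se(lambda t: t[0] / t[1], theta, vcov)
print(f"A = {theta[0] / theta[1]:.3f}, delta-method SE = {se:.4f}")
```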

16.
Among the commonly used resampling methods for dealing with small-sample problems, the bootstrap enjoys the widest application because it often outperforms its counterparts. However, the bootstrap still has limitations in practice. The purpose of this study is therefore to examine a new alternative resampling method (called S-SMART) and to compare its statistical performance with that of the bootstrap through an application to structural equation modeling (SEM). The statistical performance of S-SMART and the bootstrap with respect to the standard errors of the parameter estimates was evaluated through a Monte Carlo simulation study. This work, while potentially benefiting educational and behavioural research, could also provide methodological support for other research areas, such as bioinformatics, biology, geosciences, astronomy, and ecology, where large samples are hard to obtain.

17.
In classical test theory, a test is regarded as a sample of items from a domain defined by generating rules or by content, process, and format specifications. If the items are a random sample of the domain, then the percent-correct score on the test estimates the domain score, that is, the expected percent correct for all items in the domain. When the domain is represented by a large set of calibrated items, as in item banking applications, item response theory (IRT) provides an alternative estimator of the domain score by transformation of the IRT scale score on the test. This estimator has the advantage of not requiring the test items to be a random sample of the domain, and of having a simple standard error. We present here resampling results in real data demonstrating for uni- and multidimensional models that the IRT estimator is also a more accurate predictor of the domain score than is the classical percent-correct score. These results have implications for reporting outcomes of educational qualification testing and assessment.
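A sketch of the IRT domain-score estimator: transform a scale score theta into the expected percent correct over all calibrated items in the bank. The item parameters below are randomly generated placeholders, and the 3PL form is an assumption:

```python
import numpy as np

def p_correct(theta, a, b, c):
    # 3PL item response function.
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def domain_score(theta, a, b, c):
    # Expected percent correct over all calibrated items in the bank.
    return 100 * p_correct(theta, a, b, c).mean()

rng = np.random.default_rng(6)
n_items = 400                                # size of the item bank
a = rng.uniform(0.5, 2.0, n_items)
b = rng.normal(0, 1, n_items)
c = rng.uniform(0.0, 0.25, n_items)

for theta in (-1.0, 0.0, 1.0):
    print(f"theta = {theta:+.1f} -> domain score = "
          f"{domain_score(theta, a, b, c):.1f}%")
```

Because the expectation runs over the whole bank, the items actually administered need not be a random sample of the domain, which is the estimator's key advantage.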

18.
Using Monte Carlo simulations, this research examined the performance of four missing data methods in SEM under different multivariate distributional conditions. The effects of four independent variables (sample size, missing proportion, distribution shape, and factor loading magnitude) were investigated on six outcome variables: convergence rate, parameter estimate bias, MSE of parameter estimates, standard error coverage, model rejection rate, and model goodness of fit (RMSEA). A three-factor CFA model was used. Findings indicated that full information maximum likelihood (FIML) outperformed the other methods under MCAR, that multiple imputation (MI) should be used to increase the plausibility of MAR, and that similar response pattern imputation (SRPI) was not comparable to the other three methods under either MCAR or MAR.
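FIML needs an SEM engine and is not reimplemented here, but the contrast between listwise and pairwise treatment of missing data, two of the simpler strategies in this literature, can be shown directly. The data and missingness rate are illustrative:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Three correlated indicators with ~20% of values missing completely at random.
n = 300
f = rng.normal(size=n)
df = pd.DataFrame({k: 0.7 * f + rng.normal(scale=0.7, size=n)
                   for k in ("x1", "x2", "x3")})
df = df.mask(rng.random(df.shape) < 0.2)

# Listwise deletion: only complete cases contribute to every covariance.
cov_listwise = df.dropna().cov()
# Pairwise: each covariance uses all cases observed on that pair
# (pandas' default handling of NaN in .cov()).
cov_pairwise = df.cov()

print("complete cases:", len(df.dropna()), "of", n)
print(cov_listwise.round(3), cov_pairwise.round(3), sep="\n\n")
```

With 20% missingness per variable, listwise deletion keeps roughly half the cases, which is the efficiency loss that motivates FIML and MI.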

19.
Examined in this study were three procedures for estimating the standard errors of school passing rates using a generalizability theory model. Also examined was how these procedures behaved for student samples that differed in size. The procedures differed in terms of their assumptions about the populations from which students were sampled, and it was found that student sample size generally had a notable effect on the size of the standard error estimates they produced. Also the three procedures produced markedly different standard error estimates when student sample size was small.
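The study's procedures are generalizability-theory based and are not reproduced here; a simple binomial SE with a finite-population correction is enough to illustrate why small student samples inflate the standard error of a school passing rate. All numbers are hypothetical:

```python
import math

def passing_rate_se(p_hat, n_students, n_school=None):
    # Binomial SE of a school's passing rate, with a finite-population
    # correction when the tested students are a sample from a school
    # population of known size n_school.
    se = math.sqrt(p_hat * (1 - p_hat) / n_students)
    if n_school is not None and n_school > n_students:
        se *= math.sqrt(1 - n_students / n_school)
    return se

for n in (10, 30, 120):
    print(f"n = {n:3d}: SE = {passing_rate_se(0.7, n):.3f}")
```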

20.
Mean and mean-and-variance corrections are the 2 major principles for developing test statistics when distributional conditions are violated. In structural equation modeling (SEM), mean-rescaled and mean-and-variance-adjusted test statistics have been recommended in different contexts. However, recent studies indicated that their Type I error rates vary from 0% to 100% as the number of variables p increases. Can we still trust the 2 principles, and what alternative rules can be used to develop test statistics for SEM with “big data”? This article addresses these issues with a large-scale Monte Carlo study. Results indicate that the empirical mean and standard deviation of each statistic can differ from their expected values many times over in standardized units when p is large. Thus, the problems in Type I error control with the 2 statistics arise because they do not possess the properties to which they are entitled, not because the mean and mean-and-variance corrections themselves are flawed. However, the 2 principles need to be implemented using small-sample methodology instead of asymptotics. Results also indicate that distributions other than chi-square might better describe the behavior of test statistics in SEM with big data.
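The article's diagnostic, checking whether a statistic's empirical mean and SD match the chi-square reference values in Monte Carlo standard-error units, can be sketched directly. The draws below come from a true chi-square, so both z values should land near zero; the SE of the sample SD uses the usual sd/sqrt(2R) approximation:

```python
import numpy as np

rng = np.random.default_rng(8)

def calibration_check(draws, df):
    # Distance of the empirical mean and SD of a statistic from the
    # chi-square reference values (mean = df, SD = sqrt(2*df)),
    # expressed in Monte Carlo standard-error units.
    r = len(draws)
    sd = draws.std(ddof=1)
    z_mean = (draws.mean() - df) / (sd / np.sqrt(r))
    z_sd = (sd - np.sqrt(2 * df)) / (sd / np.sqrt(2 * r))  # approximate SE of SD
    return z_mean, z_sd

# Sanity check with true chi-square draws at a "big data"-sized df.
df = 100
draws = rng.chisquare(df, size=5000)
z_mean, z_sd = calibration_check(draws, df)
print(f"z(mean) = {z_mean:+.2f}, z(SD) = {z_sd:+.2f}")
```

Feeding a rescaled or adjusted SEM statistic's Monte Carlo draws through the same check is what reveals when it no longer possesses the chi-square properties it is entitled to.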
