Similar Articles
Found 20 similar articles (search time: 0 ms)
1.
Structural equation modeling (SEM) is now a generic modeling framework for many multivariate techniques applied in the social and behavioral sciences. Many statistical models can be considered either as special cases of SEM or as part of the latent variable modeling framework. One popular extension is the use of SEM to conduct linear mixed-effects modeling (LMM) such as cross-sectional multilevel modeling and latent growth modeling. It is well known that LMM can be formulated as structural equation models. However, one main difference between the implementations in SEM and LMM is that maximum likelihood (ML) estimation is usually used in SEM, whereas restricted (or residual) maximum likelihood (REML) estimation is the default method in most LMM packages. This article shows how REML estimation can be implemented in SEM. Two empirical examples, a latent growth model and a meta-analysis, are used to illustrate the procedures implemented in OpenMx. Issues related to implementing REML in SEM are discussed.
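The ML/REML distinction the abstract turns on can be illustrated in its simplest form. The following sketch (illustrative only, not the article's OpenMx code) shows the one-sample case: ML divides the sum of squares by n, while REML corrects for the degree of freedom spent estimating the mean and divides by n - 1.

```python
def ml_variance(xs):
    """ML variance estimate: SS / n (biased downward)."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / n

def reml_variance(xs):
    """REML variance estimate: SS / (n - 1), correcting for the
    estimated mean (the usual unbiased sample variance)."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

data = [2.0, 4.0, 6.0, 8.0]
ml = ml_variance(data)      # 20 / 4 = 5.0
reml = reml_variance(data)  # 20 / 3 ≈ 6.667
```

The same correction generalizes to LMM, where REML maximizes the likelihood of error contrasts rather than the raw data.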

2.
3.
Data collected from questionnaires are often on an ordinal scale. Unweighted least squares (ULS), diagonally weighted least squares (DWLS) and normal-theory maximum likelihood (ML) are commonly used methods to fit structural equation models. Consistency of these estimators demands no structural misspecification. In this article, we conduct a simulation study to compare the equation-by-equation polychoric instrumental variable (PIV) estimation with ULS, DWLS, and ML. Accuracy of PIV for the correctly specified model and robustness of PIV for misspecified models are investigated through a confirmatory factor analysis (CFA) model and a structural equation model with ordinal indicators. The effects of sample size and nonnormality of the underlying continuous variables are also examined. The simulation results show that PIV produces robust factor loading estimates in the CFA model and in structural equation models. PIV also produces robust path coefficient estimates in the model where valid instruments are used. However, robustness highly depends on the validity of instruments.

4.
5.
In many intervention and evaluation studies, outcome variables are assessed using a multimethod approach comparing multiple groups over time. In this article, we show how evaluation data obtained from a complex multitrait–multimethod–multioccasion–multigroup design can be analyzed with structural equation models. In particular, we show how the structural equation modeling approach can be used to (a) handle ordinal items as indicators, (b) test measurement invariance, and (c) test the means of the latent variables to examine treatment effects. We present an application to data from an evaluation study of an early childhood prevention program. A total of 659 children in intervention and control groups were rated by their parents and teachers on prosocial behavior and relational aggression before and after the program implementation. No mean change in relational aggression was found in either group, whereas an increase in prosocial behavior was found in both groups. Advantages and limitations of the proposed approach are highlighted.

6.
Structural equation models are typically evaluated on the basis of goodness-of-fit indexes. Despite their popularity, what values these indexes should attain to confidently decide between the acceptance and rejection of a model has been greatly debated. A recently proposed approach by means of equivalence testing has been recommended as a superior way to evaluate the goodness of fit of models. The approach has also been proposed as providing a necessary vehicle that can be used to advance the inferential nature of structural equation modeling as a confirmatory tool. The purpose of this article is to introduce readers to key ideas in equivalence testing and illustrate its use for conducting model–data fit assessments. Two confirmatory factor analysis examples are used, in which a priori specified latent variable models with known structure are tested against data. It is advocated that whenever the goodness of fit of a model is to be assessed researchers should always examine the resulting values obtained via the equivalence testing approach.

7.
In psychological research, available data are often insufficient to estimate item factor analysis (IFA) models using traditional estimation methods, such as maximum likelihood (ML) or limited information estimators. Bayesian estimation with common-sense, moderately informative priors can greatly improve efficiency of parameter estimates and stabilize estimation. There are a variety of methods available to evaluate model fit in a Bayesian framework; however, past work investigating Bayesian model fit assessment for IFA models has assumed flat priors, which have no advantage over ML in limited data settings. In this paper, we evaluated the impact of moderately informative priors on the ability to detect model misfit for several candidate indices: posterior predictive checks based on the observed score distribution, leave-one-out cross-validation, and the widely applicable information criterion (WAIC). We found that although Bayesian estimation with moderately informative priors is an excellent aid for estimating challenging IFA models, methods for testing model fit in these circumstances are inadequate.

8.
As useful multivariate techniques, structural equation models have attracted significant attention from various fields. Most existing statistical methods and software for analyzing structural equation models have been developed based on the assumption that the response variables are normally distributed. Several recently developed methods can partially address violations of this assumption, but still encounter difficulties in analyzing highly nonnormal data. Moreover, the presence of missing data is a practical issue in substantive research. Simply ignoring missing data or improperly treating nonignorable missingness as ignorable could seriously distort statistical inference results. The main objective of this article is to develop a Bayesian approach for analyzing transformation structural equation models with highly nonnormal and missing data. Different types of missingness are discussed and selected via the deviance information criterion. The empirical performance of our method is examined via simulation studies. Application to a study concerning people’s job satisfaction, home life, and work attitude is presented.

9.
In the presence of omitted variables or similar validity threats, regression estimates are biased. Unbiased estimates (the causal effects) can be obtained in large samples by fitting instead the instrumental variables regression (IVR) model. The IVR model can be estimated using structural equation modeling (SEM) software or using econometric estimators such as two-stage least squares (2SLS). We describe 2SLS using SEM terminology, and report a simulation study in which we generated data according to a regression model in the presence of omitted variables and fitted (a) a regression model using ordinary least squares, (b) an IVR model using maximum likelihood (ML) as implemented in SEM software, and (c) an IVR model using 2SLS. Coverage rates of the causal effect using regression methods are always unacceptably low (often 0). When using the IVR model, accurate coverage is obtained across all conditions when N = 500. Even when the IVR model is misspecified, better coverage than regression is generally obtained. Differences between 2SLS and ML are small and favor 2SLS in small samples (N ≤ 100).
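The bias-removal logic of 2SLS can be sketched in a few lines. This is a toy simulation under assumed effect sizes, not the article's design: an omitted confounder u biases OLS, while the instrument z (which affects y only through x) recovers the true slope.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta = 2.0                          # true causal effect (chosen for this sketch)
z = rng.normal(size=n)              # instrument: affects y only through x
u = rng.normal(size=n)              # omitted confounder
x = z + u + rng.normal(size=n)
y = beta * x + u + rng.normal(size=n)

def slope(a, b):
    """OLS slope of b regressed on a."""
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

ols = slope(x, y)                   # biased: x correlates with the omitted u
x_hat = slope(z, x) * z             # first stage: project x onto the instrument
tsls = slope(x_hat, y)              # second stage: equals cov(z, y) / cov(z, x)
```

Here `ols` converges to about 2.33 (bias of cov(x, u) / var(x) = 1/3), while `tsls` converges to the true value 2.0.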

10.
This article relates a still-popular motivation for using parceling to an unrecognized cost. The still-popular motivation is improvement in fit with respect to the item-solution. The cost is uncertainty in fit due to the selection of one out of many possible item-to-parcel allocations. A theoretical framework establishes the reason for this relationship: The same mechanisms that cause larger item- versus parcel-solution differences in the minimized discrepancy function also cause larger allocation to allocation variability in the parcel-solution's minimized discrepancy function. Study 1 illustrates that these shared causal mechanisms lead to a strong positive association between average item–parcel differences in minimized discrepancy function values and parcel-allocation variability in those values. Study 2 extends these results from discrepancy function values to fit indexes, showing that the association remains positive, but varies in magnitude depending on what quantities other than the discrepancy function are involved in computing the fit index. The important implication for practice is that when item–parcel fit differences are large enough to alter conclusions about model adequacy, parcel-allocation variability tends to be large enough for parcel-solution model adequacy to depend on the particular allocation chosen.

11.
This paper discusses the large-sample properties of maximum likelihood estimation under left truncation and right censoring, and proves that, under certain conditions, the maximum likelihood estimator is asymptotically normal.
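The asymptotic normality result has the standard form (a generic statement, with $I(\theta_0)$ denoting the Fisher information under the truncation-censoring scheme; the paper's regularity conditions are not reproduced here):

```latex
\sqrt{n}\left(\hat{\theta}_n - \theta_0\right) \xrightarrow{\;d\;} N\!\left(0,\; I(\theta_0)^{-1}\right)
```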

12.
Exploratory graph analysis (EGA) is a commonly applied technique intended to help social scientists discover latent variables. Yet, the results can be influenced by the methodological decisions the researcher makes along the way. In this article, we focus on the choice regarding the number of factors to retain: We compare the performance of the recently developed EGA with various traditional factor retention criteria. We use both continuous and binary data, as evidence regarding the accuracy of such criteria in the latter case is scarce. Simulation results, based on scenarios resulting from varying sample size, communalities from major factors, interfactor correlations, skewness, and correlation measure, show that EGA outperforms the traditional factor retention criteria considered in most cases in terms of bias and accuracy. In addition, we show that factor retention decisions for binary data are preferably made using Pearson, instead of tetrachoric, correlations, contrary to popular belief.

13.
The current widespread availability of software packages with estimation features for testing structural equation models with binary indicators makes it possible to investigate many hypotheses about differences in proportions over time that are typically only tested with conventional categorical data analyses for matched pairs or repeated measures, such as McNemar’s chi-square. The connection between these conventional tests and simple longitudinal structural equation models is described. The equivalence of several conventional analyses and structural equation models reveals some foundational concepts underlying common longitudinal modeling strategies and brings to light a number of possible modeling extensions that will allow investigators to pursue more complex research questions involving multiple repeated proportion contrasts, mixed between-subjects × within-subjects interactions, and comparisons of estimated membership proportions using latent class factors with multiple indicators. Several models are illustrated, and the implications for using structural equation models for comparing binary repeated measures or matched pairs are discussed.
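The conventional test the abstract starts from is tiny; a minimal sketch (with made-up discordant counts) makes the baseline concrete:

```python
def mcnemar_chi2(b, c):
    """McNemar's chi-square for paired binary outcomes.

    b and c are the two discordant cell counts of the 2x2 table
    (pairs that changed 0 -> 1 vs. pairs that changed 1 -> 0);
    the concordant cells drop out of the statistic.
    """
    return (b - c) ** 2 / (b + c)

# e.g. 15 pairs changed in one direction, 5 in the other
stat = mcnemar_chi2(15, 5)  # (15 - 5)^2 / 20 = 5.0, referred to chi-square with df = 1
```

The SEM formulation described in the article reproduces this contrast as a constraint on repeated binary indicators, which is what opens the door to the multi-contrast extensions listed above.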

14.
This simulation study investigated the sensitivity of commonly used cutoff values for global-model-fit indexes, with regard to different degrees of violations of the assumption of uncorrelated errors in confirmatory factor analysis. It is shown that the global-model-fit indexes fell short in identifying weak to strong model misspecifications under both different degrees of correlated error terms, and various simulation conditions. On the basis of an example misspecification search, it is argued that global model testing must be supplemented by this procedure. Implications for the use of structural equation modeling are discussed.

15.
This study compared diagonally weighted least squares robust estimation techniques available in 2 popular statistical programs: diagonally weighted least squares (DWLS; LISREL version 8.80), and mean-adjusted weighted least squares (WLSM) and mean- and variance-adjusted weighted least squares (WLSMV; Mplus version 6.11). A 20-item confirmatory factor analysis was estimated using item-level ordered categorical data. Three different nonnormality conditions were applied to 2- to 7-category data with sample sizes of 200, 400, and 800. Convergence problems were seen with nonnormal data when DWLS was used with few categories. Both DWLS and WLSMV produced accurate parameter estimates; however, bias in standard errors of parameter estimates was extreme for select conditions when nonnormal data were present. The robust estimators generally reported acceptable model–data fit, unless few categories were used with nonnormal data at smaller sample sizes; WLSMV yielded better fit than WLSM for most indices.

16.
Formulas are derived for computing the chi-square statistic from proportions or percentages, both for tests of goodness of fit and association. The advantages of the new formulas are: (1) computation is conceptually more congruent with the hypothesis being tested; (2) interpretation is facilitated (expected frequencies and discrepancies in frequencies are a function of sample size, whereas expected proportions and corresponding discrepancies are not); and (3) computation is facilitated in contingency tables since expected proportions do not need to be determined separately for each cell.

17.
In practice, models always have misfit, and it is not well known in what situations methods that provide point estimates, standard errors (SEs), or confidence intervals (CIs) of standardized structural equation modeling (SEM) parameters are trustworthy. In this article we carried out simulations to evaluate the empirical performance of currently available methods. We studied maximum likelihood point estimates, as well as SE estimators based on the delta method, nonparametric bootstrap (NP-B), and semiparametric bootstrap (SP-B). For CIs we studied Wald CI based on delta, and percentile and BCa intervals based on NP-B and SP-B. We conducted simulation studies using both confirmatory factor analysis and SEM models. Depending on (a) whether point estimate, SE, or CI is of interest; (b) amount of model misfit; (c) sample size; and (d) model complexity, different methods can be the one that renders best performance. Based on the simulation results, we discuss how to choose proper methods in practice.
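The NP-B standard error and percentile CI compared above can be sketched for the simplest standardized parameter, a correlation (data-generating values and replication count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)
y = 0.6 * x + 0.8 * rng.normal(size=n)   # population correlation = 0.6

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

r_hat = corr(x, y)

# NP-B: resample whole cases with replacement, re-estimate each time
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(corr(x[idx], y[idx]))
boot = np.array(boot)

se = boot.std(ddof=1)                    # bootstrap SE of the correlation
ci = np.percentile(boot, [2.5, 97.5])    # percentile CI
```

For SEM parameters the resampled quantity is the full case vector and the re-estimation step is a model fit, but the case-resampling logic is identical.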

18.
This study evaluates latent differential equation models on binary and ordinal data. Binary and ordinal data are widely used in psychology research and many statistical models have been developed, such as the probit model and the logit model. We combine the latent differential equation model with the probit model through a threshold approach, and then compare the threshold model with a naive model, which blindly treats binary and ordinal data as continuous. Simulation results suggest that the naive model leads to bias on binary data and on ordinal data with fewer than 5 levels, whereas the threshold model is unbiased and efficient for binary and ordinal data. Two example analyses on empirical binary data and ordinal data show that the threshold model also has better external validity. The R code for the threshold model is provided.
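The threshold idea can be sketched by generating ordinal data from a latent normal variable (thresholds and sample size are illustrative, and this stands in for the article's R code, which is not reproduced here):

```python
import math
import numpy as np

rng = np.random.default_rng(7)
latent = rng.normal(size=100_000)        # latent continuous response
taus = [-0.5, 0.5]                       # illustrative thresholds
ordinal = np.searchsorted(taus, latent)  # observed categories 0, 1, 2

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Under the probit/threshold model, P(Y = 0) = Phi(tau_1);
# the naive model instead treats the codes 0, 1, 2 as interval-scaled.
p0_model = Phi(taus[0])
p0_empirical = (ordinal == 0).mean()
```

The empirical category proportion converges to the model-implied probability, which is the mapping the threshold approach inverts during estimation.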

19.
In 1959, Campbell and Fiske introduced the use of multitrait–multimethod (MTMM) matrices in psychology, and for the past 4 decades confirmatory factor analysis (CFA) has commonly been used to analyze MTMM data. However, researchers do not always fit CFA models when MTMM data are available; when CFA modeling is used, multiple models are available that have attendant strengths and weaknesses. In this article, we used a Monte Carlo simulation to investigate the drawbacks of either using CFA models that fail to match the data-generating model or completely ignore the MTMM structure of data when the research goal is to uncover associations between trait constructs and external variables. We then used data from the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development to illustrate the substantive implications of fitting models that partially or completely ignore MTMM data structures. Results from analyses of both simulated and empirical data show noticeable biases when the MTMM data structure is partially or completely neglected.

20.
Robust maximum likelihood (ML) and categorical diagonally weighted least squares (cat-DWLS) estimation have both been proposed for use with categorized and nonnormally distributed data. This study compares results from the 2 methods in terms of parameter estimate and standard error bias, power, and Type I error control, with unadjusted ML and WLS estimation methods included for purposes of comparison. Conditions manipulated include model misspecification, level of asymmetry, level of categorization, sample size, and type and size of the model. Results indicate that the cat-DWLS estimation method results in the least parameter estimate and standard error bias under the majority of conditions studied. Cat-DWLS parameter estimates and standard errors were generally the least affected by model misspecification of the estimation methods studied. Robust ML also performed well, yielding relatively unbiased parameter estimates and standard errors. However, both cat-DWLS and robust ML resulted in low power under conditions of high data asymmetry, small sample sizes, and mild model misspecification. For more optimal conditions, power for these estimators was adequate.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)