Similar Documents
20 similar documents found (search time: 15 ms)
1.
This simulation study investigated the sensitivity of commonly used cutoff values for global model fit indexes to different degrees of violation of the assumption of uncorrelated errors in confirmatory factor analysis. It is shown that the global model fit indexes fell short in identifying weak to strong model misspecifications, both under different degrees of correlated error terms and across various simulation conditions. On the basis of an example misspecification search, it is argued that global model testing must be supplemented by such a search. Implications for the use of structural equation modeling are discussed.
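
A minimal sketch of the kind of misspecification search referred to above, using lavaan; the model and the built-in HolzingerSwineford1939 data are illustrative placeholders, not the conditions simulated in the article.

```r
# Minimal sketch of a misspecification search for omitted correlated errors
# using lavaan. Model and data (HolzingerSwineford1939) are illustrative.
library(lavaan)

model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
'
fit <- cfa(model, data = HolzingerSwineford1939)

# Global fit indexes alone (the quantities shown above to miss misfit)
fitMeasures(fit, c("chisq", "df", "pvalue", "cfi", "rmsea", "srmr"))

# Misspecification search: look for large modification indices on residual
# covariances ("~~") that the fitted model constrains to zero
mi <- modindices(fit)
subset(mi, op == "~~" & mi > 10)
```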

2.
Fitting a large structural equation modeling (SEM) model with moderate to small sample sizes results in an inflated Type I error rate for the likelihood ratio test statistic under the chi-square reference distribution, known as the model size effect. In this article, we show that the number of observed variables (p) and the number of free parameters (q) have unique effects on the Type I error rate of the likelihood ratio test statistic. In addition, the effects of p and q cannot be fully explained using degrees of freedom (df). We also evaluated the performance of 4 correction methods for the model size effect, including Bartlett’s (1950), Swain’s (1975), and Yuan’s (2005) corrected statistics, and Yuan, Tian, and Yanagihara’s (2015) empirically corrected statistic. We found that Yuan et al.’s (2015) empirically corrected statistic generally yields the best performance in controlling the Type I error rate when fitting large SEM models.
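
A hedged sketch of how the Type I error rate of the likelihood ratio test can be checked by Monte Carlo for a correctly specified model at a given sample size; the population model, sample size, and number of replications are arbitrary choices for illustration, and none of the corrections evaluated in the article are applied here.

```r
# Sketch: Monte Carlo check of the LRT Type I error rate for a correctly
# specified CFA at a given N. Population model, N, and nrep are arbitrary.
library(lavaan)
set.seed(1)

pop.model <- '
  f1 =~ 0.7*y1 + 0.7*y2 + 0.7*y3 + 0.7*y4
  f2 =~ 0.7*y5 + 0.7*y6 + 0.7*y7 + 0.7*y8
  f1 ~~ 0.3*f2
'
fit.model <- '
  f1 =~ y1 + y2 + y3 + y4
  f2 =~ y5 + y6 + y7 + y8
'

nrep <- 200
N    <- 100
pvals <- replicate(nrep, {
  dat <- simulateData(pop.model, sample.nobs = N)
  fit <- cfa(fit.model, data = dat)
  fitMeasures(fit, "pvalue")
})

# Empirical rejection rate; values well above .05 indicate an inflated
# Type I error rate (the model size effect discussed above)
mean(pvals < .05, na.rm = TRUE)
```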

3.
Difficulties arise in multiple-group evaluations of factorial invariance if particular manifest variables are missing completely in certain groups. Ad hoc analytic alternatives can be used in such situations (e.g., deleting manifest variables), but some common approaches, such as multiple imputation, are not viable. At least 3 solutions to this problem are viable: analyzing differing sets of variables across groups, using pattern mixture approaches, and a new method using random number generation. The latter solution, proposed in this article, is to generate pseudo-random normal deviates for all observations for manifest variables that are missing completely in a given sample and then to specify multiple-group models in a way that respects the random nature of these values. An empirical example is presented in detail comparing the 3 approaches. The proposed solution can enable quantitative comparisons at the latent variable level between groups using programs that require the same number of manifest variables in each group.
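
A rough sketch of one possible reading of the random-number idea described above, not the authors' exact specification: the completely missing variable is filled with pseudo-random normal deviates in the affected group, and that group's model fixes the variable's loading to zero so the random values carry no information about the latent variable. All data and names are simulated placeholders.

```r
# Sketch (an interpretation, not the authors' exact specification):
# x3 is completely missing in group 2, so it is filled with pseudo-random
# normal deviates, and its loading is fixed to 0 in group 2 only.
library(lavaan)
set.seed(123)

pop <- ' f =~ 0.8*x1 + 0.7*x2 + 0.6*x3 + 0.7*x4 '
group1 <- simulateData(pop, sample.nobs = 300)
group2 <- simulateData(pop, sample.nobs = 300)
group2$x3 <- rnorm(nrow(group2))   # pseudo-random fill for the missing variable

dat <- rbind(cbind(group1, g = 1), cbind(group2, g = 2))

# c(NA, 0): loading free in group 1, fixed to 0 in group 2, one way of
# "respecting the random nature" of the filled-in values
model <- ' f =~ x1 + x2 + c(NA, 0)*x3 + x4 '
fit <- cfa(model, data = dat, group = "g")
summary(fit, fit.measures = TRUE)
```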

4.
Kelley and Lai (2011) recently proposed the use of accuracy in parameter estimation (AIPE) for sample size planning in structural equation modeling. The sample size that reaches the desired width for the confidence interval of the root mean square error of approximation (RMSEA) is suggested. This study proposes a graphical extension of the AIPE approach to RMSEA, abbreviated as GAIPE, to facilitate sample size planning in structural equation modeling. GAIPE simultaneously displays the expected width of a confidence interval of RMSEA, the necessary sample size to reach the desired width, and the RMSEA values covered in the confidence interval. Power analysis for hypothesis tests related to RMSEA can also be integrated into the GAIPE framework to allow for a concurrent consideration of accuracy in estimation and statistical power to plan sample sizes. A package written in R has been developed to implement GAIPE. Examples and instructions for using the GAIPE package are presented to help readers make use of this flexible framework. With the capacity of incorporating information on accuracy in RMSEA estimation, values of RMSEA, and power for hypothesis testing on RMSEA in a single graphical representation, the GAIPE extension offers an informative and practical approach for sample size planning in structural equation modeling.
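
The GAIPE package itself is not reproduced here; the base-R sketch below only computes the quantity whose width the AIPE approach targets, the confidence interval of RMSEA from the noncentral chi-square distribution. The test statistic, degrees of freedom, and sample size in the call are illustrative values.

```r
# Sketch: confidence interval for RMSEA from the noncentral chi-square
# distribution (the interval whose width GAIPE plans sample size for).
# T.obs, df, and N are illustrative values, not taken from the article.
rmsea_ci <- function(T.obs, df, N, level = 0.90) {
  alpha <- (1 - level) / 2
  # Lower bound: ncp at which P(chisq_{df}(ncp) <= T.obs) = 1 - alpha
  lo <- if (pchisq(T.obs, df) < 1 - alpha) 0 else
    uniroot(function(ncp) pchisq(T.obs, df, ncp) - (1 - alpha),
            c(0, T.obs * 10))$root
  # Upper bound: ncp at which P(chisq_{df}(ncp) <= T.obs) = alpha
  hi <- if (pchisq(T.obs, df) < alpha) 0 else
    uniroot(function(ncp) pchisq(T.obs, df, ncp) - alpha,
            c(0, T.obs * 10 + 100))$root
  sqrt(c(lower = lo, upper = hi) / (df * (N - 1)))
}

rmsea_ci(T.obs = 85, df = 50, N = 200)   # the width of this interval is what AIPE targets
```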

5.
In dyadic research, the actor–partner interdependence model (APIM) is widely used to model the effect of a predictor measured across dyad members on one’s own and one’s partner’s outcomes. When such dyadic data are measured repeatedly over time, both the non-independence within couples and the non-independence over time need to be accounted for. In this paper, we present a longitudinal extension of the APIM, the L-APIM, that allows for both stable and time-varying sources of non-independence. Its implementation is readily available in multilevel software, such as proc mixed in SAS, but is lacking in the structural equation modeling (SEM) framework. We tackle the computational challenges associated with its SEM implementation and propose a user-friendly free application for the L-APIM, which can be found at http://fgisteli.shinyapps.io/Shiny_LDD. As an illustration, we explore the actor and partner effects of positive relationship feelings on next day’s intimacy using 3-week diary data from 66 heterosexual couples.
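
The L-APIM application is not reproduced here; the sketch below only shows the actor/partner predictor structure of a basic (cross-sectional) APIM fitted as a multilevel model with lme4, on simulated placeholder data. The longitudinal extension described above would add the repeated-measures diary structure and time-varying non-independence.

```r
# Minimal sketch of a basic APIM as a multilevel model (lme4), on simulated
# placeholder data. The L-APIM above additionally models the diary structure.
library(lme4)
set.seed(1)

n <- 66                                    # couples
couple <- rep(1:n, each = 2)
u <- rep(rnorm(n), each = 2)               # couple-level non-independence
actor_feelings   <- rnorm(2 * n)
idx <- seq_len(2 * n)
partner_feelings <- actor_feelings[ifelse(idx %% 2 == 1, idx + 1, idx - 1)]
intimacy <- 0.4 * actor_feelings + 0.2 * partner_feelings + u + rnorm(2 * n)
dyads <- data.frame(couple, actor_feelings, partner_feelings, intimacy)

# Actor effect = own predictor, partner effect = partner's predictor,
# with a random intercept per couple to absorb within-couple dependence
fit <- lmer(intimacy ~ actor_feelings + partner_feelings + (1 | couple),
            data = dyads)
summary(fit)
```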

6.
The purpose of the present study was to validate an existing school environment instrument, the School Level Environment Questionnaire (SLEQ). The SLEQ consists of 56 items, with seven items in each of eight scales. A total of 1,106 teachers in 59 elementary schools in a southwestern USA public school district completed the instrument. An exploratory factor analysis was undertaken for a random sample of half of the completed surveys. Using principal axis factoring with oblique rotation, this analysis suggested that 13 items should be dropped and that the remaining 43 items could best be represented by seven rather than eight factors. A confirmatory factor analysis was then run with the other half of the original sample using structural equation modeling. Examination of the fit indices indicated that the model came close to fitting the data, with goodness-of-fit (GOF) coefficients just below recommended levels. A second model was then run with two of the seven factors, and their associated items, removed, leaving five factors with 35 items; model fit was improved. A third model used the same five factors and 35 items but allowed correlated residuals between some of the items within a factor. This model fit the data well, with GOF coefficients in recommended ranges. These results led to a refined, more parsimonious version of the SLEQ that was then used in a larger study. Future research is needed to see whether this model fits other samples in different elementary schools and in secondary schools, both in the USA and in other countries.
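
A sketch of the exploratory step described above (principal axis factoring with an oblique rotation) using the psych package; the data set (the bfi items shipped with psych) and the number of factors are illustrative stand-ins for the SLEQ data, which are not available here.

```r
# Sketch: principal axis factoring with an oblique (oblimin) rotation via
# the psych package. Data (bfi items) and nfactors are illustrative.
library(psych)
library(GPArotation)   # needed for oblique rotations

items <- na.omit(bfi[, 1:25])             # 25 personality items shipped with psych
efa <- fa(items, nfactors = 5, fm = "pa", rotate = "oblimin")
print(efa$loadings, cutoff = 0.30)        # flag items that load weakly everywhere
```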

7.
The asymptotic performance of structural equation modeling tests and standard errors is influenced by two factors: the model and the asymptotic covariance matrix Γ of the sample covariances. Although most simulation studies clearly specify model conditions, specification of Γ is usually limited to values of univariate skewness and kurtosis. We illustrate that marginal skewness and kurtosis are not sufficient to adequately specify a nonnormal simulation condition by showing that asymptotic standard errors and test statistics vary substantially among distributions with identical skewness and kurtosis. We therefore argue that Γ should be reported when presenting the design of simulation studies. We show how Γ can be exactly calculated under the widely used Vale–Maurelli transform. We suggest plotting the elements of Γ and reporting the eigenvalues associated with the test statistic. R code is provided.
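
The authors' own R code is not reproduced here; the sketch below only shows how Vale–Maurelli data with target univariate skewness and kurtosis are commonly generated (lavaan's simulateData). The model and moment values are illustrative, and the point of the abstract is precisely that these marginal moments alone do not pin down Γ.

```r
# Sketch: nonnormal data under the Vale-Maurelli transform with target
# univariate skewness and kurtosis (lavaan::simulateData). Model and
# moment values are illustrative; marginal moments do not determine Gamma.
library(lavaan)
set.seed(2)

pop.model <- ' f =~ 0.8*y1 + 0.8*y2 + 0.8*y3 + 0.8*y4 '
dat <- simulateData(pop.model, sample.nobs = 500,
                    skewness = 2, kurtosis = 7)

# Check the realized marginal moments
library(psych)
round(describe(dat)[, c("skew", "kurtosis")], 2)
```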

8.
Model comparison is one useful approach in applications of structural equation modeling. Akaike’s information criterion (AIC) and the Bayesian information criterion (BIC) are commonly used for selecting an optimal model from the alternatives. We conducted a comprehensive evaluation of various model selection criteria, including AIC, BIC, and their extensions, in selecting an optimal path model under a wide range of conditions covering different compositions of the candidate set, distinct values of the misspecified parameters, and diverse sample sizes. The chance of selecting the optimal model rose as the values of the misspecified parameters and the sample sizes increased. The relative performance of the AIC- and BIC-type criteria depended on the magnitude of the misspecified parameters. The BIC family in general outperformed its AIC counterparts, except under small values of omitted parameters and small sample sizes, where AIC performed better. The scaled unit information prior BIC (SPBIC) and Haughton's BIC (HBIC) demonstrated the highest accuracy ratios across most of the conditions investigated in this simulation.
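
A minimal sketch of comparing candidate path models with AIC and BIC in lavaan; the data and models are illustrative, and the SPBIC and HBIC variants evaluated in the article are not built into lavaan and are not computed here.

```r
# Sketch: selecting among candidate path models with AIC and BIC (lavaan).
# Data and models are illustrative; SPBIC and HBIC are not computed here.
library(lavaan)
set.seed(42)

# Illustrative data generated from a full-mediation structure
dat <- simulateData(' m ~ 0.5*x
                      y ~ 0.5*m ', sample.nobs = 300)

m1 <- sem(' m ~ x
            y ~ m ',     data = dat)   # full mediation
m2 <- sem(' m ~ x
            y ~ m + x ', data = dat)   # partial mediation

cbind(AIC = c(AIC(m1), AIC(m2)), BIC = c(BIC(m1), BIC(m2)))
```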

9.
A validation study was conducted on the Child Sex Abuse Attitude Scale (CSAAS) using confirmatory factor analysis (CFA) to examine its factor structure. The CSAAS was developed based on Festinger's (1957) theory of attitude development, resulting in a four-factor first-order structure (cognition, value, affect, and behavior) and a single-factor second-order structure (attitude). A sample of 215 school psychologists, members of the National Association of School Psychologists, responded to the CSAAS survey. CFA results supported the hypothesized factor structure of the CSAAS, indicating the plausibility of a four-factor first-order and a single-factor higher-order structure for the CSAAS.
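
A sketch of the kind of structure tested above, four first-order factors with a single second-order factor, written as lavaan model syntax; the item names are placeholders, since the CSAAS items themselves are not available here.

```r
# Sketch of the hypothesized structure as lavaan syntax: four first-order
# factors and one second-order attitude factor. Item names are placeholders.
library(lavaan)

model <- '
  cognition =~ c1 + c2 + c3
  value     =~ v1 + v2 + v3
  affect    =~ a1 + a2 + a3
  behavior  =~ b1 + b2 + b3
  attitude  =~ cognition + value + affect + behavior
'
# fit <- cfa(model, data = csaas_data)   # csaas_data: hypothetical item responses
# summary(fit, fit.measures = TRUE, standardized = TRUE)
```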

10.
In 1959, Campbell and Fiske introduced the use of multitrait–multimethod (MTMM) matrices in psychology, and for the past 4 decades confirmatory factor analysis (CFA) has commonly been used to analyze MTMM data. However, researchers do not always fit CFA models when MTMM data are available; when CFA modeling is used, multiple models are available that have attendant strengths and weaknesses. In this article, we used a Monte Carlo simulation to investigate the drawbacks of either using CFA models that fail to match the data-generating model or completely ignoring the MTMM structure of data when the research goal is to uncover associations between trait constructs and external variables. We then used data from the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development to illustrate the substantive implications of fitting models that partially or completely ignore MTMM data structures. Results from analyses of both simulated and empirical data show noticeable biases when the MTMM data structure is partially or completely neglected.
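
A sketch of one common MTMM specification, a correlated-trait, correlated-method CFA for a 3-trait by 3-method design, written as lavaan syntax with placeholder variable names; this is not necessarily the data-generating model used in the simulation described above.

```r
# Sketch: correlated-trait, correlated-method (CT-CM) CFA for a 3x3 MTMM
# design. Variable names (t1m1 = trait 1 measured by method 1, etc.) are
# placeholders; one common MTMM specification among several alternatives.
library(lavaan)

model <- '
  # trait factors
  T1 =~ t1m1 + t1m2 + t1m3
  T2 =~ t2m1 + t2m2 + t2m3
  T3 =~ t3m1 + t3m2 + t3m3
  # method factors
  M1 =~ t1m1 + t2m1 + t3m1
  M2 =~ t1m2 + t2m2 + t3m2
  M3 =~ t1m3 + t2m3 + t3m3
  # trait and method factors kept uncorrelated with each other
  T1 ~~ 0*M1 + 0*M2 + 0*M3
  T2 ~~ 0*M1 + 0*M2 + 0*M3
  T3 ~~ 0*M1 + 0*M2 + 0*M3
'
# fit <- cfa(model, data = mtmm_data, std.lv = TRUE)   # mtmm_data: hypothetical
```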

11.
12.
A Monte Carlo simulation study was conducted to evaluate the sensitivities of the likelihood ratio test and five commonly used delta goodness-of-fit (ΔGOF) indices (i.e., ΔGamma, ΔMcDonald’s, ΔCFI, ΔRMSEA, and ΔSRMR) for detecting a lack of metric invariance in a bifactor model. Experimental conditions included factor loading differences, the location and number of noninvariant items, and sample size. The results indicated that all ΔGOF indices held Type I error to a minimum and overall had adequate power for the study. For detecting violations of metric invariance, only ΔGamma and ΔCFI, in addition to Δχ2, are recommended for use in the bifactor model, with cutoff values of -.016 to -.023 and -.003 to -.004, respectively. Moreover, in the variance component analysis, the magnitude of the factor loading differences contributed the most variation to all ΔGOF indices, whereas sample size affected Δχ2 the most.
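
A hedged sketch of the configural-versus-metric (equal loadings) comparison to which the ΔGOF indices above are applied, using lavaan's group.equal argument and its built-in HolzingerSwineford1939 data; the bifactor structure and the recommended cutoffs from the article are not reproduced.

```r
# Sketch: configural vs. metric invariance comparison yielding delta chi-square
# and delta CFI. Data and model are lavaan's built-in illustration, not the
# article's bifactor design.
library(lavaan)

model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'
configural <- cfa(model, data = HolzingerSwineford1939, group = "school")
metric     <- cfa(model, data = HolzingerSwineford1939, group = "school",
                  group.equal = "loadings")

anova(configural, metric)                                    # delta chi-square test
fitMeasures(metric, "cfi") - fitMeasures(configural, "cfi")  # delta CFI
```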

13.
Minor cross-loadings on non-targeted factors are often found in psychological and other instruments. Forcing them to zero in confirmatory factor analysis (CFA) leads to biased estimates and distorted structures. Alternatively, exploratory structural equation modeling (ESEM) and Bayesian structural equation modeling (BSEM) have been proposed. In this research, we compared the performance of the traditional independent clusters model CFA (ICM-CFA), the nonstandard CFA, ESEM with Geomin or target rotations, and BSEMs with different cross-loading priors (correct priors; small- or large-variance priors with zero mean) using simulated data with cross-loadings. Four factors were considered: the number of factors, the size of the factor correlations, the cross-loading mean, and the loading variance. Results indicated that ICM-CFA performed the worst. ESEMs were generally superior to CFAs but inferior to BSEM with correct priors, which provided precise estimates. BSEM with large- or small-variance priors performed similarly, while the prior mean for the cross-loadings mattered more than the prior variance.

14.
Model fit indices are increasingly recommended and used to select the number of factors in an exploratory factor analysis. Growing evidence suggests that the recommended cutoff values for common model fit indices are not appropriate in an exploratory factor analysis context. A particularly prominent problem in scale evaluation is the ubiquity of correlated residuals and imperfect model specification. Our research focuses on a scale evaluation context and the performance of four standard model fit indices: root mean square error of approximation (RMSEA), standardized root mean square residual (SRMR), comparative fit index (CFI), and Tucker–Lewis index (TLI), and two equivalence-test-based model fit indices: RMSEAt and CFIt. We use Monte Carlo simulation to generate and analyze data based on a substantive example using the Positive and Negative Affect Schedule (N = 1,000). We systematically vary the number and magnitude of correlated residuals, as well as nonspecific misspecification, to evaluate the impact on model fit indices when fitting a two-factor exploratory factor analysis. Our results show that all fit indices except SRMR are overly sensitive to correlated residuals and nonspecific error, resulting in solutions that are overfactored. SRMR performed well, consistently selecting the correct number of factors; however, previous research suggests it does not perform well with categorical data. In general, we do not recommend using model fit indices to select the number of factors in a scale evaluation framework.
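
As a point of reference for the computation the article evaluates (and ultimately cautions against), the sketch below fits exploratory factor models with one to three factors using the psych package and prints commonly consulted fit indices; the item set and factor range are illustrative placeholders, and the equivalence-test-based indices (RMSEAt, CFIt) are not computed.

```r
# Sketch: comparing fit indices across numbers of factors in EFA, the practice
# the article evaluates. Items (psych's bfi data) and the factor range are
# illustrative; RMSEAt and CFIt are not computed here.
library(psych)
library(GPArotation)

items <- na.omit(bfi[, 1:10])   # 10 agreeableness/conscientiousness items
for (k in 1:3) {
  f <- fa(items, nfactors = k, fm = "ml")
  cat("factors:", k,
      " RMSEA:", round(f$RMSEA[1], 3),
      " TLI:", round(f$TLI, 3),
      " rms (SRMR-like):", round(f$rms, 3), "\n")
}
```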

15.
Multilevel modeling (MLM) is a popular way of assessing mediation effects with clustered data. Two important limitations of this approach have been identified in prior research and a theoretical rationale has been provided for why multilevel structural equation modeling (MSEM) should be preferred. However, to date, no empirical evidence of MSEM's advantages relative to MLM approaches for multilevel mediation analysis has been provided. Nor has it been demonstrated that MSEM performs adequately for mediation analysis in an absolute sense. This study addresses these gaps and finds that the MSEM method outperforms 2 MLM-based techniques in 2-level models in terms of bias and confidence interval coverage while displaying adequate efficiency, convergence rates, and power under a variety of conditions. Simulation results support prior theoretical work regarding the advantages of MSEM over MLM for mediation in clustered data.

16.
We examine the accuracy of p values obtained using the asymptotic mean and variance (MV) correction to the distribution of the sample standardized root mean squared residual (SRMR) proposed by Maydeu-Olivares to assess the exact fit of SEM models. In a simulation study, we found that under normality, the MV-corrected SRMR statistic provides reasonably accurate Type I errors even in small samples and for large models, clearly outperforming the current standard, that is, the likelihood ratio (LR) test. When data show excess kurtosis, MV-corrected SRMR p values are only accurate in small models (p = 10), or in medium-sized models (p = 30) if no skewness is present and sample sizes are at least 500. Overall, when data are not normal, the MV-corrected LR test seems to outperform the MV-corrected SRMR. We elaborate on these findings by showing that the asymptotic approximation to the mean of the SRMR sampling distribution is quite accurate, while the asymptotic approximation to the standard deviation is not.

17.
Bayesian structural equation modeling (BSEM) was used to investigate the latent structure of the Differential Ability Scales—Second Edition (DAS-II) core battery using the standardization sample normative data for ages 7–17. Results revealed the plausibility of a three-factor model, consistent with publisher theory, expressed as either a higher-order (HO) or a bifactor (BF) model. The results also revealed an alternative structure with the best model fit: a two-factor BF model with Matrices (MA) and Sequential and Quantitative Reasoning (SQ) loading only on g, with no respective group-factor loadings. This was only the second study to use BSEM to investigate the structure of a commercial ability test and the first to use a large normative sample together with the specification of both approximate-zero cross-loadings and correlated residual terms. It is believed that the results of the current study will advance the field's understanding not only of the factor structure of the DAS-II core battery but also of the potential utility of BSEM in psychometric investigations of intelligence test structures.
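
A sketch contrasting the two parameterizations compared above, written as ordinary (non-Bayesian) lavaan syntax with placeholder subtest names; the BSEM features of the study, such as approximate-zero cross-loading and correlated-residual priors, require Bayesian software (e.g., Mplus or blavaan) and are not shown.

```r
# Sketch: higher-order vs. bifactor parameterizations with placeholder
# subtest names s1-s6. The study's BSEM priors are not represented here.
library(lavaan)

higher_order <- '
  verbal    =~ s1 + s2
  nonverbal =~ s3 + s4
  spatial   =~ s5 + s6
  g =~ verbal + nonverbal + spatial
'

bifactor <- '
  g         =~ s1 + s2 + s3 + s4 + s5 + s6
  verbal    =~ s1 + s2
  nonverbal =~ s3 + s4
  spatial   =~ s5 + s6
'
# fit_bf <- cfa(bifactor, data = das2_data, std.lv = TRUE, orthogonal = TRUE)
# das2_data: hypothetical subtest scores; orthogonal = TRUE keeps g and the
# group factors uncorrelated, as is standard for bifactor models (with only
# two indicators per group factor, additional constraints may be needed).
```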

18.
Learning Environments Research - The What Is Happening In this Class? (WIHIC) questionnaire was validated cross-nationally using a sample of 3980 high school students from Australia, the UK and...

19.
This study examined the effect of sample size ratio and model misfit on the Type I error rates and power of the Difficulty Parameter Differences procedure using Winsteps. A unidimensional 30-item test with responses from 130,000 examinees was simulated, and four independent variables were manipulated: sample size ratio (20/100/250/500/1000); model fit/misfit (1PL vs. 3PL with c = .15); impact (no difference/mean differences/variance differences/mean and variance differences); and percentage of items with uniform and nonuniform DIF (0%/10%/20%). In general, the results indicate the importance of ensuring model fit to achieve greater control of Type I error and adequate statistical power. The manipulated variables produced inflated Type I error rates, which were well controlled when a measure of DIF magnitude was applied. Sample size ratio also had an effect on the power of the procedure. The paper discusses the practical implications of these results.

20.
Researchers in the behavioral and social sciences often have expectations that can be expressed in the form of inequality constraints among the parameters of a structural equation model, resulting in an informative hypothesis. The questions they would like answered are “Is the hypothesis correct?” or “Is the hypothesis incorrect?” We demonstrate a Bayesian approach to comparing an inequality-constrained hypothesis with its complement in an SEM framework. The method is introduced and its utility is illustrated by means of an example. Furthermore, the influence of the specification of the prior distribution is examined. Finally, it is shown how the proposed approach can be implemented using Mplus.
