Similar Documents
 20 similar documents found
1.
Maximum likelihood is commonly used to estimate model parameters in the analysis of two-level structural equation models. Constraints on model parameters may be encountered in some situations, such as equal factor loadings for different factors. Linear constraints are the most common and are relatively easy to handle in maximum likelihood analysis; nonlinear constraints can arise in more complicated applications. In this paper we develop an EM-type algorithm for estimating model parameters under both linear and nonlinear constraints. The empirical performance of the algorithm is demonstrated by a Monte Carlo study. Application of the algorithm to linear constraints is illustrated by setting up a two-level mean and covariance structure model for a real two-level data set and running an EQS program.
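The abstract does not reproduce the algorithm itself, but the general idea of maximum likelihood under mixed constraints can be sketched numerically. Below is a minimal illustration (not the authors' EM-type algorithm): a hypothetical two-factor model whose ML discrepancy is minimized subject to one linear and one nonlinear equality constraint via SciPy's SLSQP solver; the covariance matrix, starting values, and fixed uniquenesses are all invented for the sketch.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sample covariance for 4 indicators (invented for illustration).
S = np.array([[1.0, 0.5, 0.3, 0.2],
              [0.5, 1.0, 0.2, 0.3],
              [0.3, 0.2, 1.0, 0.5],
              [0.2, 0.3, 0.5, 1.0]])

def implied_cov(theta):
    """Two-factor model: items 1-2 load on factor 1, items 3-4 on factor 2.
    Uniquenesses are fixed at 0.5 purely to keep the sketch short."""
    l1, l2, l3, l4, phi = theta
    Lam = np.array([[l1, 0.0], [l2, 0.0], [0.0, l3], [0.0, l4]])
    Phi = np.array([[1.0, phi], [phi, 1.0]])
    return Lam @ Phi @ Lam.T + 0.5 * np.eye(4)

def f_ml(theta):
    """ML discrepancy up to an additive constant: log|Sigma| + tr(S Sigma^-1)."""
    Sigma = implied_cov(theta)
    _, logdet = np.linalg.slogdet(Sigma)
    return logdet + np.trace(S @ np.linalg.inv(Sigma))

constraints = [
    {"type": "eq", "fun": lambda t: t[0] - t[2]},         # linear: l1 = l3
    {"type": "eq", "fun": lambda t: t[1] * t[3] - 0.25},  # nonlinear: l2 * l4 = 0.25
]
res = minimize(f_ml, x0=[0.7, 0.7, 0.7, 0.7, 0.3],
               constraints=constraints, method="SLSQP")
print(res.x)
```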

2.
As part of developing a comprehensive strategy for structural equation model building and assessment, a large‐scale Monte Carlo study of 7,200 covariance matrices sampled from 36 population models was conducted. This study compared maximum likelihood with the much simpler centroid method for the confirmatory factor analysis of multiple‐indicator measurement models. Surprisingly, the contribution of maximum likelihood to model analysis is limited to formal evaluation of the model. No statistically discernible differences were obtained for the bias, standard errors, or mean squared error (MSE) of the estimated factor correlations, and empirically obtained maximum likelihood standard errors for the pattern coefficients were only slightly smaller than their centroid counterparts. Further supporting the recommendations of Anderson and Gerbing (1982), the considerably faster centroid method may have a useful role in the analysis of these models, particularly for the analysis of large models with 50 or more input variables. These results encourage the further development of a comprehensive research paradigm that exploits the relative strengths of both centroid and maximum likelihood as complementary estimation procedures along an integrated exploratory‐confirmatory continuum of model specification, revision, and formal evaluation.
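For readers unfamiliar with the centroid method, a minimal one-factor sketch follows. In one common variant, loadings are obtained non-iteratively from the column sums of the correlation matrix (unities on the diagonal); the matrix below is invented, and this is neither the study's code nor necessarily its exact variant.

```python
import numpy as np

# Invented correlation matrix for four indicators of a single factor.
R = np.array([[1.00, 0.48, 0.42, 0.36],
              [0.48, 1.00, 0.56, 0.48],
              [0.42, 0.56, 1.00, 0.42],
              [0.36, 0.48, 0.42, 1.00]])

col_sums = R.sum(axis=0)                       # column sums, unities included
loadings = col_sums / np.sqrt(col_sums.sum())  # first centroid factor loadings
print(loadings)
```

The absence of iteration is what makes the method so much faster than maximum likelihood for large models.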

3.
In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by applying maximum likelihood factor analysis subject to scaling and rotation constraints. As an illustrative example, an oblique 5-factor model will be fitted to the variance-covariance matrix of the 30 personality facets measured by the Revised NEO Personality Inventory, and confidence intervals will be estimated for all factor loadings and factor correlations, as well as for the associated reliability and validity coefficients.
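As background, a likelihood-based (profile likelihood) confidence interval for a single parameter collects the values not rejected by the likelihood ratio test. In generic form (a standard construction, not notation taken from the article):

```latex
CI_{1-\alpha}(\theta_k) \;=\;
\bigl\{\,\theta_k : 2\,[\,\ell(\hat\theta) - \ell_p(\theta_k)\,] \le \chi^2_{1,\,1-\alpha}\,\bigr\},
```

where \(\ell_p\) is the profile log-likelihood, maximized over all remaining parameters subject to the scaling and rotation constraints.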

4.
The main purpose of this article is to develop a Bayesian approach for a general multigroup nonlinear factor analysis model. Joint Bayesian estimates of the factor scores and the structural parameters subject to some constraints across different groups are obtained simultaneously. A hybrid algorithm that combines the Metropolis-Hastings algorithm and the Gibbs sampler is implemented to produce these joint Bayesian estimates. It is shown that this algorithm is computationally efficient. The Bayes factor approach is introduced for comparing models under various degrees of invariance across groups. The Schwarz criterion (BIC), a simple and useful approximation of the Bayes factor, is calculated on the basis of simulated observations from the Gibbs sampler. The efficiency and flexibility of the proposed Bayesian procedure are illustrated by simulation results and a real-life example.
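For reference, the Schwarz approximation mentioned here has a standard closed form (the generic definition, not a detail specific to this model): with d_k free parameters in model M_k and sample size n,

```latex
\mathrm{BIC}_k = -2\log p(y \mid \hat\theta_k, M_k) + d_k \log n,
\qquad
2\log B_{10} \;\approx\; \mathrm{BIC}_0 - \mathrm{BIC}_1 .
```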

5.
When modeling latent variables at multiple levels, it is important to consider the meaning of the latent variables at the different levels. If a higher-level common factor represents the aggregated version of a lower-level factor, the associated factor loadings will be equal across levels. However, many researchers do not consider cross-level invariance constraints in their research. Not applying these constraints when they are in fact appropriate leads to overparameterized models, with associated convergence and estimation problems. This simulation study used a two-level mediation model on common factors to show that when factor loadings are equal in the population, not applying cross-level invariance constraints leads to more estimation problems and smaller true positive rates. Some directions for future research on cross-level invariance in multilevel structural equation modeling (MLSEM) are discussed.
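In a generic two-level factor model (a standard formulation, not the article's exact mediation model), cross-level invariance is the equality of the within- and between-level loading matrices:

```latex
y_{ij} = \nu + \Lambda_B\,\eta^{B}_{j} + \Lambda_W\,\eta^{W}_{ij} + \varepsilon_{ij},
\qquad \text{cross-level invariance: } \Lambda_B = \Lambda_W .
```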

6.
The objective was to offer guidelines for applied researchers on how to weigh the consequences of errors made in evaluating measurement invariance (MI) on the assessment of factor mean differences. We conducted a simulation study to supplement the MI literature by focusing on choosing among analysis models with different numbers of between-group constraints imposed on the loadings and intercepts of indicators. Data were generated with varying proportions, patterns, and magnitudes of differences in loadings and intercepts, as well as varying factor mean differences and sample sizes. Based on the findings, we concluded that researchers who conduct MI analyses should recognize that relaxing, as well as imposing, constraints can affect the Type I error rate, power, and bias of estimates of factor mean differences. In addition, fit indexes can be misleading in making decisions about constraints on loadings and intercepts. We offer suggestions for making MI decisions under uncertainty when assessing factor mean differences.

7.
Marginal likelihood-based methods are commonly used in factor analysis for ordinal data. To obtain the maximum marginal likelihood estimator, the full information maximum likelihood (FIML) estimator uses the (adaptive) Gauss–Hermite quadrature or stochastic approximation. However, the computational burden increases rapidly as the number of factors increases, which renders FIML impractical for large factor models. Another limitation of the marginal likelihood-based approach is that it does not allow inference on the factors. In this study, we propose a hierarchical likelihood approach using the Laplace approximation that remains computationally efficient in large models. We also propose confidence intervals for the factors that maintain their level of confidence as the sample size increases. The simulation study shows that the proposed approach generally works well.
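To make the computational point concrete, here is a minimal Python sketch of marginal likelihood evaluation by Gauss–Hermite quadrature for a hypothetical one-factor model with binary indicators; the item parameters and data are invented, and the article's hierarchical-likelihood/Laplace method is not shown.

```python
import numpy as np
from scipy.stats import norm

# Gauss-Hermite nodes/weights (physicists' convention), rescaled for N(0, 1).
nodes, weights = np.polynomial.hermite.hermgauss(21)
z = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)

def marginal_loglik(y, loadings, thresholds):
    """log of the integral of prod_j P(y_j | eta) under eta ~ N(0, 1),
    with P(y_j = 1 | eta) = Phi(loading_j * eta - threshold_j)."""
    p = norm.cdf(np.outer(z, loadings) - thresholds)     # shape (Q, J)
    lik = np.prod(np.where(y == 1, p, 1.0 - p), axis=1)  # shape (Q,)
    return np.log(w @ lik)

y = np.array([1, 0, 1])
print(marginal_loglik(y, np.array([1.2, 0.8, 1.0]), np.array([0.0, 0.5, -0.3])))
```

With k factors the quadrature grid grows to 21^k points, which is exactly the exponential cost that motivates the Laplace-based hierarchical likelihood.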

8.
The article gives alternatives to Campbell and O'Connell's (1967) definitions of additive and multiplicative method effects in multitrait-multimethod (MTMM) data. The alternative definitions can be formulated by means of constraints in the parameters of the correlated uniqueness (CU) model (Marsh, 1989), which is first reviewed. The definitions have 2 major advantages. First, they allow the researcher to test for additive and multiplicative method effects in a straightforward manner by simply testing the appropriate constraints. An illustration of these tests is given. Second, the alternative definitions are closely linked to other currently used models. The article shows that CU models with additive constraints are equivalent to constrained versions of the confirmatory factor analysis model for MTMM data (Althauser, Heberlein, & Scott, 1971; Werts & Linn, 1970). In addition, Coenders and Saris (1998) showed that, for designs with 3 methods, a CU model with multiplicative constraints is equivalent to the direct product model (Browne, 1984).

9.
In exploratory factor analysis, when the number of extracted factors exceeds the true number of factors, the likelihood ratio test statistic no longer follows the chi-square distribution, owing to rank deficiency and nonidentifiability of the model parameters. As a result, decisions regarding the number of factors may be incorrect. Several researchers have pointed out this phenomenon, but it is not well known among applied researchers who use exploratory factor analysis. We demonstrate that overfactoring is one cause of the well-known fact that the likelihood ratio test tends to find too many factors.
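As background on the reference distribution involved: for an EFA model with p observed variables and m factors, the likelihood ratio statistic is conventionally referred to a chi-square distribution with

```latex
df(p, m) \;=\; \tfrac{1}{2}\,\bigl[(p - m)^2 - (p + m)\bigr]
```

degrees of freedom, a reference that is justified only when the m-factor model holds and its parameters are identified; with m above the true number of factors, the rank deficiency described above invalidates it.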

10.
Testlet effects can be taken into account by incorporating specific dimensions, in addition to the general dimension, into the item response theory model. Three such multidimensional models are described: the bi-factor model, the testlet model, and a second-order model. It is shown that the second-order model is formally equivalent to the testlet model. In turn, both models are constrained bi-factor models. Therefore, the efficient full maximum likelihood estimation method that has been established for the bi-factor model can be modified to estimate the parameters of the two other models. An application to a testlet-based international English assessment indicated that the bi-factor model was the preferred model for this particular data set.
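In one common parameterization (a sketch of the standard equivalence result, not necessarily the article's notation), the bi-factor model gives item j in testlet d(j) separate loadings on the general and specific dimensions, and the testlet model constrains the two to be equal within each testlet, with the testlet-effect variance left free:

```latex
\text{bi-factor: } \operatorname{logit} P(y_{ij}=1) = a_j\,\theta_i + b_j\,\gamma_{i,d(j)} - c_j,
\qquad
\text{testlet constraint: } b_j = a_j .
```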

11.
This paper examines the educational implications of two curriculum initiatives in China that have produced curricular materials promoting education for sustainable development (ESD) in minority-populated ethnic autonomous areas. The two curriculum projects present distinctive discourses, conceptions, models, frameworks, and scopes of ESD in the country. Nonetheless, the actual implementation of the curriculum initiatives, especially the enactment of the curriculum materials produced, may be thwarted by structural and systemic educational constraints, an anthropocentric approach to sustainable development, poor teacher support and training, the omission of affective learning components from curricular contents, and loopholes and weaknesses in the development of the curriculum materials themselves.

12.
The recovery of weak factors has been extensively studied in the context of exploratory factor analysis. This article presents the results of a Monte Carlo simulation study of recovery of weak factor loadings in confirmatory factor analysis under conditions of estimation method (maximum likelihood vs. unweighted least squares), sample size, loading size, factor correlation, and model specification (correct vs. incorrect). The effects of these variables on goodness of fit and convergence are also examined. Results show that recovery of weak factor loadings, goodness of fit, and convergence are improved when factors are correlated and models are correctly specified. Additionally, unweighted least squares produces more convergent solutions and successfully recovers the weak factor loadings in some instances where maximum likelihood fails. The implications of these findings are discussed and compared to previous research.
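For reference, the two discrepancy functions compared in the study have standard definitions (for sample covariance S, model-implied covariance Σ(θ), and p observed variables):

```latex
F_{ML} = \ln\lvert\Sigma(\theta)\rvert
       + \operatorname{tr}\!\bigl[S\,\Sigma(\theta)^{-1}\bigr]
       - \ln\lvert S\rvert - p,
\qquad
F_{ULS} = \tfrac{1}{2}\operatorname{tr}\!\bigl[(S - \Sigma(\theta))^{2}\bigr].
```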

13.
To raise university research productivity, the government of Kazakhstan introduced a requirement for university faculty members to publish in journals with a nonzero impact factor in order to qualify for promotion. A survey of faculty members at six universities was conducted to explore their response to the policy. The results suggest that a promotion-linked publication requirement may lift faculty research productivity if it is accompanied by support structures and if universities have control over the promotion process.

14.
In this article we present factor models to test for ability differentiation. Ability differentiation predicts that the size of IQ subtest correlations decreases as a function of the general intelligence factor. In the Schmid–Leiman decomposition of the second-order factor model, we model differentiation by introducing heteroscedastic residuals, nonlinear factor loadings, and a skew-normal second-order factor distribution. Using marginal maximum likelihood, we fit this model to Spanish standardization data of the Wechsler Adult Intelligence Scale (3rd ed.) to test the differentiation hypothesis.
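As background, the standard Schmid–Leiman decomposition referred to here maps a second-order model onto a bi-factor-like structure (the generic linear form, before the article's heteroscedastic and nonlinear extensions): with first-order loadings Λ₁, second-order loadings Γ, and diagonal first-order residual variances Ψ₂,

```latex
\Sigma = \Lambda_1(\Gamma\Gamma' + \Psi_2)\Lambda_1' + \Theta,
\qquad
\Lambda_{\mathrm{general}} = \Lambda_1\Gamma,
\quad
\Lambda_{\mathrm{group}} = \Lambda_1\Psi_2^{1/2}.
```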

15.
The designation “low income” is often assigned to students who are Federal Pell Grant eligible; however, family incomes for these recipients range from $0 to as high as $60,000 (Baum & Payea, 2011). Over 93% of all zero expected family contribution (EFC) students had a family income of $30,000 or less, and they accounted for 67.4% of all Federal Pell Grant disbursements in 2009–2010. Given the wide range of family incomes, state need-based grant eligibility requirements, and current fiscal constraints, the purpose of this study was to compare predictors of first- to second-year persistence for zero and nonzero EFC students at two- and four-year institutions, as suggested by Davidson (2013). Logistic regressions showed differences between students at two- and four-year institutions as well as between zero and nonzero EFC students. Using a zero EFC as a criterion for low income could prove a valuable alternative to Federal Pell Grant eligibility for state and institutional policy makers when allocating need-based financial aid. In doing so, consideration must be given to this student population’s particular needs and the factors that best foster student success.

16.
It is proved that the preimage α0 of the minimum value attained by the nonzero elements of the ring of Gaussian integers Z[i] under the mapping φ is an associate of 1 (α0 ~ 1). From this, two methods are derived for finding the greatest common divisor of elements of Z[i]: the Euclidean algorithm (repeated division) and the generalized elementary transformation of matrices.
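The Euclidean (repeated-division) method mentioned in the abstract is easy to sketch in Python, representing Gaussian integers as complex numbers: rounding the exact quotient to the nearest Gaussian integer yields a remainder of strictly smaller norm, so the iteration terminates. This is an illustrative sketch, not the paper's presentation, and the matrix-based method is not shown.

```python
def gaussian_gcd(a: complex, b: complex) -> complex:
    """Euclidean algorithm in Z[i]: repeat (a, b) -> (b, a - q*b) with q the
    nearest Gaussian integer to a/b, so the remainder's norm shrinks."""
    while b != 0:
        q = a / b
        q = complex(round(q.real), round(q.imag))  # nearest Gaussian integer
        a, b = b, a - q * b
    return a

# Example: gcd of 4+7i and 1+3i (unique only up to the units 1, -1, i, -i).
print(gaussian_gcd(complex(4, 7), complex(1, 3)))
```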

17.
Recently, analysis of structural equation models with polytomous and continuous variables has received considerable attention. However, contributions to the selection of good models are limited. The main objective of this article is to investigate maximum likelihood estimation of the unknown parameters in a general LISREL-type model with mixed polytomous and continuous data and to propose a model selection procedure for obtaining good models for the underlying substantive theory. The maximum likelihood estimate is obtained by a Monte Carlo Expectation Maximization (MCEM) algorithm, in which the E step is evaluated via the Gibbs sampler and the M step is completed via the method of conditional maximization. The convergence of the MCEM algorithm is monitored by bridge sampling. A model selection procedure based on the Bayes factor and Occam's window search strategy is proposed. The effectiveness of the procedure in accounting for model uncertainty and in picking good models is discussed. The proposed methodology is illustrated with a real example.
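As background, the Monte Carlo E step mentioned here replaces the intractable conditional expectation with an average over Gibbs draws of the latent quantities z (the generic MCEM form, not the article's model-specific derivation):

```latex
Q(\theta \mid \theta^{(t)}) \;\approx\;
\frac{1}{M}\sum_{m=1}^{M}\log p\bigl(y, z^{(m)} \mid \theta\bigr),
\qquad
z^{(m)} \sim p\bigl(z \mid y, \theta^{(t)}\bigr).
```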

18.
Several papers have been devoted to the use of structural equation modeling (SEM) software in fitting autoregressive moving average (ARMA) models to a univariate series observed in a single subject. Van Buuren (1997) went beyond specification and examined the nature of the estimates obtained with SEM software. Although the results were mixed, he concluded that these parameter estimates resemble true maximum likelihood (ML) estimates. Molenaar (1999) argued that the negative findings for pure moving average models might be due to the absence of invertibility constraints in Van Buuren's simulation experiment. The aim of this article is to (a) reexamine the nature of SEM estimates of ARMA parameters; (b) replicate Van Buuren's simulation experiment in light of Molenaar's comment; and (c) examine the behavior of the log-likelihood ratio test. We conclude that estimates of ARMA parameters obtained with SEM software are identical to those obtained by univariate stochastic model preliminary estimation, and are not true ML estimates. Still, these estimates, which may be viewed as moment estimates, have the same asymptotic properties as ML estimates for pure autoregressive (AR) processes. For pure moving average (MA) processes, they are biased and less efficient. The estimates from SEM software for mixed processes seem to have the same asymptotic properties as ML estimates. Furthermore, the log-likelihood ratio is reliable for pure AR processes, but this is not the case for pure MA processes. For mixed processes, the behavior of the log-likelihood ratio varies, and in this case these statistics should be handled with caution.
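Molenaar's invertibility point is easy to operationalize: an MA(q) process is invertible when every root of its moving average polynomial lies outside the unit circle. A minimal check (illustrative only, not part of the original simulation code):

```python
import numpy as np

def ma_invertible(theta):
    """True if all roots of 1 + theta_1 z + ... + theta_q z^q lie
    outside the unit circle (the MA invertibility condition)."""
    coeffs = [1.0] + list(theta)        # coefficients in ascending powers of z
    roots = np.roots(coeffs[::-1])      # np.roots expects descending powers
    return bool(np.all(np.abs(roots) > 1.0))

print(ma_invertible([0.5]))  # True: single root at z = -2
print(ma_invertible([2.0]))  # False: single root at z = -0.5
```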

19.
Some researchers suggest that having a learning disability (LD) may act as a risk factor, increasing the likelihood that adolescents experience more negative outcomes in many areas of their lives. However, researchers have yet to examine in one study how having LD with and without attention deficit hyperactivity disorder (ADHD) is related to a comprehensive set of psychosocial variables across a diverse set of domains (e.g., peer, family, school, intrapersonal). The purpose of the present study was to address that limitation by comparing the perceptions of adolescents with LD (N = 230), with comorbid LD/ADHD (N = 92), and without LD or ADHD (N = 322) regarding their academic orientation, temperament, well‐being, loneliness, parental relationships, victimization, activities, and friendships. Results are consistent with the hypothesis that LD may indeed act as a risk factor increasing the likelihood of more negative outcomes. The results also indicate that for some psychosocial variables this likelihood may be increased in adolescents with comorbid LD/ADHD. The findings have important implications for stakeholders concerned about supporting adolescents with LD with and without comorbid ADHD.

20.
When the assumption of multivariate normality is violated or when a discrepancy function other than (normal theory) maximum likelihood is used in structural equation models, the null distribution of the test statistic may not be χ² distributed. Most existing methods to approximate this distribution only match up to 2 moments. In this article, we propose 2 additional approximation methods: a scaled F distribution that matches 3 moments simultaneously and a direct Monte Carlo–based weighted sum of i.i.d. χ² variates. We also conduct comprehensive simulation studies to compare the new and existing methods for both maximum likelihood and nonmaximum likelihood discrepancy functions and to separately evaluate the effect of sampling uncertainty in the estimated weights of the weighted sum on the performance of the approximation methods.
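The Monte Carlo approach described here is straightforward to sketch: simulate the weighted sum of i.i.d. χ²₁ variates directly and read quantiles off the empirical distribution. The weights below are invented for illustration; in an application they would be the (estimated) weights implied by the model and discrepancy function.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_chi2_quantile(weights, prob, reps=200_000):
    """Monte Carlo quantile of T = sum_i w_i * X_i with X_i ~ iid chi2(1)."""
    draws = rng.chisquare(df=1, size=(reps, len(weights))) @ np.asarray(weights)
    return np.quantile(draws, prob)

print(weighted_chi2_quantile([1.8, 1.2, 0.7, 0.3], 0.95))  # e.g., a 5% critical value
```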
