Similar documents
20 similar documents found (search time: 46 ms)
1.
We present a multigroup multilevel confirmatory factor analysis (CFA) model and a procedure for testing multilevel factorial invariance in n-level structural equation modeling (nSEM). Multigroup multilevel CFA introduces a complexity when the group membership at the lower level intersects the clustered structure, because the observations in different groups but in the same cluster are not independent of one another. nSEM provides a framework in which the multigroup multilevel data structure is represented with the dependency between groups at the lower level properly taken into account. The procedure for testing multilevel factorial invariance is illustrated with an empirical example using the R package xxm2.

2.
The purposes of this study were to (a) test the hypothesized factor structure of the Student-Teacher Relationship Scale (STRS; Pianta, 2001) for 308 African American (AA) and European American (EA) children using confirmatory factor analysis (CFA) and (b) examine the measurement invariance of the factor structure across AA and EA children. CFA of the hypothesized three-factor model with correlated latent factors did not yield an optimal model fit. Parameter estimates obtained from CFA identified items with low factor loadings and R2 values, suggesting that content revision is required for those items on the STRS. Deletion of two items from the scale yielded a good model fit, suggesting that the remaining 26 items reliably and validly measure the constructs for the whole sample. Tests for configural invariance, however, revealed that the underlying constructs may differ for AA and EA groups. Subsequent exploratory factor analyses (EFAs) for AA and EA children were carried out to investigate the comparability of the measurement model of the STRS across the groups. The results of EFAs provided evidence suggesting differential factor models of the STRS across AA and EA groups. This study provides implications for construct validity research and substantive research using the STRS given that the STRS is extensively used in intervention and research in early childhood education.

3.
This article presents a new method for multiple-group confirmatory factor analysis (CFA), referred to as the alignment method. The alignment method can be used to estimate group-specific factor means and variances without requiring exact measurement invariance. A strength of the method is the ability to conveniently estimate models for many groups. The method is a valuable alternative to the currently used multiple-group CFA methods for studying measurement invariance that require multiple manual model adjustments guided by modification indexes. Multiple-group CFA is not practical with many groups due to poor model fit of the scalar model and too many large modification indexes. In contrast, the alignment method is based on the configural model and essentially automates and greatly simplifies measurement invariance analysis. The method also provides a detailed account of parameter invariance for every model parameter in every group.

4.
Chinese University of Hong Kong students (N = 844) selected a “good” and a “poor” teacher, and rated each using a Chinese translation of the Students' Evaluations of Educational Quality (SEEQ) instrument. Multigroup confirmatory factor analysis (CFA) models, based on a 3 × 2 design, were constructed to test the invariance of the SEEQ factor structure across 3 discipline groups (a between‐group comparison of ratings by students in arts, social sciences, and education; in business administration; and in engineering, medicine, and science) and across ratings of good and poor teachers (via within‐subjects comparison). The selected model imposed between‐group invariance constraints on factor loadings, factor correlations, and factor variances across the 3 discipline groups and within‐subjects invariance constraints on factor loadings across ratings of good and poor teachers. The results support the use of SEEQ in this Chinese setting, demonstrating the generality of North American research findings and the usefulness of CFA in this research area.

5.
With the increasing use of international survey data especially in cross-cultural and multinational studies, establishing measurement invariance (MI) across a large number of groups in a study is essential. Testing MI over many groups is methodologically challenging, however. We identified 5 methods for MI testing across many groups (multiple group confirmatory factor analysis, multilevel confirmatory factor analysis, multilevel factor mixture modeling, Bayesian approximate MI testing, and alignment optimization) and explicated the similarities and differences of these approaches in terms of their conceptual models and statistical procedures. A Monte Carlo study was conducted to investigate the efficacy of the 5 methods in detecting measurement noninvariance across many groups using various fit criteria. Generally, the 5 methods showed reasonable performance in identifying the level of invariance if an appropriate fit criterion was used (e.g., Bayesian information criterion with multilevel factor mixture modeling). Finally, general guidelines in selecting an appropriate method are provided.
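The fit criterion highlighted above, the Bayesian information criterion, trades off model fit against complexity. A minimal sketch of how it would be used to choose among mixture models with different numbers of latent classes — the log-likelihoods and parameter counts below are invented for illustration, not taken from the study:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: lower values indicate a better
    trade-off between fit and model complexity."""
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

# Hypothetical log-likelihoods for mixture models with 1-3 latent classes;
# each extra class adds parameters, so BIC penalizes the richer models.
models = {1: (-5210.4, 24), 2: (-5150.2, 37), 3: (-5148.9, 50)}
scores = {k: bic(ll, p, n_obs=1000) for k, (ll, p) in models.items()}
best = min(scores, key=scores.get)  # class count with the lowest BIC
```

Here the two-class model wins: the third class barely improves the log-likelihood but pays the full complexity penalty.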

6.
Social‐emotional health influences youth developmental trajectories and there is growing interest among educators to measure the social‐emotional health of the students they serve. This study replicated the psychometric characteristics of the Social Emotional Health Survey (SEHS) with a diverse sample of high school students (Grades 9–12; N = 14,171), and determined whether the factor structure was invariant across sociocultural and gender groups. A confirmatory factor analysis (CFA) tested the fit of the previously known factor structure, and then structural equation modeling was used to test invariance across sociocultural and gender groups through multigroup CFAs. Results supported the SEHS measurement model, with full invariance of the SEHS higher‐order structure for all five sociocultural groups. There were no moderate effect size or higher group differences on the overall index for sociocultural or gender groups, which lends support to the eventual development of common norms and universal interpretation guidelines.

7.
The objective was to offer guidelines for applied researchers on how to weigh the consequences of errors made in evaluating measurement invariance (MI) on the assessment of factor mean differences. We conducted a simulation study to supplement the MI literature by focusing on choosing among analysis models with different numbers of between-group constraints imposed on loadings and intercepts of indicators. Data were generated with varying proportions, patterns, and magnitudes of differences in loadings and intercepts as well as factor mean differences and sample size. Based on the findings, we concluded that researchers who conduct MI analyses should recognize that relaxing as well as imposing constraints can affect Type I error rate, power, and bias of estimates in factor mean differences. In addition, fit indexes can be misleading in making decisions about constraints of loadings and intercepts. We offer suggestions for making MI decisions under uncertainty when assessing factor mean differences.

8.
Confirmatory factor analytic procedures are routinely implemented to provide evidence of measurement invariance. Current lines of research focus on the accuracy of common analytic steps used in confirmatory factor analysis for invariance testing. However, the few studies that have examined this procedure have done so with perfectly or near perfectly fitting models. In the present study, the authors examined procedures for detecting simulated test structure differences across groups under model misspecification conditions. In particular, they manipulated sample size, number of factors, number of indicators per factor, percentage of noninvariance, and model misspecification. Model misspecification was introduced at the factor loading level. They evaluated three criteria for detection of invariance: the chi-square difference test, the difference in comparative fit index values, and the combination of the two. Results indicate that misspecification was associated with elevated Type I error rates in measurement invariance testing.
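The first two detection criteria named above can be sketched as a small decision rule. The fit statistics below are invented for illustration, and the closed-form survival function is restricted to even degrees of freedom to keep the sketch dependency-free (a real analysis would use a statistics library):

```python
import math

def chi2_sf_even_df(x, df):
    """Chi-square survival function, closed form for even df
    (sufficient for this illustration)."""
    assert df % 2 == 0 and df > 0
    term, total = 1.0, 0.0
    for i in range(df // 2):
        if i > 0:
            term *= (x / 2.0) / i
        total += term
    return math.exp(-x / 2.0) * total

def invariance_decision(chi2_c, df_c, chi2_f, df_f, cfi_c, cfi_f,
                        alpha=0.05, cfi_cutoff=0.01):
    """Combine the chi-square difference test with the ΔCFI <= .01
    rule of thumb; 'c' = constrained model, 'f' = freely estimated."""
    p = chi2_sf_even_df(chi2_c - chi2_f, df_c - df_f)
    d_cfi = cfi_f - cfi_c
    return {"p": p, "delta_cfi": d_cfi,
            "invariant": p > alpha and d_cfi <= cfi_cutoff}

# Hypothetical results: the constrained model fits only slightly worse
# than the freely estimated one, so invariance is retained.
decision = invariance_decision(54.0, 30, 50.0, 28, 0.958, 0.962)
```

The combined criterion flags noninvariance only when both the Δχ² test is significant and ΔCFI exceeds the cutoff, which is the combination rule the study evaluates.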

9.
We estimated the invariance of educational achievement (EA) and learning attitudes (LA) measures across nations. A multi-group confirmatory factor analysis was used to estimate the invariance of educational achievement and learning attitudes across 55 nations (Programme for International Student Assessment [PISA] 2006 data, N = 354,203). The constructs had the same meaning (factor loadings) but different scales (intercepts). Our conclusion is that comparisons of the relationships between educational achievement and learning attitudes across countries need to take into consideration two sources of variability: individual differences of students and group differences of educational systems. The lack of scalar invariance in EA and LA measures means that the relationships between EA and LA may have a different meaning at the level of nations and at the student level within countries. In other words, as PISA measures are not invariant in the scalar sense, comparisons across countries with nationally aggregated scores are not justified.
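The caution above about nation-level versus student-level relationships can be made concrete with a toy example (the numbers are invented, not PISA data): within each hypothetical country, attitudes and achievement correlate positively, yet the correlation between country aggregates is negative — an ecological reversal.

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# (attitude, achievement) pairs for two hypothetical countries:
country_a = [(1, 4), (2, 5), (3, 6)]   # low attitudes, high achievement
country_b = [(7, 1), (8, 2), (9, 3)]   # high attitudes, low achievement

within_a = pearson(*zip(*country_a))   # positive within country A
within_b = pearson(*zip(*country_b))   # positive within country B
means = [(2, 5), (8, 2)]               # country-level aggregate means
between = pearson(*zip(*means))        # negative between countries
```

The within-country and between-country relationships have opposite signs, which is exactly why aggregated-score comparisons can mislead when scalar invariance fails.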

10.
As a prerequisite for meaningful comparison of latent variables across multiple populations, measurement invariance or specifically factorial invariance has often been evaluated in social science research. Along with the changes in the model chi-square values, the comparative fit index (CFI; Bentler, 1990) is a widely used fit index for evaluating different stages of factorial invariance, including metric invariance (equal factor loadings), scalar invariance (equal intercepts), and strict invariance (equal unique factor variances). Although previous literature generally showed that the CFI performed well for single-group structural equation modeling analyses, its applicability to multiple group analyses such as factorial invariance studies has not been examined. In this study we argue that the commonly used default baseline model for the CFI might not be suitable for factorial invariance studies because (a) it is not nested within the scalar invariance model, and thus (b) the resulting CFI values might not be sensitive to the group differences in the measurement model. We therefore proposed a modified version of the CFI with an alternative (and less restrictive) baseline model that allows observed variables to be correlated. Monte Carlo simulation studies were conducted to evaluate the utility of this modified CFI across various conditions including varying degrees of noninvariance and different factorial invariance models. Results showed that the modified CFI outperformed both the conventional CFI and the ΔCFI (Cheung & Rensvold, 2002) in terms of sensitivity to small and medium noninvariance.
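For reference, the CFI contrasts the estimated noncentrality of the target model with that of a baseline model; the modification proposed above keeps this formula and only swaps in a less restrictive baseline. A sketch with invented fit statistics (not the article's simulation results):

```python
def cfi(chi2_model, df_model, chi2_base, df_base):
    """Comparative fit index (Bentler, 1990). The baseline model
    determines chi2_base/df_base; the article argues for a baseline
    that lets observed variables correlate."""
    d_model = max(chi2_model - df_model, 0.0)
    d_base = max(chi2_base - df_base, d_model)
    return 1.0 - d_model / d_base if d_base > 0 else 1.0

# Invented fit statistics for one invariance model against two baselines;
# a weaker (better-fitting) baseline yields a smaller, more sensitive CFI.
cfi_null = cfi(120.0, 48, 980.0, 66)   # conventional independence baseline
cfi_weak = cfi(120.0, 48, 400.0, 60)   # hypothetical correlated baseline
```

Because the denominator shrinks with a better-fitting baseline, the same target-model misfit produces a lower CFI, which is the mechanism behind the modified index's greater sensitivity to noninvariance.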

11.
In latent growth modeling, measurement invariance across groups has received little attention. Considering that a group difference is commonly of interest in social science, a Monte Carlo study explored the performance of multigroup second-order latent growth modeling (MSLGM) in testing measurement invariance. True positive and false positive rates in detecting noninvariance across groups, in addition to bias estimates of major MSLGM parameters, were investigated. Simulation results support the suitability of MSLGM for measurement invariance testing when either the forward or the iterative likelihood ratio procedure is applied.

12.
In testing the factorial invariance of a measure across groups, the groups are often of different sizes. Large imbalances in group size might affect the results of factorial invariance studies and lead to incorrect conclusions of invariance because the fit function in multiple-group factor analysis includes a weighting by group sample size. The implication is that violations of invariance might not be detected if the sample sizes of the 2 groups are severely unbalanced. In this study, we examined the effects of group size differences on results of factorial invariance tests, proposed a subsampling method to address the unbalanced sample size issue in factorial invariance studies, and evaluated the proposed approach in various simulation conditions. Our findings confirm that violations of invariance might be masked in the case of severely unbalanced group size conditions and support the use of the proposed subsampling method to obtain accurate results for invariance studies.
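The subsampling idea can be sketched as repeatedly drawing equal-sized random subsamples from the larger group; the invariance test on each balanced split (fitted in an SEM package, not shown here) would then be aggregated. The group sizes below are illustrative, not the study's conditions:

```python
import random

def subsample_splits(large_group, target_n, n_draws, seed=0):
    """Draw repeated random subsamples of the larger group so that
    every invariance test compares equally sized groups."""
    rng = random.Random(seed)
    return [rng.sample(large_group, target_n) for _ in range(n_draws)]

# e.g., group A has 2,000 cases but group B only 250: run the invariance
# test on 100 balanced (250 vs. 250) splits and aggregate the decisions.
splits = subsample_splits(list(range(2000)), target_n=250, n_draws=100)
```

Aggregating over many draws avoids the cherry-picking risk of a single arbitrary subsample while removing the sample-size weighting imbalance from each individual fit.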

13.
Treating Likert rating scale data as continuous outcomes in confirmatory factor analysis violates the assumption of multivariate normality. Given certain requirements pertaining to the number of categories, skewness, size of the factor loadings, and so forth, it seems nevertheless possible to recover true parameter values if the data stem from a single homogeneous population. It is shown that, in a multigroup context, an analysis of Likert data under the assumption of multivariate normality may distort the factor structure differently across groups. In that case, investigations of measurement invariance (MI), which are necessary for meaningful group comparisons, are problematic. Analyzing subscale scores computed from Likert items does not seem to solve the problem.

14.
The computerization of reading assessments has presented a set of new challenges to test designers. From the vantage point of measurement invariance, test designers must investigate whether the traditionally recognized causes for violating invariance are still a concern in computer-mediated assessments. In addition, it is necessary to understand the technology-related causes of measurement noninvariance among test-taking populations. In this study, we used the available data (n = 800) from previous administrations of the Pearson Test of English Academic (PTE Academic) reading section, an international test of English comprising 10 test items, to investigate measurement invariance across gender and the Information and Communication Technology Development index (IDI). We conducted a multi-group confirmatory factor analysis (CFA) to assess invariance at four levels: configural, metric, scalar, and structural. Overall, we were able to confirm structural invariance for the PTE Academic, which is a necessary condition for conducting fair assessments. Implications for computer-based education and the assessment of reading are discussed.

15.
The article employs exploratory structural equation modeling (ESEM) to evaluate constructs of economic, cultural, and social capital in international large-scale assessment (LSA) data from the Progress in International Reading Literacy Study (PIRLS) 2006 and the Programme for International Student Assessment (PISA) 2009. ESEM integrates the theory-generating approach of exploratory factor analysis (EFA) and theory-testing approach of confirmatory factor analysis (CFA). It relaxes the zero-loading restriction in CFA, allowing items to load on different factors simultaneously, and it provides measurement invariance tests across countries not available in EFA. A main criticism of international LSA studies is the extended use of indicators poorly grounded in theory, like socioeconomic status, that prevent the study of mechanisms underlying associations with student outcomes. This article contributes to addressing this criticism by providing statistical criteria to evaluate the fit of well-defined sociological constructs with the empirical data.

16.
We illustrate testing measurement invariance in a second-order factor model using a quality of life dataset (n = 924). Measurement invariance was tested across 2 groups at a set of hierarchically structured levels: (a) configural invariance, (b) first-order factor loadings, (c) second-order factor loadings, (d) intercepts of measured variables, (e) intercepts of first-order factors, (f) disturbances of first-order factors, and (g) residual variances of observed variables. Given that measurement invariance at the factor loading and intercept levels was achieved, the latent factor mean difference on the higher order factor between the groups was also estimated. The analyses were performed on the mean and covariance structures within the framework of the confirmatory factor analysis using the LISREL 8.51 program. Implications of second-order factor models and measurement invariance in psychological research were discussed.

17.
Multi-group confirmatory factor analysis (MGCFA) allows researchers to determine whether a research inventory elicits similar response patterns across samples. If statistical equivalence in responding is found, then scale score comparisons become possible and samples can be said to be from the same population. This paper illustrates the use of MGCFA by examining survey results relating to practising teachers' conceptions of feedback in two very different jurisdictions (Louisiana, USA, n = 308; New Zealand, n = 518), highlighting challenges which can occur when conducting this kind of cross-cultural research. As the two contexts had very different policies and practices around educational assessment, it was considered possible that a common research inventory may elicit non-equivalent responding, leading to non-invariance. Independent models for each group and a joint model for all participants were tested for invariance using MGCFA and all were inadmissible for one of the two groups. Inspection of joint model differences in item loadings, scale reliabilities, and scale inter-correlations established the extent of non-invariance. This paper discusses the implications of non-invariance within this particular study and identifies difficulties in using an inventory in cross-cultural settings. It also provides suggestions about how to increase the likelihood that a common factor structure can be recovered.

18.
Multigroup confirmatory factor analysis (MCFA) is a popular method for the examination of measurement invariance and specifically, factor invariance. Recent research has begun to focus on using MCFA to detect invariance for test items. MCFA requires certain parameters (e.g., factor loadings) to be constrained for model identification, which are assumed to be invariant across groups, and act as referent variables. When this invariance assumption is violated, location of the parameters that actually differ across groups becomes difficult. The factor ratio test and the stepwise partitioning procedure in combination have been suggested as methods to locate invariant referents, and appear to perform favorably with real data examples. However, the procedures have not been evaluated through simulations where the extent and magnitude of a lack of invariance are known. This simulation study examines these methods in terms of accuracy (i.e., true positive and false positive rates) of identifying invariant referent variables.

19.
This study is a methodological-substantive synergy, demonstrating the power and flexibility of exploratory structural equation modeling (ESEM) methods that integrate confirmatory and exploratory factor analyses (CFA and EFA), as applied to substantively important questions based on multidimensional students' evaluations of university teaching (SETs). For these data, there is a well established ESEM structure but typical CFA models do not fit the data and substantially inflate correlations among the nine SET factors (median rs = .34 for ESEM, .72 for CFA) in a way that undermines discriminant validity and usefulness as diagnostic feedback. A 13-model taxonomy of ESEM measurement invariance is proposed, showing complete invariance (factor loadings, factor correlations, item uniquenesses, item intercepts, latent means) over multiple groups based on the SETs collected in the first and second halves of a 13-year period. Fully latent ESEM growth models that unconfounded measurement error from communality showed almost no linear or quadratic effects over this 13-year period. Latent multiple indicators multiple causes models showed that relations with background variables (workload/difficulty, class size, prior subject interest, expected grades) were small in size and varied systematically for different ESEM SET factors, supporting their discriminant validity and a construct validity interpretation of the relations. A new approach to higher order ESEM was demonstrated, but was not fully appropriate for these data. Based on ESEM methodology, substantively important questions were addressed that could not be appropriately addressed with a traditional CFA approach.

20.
Given multivariate data, many research questions pertain to the covariance structure: whether and how the variables (e.g., personality measures) covary. Exploratory factor analysis (EFA) is often used to look for latent variables that might explain the covariances among variables; for example, the Big Five personality structure. In the case of multilevel data, one might wonder whether or not the same covariance (factor) structure holds for each so-called data block (containing data of 1 higher level unit). For instance, is the Big Five personality structure found in each country or do cross-cultural differences exist? The well-known multigroup EFA framework falls short in answering such questions, especially for numerous groups or blocks. We introduce mixture simultaneous factor analysis (MSFA), performing a mixture model clustering of data blocks, based on their factor structure. A simulation study shows excellent results with respect to parameter recovery and an empirical example is included to illustrate the value of MSFA.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号