Similar Documents (20 results)
1.
In the presence of omitted variables or similar validity threats, regression estimates are biased. Unbiased estimates (the causal effects) can be obtained in large samples by instead fitting the Instrumental Variables Regression (IVR) model. The IVR model can be estimated using structural equation modeling (SEM) software or econometric estimators such as two-stage least squares (2SLS). We describe 2SLS using SEM terminology and report a simulation study in which we generated data according to a regression model in the presence of omitted variables and fitted (a) a regression model using ordinary least squares, (b) an IVR model using maximum likelihood (ML) as implemented in SEM software, and (c) an IVR model using 2SLS. Coverage rates of the causal effect using regression methods are always unacceptably low (often 0). When using the IVR model, accurate coverage is obtained across all conditions when N = 500. Even when the IVR model is misspecified, better coverage than regression is generally obtained. Differences between 2SLS and ML are small and favor 2SLS in small samples (N ≤ 100).
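
As a hedged illustration of the contrast described above (not the authors' code), the following numpy sketch generates data with an omitted confounder and an instrument and compares the naive OLS slope with a hand-rolled 2SLS estimate; all variable names, seeds, and effect sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500                     # sample size matching the study's well-behaved condition
beta = 0.5                  # true causal effect of x on y (hypothetical value)

# Data-generating model with an omitted confounder u and an instrument z.
u = rng.normal(size=n)              # omitted variable: affects both x and y
z = rng.normal(size=n)              # instrument: related to x, not directly to y
x = 0.8 * z + 0.6 * u + rng.normal(size=n)
y = beta * x + 0.7 * u + rng.normal(size=n)

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# (a) Naive regression: biased because u is omitted.
b_ols = ols(x, y)[1]

# (c) 2SLS: stage 1 projects x on z, stage 2 regresses y on the fitted x.
stage1 = ols(z, x)
x_hat = stage1 @ np.column_stack([np.ones(n), z]).T
b_2sls = ols(x_hat, y)[1]

print(f"OLS estimate:  {b_ols:.3f} (biased by the omitted variable)")
print(f"2SLS estimate: {b_2sls:.3f} (close to the true value {beta})")
```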

2.
Ordinal variables are common in many empirical investigations in the social and behavioral sciences. Researchers often apply the maximum likelihood method to fit structural equation models to ordinal data. This assumes that the observed measures have normal distributions, which is not the case when the variables are ordinal. A better approach is to use polychoric correlations and fit the models using methods such as unweighted least squares (ULS), maximum likelihood (ML), weighted least squares (WLS), or diagonally weighted least squares (DWLS). In this simulation evaluation we study the behavior of these methods in combination with polychoric correlations when the models are misspecified. We also study the effect of model size and number of categories on the parameter estimates, their standard errors, and the common chi-square measures of fit when the models are both correct and misspecified. When used routinely, these methods give consistent parameter estimates but ULS, ML, and DWLS give incorrect standard errors. Correct standard errors can be obtained for these methods by robustification using an estimate of the asymptotic covariance matrix W of the polychoric correlations. When used in this way the methods are here called RULS, RML, and RDWLS.
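
For readers who want the mechanics behind the "robustification" mentioned above, the usual sandwich form can be sketched as follows; this is the standard textbook expression, not necessarily the exact formulas used in the article.

```latex
% Discrepancy function fitted to the polychoric correlations r, with weight matrix V:
% V = I for ULS, V = diag(W)^{-1} for DWLS, and V = W^{-1} for full WLS.
F(\theta) = \bigl(r - \rho(\theta)\bigr)^{\top} V \bigl(r - \rho(\theta)\bigr)

% Robust ("sandwich") covariance matrix of the estimates, with
% \Delta = \partial\rho(\theta)/\partial\theta^{\top} and W the estimated asymptotic
% covariance matrix of the polychoric correlations:
\operatorname{acov}(\hat\theta) \approx \frac{1}{N}\,
  (\Delta^{\top} V \Delta)^{-1}\,
  \Delta^{\top} V\, W\, V \Delta\,
  (\Delta^{\top} V \Delta)^{-1}
```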

3.
The present study examines bias in parameter estimates and standard errors in cross-classified random effects modeling (CCREM) caused by omitting the random interaction effects of the cross-classified factors, focusing on the effects of within-cell sample size and the proportion of small cells. A Monte Carlo simulation study was conducted to compare the correctly specified and the misspecified CCREM. Whereas bias in the fixed effects was negligible, substantial bias was found in the random effects of the misspecified model, depending on the within-cell sample size and the proportion of small cells; the correctly specified model showed no bias. The present study suggests including the random interaction effects when conducting CCREM to avoid overestimating the variance components and to obtain accurate estimates. The findings illuminate the conditions of cross-classification under which this matters and provide a meaningful reference for applied researchers using CCREM.
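
To make the misspecification concrete, a two-way cross-classified random effects model with and without the random interaction can be written as follows; this is a generic formulation, not necessarily the exact parameterization used in the study.

```latex
% Correctly specified CCREM: observation i cross-classified by factors j and k
Y_{i(jk)} = \gamma_{0} + u_{j} + v_{k} + w_{jk} + e_{i(jk)}, \qquad
u_{j} \sim N(0, \tau_{u}^{2}),\; v_{k} \sim N(0, \tau_{v}^{2}),\;
w_{jk} \sim N(0, \tau_{uv}^{2}),\; e_{i(jk)} \sim N(0, \sigma^{2})

% Misspecified model: the random interaction w_{jk} is omitted, so its variance
% must be absorbed by the remaining random effects and the residual.
Y_{i(jk)} = \gamma_{0} + u_{j} + v_{k} + e_{i(jk)}
```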

4.
The primary purpose of this study was to investigate the appropriateness and implications of incorporating a testlet definition into estimation procedures for the conditional standard error of measurement (SEM) for tests composed of testlets. Another purpose was to investigate the bias in estimates of the conditional SEM when using item-based methods instead of testlet-based methods. Several item-based and testlet-based estimation methods were proposed and compared. In general, item-based estimation methods underestimated the conditional SEM for tests composed of testlets, and the magnitude of this negative bias increased as the degree of conditional dependence among items within testlets increased. However, an item-based method using a generalizability theory model provided good estimates of the conditional SEM under mild violation of the measurement model assumptions. Under moderate or somewhat severe violation, testlet-based methods with item response models provided good estimates.
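
As one concrete point of reference for the item-based versus testlet-based distinction, Lord's classical binomial-error estimator of the conditional SEM is sketched below; the article compares several such methods, so this is orientation rather than the study's own estimator.

```latex
% Lord's binomial-error estimate of the conditional SEM for a raw score X
% on a test of n dichotomously scored items:
\widehat{\mathrm{SEM}}(X) = \sqrt{\frac{X\,(n - X)}{n - 1}}
% The formula treats items as (conditionally) independent; positive dependence
% among items within a testlet is ignored, which is the source of the
% negative bias reported above for item-based methods.
```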

5.
This simulation study demonstrates how the choice of estimation method affects indexes of fit and parameter bias for different sample sizes when nested models vary in terms of specification error and the data demonstrate different levels of kurtosis. Using a fully crossed design, data were generated for 11 conditions of peakedness, 3 conditions of misspecification, and 5 different sample sizes. Three estimation methods (maximum likelihood [ML], generalized least squares [GLS], and weighted least squares [WLS]) were compared in terms of overall fit and the discrepancy between estimated parameter values and the true parameter values used to generate the data. Consistent with earlier findings, the results show that ML, compared to GLS under conditions of misspecification, provides more realistic indexes of overall fit and less biased parameter values for paths that overlap with the true model. However, despite recommendations in the literature that WLS should be used when data are not normally distributed, we find that under no condition was WLS preferable to the 2 other estimation procedures in terms of parameter bias and fit. In fact, only for large sample sizes (N = 1,000 and 2,000) and mildly misspecified models did WLS provide estimates and fit indexes close to those obtained for ML and GLS. For wrongly specified models, WLS tended to give unreliable estimates and over-optimistic values of fit.
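
For reference, the three discrepancy functions compared in this study are commonly written as follows (standard textbook forms; S is the sample covariance matrix, Σ(θ) the model-implied matrix with p observed variables, s and σ(θ) their vectorized nonduplicated elements, and W a weight matrix).

```latex
F_{\mathrm{ML}}(\theta)  = \log\lvert\Sigma(\theta)\rvert
   + \operatorname{tr}\!\bigl(S\,\Sigma(\theta)^{-1}\bigr)
   - \log\lvert S\rvert - p

F_{\mathrm{GLS}}(\theta) = \tfrac{1}{2}\,
   \operatorname{tr}\!\Bigl[\bigl((S - \Sigma(\theta))\,S^{-1}\bigr)^{2}\Bigr]

F_{\mathrm{WLS}}(\theta) = \bigl(s - \sigma(\theta)\bigr)^{\top} W^{-1}
   \bigl(s - \sigma(\theta)\bigr)
```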

6.
Mediation is usually assessed by a regression-based or structural equation modeling (SEM) approach that we will refer to as the classical approach. This approach relies on the assumption that there are no confounders that influence both the mediator, M, and the outcome, Y. This assumption holds if individuals are randomly assigned to levels of M, but random assignment is generally not possible. We propose the use of propensity scores to help remove the selection bias that may result when individuals are not randomly assigned to levels of M. The propensity score is the probability that an individual receives a particular level of M. Results from a simulation study are presented to demonstrate this approach, referred to as the Classical + Propensity Model (C+PM), confirming that the population parameters are recovered and that selection bias is successfully dealt with. Comparisons are made to the classical approach, which does not include propensity scores. Propensity scores were estimated by a logistic regression model. If all confounders are included in the propensity model, then the C+PM is unbiased. If some, but not all, of the confounders are included in the propensity model, then the C+PM estimates are biased, although not as severely as those from the classical approach (i.e., no propensity model included).
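
A minimal sketch of the general idea, using scikit-learn's logistic regression for the propensity model; the variable names, coefficients, and the particular way the score is used (as an extra covariate in the outcome model) are illustrative assumptions and may differ from the article's C+PM specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(1)
n = 2000

# Hypothetical data: c1 and c2 confound the mediator m and the outcome y.
c1, c2 = rng.normal(size=n), rng.normal(size=n)
x = rng.binomial(1, 0.5, size=n)                        # randomized treatment
p_m = 1 / (1 + np.exp(-(0.6 * x + 0.8 * c1 + 0.5 * c2)))
m = rng.binomial(1, p_m)                                # binary mediator, not randomized
y = 0.3 * x + 0.5 * m + 0.7 * c1 + 0.4 * c2 + rng.normal(size=n)

# Step 1: propensity model -- probability of receiving m = 1 given x and confounders.
Z = np.column_stack([x, c1, c2])
ps = LogisticRegression().fit(Z, m).predict_proba(Z)[:, 1]

# Step 2: mediation (outcome) model with the propensity score as an additional
# covariate -- one common, approximate way of adjusting for selection into M.
X_out = np.column_stack([x, m, ps])
coefs = LinearRegression().fit(X_out, y).coef_
print(dict(zip(["x", "m", "propensity"], np.round(coefs, 3))))
```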

7.
This Monte Carlo simulation study investigated different strategies for forming product indicators for the unconstrained approach to analyzing latent interaction models when the exogenous factors are measured by unequal numbers of indicators, under both normal and nonnormal conditions. Product indicators were created by (a) multiplying parcels of the larger scale by items of the smaller scale, and (b) matching items according to reliability to create several product indicators, ignoring those items with lower reliability. Two scaling approaches were compared where parceling was not involved: (a) fixing the factor variances, and (b) fixing 1 loading to 1 for each factor. The unconstrained approach was compared with the latent moderated structural equations (LMS) approach. Results showed that under normal conditions, the LMS approach was preferred because the biases of its interaction estimates and associated standard errors were generally smaller, and its power was higher than that of the unconstrained approach. Under nonnormal conditions, however, the unconstrained approach was generally more robust than the LMS approach. When adopting the unconstrained approach, it is recommended to form product indicators by matching items with higher reliability (rather than parceling) and then to specify the model by fixing 1 loading of each factor to unity.
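
A small numpy sketch of the matching strategy recommended above: pair the most reliable items of the two scales and drop the surplus items of the larger scale. The item reliabilities, sample data, and mean-centering step are hypothetical choices for illustration, not the study's exact procedure.

```python
import numpy as np

def matched_product_indicators(X, Z, rel_x, rel_z):
    """Form product indicators for a latent interaction by matching items of
    two scales in descending order of reliability; surplus items of the
    larger scale are ignored, as in the matching strategy described above."""
    order_x = np.argsort(rel_x)[::-1]       # most reliable items of scale X first
    order_z = np.argsort(rel_z)[::-1]       # most reliable items of scale Z first
    k = min(len(rel_x), len(rel_z))         # number of usable item pairs
    Xc = X - X.mean(axis=0)                 # mean-center before multiplying
    Zc = Z - Z.mean(axis=0)                 # (a common, optional preprocessing step)
    return np.column_stack(
        [Xc[:, order_x[i]] * Zc[:, order_z[i]] for i in range(k)]
    )

# Hypothetical example: 6 indicators of one factor, 3 indicators of the other.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))
Z = rng.normal(size=(300, 3))
rel_x = np.array([0.70, 0.55, 0.80, 0.60, 0.75, 0.50])   # assumed item reliabilities
rel_z = np.array([0.65, 0.85, 0.72])
prod_ind = matched_product_indicators(X, Z, rel_x, rel_z)
print(prod_ind.shape)    # (300, 3): three product indicators for the interaction
```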

8.
Appropriate model specification is fundamental to unbiased parameter estimates and accurate model interpretations in structural equation modeling. Thus, detecting potential model misspecification has drawn the attention of many researchers. This simulation study evaluates the efficacy of the Bayesian approach (the posterior predictive checking, or PPC, procedure) under multilevel bifactor model misspecification (i.e., ignoring a specific factor at the within level). The impact of model misspecification on structural coefficients was also examined in terms of bias and power. Results showed that the PPC procedure performed better at detecting multilevel bifactor model misspecification when the misspecification became more severe and the sample size was larger. Structural coefficients were increasingly negatively biased at the within level as model misspecification became more severe. Model misspecification at the within level affected the between-level structural coefficient estimates more when data dependency was lower and the number of clusters was smaller. Implications for researchers are discussed.
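
To convey the logic of the PPC procedure evaluated here, the sketch below computes a posterior predictive p-value for a toy normal model with numpy; in the study the draws would come from an MCMC fit of the multilevel bifactor model, so everything below (the model, the pseudo-posterior, the discrepancy) is a simplified stand-in.

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(loc=0.4, scale=1.2, size=200)          # observed data (toy example)

# Toy "posterior": draws of (mu, sigma) centered on their sample estimates.
# In practice these would be MCMC draws from the fitted model.
n_draws = 1000
mu_draws = rng.normal(y.mean(), y.std(ddof=1) / np.sqrt(len(y)), n_draws)
sigma_draws = np.abs(rng.normal(y.std(ddof=1), 0.05, n_draws))

def discrepancy(data, mu, sigma):
    """Chi-square-like discrepancy between a data set and parameter values."""
    return np.sum(((data - mu) / sigma) ** 2)

# Posterior predictive check: compare realized vs. replicated discrepancies.
exceed = 0
for mu, sigma in zip(mu_draws, sigma_draws):
    y_rep = rng.normal(mu, sigma, size=len(y))        # replicated data set
    if discrepancy(y_rep, mu, sigma) >= discrepancy(y, mu, sigma):
        exceed += 1

ppp = exceed / n_draws                                # posterior predictive p-value
print(f"Posterior predictive p-value: {ppp:.2f} (values near 0 or 1 flag misfit)")
```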

9.
In structural equation modeling (SEM), researchers need to evaluate whether item response data, which are often multidimensional, can be modeled with a unidimensional measurement model without seriously biasing the parameter estimates. This issue is commonly addressed through testing the fit of a unidimensional model specification, a strategy previously determined to be problematic. As an alternative to the use of fit indexes, we considered the utility of a statistical tool that was expressly designed to assess the degree of departure from unidimensionality in a data set. Specifically, we evaluated the ability of the DETECT “essential unidimensionality” index to predict the bias in parameter estimates that results from misspecifying a unidimensional model when the data are multidimensional. We generated multidimensional data from bifactor structures that varied in general factor strength, number of group factors, and items per group factor; a unidimensional measurement model was then fit and parameter bias recorded. Although DETECT index values were generally predictive of parameter bias, in many cases, the degree of bias was small even though DETECT indicated significant multidimensionality. Thus we do not recommend the stand-alone use of DETECT benchmark values to either accept or reject a unidimensional measurement model. However, when DETECT was used in combination with additional indexes of general factor strength and group factor structure, parameter bias was highly predictable. Recommendations for judging the severity of potential model misspecifications in practice are provided.

10.
When a covariance structure model is misspecified, parameter estimates will be affected. It is important to know which estimates are systematically affected and which are not. The approach of analyzing the path is both intuitive and informative for such a purpose. Different from path analysis, analyzing the path uses path tracing and elementary numerical analysis to identify affected parameters when a 1-way or 2-way arrow in a path diagram is omitted. It not only characterizes how a misspecification affects model parameters but also facilitates a good understanding of the relation among different parts of the model. This article introduces and studies this technique and, for commonly used models, provides detailed analysis to identify the directions of change for various model parameters. Examples based on real data show that the technique of analyzing the path can reliably predict the direction of change in parameter estimates even when the true model is unknown. Conditions that interfere with the results are also discussed and advice is provided for its proper application.
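
A minimal worked example of the path-tracing logic for an omitted 1-way arrow (a generic regression case, not one of the article's empirical examples):

```latex
% True model: Y = bX + cZ + e with Cov(X,Z) = \phi and Var(X) = \sigma^{2}_{X}.
% Tracing the paths from X to Y gives the implied covariance
\operatorname{Cov}(X, Y) = b\,\sigma^{2}_{X} + c\,\phi .
% If the arrow from Z to Y is omitted, the fitted slope must absorb the second
% tracing, so the estimate shifts in a direction predictable from the signs of c and \phi:
\hat{b} \;\longrightarrow\; b + c\,\frac{\phi}{\sigma^{2}_{X}} .
```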

11.
In this article, 2 major problems with using the three-wave quasi-simplex model to obtain reliability estimates are illustrated. The 1st problem is that the sampling variance of the reliability estimates can be very large, especially if the stability through time is low. The 2nd problem is that, for the reliability parameter to be identified, the model assumes a particular change process, namely a Markov process. We show that minor violations of this assumption can lead to a large bias in the reliability estimates. The problems are evaluated using both real and Monte Carlo data. A model with repeated measurements in 1 of the waves is also discussed.
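
For orientation, the identifying result for the middle wave can be sketched as follows; this is the standard quasi-simplex algebra under the lag-1 Markov assumption discussed above, stated here in generic notation rather than the article's.

```latex
% Three-wave quasi-simplex: y_t = \eta_t + \varepsilon_t, \quad
% \eta_t = \beta_t\,\eta_{t-1} + \zeta_t \; (a lag-1 Markov process).
% The wave-2 reliability is identified from the three observed correlations:
\mathrm{rel}(y_2) = \frac{\rho_{12}\,\rho_{23}}{\rho_{13}} .
% Because \rho_{13} appears in the denominator, low stability over time
% (a small \rho_{13}) makes the estimate highly variable, consistent with
% the 1st problem described above.
```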

12.
Measurement bias can be detected using structural equation modeling (SEM) by testing measurement invariance with multigroup factor analysis (Jöreskog, 1971; Meredith, 1993; Sörbom, 1974), MIMIC modeling (Muthén, 1989), or restricted factor analysis (Oort, 1992, 1998). In educational research, data often have a nested, multilevel structure, for example when data are collected from children in classrooms. Multilevel structures might complicate measurement bias research. In 2-level data, the potentially “biasing trait” or “violator” can be a Level 1 variable (e.g., pupil sex) or a Level 2 variable (e.g., teacher sex). One can also test measurement invariance with respect to the clustering variable (e.g., classroom). This article provides a stepwise approach for the detection of measurement bias with respect to these 3 types of violators. This approach works from Level 1 upward, so the final model accounts for all bias and substantive findings at both levels. The 5 proposed steps are illustrated with data on teacher–child relationships.

13.
Previous assessments of the reliability of test scores for testlet-composed tests have indicated that item-based estimation methods overestimate reliability. This study was designed to address issues related to the extent to which item-based estimation methods overestimate the reliability of test scores composed of testlets and to compare several estimation methods for different measurement models using simulation techniques. Three types of estimation approach were conceptualized for generalizability theory (GT) and item response theory (IRT): item score approach (ISA), testlet score approach (TSA), and item-nested-testlet approach (INTA). The magnitudes of overestimation when applying item-based methods ranged from 0.02 to 0.06 and were related to the degrees of dependence among within-testlet items. Reliability estimates from TSA were lower than those from INTA due to the loss of information with IRT approaches. However, this could not be applied in GT. Specified methods in IRT produced higher reliability estimates than those in GT using the same approach. Relatively smaller magnitudes of error in reliability estimates were observed for ISA and for methods in IRT. Thus, it seems reasonable to use TSA as well as INTA for both GT and IRT. However, if there is a relatively large dependence among within-testlet items, INTA should be considered for IRT due to nonnegligible loss of information.

14.
Meta-analytic structural equation modeling (MA-SEM) is increasingly being used to assess model fit for variables' interrelations synthesized across studies. MA-SEM researchers have analyzed synthesized correlation matrices using structural equation modeling (SEM) estimation that is designed for covariance matrices. This can produce incorrect model-fit chi-square statistics, standard error estimates (Cudeck, 1989), or both for parameters that are not scale free or that describe a scale-noninvariant model, unless corrected SEM estimation is used to analyze the correlations. This study introduced univariate and multivariate approximate methods for synthesizing covariance matrices for use in MA-SEM. A simulation study assessed the approximate methods by estimating parameters in a scale-noninvariant model using synthesized covariances versus synthesized correlations with and without the appropriate corrections. Standard error bias was noted only for uncorrected analyses of pooled correlations. Chi-square model-fit statistics were overly conservative except when covariance matrices were analyzed. Benefits and limitations of these approximate methods are presented and discussed.

15.
A Monte Carlo simulation study was conducted to investigate the effects on structural equation modeling (SEM) fit indexes of sample size, estimation method, and model specification. Based on a balanced experimental design, samples were generated from a prespecified population covariance matrix and fitted to structural equation models with different degrees of model misspecification. Ten SEM fit indexes were studied. Two primary conclusions were suggested: (a) some fit indexes appear to be noncomparable in terms of the information they provide about model fit for misspecified models and (b) estimation method strongly influenced almost all the fit indexes examined, especially for misspecified models. These 2 issues do not seem to have drawn enough attention from SEM practitioners. Future research should study not only different models vis-à-vis model complexity, but a wider range of model specification conditions, including correctly specified models and models specified incorrectly to varying degrees.

16.
It is well known that measurement error in observable variables induces bias in estimates in standard regression analysis and that structural equation models are a typical solution to this problem. Often, multiple indicator equations are subsumed as part of the structural equation model, allowing for consistent estimation of the relevant regression parameters. In many instances, however, embedding the measurement model into the structural equation model is not possible because the model would not be identified. To correct for measurement error, one then has no recourse other than to provide the exact values of the variances of the measurement error terms of the model, although in practice such variances cannot be ascertained exactly, but only estimated from an independent study. The usual approach so far has been to treat the estimated values of the error variances as if they were known, exact population values in the subsequent structural equation modeling (SEM) analysis. In this article we show that fixing measurement error variance estimates as if they were true values can make the reported standard errors of the structural parameters of the model smaller than they should be. Inferences about the parameters of interest will be incorrect if the estimated nature of the variances is not taken into account. For general SEM, we derive an explicit expression that provides the terms to be added to the standard errors reported by standard SEM software that treats the estimated variances as exact population values. Interestingly, we find a differential impact of the corrections depending on which parameter of the model is estimated. The theoretical results are illustrated with simulations and with empirical data on a typical SEM model.
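
The bias referred to in the opening sentence is the classical attenuation result, and fixing the error variance is the standard errors-in-variables remedy; the sketch below states that textbook case, not the article's general SEM derivation.

```latex
% Single predictor measured with error: x = \xi + \delta, \; y = \beta\xi + \varepsilon,
% with \operatorname{Var}(\xi) = \sigma^{2}_{\xi} and \operatorname{Var}(\delta) = \sigma^{2}_{\delta}.
\operatorname{plim}\,\hat\beta_{\mathrm{OLS}}
   = \beta\,\frac{\sigma^{2}_{\xi}}{\sigma^{2}_{\xi} + \sigma^{2}_{\delta}}
% Fixing \sigma^{2}_{\delta} at an external estimate disattenuates the slope, but
% treating that estimate as exact ignores its own sampling variability, which is
% what makes the reported standard errors too small, as shown in the article.
```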

17.
In applied research, such as research on motivation theories, many variables are typically theoretically implied predictors of an outcome, and several interactions are assumed (e.g., Watt, 2004). However, the estimation problems that might arise when several interaction and/or quadratic effects are analyzed simultaneously have not been investigated, because simulation studies of interaction effects in the structural equation modeling framework have mainly focused on small models containing a single interaction effect. In this article, we show that traditional approaches can provide estimates with low accuracy when complex models are estimated. We introduce an adaptive Bayesian lasso approach with spike-and-slab priors that overcomes this problem. Using a complex model in a simulation study, we show that the parameter estimates of the proposed approach are more accurate in situations with high multicollinearity or low reliability compared with a standard Bayesian lasso approach and typical frequentist approaches (i.e., the unconstrained product indicator approach and the latent moderated structures approach).
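
To fix ideas, a generic spike-and-slab formulation of the kind referenced here is sketched below; the article's adaptive Bayesian lasso prior may be parameterized differently, so this is orientation only.

```latex
% Each interaction or quadratic coefficient \beta_j gets a two-component prior:
\beta_j \mid \omega_j \;\sim\; (1 - \omega_j)\,\delta_0 \;+\; \omega_j\,\mathcal{N}(0, \tau_j^{2}),
\qquad \omega_j \sim \mathrm{Bernoulli}(\pi).
% Coefficients with little support are shrunk to zero (the spike), while
% well-supported effects are estimated under the diffuse slab.
```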

18.
The sample invariance of item discrimination statistics is evaluated in this case study using real data. The hypothesized superiority of the item response model (IRM) is tested against structural equation modeling (SEM) for responses to the Center for Epidemiologic Studies Depression (CES-D) scale. Responses from 10 random samples of 500 people were drawn from a base sample of 6,621 participants across gender, age, and different health groups. Hierarchical tests of multiple-group structural equation models indicated that statistically significant differences exist in item regressions across contrast groups. Although the IRM item discrimination estimates were most stable in all conditions of this case study, additional research on the precision of individual scores and possible item bias is required to support the validity of either model for scoring the CES-D. The SEM approach to examining between-group differences holds promise for any field where heterogeneous populations are assessed and important consequences arise from score interpretations.
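
The two parameterizations being compared are closely related: under a normal-ogive model with standardized latent response variates, the IRT discrimination and the standardized factor loading convert into one another as below. This identity is stated for orientation and is not part of the study's analyses.

```latex
a_j = \frac{\lambda_j}{\sqrt{1 - \lambda_j^{2}}},
\qquad
\lambda_j = \frac{a_j}{\sqrt{1 + a_j^{2}}}
```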

19.
Person reliability parameters (PRPs) model temporary changes in individuals’ attribute-level perceptions when responding to self-report items (higher levels of PRPs represent less fluctuation). PRPs could be useful in measuring careless responding and traitedness. However, it is unclear how well current procedures for estimating PRPs can recover the parameters. This study assesses these procedures in terms of mean error (ME), average absolute difference (AAD), and reliability using simulated data with known values. Several prior distributions for PRPs were compared across a number of conditions. Overall, our results revealed little difference between using the χ or lognormal distribution as the prior for estimated PRPs. Both distributions produced estimates with reasonable levels of ME; however, the AAD of the estimates was high. AAD did improve slightly as the number of items increased, suggesting that increasing the number of items would ameliorate this problem. Similarly, a larger number of items was necessary to produce reasonable levels of reliability. Based on our results, several conclusions are drawn and implications for future research are discussed.
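
The two recovery criteria used here are simple functions of the estimated and true PRP values across the R simulated persons or replications; the standard definitions are written out below, hedged in case the authors used a variant.

```latex
\mathrm{ME}  = \frac{1}{R}\sum_{r=1}^{R}\bigl(\hat\eta_r - \eta_r\bigr),
\qquad
\mathrm{AAD} = \frac{1}{R}\sum_{r=1}^{R}\bigl\lvert \hat\eta_r - \eta_r \bigr\rvert
```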

20.
The 3-step approach has recently been advocated over the simultaneous 1-step approach for modeling a distal outcome predicted by a latent categorical variable. We generalize the 3-step approach to situations where the distal outcome is predicted by multiple and possibly associated latent categorical variables. Although the simultaneous 1-step approach has been criticized, simulation studies have found that the performance of the two approaches is similar in most situations (Bakk & Vermunt, 2016). This is consistent with our findings for a 2-LV extension when all model assumptions are satisfied. Results also indicate that under various degrees of violation of the normality and conditional independence assumptions for the distal outcome and indicators, both approaches are subject to bias, but the 3-step approach is less sensitive. The differences in estimates using the two approaches are illustrated in an analysis of the effects of various childhood socioeconomic circumstances on body mass index at age 50.
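
For orientation, the bias-adjusted 3-step logic that is generalized here can be sketched as follows; this is a generic description of the single-latent-variable case, and the details of the 2-LV extension are in the article.

```latex
% Step 1: estimate the latent class measurement model P(Y_i \mid X = t) using
%         only the class indicators Y_i.
% Step 2: assign each case to a class, e.g. modally,
%         W_i = \arg\max_t \, P(X = t \mid Y_i),
%         and record the classification error rates
D_{st} = P(W = s \mid X = t).
% Step 3: model the distal outcome with W as a single indicator of X whose
%         misclassification probabilities are fixed at the D_{st} from Step 2,
%         so the outcome-class relation is corrected for classification error.
```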
