Similar Literature — 20 results
1.
This simulation study assesses the statistical performance of two mathematically equivalent parameterizations for multitrait–multimethod data with interchangeable raters—a multilevel confirmatory factor analysis (CFA) and a classical CFA parameterization. The sample sizes of targets and raters, the factorial structure of the trait factors, and rater missingness are varied. The classical CFA approach yields a high proportion of improper solutions under conditions with small sample sizes and indicator-specific trait factors. In general, trait factor related parameters are more sensitive to bias than other types of parameters. For multilevel CFAs, there is a drastic bias in fit statistics under conditions with unidimensional trait factors on the between level, where root mean square error of approximation (RMSEA) and χ2 distributions reveal a downward bias, whereas the between standardized root mean square residual is biased upwards. In contrast, RMSEA and χ2 for classical CFA models are severely upwardly biased in conditions with a high number of raters and a small number of targets.

2.
The recovery of weak factors has been extensively studied in the context of exploratory factor analysis. This article presents the results of a Monte Carlo simulation study of recovery of weak factor loadings in confirmatory factor analysis under conditions of estimation method (maximum likelihood vs. unweighted least squares), sample size, loading size, factor correlation, and model specification (correct vs. incorrect). The effects of these variables on goodness of fit and convergence are also examined. Results show that recovery of weak factor loadings, goodness of fit, and convergence are improved when factors are correlated and models are correctly specified. Additionally, unweighted least squares produces more convergent solutions and successfully recovers the weak factor loadings in some instances where maximum likelihood fails. The implications of these findings are discussed and compared to previous research.

3.
Testing factorial invariance has recently gained more attention in different social science disciplines. Nevertheless, when examining factorial invariance, it is generally assumed that the observations are independent of each other, which might not always be true. In this study, we examined the impact of testing factorial invariance in multilevel data, especially when the dependency issue is not taken into account. We considered a set of design factors, including number of clusters, cluster size, and intraclass correlation (ICC) at different levels. The simulation results showed that the test of factorial invariance became more liberal (i.e., had an inflated Type I error rate) in terms of rejecting the null hypothesis of invariance held between groups when the dependency was not considered in the analysis. Additionally, the magnitude of the inflation in the Type I error rate was a function of both ICC and cluster size. Implications of the findings and limitations are discussed.

4.
Latent profile analysis (LPA) has become a popular statistical method for modeling unobserved population heterogeneity in cross-sectionally sampled data, but very few empirical studies have examined the question of how well enumeration indexes accurately identify the correct number of latent profiles present. This Monte Carlo simulation study examined the ability of several classes of enumeration indexes to correctly identify the number of latent population profiles present under 3 different research design conditions: sample size, the number of observed variables used for LPA, and the separation distance among the latent profiles measured in Mahalanobis D units. Results showed that, for the homogeneous population (i.e., the population has k = 1 latent profile) conditions, many of the enumeration indexes used in LPA were able to correctly identify the single latent profile if variances and covariances were freely estimated. However, for a heterogeneous population (i.e., the population has k = 3 distinct latent profiles), the correct identification rate for the enumeration indexes in the k = 3 latent profile conditions was typically very low. These results are compared with the previous cross-sectional mixture modeling studies, and the limitations of this study, as well as future cross-sectional mixture modeling and enumeration index research possibilities, are discussed.

5.
This research was designed to investigate how much more suitable moving average (MA) and autoregressive-moving average (ARMA) models are for longitudinal panel data in which measurement errors correlate than AR, quasi-simplex, and 1-factor models. The conclusions include (a) when testing for a stochastic process hypothesized to occur in a longitudinal data set, testing for other processes is necessary, because incorrect models often fit other processes well enough to be deceiving; (b) when measurement error correlations are flagged to be relatively high in panel data, the fit and propriety of an MA or ARMA model should be considered and compared to the fit and propriety of other models; (c) when an MA model is fit to AR data, measurement error correlations may nonetheless be deceptively high, though fortunately MA model fit indexes are almost always lower than those for an AR model; and (d) the assumption that longitudinal panel data always contain measurement error correlations is patently false. In summary, whenever evaluating longitudinal panel data, the fit, propriety, and parsimony of all 5 models should be considered jointly and compared before a particular model is endorsed as most suitable.

6.
This article reports on a Monte Carlo simulation study, evaluating two approaches for testing the intervention effect in replicated randomized AB designs: two-level hierarchical linear modeling (HLM) and using the additive method to combine randomization test p values (RTcombiP). Four factors were manipulated: mean intervention effect, number of cases included in a study, number of measurement occasions for each case, and between-case variance. Under the simulated conditions, Type I error rate was under control at the nominal 5% level for both HLM and RTcombiP. Furthermore, for both procedures, a larger number of combined cases resulted in higher statistical power, with many realistic conditions reaching statistical power of 80% or higher. Smaller values for the between-case variance resulted in higher power for HLM. A larger number of data points resulted in higher power for RTcombiP.
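The additive method for combining randomization test p values can be sketched as follows: the combined p value is the probability that a sum of k independent Uniform(0,1) variables falls at or below the observed sum, i.e., the Irwin–Hall CDF evaluated at that sum. This is a minimal illustration, not the authors' code:

```python
from math import comb, factorial, floor

def additive_combined_p(pvals):
    """Additive (Edgington) method: combined p value is
    P(sum of k independent Uniform(0,1) variables <= s),
    the Irwin-Hall CDF evaluated at s = sum(pvals)."""
    k = len(pvals)
    s = sum(pvals)
    return sum((-1) ** j * comb(k, j) * (s - j) ** k
               for j in range(floor(s) + 1)) / factorial(k)

# For two p values of .10 and .20, s = .30 and the combined p is s**2 / 2 = .045.
```

For small sums the leading term s^k / k! dominates, which is why combining several moderately small p values can yield a very small combined p.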

7.
Pressure piping is an important component of industrial production facilities. Local wall thinning commonly occurs in pressure piping, and parameters such as the dimensions of the thinned region are subject to both physical and statistical uncertainty, so reliability should be used to quantify the safety condition. Monte Carlo simulation based on the computational model of GB/T 19624-2004 is performed to calculate the reliability of locally thinned piping. The method is convenient to apply and yields reliable results.
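A reliability calculation of this general kind can be illustrated with a minimal Monte Carlo sketch. The limit-state function and the distributions below are hypothetical stand-ins for illustration only, not the GB/T 19624-2004 model:

```python
import random

def failure_probability(n=100_000, seed=42):
    """Monte Carlo estimate of the failure probability of a locally
    thinned pipe wall. Distributions and limit state are illustrative."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        remaining = rng.gauss(8.0, 0.5)   # remaining wall thickness, mm
        required = rng.gauss(6.0, 0.4)    # thickness demanded by the load, mm
        if remaining - required <= 0.0:   # limit state g = remaining - required
            failures += 1
    return failures / n
```

The reliability is then one minus the estimated failure probability; in a real assessment the sampled quantities and the limit state would come from the standard's defect-assessment model.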

8.
In testing factorial invariance, researchers have often used a reference variable strategy in which the factor loading for a variable (i.e., reference variable) is fixed to 1 for identification. This commonly used method can be misleading if the chosen reference variable is actually a noninvariant item. This simulation study suggests an alternative method for testing factorial invariance and evaluates the performance of the method in specification searches based on the modification index. The results of the study showed that the proposed specification searches performed well when the number of noninvariant variables was relatively small and this performance improved as sample size increased and the size of group differences increased. When the number of noninvariant variables was relatively large, however, the method rarely succeeded in detecting the noninvariant items in the specification searches. Implications of the findings are discussed along with the limitations of the study.

9.
Mixture modeling is a widely applied data analysis technique used to identify unobserved heterogeneity in a population. Despite mixture models' usefulness in practice, one unresolved issue in the application of mixture models is that there is not one commonly accepted statistical indicator for deciding on the number of classes in a study population. This article presents the results of a simulation study that examines the performance of likelihood-based tests and the traditionally used information criteria (ICs) for determining the number of classes in mixture modeling. We look at the performance of these tests and indexes for 3 types of mixture models: latent class analysis (LCA), a factor mixture model (FMA), and a growth mixture model (GMM). We evaluate the ability of the tests and indexes to correctly identify the number of classes at three different sample sizes (n = 200, 500, 1,000). Whereas the Bayesian Information Criterion performed the best of the ICs, the bootstrap likelihood ratio test proved to be a very consistent indicator of classes across all of the models considered.

10.
Wording effect refers to the systematic method variance caused by positive and negative item wordings on a self-report measure. This Monte Carlo simulation study investigated the impact of ignoring wording effect on the reliability and validity estimates of a self-report measure. Four factors were considered in the simulation design: (a) the number of positively and negatively worded items, (b) the loadings on the trait and the wording effect factors, (c) sample size, and (d) the magnitude of population validity coefficient. The findings suggest that the unidimensional model that ignores the negative wording effect would underestimate the composite reliability and criterion-related validity, but overestimate the homogeneity coefficient. The magnitude of relative bias of the composite reliability was generally small and acceptable, whereas the relative bias for the homogeneity coefficient and criterion-related validity coefficient was negatively correlated with the strength of the general trait factor.

11.
Just as growth mixture models are useful with single-phase longitudinal data, multiphase growth mixture models can be used with multiple-phase longitudinal data. One of the practically important issues in single- and multiphase growth mixture models is the sample size requirements for accurate estimation. In a Monte Carlo simulation study, the sample sizes required for using these models are investigated under various theoretical and realistic conditions. In particular, the relationship between the sample size requirement and the number of indicator variables is examined, because the number of indicators can be relatively easily controlled by researchers in many multiphase data collection settings such as ecological momentary assessment. The findings not only provide tangible information about required sample sizes under various conditions to help researchers, but they also increase understanding of sample size requirements in single- and multiphase growth mixture models.

12.
This article uses OpenMP and CUDA to parallelize a Monte Carlo algorithm so as to make full use of multicore processors and GPUs. A comparison of performance before and after the parallelization shows that OpenMP and CUDA greatly improve computational performance. The same approach can be used to improve the computational performance of related software on personal computers.
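The parallelization pattern here is the standard one for Monte Carlo: split the sample budget into independently seeded chunks, run the chunks concurrently, and combine the counts. A minimal Python sketch of that decomposition follows (threads keep the example self-contained; the article's OpenMP/CUDA versions apply the same split-and-combine idea with true hardware parallelism):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def count_hits(n, seed):
    """Count random points falling inside the unit quarter circle."""
    rng = random.Random(seed)
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))

def parallel_pi(total=400_000, workers=4):
    """Split the sample budget across workers, then combine the counts.
    Each worker gets its own seed so the streams are independent."""
    chunk = total // workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(count_hits, chunk, seed) for seed in range(workers)]
        hits = sum(f.result() for f in futures)
    return 4.0 * hits / (chunk * workers)
```

Because the chunks share no state, the estimator is embarrassingly parallel; giving each worker its own generator and seed avoids both lock contention and overlapping random streams.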

13.
Cluster sampling results in response variable variation both among respondents (i.e., within-cluster or Level 1) and among clusters (i.e., between-cluster or Level 2). Properly modeling within- and between-cluster variation could be of substantive interest in numerous settings, but applied researchers typically test only within-cluster (i.e., individual difference) theories. Specifying a between-cluster model in the absence of theory requires a specification search in multilevel structural equation modeling. This study examined a variety of within-cluster and between-cluster sample sizes, intraclass correlation coefficients, start models, parameter addition and deletion methods, and Type I error control techniques to identify which combination of start model, parameter addition or deletion method, and Type I error control technique best recovered the population between-cluster model. Results indicated that a “saturated” start model, univariate parameter deletion technique, and no Type I error control performed best, but recovered the population between-cluster model in less than 1 in 5 attempts at the largest sample sizes. The accuracy of specification search methods, suggestions for applied researchers, and future research directions are discussed.

14.
Increasingly, assessment practitioners use generalizability coefficients to estimate the reliability of scores from performance tasks. Little research, however, examines the relation between the estimation of generalizability coefficients and the number of rubric scale points and score distributions. The purpose of the present research is to inform assessment practitioners of (a) the optimum number of scale points necessary to achieve the best estimates of generalizability coefficients and (b) the possible biases of generalizability coefficients when the distribution of scores is non-normal. Results from this study indicate that the number of scale points substantially affects the generalizability estimates. Generalizability estimates increase as scale points increase, with little bias after scales reach 12 points. Score distributions had little effect on generalizability estimates.

15.
Probabilistic cost–volume–profit (CVP) analysis is a method widely used by enterprises today; its results can reflect, from different angles, the economic condition of production, sales, procurement, and processing. Supporting probabilistic CVP analysis with Monte Carlo simulation keeps the analysis orderly and makes the resulting figures more objective and scientific.
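A probabilistic CVP analysis of this kind can be sketched by sampling the uncertain inputs and summarizing the resulting profit distribution. All distributions and figures below are hypothetical:

```python
import random

def cvp_monte_carlo(n=50_000, seed=1):
    """Probabilistic cost-volume-profit analysis by Monte Carlo:
    sample uncertain price, unit cost, and sales volume, then
    summarize profit. All parameter values are hypothetical."""
    rng = random.Random(seed)
    fixed_cost = 15_000.0
    profits = []
    for _ in range(n):
        price = rng.gauss(10.0, 0.5)        # selling price per unit
        unit_cost = rng.gauss(6.0, 0.4)     # variable cost per unit
        volume = rng.gauss(5_000.0, 800.0)  # units sold
        profits.append((price - unit_cost) * volume - fixed_cost)
    mean_profit = sum(profits) / n
    p_break_even = sum(p > 0.0 for p in profits) / n
    return mean_profit, p_break_even
```

Beyond the expected profit, the simulated distribution directly yields decision-relevant probabilities, such as the chance of at least breaking even, that a deterministic CVP calculation cannot provide.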

16.
The impact of misspecifying covariance matrices at the second and third levels of the three-level model is evaluated. Results indicate that ignoring existing covariance has no effect on the treatment effect estimate. In addition, the between-case variance estimates are unbiased when covariance is either modeled or ignored. If the research interest lies in the between-study variance estimate, including at least 30 studies is warranted. Modeling covariance does not result in less biased between-study variance estimates as the between-study covariance estimate is biased. When the research interest lies in the between-case covariance, the model including covariance results in unbiased between-case variance estimates. The three-level model appears to be less appropriate for estimating between-study variance if fewer than 30 studies are included.

17.
In practice, several measures of association are used when analyzing structural equation models with ordinal variables: ordinary Pearson correlations (PE approach), polychoric and polyserial correlations (PO approach), and conditional polychoric correlations (CPO approach). In the case of structural equation models without latent variables, the literature has shown that the PE approach is outperformed by the alternatives. In this article we report a Monte Carlo study showing the comparative performance of the aforementioned alternative approaches under deviations from their respective assumptions in the case of structural equation models with latent variables when attention is restricted to point estimates of model parameters. The CPO approach is shown to be the most robust against nonnormality. It is also robust to randomness of the exogenous variables, but not to the existence of measurement errors in them. The PO approach lacks robustness against nonnormality. The PE approach lacks robustness against transformation errors but otherwise it can perform about as well as the alternative approaches.

18.
This article describes experiments on the catalytic oxidation of CO on a Pt surface and introduces a Monte Carlo (MC) simulation method based on the ZGB model, obtaining a series of simulation results close to experiment. The simulations focus on how factors such as the phase transition induced by surface reconstruction and the desorption of reactants affect the reaction.

19.
Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be compared. When applied to a binary data set, our experience suggests that IRT and FA models yield similar fits. However, when the data are polytomous ordinal, IRT models yield a better fit because they involve a higher number of parameters. But when fit is assessed using the root mean square error of approximation (RMSEA), similar fits are obtained again. We explain why. These test statistics have little power to distinguish between FA and IRT models; they are unable to detect that linear FA is misspecified when applied to ordinal data generated under an IRT model.

20.
Equity Indexed Annuities (EIAs) are controversial financial products because the payoffs to investors are based on formulas that are supposedly too complex for average investors to understand. This brief describes how Monte Carlo simulation can provide insight into the true risk and return of an EIA. This approach can be used as a project assignment or an in-class assignment to demonstrate the ability of Monte Carlo simulation to solve problems involving uncertainty and nonlinear payoffs.
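A Monte Carlo look at an EIA of the sort described might be sketched as below. The point-to-point crediting formula, participation rate, cap, floor, and index-return distribution are all hypothetical choices for illustration, not taken from the brief:

```python
import random

def eia_credit(index_return, participation=0.8, cap=0.10, floor=0.0):
    """Annual crediting: a share of the index gain, capped and floored."""
    return max(floor, min(cap, participation * index_return))

def simulate_eia(years=10, n=20_000, seed=7):
    """Mean terminal value of $1 invested in the hypothetical EIA."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        value = 1.0
        for _ in range(years):
            r = rng.gauss(0.07, 0.15)  # hypothetical annual index return
            value *= 1.0 + eia_credit(r)
        total += value
    return total / n
```

Because each year's credit is floored at 0 and capped at 10%, the terminal value of $1 is bounded between 1.0 and 1.1**10 ≈ 2.59; comparing the simulated distribution against a direct index investment is exactly the kind of nonlinear-payoff exercise the brief proposes.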
