Similar Literature
20 similar documents retrieved
1.
Growth mixture modeling (GMM) has become an increasingly popular statistical method for modeling population heterogeneity in longitudinal data, but the performance characteristics of GMM enumeration indexes in correctly identifying heterogeneous growth trajectories are largely unknown, and few empirical studies have addressed this issue. This study considered both homogeneous (a single k = 1 growth trajectory) and heterogeneous (k = 3 different but unobserved growth trajectories) situations and examined how well GMM identified the latent trajectories in sample data. Four design conditions were manipulated: (a) sample size, (b) latent trajectory class proportions, (c) shapes of latent growth trajectories, and (d) degree of separation among latent growth trajectories. The findings suggest that, for the k = 1 condition (one homogeneous growth trajectory), GMM's performance in correctly identifying a single latent growth trajectory is reasonable (cf. Type I error control). However, for the k = 3 conditions (three heterogeneous latent growth trajectories), GMM's general performance is very questionable (cf. Type II error). The enumeration indexes varied considerably in their respective performances. The results are compared with previous GMM studies, and the limitations of this study and avenues for future GMM enumeration research are discussed.
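For reference, a minimal sketch of the conventional linear growth mixture model that class-enumeration studies of this kind typically examine; the notation (class label c_i, growth factors η, time scores λ_t) is generic and not taken from the article itself:

\[
y_{it} \mid (c_i = k) = \eta_{0i} + \eta_{1i}\lambda_t + \varepsilon_{it},
\qquad
(\eta_{0i}, \eta_{1i})^{\top} \mid (c_i = k) \sim N(\boldsymbol{\alpha}_k, \boldsymbol{\Psi}_k),
\qquad
\varepsilon_{it} \sim N(0, \theta_t),
\]

with class probabilities \(\pi_k = P(c_i = k)\), \(\sum_{k=1}^{K}\pi_k = 1\). Class enumeration amounts to fitting the model for K = 1, 2, 3, ... and comparing the solutions with indexes such as the BIC, AIC, or the bootstrap likelihood ratio test.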

2.
Latent profile analysis (LPA) has become a popular statistical method for modeling unobserved population heterogeneity in cross-sectionally sampled data, but very few empirical studies have examined the question of how well enumeration indexes accurately identify the correct number of latent profiles present. This Monte Carlo simulation study examined the ability of several classes of enumeration indexes to correctly identify the number of latent population profiles present under 3 different research design conditions: sample size, the number of observed variables used for LPA, and the separation distance among the latent profiles measured in Mahalanobis D units. Results showed that, for the homogeneous population (i.e., the population has k = 1 latent profile) conditions, many of the enumeration indexes used in LPA were able to correctly identify the single latent profile if variances and covariances were freely estimated. However, for a heterogeneous population (i.e., the population has k = 3 distinct latent profiles), the correct identification rate for the enumeration indexes in the k = 3 latent profile conditions was typically very low. These results are compared with previous cross-sectional mixture modeling studies, and the limitations of this study, as well as future cross-sectional mixture modeling and enumeration index research possibilities, are discussed.
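As an illustration of the enumeration problem described above (not the authors' own code), a finite Gaussian mixture in scikit-learn can serve as a stand-in for a latent profile model with freely estimated variances and covariances; the data and candidate range here are invented:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Hypothetical data: n = 500 cases on 6 observed indicators.
X = rng.normal(size=(500, 6))

# Fit candidate models with k = 1..5 latent profiles and free
# within-profile covariance matrices, then compare BIC values.
bics = {}
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, covariance_type="full",
                         n_init=10, random_state=0).fit(X)
    bics[k] = gm.bic(X)

best_k = min(bics, key=bics.get)  # lower BIC = preferred solution
print(bics, "-> selected k =", best_k)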

3.
Recent handbooks of giftedness or expertise propose a plethora of conceptions on the development of excellent performance but, to our knowledge, there are no comparative studies that provide empirical evidence of their validity to guide researchers and practitioners in their adoption of a particular conception. This study sought to close that gap by conducting an empirical comparison of the major approaches to giftedness and expertise currently in use: the IQ model, the performance model, the moderator model, and the systemic model. The four models were tested in a longitudinal study with a sample of N = 350 German students attending university preparatory schools; 25% of the sample had been assigned to special classes for the gifted. The construct and predictive validity of the four models were tested by means of structural equation modeling. Theoretical considerations along with our results indicated a differentiation among the models whereby some could only predict while others could also explain the emergence of excellent performance and thereby yield valuable information for the design of interventions. The empirical comparison of the approaches showed that they were unequally suited for the two challenges. For prediction purposes, the performance approach proved best while, for explanations, the moderator and systemic approaches were the most promising candidates. Even so, the latter did demonstrate conceptual and/or methodological problems. The IQ approach was superseded by the other approaches on both prediction and explanation. Implications and limitations of the findings are discussed.

4.
Structural equation models have wide applications. One of the most important issues in analyzing structural equation models is model comparison. This article proposes a Bayesian model comparison statistic, namely the L_ν-measure, for both semiparametric and parametric structural equation models. For illustration purposes, we consider a Bayesian semiparametric approach for estimation and model comparison in the context of structural equation models with fixed covariates. A finite-dimensional Dirichlet process is used to model the crucial latent variables, and a blocked Gibbs sampler is implemented for estimation. Empirical performance of the L_ν-measure is evaluated through a simulation study. Results obtained indicate that the L_ν-measure, which additionally requires very minor computational effort, gives satisfactory performance. Moreover, the methodologies are demonstrated through an example with a real data set on kidney disease. Finally, the application of the L_ν-measure to Bayesian semiparametric nonlinear structural equation models is outlined.

5.
Mixture modeling is a widely applied data analysis technique used to identify unobserved heterogeneity in a population. Despite mixture models' usefulness in practice, one unresolved issue in their application is that there is no single commonly accepted statistical indicator for deciding on the number of classes in a study population. This article presents the results of a simulation study that examines the performance of likelihood-based tests and the traditionally used information criteria (ICs) for determining the number of classes in mixture modeling. We look at the performance of these tests and indexes for 3 types of mixture models: latent class analysis (LCA), a factor mixture model (FMA), and a growth mixture model (GMM). We evaluate the ability of the tests and indexes to correctly identify the number of classes at three different sample sizes (n = 200, 500, 1,000). Whereas the Bayesian Information Criterion performed the best of the ICs, the bootstrap likelihood ratio test proved to be a very consistent indicator of classes across all of the models considered.
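As a rough illustration of how a bootstrap likelihood ratio test (BLRT) for k versus k + 1 classes can be carried out, here is a generic sketch using scikit-learn Gaussian mixtures and invented data; it is not the procedure or software used in the article:

import numpy as np
from sklearn.mixture import GaussianMixture

def loglik(model, X):
    # Total log-likelihood of a fitted mixture on data X.
    return model.score(X) * X.shape[0]

def bootstrap_lrt(X, k, n_boot=100):
    """Parametric bootstrap LRT for H0: k classes vs. H1: k + 1 classes."""
    m0 = GaussianMixture(k, n_init=5, random_state=0).fit(X)
    m1 = GaussianMixture(k + 1, n_init=5, random_state=0).fit(X)
    lrt_obs = 2 * (loglik(m1, X) - loglik(m0, X))

    exceed = 0
    for b in range(n_boot):
        # Generate data from the k-class null model and refit both models.
        Xb, _ = m0.sample(X.shape[0])
        b0 = GaussianMixture(k, n_init=2, random_state=b).fit(Xb)
        b1 = GaussianMixture(k + 1, n_init=2, random_state=b).fit(Xb)
        if 2 * (loglik(b1, Xb) - loglik(b0, Xb)) >= lrt_obs:
            exceed += 1
    return (exceed + 1) / (n_boot + 1)  # bootstrap p value

# Example with invented two-class data: a small p value favors k + 1 classes.
X = np.vstack([np.random.default_rng(1).normal(0, 1, (250, 4)),
               np.random.default_rng(2).normal(2, 1, (250, 4))])
print(bootstrap_lrt(X, k=1))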

6.
(m,k)-firm real-time or weakly hard real-time (WHRT) guarantees are becoming attractive because they close the gap between hard and soft (or probabilistic) real-time guarantees and enable a finer granularity of real-time QoS through adjusting m and k. For multiple streams with (m,k)-firm constraints sharing a single server, an on-line priority assignment policy based on the most recent k-length history of each stream, called distance based priority (DBP), has been proposed. In case of priority equality among the head-of-queue instances, Earliest Deadline First (EDF) is used. Within WHRT scheduling theory, DBP is the most popular policy and has attracted much attention and many applications owing to its straightforward priority assignment and easy implementation. However, DBP combined with EDF cannot always provide good performance, mainly because the initial DBP does not exploit the rich information in the deadline met/missed distribution, especially for streams in various failure states that must travel different distances to restore a success state. Considering how to effectively restore the success state of each individual stream from a failure state, an integrated DBP utilizing the deadline met/missed distribution is proposed in this paper. Simulation results validate the performance improvement of this proposal. Project supported by the National Natural Science Foundation of China (No. 60203030) and the Advanced Research Program of France-China (Nos. PRA S101-04, PRA S103-02)
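A minimal sketch of the distance-based priority idea described above, assuming the usual convention that a stream's priority is the number of consecutive deadline misses that would drive its k-length history below m met deadlines (a smaller distance means a more urgent stream); this is an illustrative reconstruction, not the authors' integrated DBP:

from collections import deque

def dbp_distance(history, m, k):
    """Distance to a failure state for one (m, k)-firm stream.

    history: the most recent k outcomes, True = deadline met, ordered
             oldest to newest.  Returns the minimum number of consecutive
             future misses after which fewer than m of the last k
             deadlines would be met.
    """
    h = list(history)[-k:]
    for j in range(1, k + 1):
        # After j more misses the window holds the newest k - j old
        # outcomes plus j misses.
        if sum(h[j:]) < m:
            return j
    return k + 1  # cannot fail within k steps (only possible if m = 0)

# Example: stream with (m, k) = (3, 5) and history met, miss, met, met, met.
hist = deque([True, False, True, True, True], maxlen=5)
print(dbp_distance(hist, m=3, k=5))  # -> 3: three straight misses cause failure

# A DBP scheduler serves the ready stream with the smallest distance,
# breaking ties with Earliest Deadline First (EDF).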

7.
Social scientists are frequently interested in identifying latent subgroups within the population, based on a set of observed variables. One of the more common tools for this purpose is latent class analysis (LCA), which models a scenario involving k finite and mutually exclusive classes within the population. An alternative approach to this problem is presented by the grade of membership (GoM) model, in which individuals are assumed to have partial membership in multiple population subgroups. In this respect, it differs from the hard groupings associated with LCA. The current Monte Carlo simulation study extended prior work on the GoM model by investigating its ability to recover underlying subgroups in the population for a variety of sample sizes, latent group size ratios, and differing group response profiles. In addition, this study compared the performance of GoM with that of LCA. Results demonstrated that when the underlying process conforms to the GoM model form, the GoM approach yielded more accurate classification results than did LCA. In addition, it was found that the GoM modeling paradigm yielded accurate results for samples as small as 200, even when latent subgroups were very unequal in size. Implications for practice are discussed.
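To make the contrast concrete, here is the standard way the two measurement models are usually written for binary items (generic textbook notation, not reproduced from the article): under LCA each person i belongs to exactly one class, whereas under GoM person i carries a membership vector g_i over the K subgroups:

\[
\text{LCA: } P(y_{ij}=1 \mid c_i = k) = \lambda_{jk},
\qquad
\text{GoM: } P(y_{ij}=1) = \sum_{k=1}^{K} g_{ik}\,\lambda_{jk},
\quad g_{ik} \ge 0,\ \sum_{k=1}^{K} g_{ik} = 1 .
\]

LCA is thus the special case in which every membership vector g_i places all of its weight on a single subgroup.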

8.
Factor mixture models are designed for the analysis of multivariate data obtained from a population consisting of distinct latent classes. A common factor model is assumed to hold within each of the latent classes. Factor mixture modeling involves obtaining estimates of the model parameters, and may also be used to assign subjects to their most likely latent class. This simulation study investigates aspects of model performance such as parameter coverage and correct class membership assignment and focuses on covariate effects, model size, and class-specific versus class-invariant parameters. When fitting true models, parameter coverage is good for most parameters even for the smallest class separation investigated in this study (0.5 SD between 2 classes). The same holds for convergence rates. Correct class assignment is unsatisfactory for the small class separation without covariates, but improves dramatically with increasing separation, covariate effects, or both. Model performance is not influenced by the differences in model size investigated here. Class-specific parameters may improve some aspects of model performance but negatively affect other aspects.
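For orientation, a factor mixture model of the kind studied here is commonly written as a common factor model within each class (generic notation with class label c_i, intercepts ν_k, loadings Λ_k, and factors η_i; this is a textbook formulation, not the article's exact specification):

\[
\mathbf{y}_i \mid (c_i = k) = \boldsymbol{\nu}_k + \boldsymbol{\Lambda}_k \boldsymbol{\eta}_i + \boldsymbol{\varepsilon}_i,
\qquad
\boldsymbol{\eta}_i \mid (c_i = k) \sim N(\boldsymbol{\alpha}_k, \boldsymbol{\Psi}_k),
\qquad
\boldsymbol{\varepsilon}_i \sim N(\mathbf{0}, \boldsymbol{\Theta}_k).
\]

"Class-invariant" parameterizations constrain some of Λ_k, Ψ_k, or Θ_k to be equal across the latent classes, whereas "class-specific" parameterizations let them differ.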

9.
The size of a model has been shown to critically affect the goodness of approximation of the model fit statistic T to the asymptotic chi-square distribution in finite samples. It is not clear, however, whether this “model size effect” is a function of the number of manifest variables, the number of free parameters, or both. It is demonstrated by means of 2 Monte Carlo computer simulation studies that neither the number of free parameters to be estimated nor the model degrees of freedom systematically affect the T statistic when the number of manifest variables is held constant. Increasing the number of manifest variables, however, is associated with a severe bias. These results imply that model fit drastically depends on the size of the covariance matrix and that future studies involving goodness-of-fit statistics should always consider the number of manifest variables, but can safely neglect the influence of particular model specifications.
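For context, the fit statistic in question is the familiar maximum likelihood test statistic for covariance structures, which under a correctly specified model with multivariate normal data is asymptotically chi-square distributed (standard SEM notation, not specific to the article):

\[
T = (N - 1)\,\hat{F}_{ML} \;\xrightarrow{d}\; \chi^2_{df},
\qquad
df = \frac{p(p+1)}{2} - q,
\]

where p is the number of manifest variables (so the sample covariance matrix has p(p+1)/2 distinct elements) and q the number of free parameters; the "model size effect" concerns how the quality of this chi-square approximation degrades in finite samples as p grows.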

10.
In this article we describe a structural equation modeling (SEM) framework that allows nonnormal skewed distributions for the continuous observed and latent variables. This framework is based on the multivariate restricted skew t distribution. We demonstrate the advantages of skewed SEM over standard SEM modeling and challenge the notion that structural equation models should be based only on sample means and covariances. The skewed continuous distributions are also very useful in finite mixture modeling, as they prevent the formation of spurious classes that arise purely to compensate for departures of the distributions from the standard bell curve. This framework is implemented in Mplus Version 7.2.

11.
There is consensus in the statistical literature that severe departures from its assumptions invalidate the use of regression modeling for purposes of inference. The assumptions of regression modeling are usually evaluated subjectively through visual, graphic displays in a residual analysis, but such an approach, taken alone, may be insufficient for assessing the appropriateness of the fitted model. Here, an easy-to-use test of the assumption of equal variance (i.e., homoscedasticity) as well as model specification is provided. Given the importance of the equal-variance assumption (i.e., if uncorrected, severe violations preclude the use of statistical inference and moderate violations result in a loss of statistical power) and given the fact that, if uncorrected, a misspecified or underspecified model could invalidate an entire study, the test developed by Halbert White in 1980 is recommended for supplementing a graphic residual analysis when teaching regression modeling to business students at both the undergraduate and graduate levels. Using this confirmatory approach to supplement a traditional residual analysis has value because students often find that graphic displays are too subjective for distinguishing severe from moderate departures from the equal-variance assumption or for assessing patterns in plots that might indicate model misspecification or underspecification.
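A brief sketch of how White's (1980) test can supplement a graphic residual analysis in practice, using statsmodels; the data below are simulated for illustration, and the article itself does not prescribe particular software:

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
# Simulated heteroscedastic data: the error spread grows with x.
y = 2 + 0.5 * x + rng.normal(0, 0.3 * x, n)

X = sm.add_constant(x)          # design matrix with intercept
fit = sm.OLS(y, X).fit()

# White's test regresses the squared residuals on the regressors, their
# squares, and cross-products; n*R^2 is asymptotically chi-square.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(fit.resid, X)
print(f"LM = {lm_stat:.2f}, p = {lm_pvalue:.4f}")
# A small p value flags a violation of the equal-variance assumption.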

12.
This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary T and N by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time series analysis (T large and N = 1) and conventional SEM (N large and T = 1 or small) by integrating both approaches. The resulting combined model offers a variety of new modeling options including a direct test of the ergodicity hypothesis, according to which the factorial structure of an individual observed at many time points is identical to the factorial structure of a group of individuals observed at a single point in time. Third, we illustrate the flexibility of SEM time series modeling by extending the approach to account for complex error structures. We end with a discussion of current limitations and future applications of SEM-based time series modeling for arbitrary T and N.
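The direct autoregressive factor score model referred to above is usually written as a factor-analytic measurement equation combined with a vector autoregression on the factor scores (standard notation, lag order 1 shown for simplicity; this sketch is not the article's full specification):

\[
\mathbf{y}_t = \boldsymbol{\Lambda}\boldsymbol{\eta}_t + \boldsymbol{\varepsilon}_t,
\qquad
\boldsymbol{\eta}_t = \mathbf{B}\,\boldsymbol{\eta}_{t-1} + \boldsymbol{\zeta}_t,
\]

where Λ holds the factor loadings, B the autoregressive weights, and ε_t and ζ_t are measurement and process noise. In the SEM implementation, all T occasions for a person are stacked into one long observation vector whose model-implied covariance matrix follows from these two equations, which is what allows arbitrary combinations of T and N to be handled.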

13.
In longitudinal designs, investigating interindividual differences in intraindividual change enables researchers to better understand the potential variety of development and growth. Although latent growth curve mixture models have been widely used, unstructured finite mixture models (uFMMs) are also useful as a preliminary tool and are expected to be more robust in identifying classes under the influence of possible model misspecifications, which are very common in actual practice. In this study, large-scale simulations were performed in which various normal uFMMs and nonnormal uFMMs were fit to evaluate their utility and the performance of each model selection procedure for estimating the number of classes in longitudinal designs. Results show that normal uFMMs assuming invariance of variance–covariance structures among classes perform better on average. Among model selection procedures, the Calinski–Harabasz statistic, which has a nonparametric nature, performed better on average than information criteria, including the Bayesian information criterion.
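To illustrate the kind of nonparametric class-number criterion mentioned above, the Calinski–Harabasz statistic (a ratio of between- to within-cluster dispersion) is available directly in scikit-learn; the data and candidate range below are invented for illustration and do not reproduce the simulation design:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(7)
# Hypothetical repeated-measures data: 600 cases, 4 time points, 3 true groups.
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(200, 4))
               for m in (0.0, 1.5, 3.0)])

scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = calinski_harabasz_score(X, labels)

# Larger values indicate better-separated, more compact classes.
print(scores, "-> selected k =", max(scores, key=scores.get))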

14.
This study examines what sources of evidence are used in intervention selection and what changes in belief occur when performance improvement professionals make these decisions. Sixty-one certified performance technologists completed a dynamic, web-delivered questionnaire in which they provided a general assessment of intervention success (Pr1), then responded to 12 performance improvement scenarios by selecting an intervention, providing a prior probability, receiving additional evidence, giving a posterior probability (Pr3), indicating whether the initial intervention was still preferred, and making a subsequent choice if not. Findings bolster the long-standing concern about the technical nature of performance improvement, and practitioners are strongly encouraged to approach intervention selection as a decision in which their intervention preferences and beliefs about likely success are carefully adjudicated on the basis of the evidence they obtain. Future research involving other types of performance improvement practitioners, replication studies, longitudinal and structural equation modeling, externally verifiable probabilities, and natural environments is recommended.
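Since the study frames intervention selection as updating a prior belief with new evidence, the underlying normative benchmark is Bayes' theorem, shown here in odds form purely as a generic reference (the article does not necessarily use this exact formalization):

\[
\underbrace{\frac{P(S \mid E)}{P(\bar{S} \mid E)}}_{\text{posterior odds of success}}
=
\underbrace{\frac{P(E \mid S)}{P(E \mid \bar{S})}}_{\text{likelihood ratio of the evidence}}
\times
\underbrace{\frac{P(S)}{P(\bar{S})}}_{\text{prior odds}} .
\]

For example, a prior success probability of 0.60 (odds 1.5) combined with evidence twice as likely under success as under failure yields posterior odds of 3, i.e., a posterior probability of 0.75.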

15.
Infancia y Aprendizaje, 2013, 36(85): 19-32

The article presents a review of the value-added model in school assessment, understood as the extent to which a school achieves greater student performance once other factors, such as the socio-cultural context and the initial level of knowledge, have been controlled. First, the evolution of this concept over recent years is analysed, driven by advances in three areas: studies on effective schools, performance measures, and changes in educational ideology. Second, the contributions this model makes relative to the more classical assessment approach are considered, and attention is drawn to the risks that may be run in adopting it. Third, the assessment and research projects conducted by the National Foundation for Educational Research (NFER) within this theoretical framework are presented. Finally, an appendix offers a guide to the decisions needed to work from the value-added approach.
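As a concrete reference point, value-added estimates of this kind are typically obtained from a multilevel model of the following general form (a generic sketch, not the NFER's specific specification), with the school-level residual u_j interpreted as the school's value added:

\[
y_{ij} = \beta_0 + \beta_1\,\text{prior}_{ij} + \boldsymbol{\beta}_2^{\top}\mathbf{x}_{ij} + u_j + e_{ij},
\qquad
u_j \sim N(0, \sigma^2_u),\quad e_{ij} \sim N(0, \sigma^2_e),
\]

where y_ij is the attainment of student i in school j, prior_ij the initial level of knowledge, and x_ij the socio-cultural context variables being controlled.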

16.
A latent variable modeling procedure for examining whether a studied population could be a mixture of 2 or more latent classes is discussed. The approach can be used to evaluate a single-class model vis-à-vis competing models of increasing complexity for a given set of observed variables without making any assumptions about their within-class interrelationships. The method is helpful in the initial stages of finite mixture analyses to assess whether models with 2 or more classes should be subsequently considered as opposed to a single-class model. The discussed procedure is illustrated with a numerical example.

17.
For non-negative integers k and n, let P_k(n) denote the power sum 1^k + 2^k + ... + n^k. We show by two different means that if k ≥ 3 is odd, then n^2(n+1)^2 is a factor of the polynomial P_k(n); and if k ≥ 2 is even, then n(n+1)(2n+1) is a factor of the polynomial P_k(n). We also derive a relatively unknown result first obtained by Johann Faulhaber in the 17th century. Shailesh Shirali has been at Rishi Valley School, Andhra Pradesh (Krishnamurti Foundation India) since the 1980s. He has a deep interest in teaching and writing about mathematics at the high school/post-school levels, with particular emphasis on problem solving and on historical aspects of the subject. He has been involved in the Mathematics Olympiad movement at the national and international level for the past two decades. He is the author of several expository books and articles aimed at interested high school students.
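For illustration, the first few closed forms (standard results, easily verified by induction) show the stated factors explicitly:

\[
P_2(n) = \frac{n(n+1)(2n+1)}{6},
\qquad
P_3(n) = \frac{n^2(n+1)^2}{4},
\qquad
P_5(n) = \frac{n^2(n+1)^2(2n^2+2n-1)}{12},
\]

so n(n+1)(2n+1) divides P_2(n) and n^2(n+1)^2 divides P_3(n) and P_5(n) as polynomials with rational coefficients.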

18.
Previous research has demonstrated the potential of examining log-file data from computer-based assessments to understand student interactions with complex inquiry tasks. Rather than solely providing information about what has been achieved or the accuracy of student responses (product data), students' log files offer additional insights into how the responses were produced (process data). In this study, we examined students' log files to detect patterns of students' interactions with computer-based assessment and to determine whether unique characteristics of these interactions emerge as distinct profiles of inquiry performance. Knowledge about the characteristics of these profiles can shed light on why some students are more successful at solving simulated inquiry tasks than others and how to support student understanding of scientific inquiry through computer-based environments. We analyzed the Norwegian PISA 2015 log-file data, science performance, and background questionnaire data (N = 1,222 students), focusing on two inquiry tasks that required scientific reasoning skills: coordinating the effects of multiple variables and coordinating theory and evidence. Using a mixture modeling approach, we identified three distinct profiles of students' inquiry performance: strategic, emergent, and disengaged. These profiles revealed different characteristics of students' exploration behavior, inquiry strategy, time-on-task, and item accuracy. Further analyses showed that students' assignment to these profiles varied according to their demographic characteristics (gender, socio-economic status, and language at home), attitudes (enjoyment in science, self-efficacy, and test anxiety), and science achievement. Although students' profiles on the two inquiry tasks were significantly related, we also found some variations in the proportion of students' transitions between profiles. Our study contributes to understanding how students interact with complex simulated inquiry tasks and showcases how log-file data from PISA 2015 can aid this understanding.

19.
Xia Fengshun, Li Wenpeng, Guo Junheng, Han You, Zhang Minqing, Wang Baoguo, Li Wei, Zhang Jinli. 天津大学学报(英文版) [Transactions of Tianjin University], 2021, 27(5): 409-421

A pore-array intensified tube-in-tube microchannel (PA-TMC), which is characterized by high throughput and low pressure drop, was developed as a gas–liquid contactor. The sulfite oxidation method was used to determine the oxygen absorption efficiency (φ) and volumetric mass transfer coefficient (k_La) of the PA-TMC, and the mass transfer amount per unit energy (ε) was calculated by using the pressure drop. The effects of structural and operating parameters were investigated systematically, and the two-phase flow behavior was monitored by using a charge-coupled device imaging system. The results indicated that the gas absorption efficiency and mass transfer performance of the PA-TMC were improved with increasing pore number, flow rate, and number of helical coil turns and decreasing pore size, row number, annular size, annular length, and surface tension. The φ, ε, and k_La of the PA-TMC could reach 31.3%, 1.73 × 10^−4 mol/J, and 7.0 s^−1, respectively. The Sherwood number was correlated with the investigated parameters to guide the design of the PA-TMC in gas absorption and mass transfer processes.
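For readers outside the field, the two performance quantities are defined in the usual way (standard gas–liquid mass transfer relations; the characteristic length and diffusivity symbols below are generic and not taken from the article):

\[
N_A = k_L a \,(C^{*} - C_L),
\qquad
Sh = \frac{k_L\, d_h}{D},
\]

where N_A is the volumetric absorption rate, C* the equilibrium (saturation) oxygen concentration, C_L the bulk liquid concentration, d_h a characteristic (hydraulic) diameter, and D the liquid-phase diffusivity of oxygen; correlating Sh with the structural and operating parameters is what allows the design guidance mentioned above.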


20.
Selecting a subset of predictors from a pool of potential predictors continues to be a common problem encountered by applied researchers in education. Because of several limitations associated with stepwise variable selection procedures, the examination of all possible regression solutions has been recommended. The authors evaluated the use of Mallows's Cp and Wherry's adjusted R^2 statistics to select a final model from a pool of model solutions. Neither the Cp nor the adjusted R^2 statistic identified the underlying regression model any better than the stepwise selection method, which itself performed poorly, and both were generally worse. Using any of the model selection procedures studied here resulted in biased estimates of the true regression coefficients and underestimation of their standard errors. The use of theory and professional judgment is recommended for the selection of variables in a prediction equation.
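For reference, the two selection statistics evaluated are conventionally defined as follows for a candidate model with k predictors (p = k + 1 parameters including the intercept) fitted to n observations, with the error variance estimated from the full model containing all available predictors (textbook definitions, not reproduced from the article):

\[
C_p = \frac{SSE_p}{\hat{\sigma}^2_{full}} - (n - 2p),
\qquad
R^2_{adj} = 1 - \frac{(1 - R^2)(n - 1)}{n - k - 1},
\]

where SSE_p is the candidate model's error sum of squares; a well-fitting candidate is expected to have C_p close to p, while larger adjusted R^2 values are preferred.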
