Similar Documents
20 similar documents found (search time: 609 ms)
1.
Geiser, Koch, and Eid (2014) expressed their views on an article we published describing findings from a simulation study and an empirical study of multitrait–multimethod (MTMM) data. Geiser and colleagues raised concerns with (a) our use of the term bias, (b) our statement that the correlated trait–correlated method minus one [CT–C(M–1)] model is not in line with Campbell and Fiske’s (1959) conceptualization of MTMM data, (c) our selection of a data-generating model for our simulation study, and (d) our preference for the correlated trait–correlated method (CT–CM) model over the CT–C(M–1) model. Here, we respond to and elaborate on the issues raised by Geiser et al. We maintain our position on each of these issues and point to the interpretational challenges of the CT–C(M–1) model. However, we clarify our opinion that none of the existing structural models for MTMM data are flawless; each has its strengths and each has its weaknesses. We further remind readers of the goal, findings, and implications of our recently published article.

2.
The great advantages of e-learning have been recognized, and efforts have been made to promote e-learning adoption. Despite valuable research achievements in technology adoption, previous studies built their models from different perspectives. In this study, to offer a comprehensive research model of e-learning adoption, we integrated these models (i.e., TAM, TPB, and IDT) and included culture as a moderator. Based on 45 relevant empirical studies, this study conducted a meta-analysis to explore key determinants of users’ attitude and behavioral intention to adopt e-learning. Combining traditional technology acceptance theories and innovation diffusion theory, we developed a comprehensive model and explored the moderating role of culture. The results indicate that this integrated technology acceptance model can be applied to better understand e-learning adoption. The moderator analysis shows that the influence of subjective norms and self-efficacy on users’ behavioral intention is more salient in a collectivistic culture, whereas perceived usefulness is more important for online learners in an individualistic culture.
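The abstract does not give the pooling formulas the meta-analysis used; as a generic illustration of how correlations from multiple studies are combined in such an analysis, here is a minimal fixed-effect sketch using Fisher's z transform (the function name and example inputs are hypothetical, not taken from the paper):

```python
import math

def pooled_correlation(rs, ns):
    """Fixed-effect pooling of correlations via Fisher's z transform.

    Each correlation r_i is transformed to z_i = atanh(r_i), weighted by
    its inverse variance (n_i - 3), averaged, and back-transformed.
    """
    zs = [math.atanh(r) for r in rs]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)
```

For example, pooling r = 0.3 (n = 50) with r = 0.5 (n = 100) yields a pooled correlation of roughly 0.44, weighted toward the larger study.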

3.
Discovery learning is generally seen as a promising but demanding mode of learning that, in most cases, can only be successful if students are guided in the discovery process. The present article discusses a study on discovery learning with a computer simulation environment in the physics domain of collisions. In the learning environment, which is called Collision, students learned about collisions where two particles move in the same direction and interact via a conservative force in such a way that the total mechanical energy is conserved. In the experiment we conducted with Collision, we evaluated the effects of adding two different ways to guide students: model progression, in which the model is presented in separate parts; and assignments, small exercises that the student can choose to do. The effect of providing assignments and model progression was evaluated by comparing the learning behavior and learning results over three experimental conditions in which different versions of the simulation environment were presented: pure simulation, simulation plus assignments, and simulation plus model progression and assignments. Students' use of the environment was logged, their subjectively experienced workload was measured on‐line, and their learning was assessed using a number of assessment procedures. Providing assignments with the simulation improved students' performance on one aspect of a so‐called intuitive knowledge test. Providing the students with model progression did not have an effect. A subjective workload measure indicated that expanding the simulation with assignments and model progression did not raise the workload experienced by the students. © 1999 John Wiley & Sons, Inc. J Res Sci Teach 36: 597–615, 1999

4.
In this article, we discuss and illustrate two centering and anchoring options available in differential item functioning (DIF) detection studies based on the hierarchical generalized linear and generalized linear mixed modeling frameworks. We compared and contrasted the assumptions of the two options, and examined the properties of their DIF estimates with a simulation study. For reference purposes, the results were compared to those obtained from using the Mantel-Haenszel procedure as well. Finally, we discuss some implications regarding the choice of model parameterizations for DIF detection using these frameworks.
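For reference, the Mantel-Haenszel procedure mentioned above has a compact textbook form; the sketch below is a generic version, not the authors' code (the 2x2 table layout and function names are illustrative):

```python
import math

def mantel_haenszel_or(strata):
    """Mantel-Haenszel common odds ratio for DIF screening.

    Each stratum (examinees matched on total score) is a 2x2 table
    (a, b, c, d) = (ref correct, ref incorrect, focal correct, focal incorrect).
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

def mh_delta(odds_ratio):
    """ETS delta metric: values near 0 indicate negligible DIF."""
    return -2.35 * math.log(odds_ratio)
```

When the within-stratum odds ratios all equal 1 (no DIF), the common odds ratio is 1 and the delta statistic is 0.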

5.
To address the problem of smooth mode transitions in shared autonomous control (SAC) of space robots (SR), this paper first discusses the concept of SAC and its operating modes. It then studies the transitions among SAC operating modes and a mode-transition scheme based on timed Petri nets (TPN). Finally, a TPN-based virtual-environment simulation system for space robots is designed, and computer simulation experiments verify the effectiveness of the SAC operating modes that fuse autonomy with teleoperation, as well as the validity of their mode-transition sequences.

6.
The presence of nuisance dimensionality is a potential threat to the accuracy of results for tests calibrated using a measurement model such as a factor analytic model or an item response theory model. This article describes a mixture group bifactor model to account for the nuisance dimensionality due to a testlet structure as well as the dimensionality due to differences in patterns of responses. The model can be used for testing whether or not an item functions differently across latent groups in addition to investigating the differential effect of local dependency among items within a testlet. An example is presented comparing test speededness results from a conventional factor mixture model, which ignores the testlet structure, with results from the mixture group bifactor model. Results suggested the 2 models treated the data somewhat differently. Analysis of the item response patterns indicated that the 2-class mixture bifactor model tended to categorize omissions as indicating speededness. With the mixture group bifactor model, more local dependency was present in the speeded than in the nonspeeded class. Evidence from a simulation study indicated the Bayesian estimation method used in this study for the mixture group bifactor model can successfully recover generated model parameters for 1- to 3-group models for tests containing testlets.

7.
The standardized generalized dimensionality discrepancy measure and the standardized model‐based covariance are introduced as tools to critique dimensionality assumptions in multidimensional item response models. These tools are grounded in a covariance theory perspective and associated connections between dimensionality and local independence. Relative to their precursors, they allow for dimensionality assessment in a more readily interpretable metric of correlations. A simulation study demonstrates the utility of the discrepancy measures’ application at multiple levels of dimensionality analysis, and compares them to factor analytic and item response theoretic approaches. An example illustrates their use in practice.

8.
Because random assignment is not possible in observational studies, estimates of treatment effects might be biased due to selection on observable and unobservable variables. To strengthen causal inference in longitudinal observational studies of multiple treatments, we present 4 latent growth models for propensity score matched groups, and evaluate their performance with a Monte Carlo simulation study. We found that the 4 models performed similarly with respect to model fit, bias of parameter estimates, Type I error, and power to test the treatment effect. To demonstrate a multigroup latent growth model with dummy treatment indicators, we estimated the effect of students changing schools during elementary school years on their reading and mathematics achievement, using data from the Early Childhood Longitudinal Study Kindergarten Cohort.
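The abstract does not specify the matching algorithm used to form the propensity score matched groups; a common choice is greedy 1:1 nearest-neighbor matching on the estimated propensity score, sketched below (the unit ids, caliper value, and descending-score ordering heuristic are assumptions for illustration):

```python
def greedy_nn_match(treated, controls, caliper=0.1):
    """Greedy 1:1 nearest-neighbor matching on propensity scores.

    treated/controls: dicts mapping unit id -> estimated propensity score.
    Returns {treated_id: control_id}; controls are used without replacement,
    and a pair is formed only if the score gap is within the caliper.
    """
    available = dict(controls)
    matches = {}
    # Match treated units in descending score order (a common heuristic,
    # since high-score treated units have the fewest candidate controls).
    for t_id, t_score in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            matches[t_id] = c_id
            del available[c_id]
    return matches
```

A caliper prevents poor matches: a treated unit with no control within the allowed score gap is simply left unmatched.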

9.
Data collected from questionnaires are often on an ordinal scale. Unweighted least squares (ULS), diagonally weighted least squares (DWLS), and normal-theory maximum likelihood (ML) are commonly used methods to fit structural equation models. Consistency of these estimators requires a correctly specified model structure. In this article, we conduct a simulation study to compare the equation-by-equation polychoric instrumental variable (PIV) estimation with ULS, DWLS, and ML. Accuracy of PIV for the correctly specified model and robustness of PIV for misspecified models are investigated through a confirmatory factor analysis (CFA) model and a structural equation model with ordinal indicators. The effects of sample size and nonnormality of the underlying continuous variables are also examined. The simulation results show that PIV produces robust factor loading estimates in the CFA model and in structural equation models. PIV also produces robust path coefficient estimates in the model where valid instruments are used. However, robustness depends strongly on the validity of the instruments.

10.
Providing learners with opportunities to engage in activities similar to those carried out by scientists was addressed in a web-based research simulation in genetics developed for high school biology students. The research simulation enables learners to apply their genetics knowledge while giving them an opportunity to participate in an authentic genetics study using bioinformatics tools. The main purpose of the study outlined here is to examine how learning with this research simulation influences students’ understanding of genetics, and how students’ approaches to learning with the simulation influence their learning outcomes. Using both quantitative and qualitative procedures, we were able to show that while learning with the simulation, students expanded their understanding of the relationships between molecular mechanisms and phenotype, and refined their understanding of certain genetic concepts. Two types of learners, research-oriented and task-oriented, were identified on the basis of differences in the ways they seized opportunities to recognize the research practices, which in turn influenced their learning outcomes. The research-oriented learners expanded their genetics knowledge more than the task-oriented learners. The learning approach taken by the research-oriented learners enabled them to recognize the epistemology that underlies authentic genetic research, while the task-oriented learners treated the research simulation as a set of simple procedural tasks. Thus, task-oriented learners should be encouraged by their teachers to engage with the steps scientists take while learning genetics through the simulation in a class setting.

11.
This paper describes RTSS, a simulation package with graphical modeling and animation-display capabilities. The software consists of three parts: a simulation kernel, a modeling program, and a results post-processing program, and it can run in client/server mode. RTSS adopts object-oriented techniques to increase flexibility, making it modular, easy to extend, and easy to upgrade. The system model built with RTSS is an open queueing network. RTSS can simulate data-acquisition systems, communication networks, and flexible manufacturing systems at different levels of detail, making it an effective tool for system performance analysis.

12.
Appropriate model specification is fundamental to unbiased parameter estimates and accurate model interpretations in structural equation modeling. Thus detecting potential model misspecification has drawn the attention of many researchers. This simulation study evaluates the efficacy of the Bayesian approach (the posterior predictive checking, or PPC, procedure) under multilevel bifactor model misspecification (i.e., ignoring a specific factor at the within level). The impact of model misspecification on structural coefficients was also examined in terms of bias and power. Results showed that the PPC procedure performed better at detecting multilevel bifactor model misspecification as the misspecification became more severe and the sample size grew. Structural coefficients were increasingly negatively biased at the within level as model misspecification became more severe. Model misspecification at the within level affected the between-level structural coefficient estimates more when data dependency was lower and the number of clusters was smaller. Implications for researchers are discussed.

13.
The cohort growth model (CGM) is a method for estimating the parameters of a latent growth model (LGM) based on cross-sectional data. The CGM models the interindividual differences in the growth rate, and it models how subjects’ growth rate is related to their initial status. We derive model identification for the CGM and illustrate, in a simulation study, that the CGM provides unbiased parameter estimates in most simulation conditions. Based on empirical data we compare the estimates of the CGM with the estimates of the LGM. The results were comparable for both models. Although the estimates of the (co)variances were different, the estimates of both models led to similar conclusions about developmental change. Finally, we discuss the advantages and limitations of the CGM, and we provide recommendations for its use in empirical research.

14.
A modified 3D finite element (3D-FE) model is developed in the FE software environment of LS-DYNA based on the characteristics of the stagger spinning process and actual production conditions. Several important characteristics of the model are proposed, including full model, hexahedral elements, speed boundary mode, full simulation, double-precision mode, and no interference. Modeling procedures and key technologies are compared and summarized: speed mode is superior to displacement mode in simulation accuracy and stability; time truncation is undesirable when analyzing the distribution trend of time-history parameters, because the data must be allowed to reach a stable state; double-precision mode is more suitable for stagger spinning simulation, as truncation error has obvious effects on the accuracy of results; and interference phenomena can lead to obvious oscillation and mutation in the simulation results and significantly reduce the reliability of the simulation. Then, based on the modified model, improvements over currently reported results for roller intervals are made, leading to higher accuracy and reliability in the simulation.

15.
The latent change score framework allows for estimating a variety of univariate trajectory models, such as the no change, linear change, and exponential forms of change, as well as multivariate trajectory models that allow for coupling between two or more constructs. A particularly attractive feature of these models is that it is easy to decompose and interpret aspects of change. One particularly flexible model, the dual change score model, has two components of change: a proportional change component that depends on scores at the previous time point, and a constant change component that is additive. We demonstrate through simulation and an empirical example that in a correctly specified model, the correlation between the proportional change parameter and the mean of the constant change component can approach either −1 or 1, thus complicating interpretation. We provide recommendations and code to aid researchers’ ability to diagnose this issue in their own data.
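The recursion behind the dual change score model's mean trajectory can be written out directly. A small sketch, with symbols following the abstract's description (a constant change component alpha and a proportional change component beta; the function name is illustrative):

```python
def dual_change_trajectory(y0, alpha, beta, n_steps):
    """Simulate the mean trajectory implied by a dual change score model.

    The change between occasions t-1 and t is
        delta_y[t] = alpha + beta * y[t-1]
    i.e., an additive constant component (alpha) plus a proportional
    component (beta) that depends on the score at the previous occasion.
    """
    ys = [y0]
    for _ in range(n_steps):
        ys.append(ys[-1] + alpha + beta * ys[-1])
    return ys
```

Setting beta = 0 recovers pure linear change, while alpha = 0 with negative beta produces exponential decay toward zero, which illustrates how the two components jointly shape the trajectory.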

16.
The adaptation of experimental cognitive tasks into measures that can be used to quantify neurocognitive outcomes in translational studies and clinical trials has become a key component of the strategy to address psychiatric and neurological disorders. Unfortunately, while most experimental cognitive tests have strong theoretical bases, they can have poor psychometric properties, leaving them vulnerable to measurement challenges that undermine their use in applied settings. Item response theory–based computerized adaptive testing has been proposed as a solution but has been limited in experimental and translational research due to its large sample requirements. We present a generalized latent variable model that, when combined with strong parametric assumptions based on mathematical cognitive models, permits the use of adaptive testing without large samples or the need to precalibrate item parameters. The approach is demonstrated using data from a common measure of working memory—the N-back task—collected across a diverse sample of participants. After evaluating dimensionality and model fit, we conducted a simulation study to compare adaptive versus nonadaptive testing. Computerized adaptive testing either made the task 36% more efficient or score estimates 23% more precise, when compared to nonadaptive testing. This proof-of-concept study demonstrates that latent variable modeling and adaptive testing can be used in experimental cognitive testing even with relatively small samples. Adaptive testing has the potential to improve the impact and replicability of findings from translational studies and clinical trials that use experimental cognitive tasks as outcome measures.
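The adaptive item-selection step can be illustrated with the standard maximum-information rule for the 2PL model; this is a generic sketch, not the authors' calibration-free procedure (the item parameters in the example are hypothetical):

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_next_item(theta, items, administered):
    """Pick the not-yet-administered item with maximum information at theta.

    items: list of (a, b) discrimination/difficulty pairs;
    administered: set of indices already given to the examinee.
    """
    candidates = [i for i in range(len(items)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *items[i]))
```

For a 2PL item, information peaks when difficulty matches ability (b = theta), so the rule repeatedly targets items near the current ability estimate.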

17.
Based on Markov decision process (MDP) theory, this paper combines device-to-device mode selection with a finite-horizon discounted MDP model to study network-throughput optimization. It first models the mode-selection problem as an MDP, then derives the optimal mode-selection policy using a finite-horizon backward-induction algorithm, and finally evaluates the resulting policy through extensive simulation experiments. The results show that the MDP-based mode-selection method achieves better throughput performance, yields better mode-selection policies, and obtains higher system throughput.
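Finite-horizon backward induction for a discounted MDP, as used above for the mode-selection policy, has a compact generic form; the state and action encodings below are assumptions for illustration, not the paper's network model:

```python
def backward_induction(states, actions, reward, trans, horizon, gamma):
    """Solve a finite-horizon discounted MDP by backward induction.

    reward[(s, a)] is the immediate reward; trans[(s, a)] maps next states
    to probabilities. Returns the stage-0 value function and a list of
    per-stage optimal policies, computed from the terminal stage backwards.
    """
    V = {s: 0.0 for s in states}          # terminal values V_T = 0
    policy = [None] * horizon
    for t in reversed(range(horizon)):
        newV, pi = {}, {}
        for s in states:
            best_a, best_q = None, float("-inf")
            for a in actions:
                q = reward[(s, a)] + gamma * sum(
                    p * V[s2] for s2, p in trans[(s, a)].items())
                if q > best_q:
                    best_a, best_q = a, q
            newV[s], pi[s] = best_q, best_a
        V, policy[t] = newV, pi
    return V, policy
```

Each backward pass applies one Bellman update, so the total cost is O(horizon × |S|² × |A|) for dense transitions.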

18.
Bayesian methods are becoming very popular despite some practical difficulties in implementation. To assist in the practical application of Bayesian methods, we show how to implement Bayesian analysis with WinBUGS as part of a standard set of SAS routines. This implementation procedure is first illustrated by fitting a multiple regression model and then a linear growth curve model. A third example is also provided to demonstrate how to iteratively run WinBUGS inside SAS for Monte Carlo simulation studies. The SAS code used in this study is easily extended to accommodate many other models with only slight modification. This interface can be of practical benefit in many aspects of Bayesian methods, because it allows SAS users to benefit from the implementation of Bayesian estimation and allows WinBUGS users to benefit from the data-processing routines available in SAS.

19.
A process model of writing development across the life span
In this article, we provide an overview of writing development from a product perspective and from a process perspective. Then we discuss modifications of the most influential process model of skilled adult writing to explain beginning and developing writing, including a proposed developmental sequence of the emergence of cognitive processes in writing. Next we report the results of two recent dissertations by the second and third authors supervised by the first author aimed toward contrasting developmental issues: (a) specifying the algorithms or rules of thumb beginning and developing writers may use during on-line planning; and (b) investigating the further development of writing processes among skilled adult writers. In the first study, development was conceptualized as a linear process across age groups. In the second study, development was conceptualized as a horizontal process within skilled adult writers who expanded their expertise. Finally, we consider the developmental constraints and the instructional constraints on writing development and argue for a model of writing development in which endogenous and exogenous process variables interact to determine the outcome of the writing development process.

20.
It is now required for teachers to incorporate computational thinking (CT) into their science classes. Our research modifies the existing structure of a science methods course for preservice teachers to include CT via modeling and simulations. In the first study, preservice teachers were introduced to the basics of coding through an Hour of Code tutorial, followed by an exercise where they programmed an animated model of the solar system using Scratch. In the second study, we created a web-based simulation to visualize Newton’s second law of motion (F = ma) with a dynamic graph feature. The simulation is a race between two cars with interactive settings that the user can change, such as the mass and force of each car. Results from both studies reveal that after completing the exercises, preservice teachers learned the material effectively, felt that CT exercises would be beneficial in K-8 education, and plan to incorporate CT into their future classrooms.
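The physics behind the described race simulation reduces to constant-force kinematics under F = ma; a minimal sketch of such a two-car race (the parameter names and the semi-implicit Euler integration scheme are assumptions, not details from the study):

```python
def race(mass_a, force_a, mass_b, force_b, duration, dt=0.001):
    """Simulate two cars accelerating from rest under constant force.

    Newton's second law gives each car's acceleration a = F / m; positions
    are advanced by semi-implicit Euler steps. Returns the final positions
    (x_a, x_b) after the given duration.
    """
    xa = va = xb = vb = 0.0
    for _ in range(int(duration / dt)):
        va += (force_a / mass_a) * dt   # update velocity from a = F / m
        xa += va * dt
        vb += (force_b / mass_b) * dt
        xb += vb * dt
    return xa, xb
```

With equal forces, the lighter car accelerates harder and pulls ahead; the numeric result tracks the analytic x = F t² / (2m) closely for a small time step.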


Copyright©北京勤云科技发展有限公司  京ICP备09084417号