Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
A procedure for evaluating candidate auxiliary variable correlations with response variables in incomplete data sets is outlined. The method provides point and interval estimates of the outcome-residual correlations with potentially useful auxiliaries, and of the bivariate correlations of outcome(s) with the latter variables. Auxiliary variables found in this way can considerably enhance the plausibility of the popular missing at random (MAR) assumption if included in ensuing maximum likelihood analyses, or can alternatively be incorporated in imputation models for subsequent multiple imputation analyses. The approach can be particularly helpful in empirical settings where violations of the MAR assumption are suspected, as is the case in many longitudinal studies, and is illustrated with data from cognitive aging research.
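The screening idea in item 1 can be sketched with simulated data (the model, coefficients, and missingness rule below are illustrative assumptions, not from the study): an auxiliary that correlates both with the outcome and with the outcome's residuals is a good candidate for inclusion under MAR.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                                   # analysis predictor
a = rng.normal(size=n)                                   # candidate auxiliary
y = 0.5 * x + 0.6 * a + rng.normal(scale=0.6, size=n)    # outcome

# Missingness in y depends on the auxiliary, so y is MAR given a
miss = rng.random(n) < 1 / (1 + np.exp(-(a - 1)))
y_obs = np.where(miss, np.nan, y)
obs = ~np.isnan(y_obs)

# Bivariate correlation of the auxiliary with the observed outcome
r_ya = np.corrcoef(y_obs[obs], a[obs])[0, 1]

# Outcome-residual correlation: residualize y on x among observed cases
beta = np.polyfit(x[obs], y_obs[obs], 1)
resid = y_obs[obs] - np.polyval(beta, x[obs])
r_resid_a = np.corrcoef(resid, a[obs])[0, 1]
```

Both correlations being substantial flags `a` as an auxiliary worth carrying into the ML or imputation model.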

2.
Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups can be retained for analysis even if only 1 member of a group contributes data. Statistical inference is based on the assumption that data are missing completely at random or missing at random. Importantly, whether or not data are missing is assumed to be independent of the missing data. A saturated correlates model that incorporates correlates of the missingness or the missing data into an analysis and multiple imputation that might also use such correlates offer advantages over the standard implementation of SEM when data are not missing at random because these approaches could result in a data analysis problem for which the missingness is ignorable. This article considers these approaches in an analysis of family data to assess the sensitivity of parameter estimates and statistical inferences to assumptions about missing data, a strategy that could be easily implemented using SEM software.

3.
Small samples are common in growth models due to the financial and logistical difficulties of following people longitudinally. For similar reasons, longitudinal studies often contain missing data. Though full information maximum likelihood (FIML) is popular for accommodating missing data, the few studies in this area have found that FIML tends to perform poorly with small-sample growth models. This report demonstrates that the fault lies not with how FIML accommodates missingness but rather with maximum likelihood estimation itself. We discuss how the less popular restricted likelihood form of FIML, along with small-sample-appropriate methods, yields trustworthy estimates for growth models with small samples and missing data. That is, previously reported small-sample issues with FIML are attributable to the finite-sample bias of maximum likelihood estimation, not to direct likelihood. Estimation issues pertinent to joint multiple imputation and predictive mean matching are also discussed.
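Item 3's point about finite-sample ML bias versus restricted likelihood has a familiar analogue: the ML variance estimator divides by n and is biased downward in small samples, while the REML-style correction divides by n − 1. A minimal simulation (sample size and variance are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 5, 20000          # small sample, many replications
true_var = 4.0

ml_est, reml_est = [], []
for _ in range(reps):
    x = rng.normal(0.0, np.sqrt(true_var), size=n)
    dev2 = np.sum((x - x.mean()) ** 2)
    ml_est.append(dev2 / n)          # ML: ignores that the mean was estimated
    reml_est.append(dev2 / (n - 1))  # REML analogue: corrects the downward bias

ml_bias = np.mean(ml_est) - true_var      # ≈ -true_var / n = -0.8
reml_bias = np.mean(reml_est) - true_var  # ≈ 0
```

The same mechanism, estimating variance components without accounting for estimated fixed effects, drives the small-sample growth-model bias the abstract describes.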

4.
Respondent attrition is a common problem in national longitudinal panel surveys. To make full use of the data, weights are provided to account for attrition. Weight adjustments are based on sampling design information and data from the base year; information from subsequent waves is typically not utilized. Alternative methods to address bias from nonresponse are full information maximum likelihood (FIML) and multiple imputation (MI). The effects of these methods on bias in growth parameter estimates are compared via a simulation study. The results indicate that caution is needed when utilizing panel weights in the presence of missing data, and that methods like FIML and MI, which are not as susceptible to the omission of important auxiliary variables, should be considered.

5.
The current research demonstrates the effectiveness of using structural equation modeling (SEM) for the investigation of subgroup differences with sparse data designs where not every student takes every item. Simulations were conducted that reflected missing data structures like those encountered in large survey assessment programs (e.g., National Assessment of Educational Progress). A maximum likelihood method of estimation was implemented that allowed all data to be used without performing any imputation. A multiple indicators multiple causes (MIMIC) model was used to examine group differences. There was no detriment to the estimation of the MIMIC model parameters under sparse data design conditions when compared to the design without missing data. The overall size of samples had more influence on the variability of estimates than did the data design.

6.
Myriad approaches for handling missing data exist in the literature. However, few studies have investigated the tenability and utility of these approaches when used with intensive longitudinal data. In this study, we compare and illustrate two multiple imputation (MI) approaches for coping with missingness in fitting multivariate time-series models under different missing data mechanisms. They include a full MI approach, in which all dependent variables and covariates are imputed simultaneously, and a partial MI approach, in which missing covariates are imputed with MI, whereas missingness in the dependent variables is handled via full information maximum likelihood estimation. We found that under correctly specified models, partial MI produces the best overall estimation results. We discuss the strengths and limitations of the two MI approaches, and demonstrate their use with an empirical data set in which children’s influences on parental conflicts are modeled as covariates over the course of 15 days (Schermerhorn, Chow, & Cummings, 2010).

7.
Methods of uniform differential item functioning (DIF) detection have been extensively studied in the complete data case. However, less work has been done examining the performance of these methods when missing item responses are present. Research that has been done in this regard appears to indicate that treating missing item responses as incorrect can lead to inflated Type I error rates (false detection of DIF). The current study builds on this prior research by investigating the utility of multiple imputation methods for missing item responses, in conjunction with standard DIF detection techniques. Results of the study support the use of multiple imputation for dealing with missing item responses. The article concludes with a discussion of these results for multiple imputation in conjunction with other research findings supporting its use in the context of item parameter estimation with missing data.
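The contrast in item 7 between scoring missing responses as incorrect and multiply imputing them can be illustrated with a toy Bernoulli item (a deliberately simplified sketch under MCAR, not the study's IRT-based imputation):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
p_true = 0.7                        # true proportion correct on an item
resp = rng.binomial(1, p_true, size=n).astype(float)

# Responses go missing completely at random
miss = rng.random(n) < 0.25
resp_obs = resp.copy()
resp_obs[miss] = np.nan

# Treating missing as incorrect biases the proportion-correct downward
p_incorrect_rule = np.nan_to_num(resp_obs, nan=0.0).mean()

# Simple multiple imputation: draw from Bernoulli(p_hat) in each imputed set
p_hat = np.nanmean(resp_obs)
m = 20
pooled = []
for _ in range(m):
    filled = resp_obs.copy()
    filled[miss] = rng.binomial(1, p_hat, size=miss.sum())
    pooled.append(filled.mean())
p_mi = np.mean(pooled)
```

The zero-scoring rule distorts the item statistic that DIF procedures compare across groups, which is one route to the inflated Type I error rates the abstract mentions; imputation preserves it.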

8.
The purpose of this study is to investigate the effects of missing data techniques in longitudinal studies under diverse conditions. A Monte Carlo simulation examined the performance of 3 missing data methods in latent growth modeling: listwise deletion (LD), maximum likelihood estimation using the expectation-maximization algorithm with a nonnormality correction (robust ML), and the pairwise asymptotically distribution-free method (pairwise ADF). The effects of 3 independent variables (sample size, missing data mechanism, and distribution shape) were investigated on convergence rate, parameter and standard error estimation, and model fit. The results favored robust ML over LD and pairwise ADF in almost all respects. The exceptions included convergence rates under the most severe nonnormality in the missing not at random (MNAR) condition and recovery of standard error estimates across sample sizes. The results also indicate that nonnormality, small sample size, MNAR, and multicollinearity might adversely affect convergence rate and the validity of statistical inferences concerning parameter estimates and model fit statistics.
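A side note on the LD condition in item 8: even modest per-variable missingness compounds across variables, so listwise deletion can discard a large share of cases. A quick sketch with MCAR data (the 5% rate and 10 variables are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 10000, 10
data = rng.normal(size=(n, k))

# 5% of each variable missing, independently (MCAR)
data[rng.random((n, k)) < 0.05] = np.nan

# Listwise deletion keeps only rows with no missing values at all
complete_rows = ~np.isnan(data).any(axis=1)
retained = complete_rows.mean()   # expected ≈ 0.95 ** 10 ≈ 0.60
```

Roughly 40% of cases are lost even though no single variable is missing more than 5% of its values.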

9.
We consider estimation of the mean of the response variable in a linear model when responses are missing at random. Estimators of the response mean are derived from (a) the completely observed sample data, (b) the "completed" sample obtained by linear regression imputation, and (c) the "completed" sample obtained by inverse-probability-weighted imputation, and the asymptotic normality of each estimator is established.
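The three estimators in item 9 can be sketched in a small simulation (the linear model coefficients and the logistic observation model are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)     # linear model; E[y] = 1.0

# Response is MAR: probability of observing y depends on x
p_obs = 1 / (1 + np.exp(-(0.5 + x)))
obs = rng.random(n) < p_obs

# (a) complete-case mean: biased, because observed cases have larger x
mean_cc = y[obs].mean()

# (b) linear regression imputation: fill missing y with fitted values
b1, b0 = np.polyfit(x[obs], y[obs], 1)
y_imp = np.where(obs, y, b0 + b1 * x)
mean_reg = y_imp.mean()

# (c) inverse-probability weighting (Hajek form), using the true
# observation probabilities for simplicity
mean_ipw = np.sum(y[obs] / p_obs[obs]) / np.sum(1 / p_obs[obs])
```

Both the regression-imputation and IPW estimators recover the true mean, while the complete-case mean does not; the article's contribution is the asymptotic normality of such estimators, which this sketch does not cover.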

10.
Multivariate analysis of variance (MANOVA) is widely used in educational research to compare means on multiple dependent variables across groups. Researchers faced with the problem of missing data often use multiple imputation of values in place of the missing observations. This study compares the performance of 2 methods for combining p values in the context of a MANOVA with the typical default for dealing with missing data: listwise deletion. When data are missing at random, the new methods maintained the nominal Type I error rate and had power comparable to the complete data condition. When 40% of the data were missing completely at random, the Type I error rates for the new methods were inflated, but not at lower percentages of missingness.

11.
The usefulness of item response theory (IRT) models depends, in large part, on the accuracy of item and person parameter estimates. For the standard 3-parameter logistic model, for example, these parameters include the item parameters of difficulty, discrimination, and pseudo-chance, as well as the person ability parameter. Several factors impact traditional marginal maximum likelihood (ML) estimation of IRT model parameters, including sample size, with smaller samples generally being associated with lower parameter estimation accuracy and inflated standard errors for the estimates. Given this deleterious impact of small samples on IRT model performance, estimation becomes difficult with low-incidence populations, where these techniques might prove particularly useful, especially for more complex models. Recently, a Pairwise estimation method for Rasch model parameters has been suggested for use with missing data, and it may also hold promise for parameter estimation with small samples. This simulation study compared the item difficulty parameter estimation accuracy of ML with the Pairwise approach to ascertain the benefits of the latter method. The results support the use of the Pairwise method with small samples, particularly for obtaining item location estimates.
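The Pairwise approach referenced in item 11 exploits a Rasch property: conditional on exactly one item of a pair being answered correctly, the pattern probabilities are free of ability, so a difficulty difference can be read off the pairwise counts. A sketch with simulated data (difficulties, sample size, and the two-item focus are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
b = np.array([-1.0, 0.0, 1.0])             # true item difficulties
theta = rng.normal(size=n)                 # person abilities

# Simulate Rasch responses: P(correct) = logistic(theta - b)
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
x = (rng.random((n, 3)) < p).astype(int)

# Pairwise conditional estimate for items 0 and 2: among persons with
# exactly one of the pair correct, the odds of the (1,0) pattern versus
# (0,1) equal exp(b_2 - b_0), regardless of theta
n10 = np.sum((x[:, 0] == 1) & (x[:, 2] == 0))
n01 = np.sum((x[:, 0] == 0) & (x[:, 2] == 1))
diff_est = np.log(n10 / n01)               # true value: b[2] - b[0] = 2.0
```

Because the ability term cancels, the estimate needs no distributional assumption about theta and remains usable when many responses are missing by design.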

12.
The present study evaluated the multiple imputation method, a procedure that is similar to the one suggested by Li and Lissitz (2004), and compared the performance of this method with that of the bootstrap method and the delta method in obtaining the standard errors for the estimates of the parameter scale transformation coefficients in item response theory (IRT) equating in the context of the common‐item nonequivalent groups design. Two different estimation procedures for the variance‐covariance matrix of the IRT item parameter estimates, which were used in both the delta method and the multiple imputation method, were considered: empirical cross‐product (XPD) and supplemented expectation maximization (SEM). The results of the analyses with simulated and real data indicate that the multiple imputation method generally produced very similar results to the bootstrap method and the delta method in most of the conditions. The differences between the estimated standard errors obtained by the methods using the XPD matrices and the SEM matrices were very small when the sample size was reasonably large. When the sample size was small, the methods using the XPD matrices appeared to yield slight upward bias for the standard errors of the IRT parameter scale transformation coefficients.

13.
Many large-scale educational surveys have moved from linear form designs to multistage testing (MST) designs. One advantage of MST is that it can provide more accurate latent trait (θ) estimates using fewer items than required by linear tests. However, MST generates incomplete response data by design; hence, questions remain as to how to calibrate items using the incomplete data from an MST design. Further complications arise when there are multiple correlated subscales per test, and when items from different subscales need to be calibrated according to their respective score-reporting metrics. The current calibration-per-subscale method produces biased item parameters, and no method has been available for resolving this challenge. Deriving from the missing data principle, we show that when calibrating all items together, Rubin's ignorability assumption is satisfied, such that traditional single-group calibration is sufficient. When calibrating items per subscale, we propose a simple modification to the current calibration-per-subscale method that helps reinstate the missing-at-random assumption and therefore corrects for the estimation bias that is otherwise present. Three mainstream calibration methods are discussed in the context of MST: marginal maximum likelihood estimation, the expectation-maximization method, and fixed parameter calibration. An extensive simulation study is conducted, and a real data example from NAEP is analyzed to provide convincing empirical evidence.

14.
Maximum likelihood algorithms for use with missing data are becoming commonplace in microcomputer packages. Specifically, 3 maximum likelihood algorithms are currently available in existing software packages: the multiple-group approach, full information maximum likelihood estimation, and the EM algorithm. Although they belong to the same family of estimators, confusion appears to exist over the differences among the 3 algorithms. This article provides a comprehensive, nontechnical overview of the 3 maximum likelihood algorithms. Multiple imputation, which is frequently used in conjunction with the EM algorithm, is also discussed.
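A minimal sketch of the EM algorithm from item 14, for the textbook special case of a bivariate normal with one variable missing at random given the other (all parameter values are illustrative, and this is not the article's general treatment):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
x = rng.normal(size=n)
y = 1.0 + 0.8 * x + rng.normal(scale=0.5, size=n)
miss = rng.random(n) < 1 / (1 + np.exp(-x))   # y is MAR given x
y_obs = np.where(miss, np.nan, y)

# EM for the bivariate-normal parameters with y partly missing
mu_x, s_xx = x.mean(), x.var()
mu_y, s_yy, s_xy = np.nanmean(y_obs), np.nanvar(y_obs), 0.0
for _ in range(50):
    beta = s_xy / s_xx
    resid = s_yy - beta * s_xy
    # E-step: expected y and y^2 for missing cases, given x
    ey = np.where(miss, mu_y + beta * (x - mu_x), y_obs)
    ey2 = np.where(miss, ey ** 2 + resid, y_obs ** 2)
    # M-step: update the sufficient statistics
    mu_y = ey.mean()
    s_yy = ey2.mean() - mu_y ** 2
    s_xy = (x * ey).mean() - mu_x * mu_y

mean_cc = np.nanmean(y_obs)   # complete-case mean, biased under MAR
mean_em = mu_y                # EM (ML) estimate, approximately unbiased
```

The E-step fills in conditional expectations of the sufficient statistics rather than single values, which is what distinguishes EM from naive single imputation.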

15.
Detection of differential item functioning (DIF) on items intentionally constructed to favor one group over another was investigated on item parameter estimates obtained from two item response theory-based computer programs, LOGIST and BILOG. Signed- and unsigned-area measures based on joint maximum likelihood estimation, marginal maximum likelihood estimation, and two marginal maximum a posteriori estimation procedures were compared with each other to determine whether detection of DIF could be improved using prior distributions. Results indicated that item parameter estimates obtained using either prior condition were less deviant than when priors were not used. Differences in detection of DIF appeared to be related to item parameter estimation condition and to some extent to sample size.

16.
Determining the number of factors in exploratory factor analysis is arguably the most crucial decision a researcher faces when conducting the analysis. While several simulation studies exist that compare various so-called factor retention criteria under different data conditions, little is known about the impact of missing data on this process. Hence, in this study, we evaluated the performance of different factor retention criteria—the Factor Forest, parallel analysis based on a principal component analysis as well as parallel analysis based on the common factor model, and the comparison data approach—in combination with different missing data methods, namely an expectation-maximization algorithm called Amelia, predictive mean matching, and random forest imputation within the multiple imputation by chained equations (MICE) framework, as well as pairwise deletion, with regard to their accuracy in determining the number of factors when data are missing. Data were simulated for different sample sizes, numbers of factors, numbers of manifest variables (indicators), between-factor correlations, missing data mechanisms, and proportions of missing values. In the majority of conditions and for all factor retention criteria except the comparison data approach, the missing data mechanism had little impact on accuracy, and pairwise deletion performed comparably to the more sophisticated imputation methods. In some conditions, however, especially small-sample cases and when comparison data were used to determine the number of factors, random forest imputation was preferable to other missing data methods. Accordingly, depending on data characteristics and the selected factor retention criterion, choosing an appropriate missing data method is crucial to obtaining a valid estimate of the number of factors to extract.
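Parallel analysis, one of the retention criteria in item 16, can be sketched as follows (a PCA-based variant on simulated complete data; the 3-factor structure, loadings, and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 500, 9

# Simulate a 3-factor structure: each factor drives 3 indicators
f = rng.normal(size=(n, 3))
load = np.zeros((3, k))
for j in range(3):
    load[j, 3 * j:3 * j + 3] = 0.8
data = f @ load + rng.normal(scale=0.6, size=(n, k))

# Parallel analysis: retain components whose observed eigenvalues exceed
# the mean eigenvalues from random (uncorrelated) data of the same shape
obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
reps = 100
rand_eig = np.zeros((reps, k))
for r in range(reps):
    rnd = rng.normal(size=(n, k))
    rand_eig[r] = np.linalg.eigvalsh(np.corrcoef(rnd, rowvar=False))[::-1]

n_factors = int(np.sum(obs_eig > rand_eig.mean(axis=0)))
```

With missing data, the correlation matrix fed into this procedure would first be obtained via pairwise deletion or one of the imputation methods the study compares, which is exactly where their accuracy differences arise.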

17.
A 2-stage procedure for estimation and testing of observed measure correlations in the presence of missing data is discussed. The approach uses maximum likelihood for estimation and the false discovery rate concept for correlation testing. The method can be used in initial exploration-oriented empirical studies with missing data, where it is of interest to estimate manifest variable interrelationship indexes and test hypotheses about their population values. The procedure is applicable also with violations of the underlying missing at random assumption, via inclusion of auxiliary variables. The outlined approach is illustrated with data from an aging research study.
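The false discovery rate step in item 17 is commonly operationalized with the Benjamini–Hochberg procedure; a self-contained sketch (the p values below are illustrative, standing in for correlation tests):

```python
import numpy as np

def bh_fdr(pvals, q=0.05):
    """Benjamini-Hochberg step-up: return a boolean rejection mask."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m       # i/m * q for sorted p-values
    below = p[order] <= thresh
    # reject all hypotheses up to the largest index meeting its threshold
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Mix of small (likely non-null) and large (likely null) p-values
pvals = np.array([0.0001, 0.0004, 0.002, 0.31, 0.48, 0.62, 0.75, 0.90])
reject = bh_fdr(pvals, q=0.05)
```

Controlling the FDR rather than the familywise error rate keeps power reasonable when many correlations are screened at once, which suits the exploration-oriented setting the abstract describes.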

18.
The examinee‐selected‐item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set of items (e.g., choose one item to respond from a pair of items), always yields incomplete data (i.e., only the selected items are answered and the others have missing data) that are likely nonignorable. Therefore, using standard item response theory models, which assume ignorable missing data, can yield biased parameter estimates so that examinees taking different sets of items to answer cannot be compared. To solve this fundamental problem, in this study the researchers utilized the specific objectivity of Rasch models by adopting the conditional maximum likelihood estimation (CMLE) and pairwise estimation (PE) methods to analyze ESI data, and conducted a series of simulations to demonstrate the advantages of the CMLE and PE methods over traditional estimation methods in recovering item parameters in ESI data. An empirical data set obtained from an experiment on the ESI design was analyzed to illustrate the implications and applications of the proposed approach to ESI data.

19.
This article compares maximum likelihood and Bayesian estimation of the correlated trait–correlated method (CT–CM) confirmatory factor model for multitrait–multimethod (MTMM) data. In particular, Bayesian estimation with minimally informative prior distributions—that is, prior distributions that prescribe equal probability across the known mathematical range of a parameter—is investigated as a source of information to aid convergence. Results from a simulation study indicate that Bayesian estimation with minimally informative priors produces admissible solutions more often than maximum likelihood estimation (100.00% for Bayesian estimation, 49.82% for maximum likelihood). The extra convergence does not come at the cost of parameter accuracy; Bayesian parameter estimates showed comparable bias and better efficiency compared to maximum likelihood estimates. The results are echoed via 2 empirical examples. Hence, Bayesian estimation with minimally informative priors outperforms maximum likelihood in yielding admissible solutions of the CT–CM model for MTMM data.

20.
This paper reviews methods for handling missing data in a research study. Many researchers use ad hoc methods such as complete case analysis, available case analysis (pairwise deletion), or single-value imputation. Though these methods are easily implemented, they require assumptions about the data that rarely hold in practice. Model-based methods such as maximum likelihood using the EM algorithm and multiple imputation hold more promise for dealing with difficulties caused by missing data. While model-based methods require specialized computer programs and assumptions about the nature of the missing data, these methods are appropriate for a wider range of situations than the more commonly used ad hoc methods. The paper provides an illustration of the methods using data from an intervention study designed to increase students’ ability to control their asthma symptoms.
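Multiple imputation, recommended in item 20, pools per-imputation results with Rubin's rules; a minimal sketch (the estimates and squared standard errors below are illustrative):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool m multiply-imputed estimates via Rubin's rules."""
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    m = len(est)
    qbar = est.mean()               # pooled point estimate
    ubar = var.mean()               # within-imputation variance
    b = est.var(ddof=1)             # between-imputation variance
    t = ubar + (1 + 1 / m) * b      # total variance of the pooled estimate
    return qbar, t

# Point estimates and squared standard errors from m = 5 imputed data sets
qbar, t = pool_rubin([2.1, 2.3, 1.9, 2.2, 2.0],
                     [0.04, 0.05, 0.04, 0.05, 0.04])
```

The between-imputation component b is what propagates the uncertainty due to missingness into the pooled standard error, which single-value imputation ignores.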

