Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
The authors performed a Monte Carlo simulation to empirically investigate the robustness and power of 4 methods in testing mean differences for 2 independent groups under conditions in which 2 populations may not demonstrate the same pattern of nonnormality. The approaches considered were the t test, Wilcoxon rank-sum test, Welch-James test with trimmed means and Winsorized variances, and a nonparametric bootstrap test. Results showed that the Wilcoxon rank-sum test and Welch-James test with trimmed means and Winsorized variances were not robust in terms of Type I error control when the 2 populations showed different patterns of nonnormality. The nonparametric bootstrap test provided power advantages over the t test. The authors discuss other results from the simulation study and provide recommendations.
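A minimal sketch of this kind of comparison, assuming SciPy 1.7+ (its `trim` argument to `ttest_ind` gives a Yuen-type trimmed test, used here as a stand-in for the Welch-James approach) and a simple pooled-resampling bootstrap; the population shapes, sample sizes, and replication counts are illustrative, not the authors' settings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def bootstrap_mean_test(x, y, n_boot=999):
    """Two-sided pooled-resampling bootstrap test of equal means."""
    obs = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_boot):
        bx = rng.choice(pooled, size=x.size, replace=True)
        by = rng.choice(pooled, size=y.size, replace=True)
        hits += abs(bx.mean() - by.mean()) >= abs(obs)
    return (hits + 1) / (n_boot + 1)

def rejection_rates(n1=30, n2=30, reps=500, alpha=0.05):
    rejections = dict.fromkeys(["t", "wilcoxon", "trimmed t", "bootstrap"], 0)
    for _ in range(reps):
        x = rng.exponential(1.0, n1) - 1.0   # skewed population, mean 0
        y = rng.normal(0.0, 1.0, n2)         # normal population, mean 0
        rejections["t"] += stats.ttest_ind(x, y).pvalue < alpha
        rejections["wilcoxon"] += stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha
        rejections["trimmed t"] += stats.ttest_ind(x, y, trim=0.2).pvalue < alpha
        rejections["bootstrap"] += bootstrap_mean_test(x, y) < alpha
    return {k: round(v / reps, 3) for k, v in rejections.items()}

print(rejection_rates())
```

Swapping other skewed or heavy-tailed generators into `x` and `y` reproduces the "different patterns of nonnormality" condition described above.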

2.
Recent advances in testing mediation have found that certain resampling methods and tests based on the mathematical distribution of the product of 2 normal random variables substantially outperform the traditional z test. However, these studies have primarily focused only on models with a single mediator and 2 component paths. To address this limitation, a simulation was conducted to evaluate these alternative methods in a more complex path model with multiple mediators and indirect effects with 2 and 3 paths. Methods for testing contrasts of 2 effects were also evaluated. The simulation included 1 exogenous independent variable, 3 mediators, and 2 outcomes, and varied sample size, number of paths in the mediated effects, the test used to evaluate effects, effect sizes for each path, and the value of the contrast. Confidence intervals were used to evaluate the power and Type I error rate of each method and were examined for coverage and bias. The bias-corrected bootstrap had the least biased confidence intervals, the greatest power to detect nonzero effects and contrasts, and the most accurate overall Type I error. All tests had less power to detect 3-path effects and less accurate Type I error rates compared with 2-path effects. Confidence intervals were biased for mediated effects, as found in previous studies. Results for contrasts did not vary greatly by test, although resampling approaches had somewhat greater power and might be preferable because of ease of use and flexibility.
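As a hedged illustration of the best-performing method above, here is a bias-corrected (no acceleration) percentile bootstrap interval for a single two-path indirect effect a*b; the toy single-mediator model and variable names are assumptions, not the study's multiple-mediator design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                                # X -> M path
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]          # M -> Y path, controlling for X
    return a * b

def bias_corrected_ci(x, m, y, n_boot=2000, level=0.95):
    n = x.size
    est = indirect_effect(x, m, y)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        boots[i] = indirect_effect(x[idx], m[idx], y[idx])
    # bias-correction constant from the share of bootstrap estimates below the sample estimate
    prop = np.clip((boots < est).mean(), 1e-4, 1 - 1e-4)
    z0 = stats.norm.ppf(prop)
    z = stats.norm.ppf([(1 - level) / 2, (1 + level) / 2])
    lo, hi = stats.norm.cdf(2 * z0 + z)                       # adjusted percentile levels
    return np.quantile(boots, [lo, hi])

# toy data with a nonzero two-path mediated effect
n = 200
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)
y = 0.4 * m + rng.normal(size=n)
print(bias_corrected_ci(x, m, y))
```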

3.
Inspection of differential item functioning (DIF) in translated test items can be informed by graphical comparisons of item response functions (IRFs) across translated forms. Due to the many forms of DIF that can emerge in such analyses, it is important to develop statistical tests that can confirm various characteristics of DIF when present. Traditional nonparametric tests of DIF (Mantel-Haenszel, SIBTEST) are not designed to test for the presence of nonuniform or local DIF, while common probability difference (P-DIF) tests (e.g., SIBTEST) do not optimize power in testing for uniform DIF, and thus may be less useful in the context of graphical DIF analyses. In this article, modifications of three alternative nonparametric statistical tests for DIF, Fisher's χ² test, Cochran's Z test, and Goodman's U test (Marascuilo & Slaughter, 1981), are investigated for these purposes. A simulation study demonstrates the effectiveness of a regression correction procedure in improving the statistical performance of the tests when using an internal test score as the matching criterion. Simulation power and real data analyses demonstrate the unique information provided by these alternative methods compared to SIBTEST and Mantel-Haenszel in confirming various forms of DIF in translated tests.
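For reference, a small sketch (not from the article) of the Mantel-Haenszel comparison statistic mentioned above, computed by stratifying on a total-score matching criterion; the variable names and toy data are assumptions, and the formula is the standard continuity-corrected MH chi-square:

```python
import numpy as np
from scipy.stats import chi2

def mantel_haenszel_dif(item, group, total):
    """Continuity-corrected MH chi-square (df = 1) for one dichotomous studied item.
    item: 0/1 responses; group: 0 = reference, 1 = focal; total: matching score."""
    observed = expected = variance = 0.0
    for s in np.unique(total):
        mask = total == s
        a = np.sum((group[mask] == 0) & (item[mask] == 1))   # reference, correct
        b = np.sum((group[mask] == 0) & (item[mask] == 0))
        c = np.sum((group[mask] == 1) & (item[mask] == 1))   # focal, correct
        d = np.sum((group[mask] == 1) & (item[mask] == 0))
        n = a + b + c + d
        if n < 2:
            continue
        observed += a
        expected += (a + b) * (a + c) / n
        variance += (a + b) * (c + d) * (a + c) * (b + d) / (n ** 2 * (n - 1))
    stat = (abs(observed - expected) - 0.5) ** 2 / variance
    return stat, chi2.sf(stat, df=1)

# toy usage with no DIF built in
rng = np.random.default_rng(9)
group = rng.integers(0, 2, 2000)
total = rng.integers(0, 21, 2000)                            # crude matching score
item = (rng.random(2000) < 0.3 + 0.02 * total).astype(int)
print(mantel_haenszel_dif(item, group, total))
```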

4.
The authors compared the Type I error rate and the power to detect differences in slopes and additive treatment effects of analysis of covariance (ANCOVA) and randomized block (RB) designs with a Monte Carlo simulation. For testing differences in slopes, 3 methods were compared: the test of slopes from ANCOVA, the omnibus Block × Treatment interaction, and the linear component of the Block × Treatment interaction of RB. In the test for adjusted means, 2 variations of both ANCOVA and RB were used. The power of the omnibus test of the interaction decreased dramatically as the number of blocks used increased and was always considerably smaller than the specific test of differences in slopes found in ANCOVA. Tests for means when there were concomitant differences in slopes showed that only ANCOVA uniformly controlled Type I error under all configurations of design variables. The most powerful option in almost all simulations for tests of both slopes and means was ANCOVA.
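A hedged sketch of the two ANCOVA tests being compared above, using statsmodels' formula interface; the data generation and variable names (group, x, y) are illustrative assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
n = 90
df = pd.DataFrame({"group": np.repeat(["t1", "t2", "t3"], n // 3),
                   "x": rng.normal(size=n)})
df["y"] = 0.5 * df["x"] + rng.normal(size=n)     # equal slopes, no treatment effect

equal_slopes = smf.ols("y ~ C(group) + x", data=df).fit()
interaction = smf.ols("y ~ C(group) * x", data=df).fit()

# F test for differences in slopes (the ANCOVA heterogeneity-of-regression test)
print(anova_lm(equal_slopes, interaction))
# Test of adjusted treatment means, appropriate when slopes are homogeneous
print(anova_lm(equal_slopes, typ=2))
```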

5.
When cut scores for classifications occur on the total score scale, popular methods for estimating classification accuracy (CA) and classification consistency (CC) require assumptions about a parametric form of the test scores or about a parametric response model, such as item response theory (IRT). This article develops an approach to estimate CA and CC nonparametrically by replacing the role of the parametric IRT model in Lee's classification indices with a modified version of Ramsay's kernel-smoothed item response functions. The performance of the nonparametric CA and CC indices is tested in simulation studies under various conditions with different generating IRT models, test lengths, and ability distributions. The nonparametric approach to CA often outperforms Lee's method and Livingston and Lewis's method, showing robustness to nonnormality in the simulated ability. The nonparametric CC index performs similarly to Lee's method and outperforms Livingston and Lewis's method when the ability distributions are nonnormal.
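The core nonparametric ingredient is the kernel-smoothed item response function; the sketch below is a generic Gaussian Nadaraya-Watson smoother over an ability proxy, intended only to illustrate the idea, not Ramsay's implementation or the modified version used in the article (bandwidth and proxy choice are assumptions):

```python
import numpy as np

def kernel_irf(theta_grid, theta_hat, item_scores, bandwidth=0.3):
    """Gaussian-kernel estimate of P(correct | theta) at each grid point."""
    probs = np.empty_like(theta_grid, dtype=float)
    for i, t in enumerate(theta_grid):
        w = np.exp(-0.5 * ((theta_hat - t) / bandwidth) ** 2)
        probs[i] = np.sum(w * item_scores) / np.sum(w)
    return probs

# toy use: a simulated ability proxy and one 1PL-like item
rng = np.random.default_rng(3)
theta = rng.normal(size=1000)
item = (rng.random(1000) < 1 / (1 + np.exp(-(theta - 0.2)))).astype(int)
grid = np.linspace(-3, 3, 61)
print(kernel_irf(grid, theta, item)[:5])
```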

6.
This study compares the ability of different robust regression estimators to detect and classify outliers. Well-known estimators with high breakdown points were compared using simulated data. Mean success rates (MSR) were computed and used as comparison criteria. The results showed that the least median of squares (LMS) and least trimmed squares (LTS) were the most successful methods for data that included leverage points, masking and swamping effects or critical and concentrated outliers. We recommend using LMS and LTS as diagnostic tools to classify outliers, because they remain robust even when applied to models that are heavily contaminated or that have a complicated structure of outliers.
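To make the LTS idea concrete, here is a toy random-search version that minimizes the sum of the h smallest squared residuals over elemental OLS fits; it illustrates the breakdown-resistant objective but is not a production FAST-LTS (or LMS) algorithm, and the 2.5-sigma flagging rule is an assumption:

```python
import numpy as np

def lts_fit(X, y, h=None, n_trials=500, rng=None):
    """Approximate least trimmed squares via random elemental subsets."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, p = X.shape
    h = h or (n + p + 1) // 2                                # default coverage ~50%
    Xd = np.column_stack([np.ones(n), X])
    best_beta, best_obj = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=p + 1, replace=False)       # elemental subset
        beta, *_ = np.linalg.lstsq(Xd[idx], y[idx], rcond=None)
        resid2 = (y - Xd @ beta) ** 2
        obj = np.sort(resid2)[:h].sum()                      # trimmed sum of squares
        if obj < best_obj:
            best_beta, best_obj = beta, obj
    resid = y - Xd @ best_beta
    scale = np.sqrt(best_obj / h)
    outliers = np.abs(resid) > 2.5 * scale                   # simple outlier flag
    return best_beta, outliers

# toy data with a block of concentrated outliers
rng = np.random.default_rng(10)
X = rng.normal(size=(100, 2))
y = X @ np.array([1.0, -2.0]) + rng.normal(scale=0.5, size=100)
y[:10] += 15
beta, flagged = lts_fit(X, y)
print(beta, flagged[:12])
```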

7.
The purpose of this study was to investigate the power and Type I error rate of the likelihood ratio goodness-of-fit (LR) statistic in detecting differential item functioning (DIF) under Samejima's (1969, 1972) graded response model. A multiple-replication Monte Carlo study was utilized in which DIF was modeled in simulated data sets, which were then calibrated with MULTILOG (Thissen, 1991) using hierarchically nested item response models. In addition, the power and Type I error rate of the Mantel (1963) approach for detecting DIF in ordered response categories were investigated using the same simulated data, for comparative purposes. The power of both the Mantel and LR procedures was affected by sample size, as expected. The LR procedure lacked the power to consistently detect DIF when it existed in reference/focal groups with sample sizes as small as 500/500. The Mantel procedure maintained control of its Type I error rate and was more powerful than the LR procedure when the comparison group ability distributions were identical and there was a constant DIF pattern. On the other hand, the Mantel procedure lost control of its Type I error rate, whereas the LR procedure did not, when the comparison groups differed in mean ability; and the LR procedure demonstrated a profound power advantage over the Mantel procedure under conditions of balanced DIF in which the comparison group ability distributions were identical. The choice and subsequent use of any procedure require a thorough understanding of the power and Type I error rates of the procedure under varying conditions of DIF pattern, comparison group ability distributions (or, as a surrogate, observed score distributions), and item characteristics.

8.
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical procedures that counteract this problem by adjusting p values for effect estimates upward. Although MTPs are increasingly used in impact evaluations in education and other areas, an important consequence of their use is a change in statistical power that can be substantial. Unfortunately, researchers frequently ignore the power implications of MTPs when designing studies. Consequently, in some cases, sample sizes may be too small, and studies may be underpowered to detect effects as small as a desired size. In other cases, sample sizes may be larger than needed, or studies may be powered to detect smaller effects than anticipated. This paper presents methods for estimating statistical power for multiple definitions of statistical power and presents empirical findings on how power is affected by the use of MTPs.
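For a concrete feel of the adjustment step MTPs perform (the paper's power formulas are not reproduced here), a quick illustration with statsmodels' multipletests helper; the raw p values are hypothetical:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

p_values = np.array([0.003, 0.012, 0.040, 0.049, 0.210])    # hypothetical raw p values

for method in ["bonferroni", "holm", "fdr_bh"]:
    reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method=method)
    print(method, np.round(p_adj, 3), reject)
```

Because the adjusted p values are larger than the raw ones, fewer tests reject at a given alpha, which is exactly the power loss the paper argues should be anticipated at the design stage.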

9.
This paper presents the results of a simulation study to compare the performance of the Mann-Whitney U test, Student's t test, and the alternate (separate variance) t test for two mutually independent random samples from normal distributions, with both one-tailed and two-tailed alternatives. The estimated probability of a Type I error was controlled (in the sense of being reasonably close to the attainable level) by all three tests when the variances were equal, regardless of the sample sizes. However, it was controlled only by the alternate t test for unequal variances with unequal sample sizes. With equal sample sizes, the probability was controlled by all three tests regardless of the variances. When it was controlled, we also compared the power of these tests and found very little difference. This means that very little power will be lost if the Mann-Whitney U test is used instead of tests that require the assumption of normal distributions.
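A compact sketch of the kind of comparison described above, under one unequal-variance, unequal-n configuration; the sample sizes, standard deviations, and replication count are illustrative choices, not the paper's full grid:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def type1_rates(n1=10, n2=40, sd1=3.0, sd2=1.0, reps=2000, alpha=0.05):
    hits = {"pooled t": 0, "welch t": 0, "mann-whitney": 0}
    for _ in range(reps):
        x = rng.normal(0, sd1, n1)
        y = rng.normal(0, sd2, n2)
        hits["pooled t"] += stats.ttest_ind(x, y, equal_var=True).pvalue < alpha
        hits["welch t"] += stats.ttest_ind(x, y, equal_var=False).pvalue < alpha
        hits["mann-whitney"] += stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha
    return {k: round(v / reps, 3) for k, v in hits.items()}

print(type1_rates())
```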

10.
Investigating the fit of a parametric model plays a vital role in validating an item response theory (IRT) model. An area that has received little attention is the assessment of multiple IRT models used in a mixed-format test. The present study extends the nonparametric approach, proposed by Douglas and Cohen (2001), to assess model fit of three IRT models (the three- and two-parameter logistic models and the generalized partial credit model) used in a mixed-format test. The statistical properties of the proposed fit statistic were examined and compared to S-X² and PARSCALE's G². Overall, RISE (Root Integrated Square Error) outperformed the other two fit statistics under the studied conditions in that the Type I error rate was not inflated and the power was acceptable. A further advantage of the nonparametric approach is that it provides a convenient graphical inspection of the misfit.

11.
Does the problem-based learning (PBL) approach contribute to personally meaningful learning? This study used a double-cohort design to explore the question by focusing on a single PBL course, which was compared with a conventional one. Consecutive sampling of medical students was obtained for both groups. Student outcomes measured were Course Valuing Inventory (CVI), affect, and preceptorship appeal responses. Paired tests showed significant increases in CVI scores from start to end of term in the PBL group only. Significant trends were observed in the relationships between experiencing the PBL approach and CVI score level, positive affect mode, and strength of preceptorship appeal. Stratified analysis did not detect confounding effects on the outcome measures from background course experience, learners' characteristics, or time trend. The findings suggest that the PBL approach can improve the quality of the learning environment in both cognitive and emotional ways for most students.

12.
A single-subject alternating treatment design was used to investigate the extent to which a specialized dyslexia font, OpenDyslexic, impacted reading rate or accuracy compared to two commonly used fonts when used with elementary students identified as having dyslexia. OpenDyslexic was compared to Arial and Times New Roman in three reading tasks: (a) letter naming, (b) word reading, and (c) nonsense word reading. Data were analyzed through visual analysis and improvement rate difference, a nonparametric measure of nonoverlap for comparing treatments. Results from this alternating treatment experiment show no improvement in reading rate or accuracy for individual students with dyslexia or for the group as a whole. While some students commented that the font was "new" or "different", none of the participants reported preferring to read material presented in that font. These results indicate there may be no benefit to translating print materials into this font.

13.
Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model. The power is related to the item response function (IRF) for the studied item, the latent trait distributions, and the sample sizes for the reference and focal groups. Simulation studies show that the theoretical values calculated from the formulas derived in the article are close to what are observed in the simulated data when the assumptions are satisfied. The robustness of the power formulas is studied with simulations in which the assumptions are violated.
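A hedged sketch of the logistic regression DIF model that such power formulas apply to: uniform DIF corresponds to the group main effect and nonuniform DIF to the group-by-score interaction, each testable with a likelihood ratio (or Wald) test. The simulated data and effect sizes are illustrative assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(5)
n = 1000
df = pd.DataFrame({"group": rng.integers(0, 2, n), "score": rng.normal(size=n)})
logit_p = -0.2 + 1.0 * df["score"] + 0.4 * df["group"]        # built-in uniform DIF
df["item"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

base = smf.logit("item ~ score", data=df).fit(disp=0)
full = smf.logit("item ~ score + group + score:group", data=df).fit(disp=0)

lr = 2 * (full.llf - base.llf)                                 # joint test of uniform + nonuniform DIF
print("LR =", round(lr, 2), "p =", round(stats.chi2.sf(lr, df=2), 4))
```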

14.
The authors sought to identify through Monte Carlo simulations those conditions for which analysis of covariance (ANCOVA) does not maintain adequate Type I error rates and power. The conditions that were manipulated included assumptions of normality and variance homogeneity, sample size, number of treatment groups, and strength of the covariate-dependent variable relationship. Alternative tests studied were Quade's procedure, Puri and Sen's solution, Burnett and Barr's rank difference scores, Conover and Iman's rank transformation test, Hettmansperger's procedure, and the Puri-Sen-Harwell-Serlin test. For balanced designs, the ANCOVA F test was robust and was often the most powerful test through all sample-size designs and distributional configurations. With unbalanced designs, with variance heterogeneity, and when the largest treatment-group variance was matched with the largest group sample size, the nonparametric alternatives generally outperformed the ANCOVA test. When sample size and variance ratio were inversely coupled, all tests became very liberal; no test maintained adequate control over Type I error.
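As one example of the nonparametric alternatives listed above, a brief sketch of the Conover and Iman rank transformation test: rank the outcome and the covariate, then apply the usual ANCOVA F test to the ranks. The data generation is an illustrative assumption, not one of the study's conditions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from scipy.stats import rankdata

rng = np.random.default_rng(6)
n = 60
df = pd.DataFrame({"group": np.repeat(["t1", "t2", "t3"], n // 3),
                   "x": rng.normal(size=n)})
df["y"] = 0.6 * df["x"] + rng.standard_t(3, size=n)           # heavy-tailed errors

# rank-transform the dependent variable and the covariate, then run ANCOVA on the ranks
df["ry"], df["rx"] = rankdata(df["y"]), rankdata(df["x"])
print(anova_lm(smf.ols("ry ~ C(group) + rx", data=df).fit(), typ=2))
```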

15.
Two new methods have been proposed to determine unexpected sum scores on sub-tests (testlets) for both paper-and-pencil tests and computer-adaptive tests. A method based on a conservative bound using the hypergeometric distribution, denoted p, was compared with a method in which the probability for each score combination was calculated using a highest density region (HDR). Furthermore, these methods were compared with the standardized log-likelihood statistic with and without a correction for the estimated latent trait value (denoted as l*z and lz, respectively). Data were simulated on the basis of the one-parameter logistic model, and both parametric and non-parametric logistic regression were used to obtain estimates of the latent trait. Results showed that it is important to take the trait level into account when comparing subtest scores. In a nonparametric item response theory (IRT) context, an adapted version of the HDR method was a powerful alternative to p. In a parametric IRT context, results showed that l*z had the highest power when the data were simulated conditionally on the estimated latent trait level.
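For context, a minimal computation of the standardized log-likelihood person-fit statistic lz under a Rasch (1PL) model, one of the parametric benchmarks above; item difficulties and the trait value are treated as known here, whereas the corrected l*z additionally adjusts for the fact that the trait is estimated:

```python
import numpy as np

def lz_statistic(responses, theta, difficulties):
    """Standardized log-likelihood person-fit statistic for a 1PL model."""
    p = 1 / (1 + np.exp(-(theta - difficulties)))             # 1PL response probabilities
    loglik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (loglik - expected) / np.sqrt(variance)

# toy example: a model-consistent response pattern should give lz near zero
rng = np.random.default_rng(7)
b = rng.normal(size=40)
theta = 0.5
x = (rng.random(40) < 1 / (1 + np.exp(-(theta - b)))).astype(int)
print(lz_statistic(x, theta, b))
```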

16.
This article provides a detailed discussion of the theory and practice of modern regression discontinuity (RD) analysis for estimating the effects of interventions or treatments. Part 1 briefly chronicles the history of RD analysis and summarizes its past applications. Part 2 explains how in theory an RD analysis can identify an average effect of treatment for a population and how different types of RD analyses—“sharp” versus “fuzzy”—can identify average treatment effects for different conceptual subpopulations. Part 3 introduces graphical methods, parametric statistical methods, and nonparametric statistical methods for estimating treatment effects in practice from regression discontinuity data, plus validation and robustness tests for assessing these estimates. Part 4 considers generalizing RD findings and presents several different views on and approaches to the issue. Part 5 notes some important issues to pursue in future research on, and applications of, RD analysis.
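A minimal sketch of the parametric estimation step in a sharp RD analysis: a local linear regression within a bandwidth around the cutoff, with separate slopes on each side, where the treatment coefficient estimates the effect at the cutoff. The bandwidth, simulated data, and variable names are illustrative assumptions, not the article's recommendations:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n, cutoff, bandwidth = 2000, 0.0, 0.75
running = rng.normal(size=n)
treated = (running >= cutoff).astype(int)                     # sharp RD assignment rule
outcome = 1.0 * running + 0.4 * treated + rng.normal(scale=1.0, size=n)

df = pd.DataFrame({"y": outcome, "d": treated, "r": running - cutoff})
local = df[np.abs(df["r"]) <= bandwidth]                      # observations near the cutoff
fit = smf.ols("y ~ d + r + d:r", data=local).fit(cov_type="HC1")
print(fit.params["d"], fit.bse["d"])                          # treatment effect at the cutoff
```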

17.
DIMTEST is a nonparametric statistical test procedure for assessing unidimensionality of binary item response data. The development of Stout's statistic, T, used in the DIMTEST procedure, does not require the assumption of a particular parametric form for the ability distributions or the item response functions. The purpose of the present study was to empirically investigate the performance of the statistic T with respect to different shapes of ability distributions. Several nonnormal distributions, both symmetric and nonsymmetric, were considered for this purpose. Other factors varied in the study were test length, sample size, and the level of correlation between abilities. The results of Type I error and power studies showed that the test statistic T exhibited consistently similar performance for all different shapes of ability distributions investigated in the study, which confirmed the nonparametric nature of the statistic T.
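A hedged sketch of the conditional-covariance logic that motivates Stout's T (this is not the DIMTEST statistic itself): under unidimensionality, covariances among assessment-subtest (AT) items should be small once examinees are grouped on their score over the remaining, partitioning-subtest (PT) items. The subtest split, grouping rule, and toy data are assumptions:

```python
import numpy as np
from itertools import combinations

def mean_conditional_covariance(X, at_idx, pt_idx, min_group=20):
    """Average covariance among AT item pairs within PT-score groups.
    X: examinees x items binary matrix; at_idx / pt_idx: column indices of the two subtests."""
    pt_score = X[:, pt_idx].sum(axis=1)
    covariances = []
    for s in np.unique(pt_score):
        grp = X[pt_score == s][:, at_idx]
        if grp.shape[0] < min_group:
            continue
        c = np.cov(grp, rowvar=False)
        covariances.extend(c[i, j] for i, j in combinations(range(len(at_idx)), 2))
    return float(np.mean(covariances))

# toy unidimensional data: the average conditional covariance should be small
rng = np.random.default_rng(11)
theta = rng.normal(size=5000)
b = rng.normal(size=30)
X = (rng.random((5000, 30)) < 1 / (1 + np.exp(-(theta[:, None] - b)))).astype(int)
print(mean_conditional_covariance(X, at_idx=list(range(10)), pt_idx=list(range(10, 30))))
```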

18.
In the present experiments, the outcome specificity of learning was explored in an appetitive Pavlovian backward conditioning procedure with rats. The rats initially were administered Pavlovian backward training with two qualitatively different unconditioned stimulus-conditioned stimulus (US-CS) pairs (e.g., pellet --> noise or sucrose --> light), and then the effects of this training were assessed in Pavlovian-to-instrumental transfer (Experiment 1) and retardation-of-learning (Experiment 2) tests. In the transfer test, it was shown that during the last 10-sec interval, the CSs selectively reduced the rate of the instrumental responses with which they shared a US, relative to the instrumental responses with which they did not share a US. The opposite result was obtained when the USs (in the absence of the CSs) were presented noncontingently. In the retardation test, conditioned magazine-approach responding to the CSs was acquired more slowly when the stimulus-outcome combinations in the backward and the forward conditioning phases were the same, as compared with when they were reversed. These results are collectively in accord with the view that Pavlovian backward conditioning can result in the formation of outcome-specific inhibitory associations. Alternative views of backward conditioning are also examined.

19.
How should researchers choose between competing scales in predicting a criterion variable? This article proposes the use of nonnested tests for the 2SLS estimator of latent variable models to discriminate between scales. The finite sample performance of these tests is compared to structural equation modeling information-based criteria such as root mean squared error of approximation (RMSEA) and Akaike's Information Criterion (AIC). The Cox and encompassing tests and augmented versions of these tests are compared to the inconsistent ordinary least squares (OLS) J test. An augmented version of the encompassing test performs best for sample sizes of 100 or more and can be recommended for use on scales with high reliability (0.9) and sample sizes of 200 or more, under varying regressor and error distributions. The OLS J test performs best for small samples of N = 50 and can be recommended for use in small samples when scales have high reliability (0.9). Relative to the nonnested tests, the information-based criteria perform poorly.
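As background for the OLS J test referenced above, a minimal Davidson-MacKinnon J-test sketch for two nonnested regression specifications (the 2SLS and latent variable machinery of the article is not reproduced): fit the rival model, add its fitted values as an extra regressor in the model under test, and examine that coefficient. The scale names and toy data are assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(12)
n = 300
df = pd.DataFrame({"scale_a": rng.normal(size=n), "scale_b": rng.normal(size=n)})
df["crit"] = 0.6 * df["scale_a"] + rng.normal(size=n)        # criterion truly driven by scale_a

# Test Model 1 (crit ~ scale_a) against nonnested Model 2 (crit ~ scale_b)
df["fit_b"] = smf.ols("crit ~ scale_b", data=df).fit().fittedvalues
j_test = smf.ols("crit ~ scale_a + fit_b", data=df).fit()
# a significant coefficient on fit_b would reject Model 1 as a complete specification
print(j_test.tvalues["fit_b"], j_test.pvalues["fit_b"])
```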

20.
Latent means methods such as multiple-indicator multiple-cause (MIMIC) and structured means modeling (SMM) allow researchers to determine whether or not a significant difference exists between groups' factor means. Strong invariance is typically recommended when interpreting latent mean differences. The main purpose of this study was to examine the extent to which noninvariant intercepts affect the conclusions drawn when implementing both MIMIC and SMM methods. The impact of intercept noninvariance on Type I error rates, power, and two model fit indices when using MIMIC and SMM approaches under various conditions was examined. Type I error and power were adversely affected by intercept noninvariance. Although neither fit index detected small misspecifications in the form of noninvariant intercepts, one performed somewhat better than the other.
