Similar Articles
20 similar articles found (search time: 46 ms)
1.
A primary assumption underlying several of the common methods for modeling item response data is unidimensionality, that is, that test items tap into only one latent trait. This assumption can be assessed in several ways, including nonlinear factor analysis and DETECT, a method based on the item conditional covariances. When multidimensionality is identified, a question of interest concerns the degree to which individual items are related to the latent traits. In cases where an item response is primarily associated with one of these traits, it is said that (approximate) simple structure exists, whereas when the item response is related to both traits, the structure is complex. This study investigated the performance of three indices designed to assess the underlying structure present in item response data, two of which are based on factor analysis and one on DETECT. Results of the Monte Carlo simulations show that none of the indices works uniformly well in identifying the structure underlying item responses, although the DETECT r-ratio might be promising in differentiating between approximate simple and complex structures under certain circumstances.
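To make the conditional-covariance idea behind DETECT concrete, here is a minimal Python sketch of a DETECT-style index (an illustrative implementation under simplifying assumptions, not the operational DETECT program): for each item pair, the covariance of the two responses is estimated within strata of the rest score, and the stratum-weighted average conditional covariance enters the index with a positive sign if the pair shares a cluster and a negative sign otherwise.

import numpy as np

def detect_style_index(responses, clusters):
    # responses: persons x items matrix of 0/1 scores
    # clusters:  proposed dimension label for each item
    n_persons, n_items = responses.shape
    total = responses.sum(axis=1)
    signed_covs = []
    for i in range(n_items - 1):
        for j in range(i + 1, n_items):
            rest = total - responses[:, i] - responses[:, j]  # conditioning score
            num, denom = 0.0, 0
            for s in np.unique(rest):
                idx = rest == s
                if idx.sum() > 1:
                    num += np.cov(responses[idx, i], responses[idx, j])[0, 1] * idx.sum()
                    denom += idx.sum()
            cond_cov = num / denom
            sign = 1.0 if clusters[i] == clusters[j] else -1.0
            signed_covs.append(sign * cond_cov)
    return 100 * np.mean(signed_covs)  # DETECT values are conventionally reported x 100

The operational procedure searches over candidate item clusterings to maximize an index of this kind, so large values indicate that a multidimensional partition of the items explains the conditional covariances better than a single dimension does.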

2.
In structural equation modeling (SEM), researchers need to evaluate whether item response data, which are often multidimensional, can be modeled with a unidimensional measurement model without seriously biasing the parameter estimates. This issue is commonly addressed through testing the fit of a unidimensional model specification, a strategy previously determined to be problematic. As an alternative to the use of fit indexes, we considered the utility of a statistical tool that was expressly designed to assess the degree of departure from unidimensionality in a data set. Specifically, we evaluated the ability of the DETECT “essential unidimensionality” index to predict the bias in parameter estimates that results from misspecifying a unidimensional model when the data are multidimensional. We generated multidimensional data from bifactor structures that varied in general factor strength, number of group factors, and items per group factor; a unidimensional measurement model was then fit and parameter bias recorded. Although DETECT index values were generally predictive of parameter bias, in many cases, the degree of bias was small even though DETECT indicated significant multidimensionality. Thus we do not recommend the stand-alone use of DETECT benchmark values to either accept or reject a unidimensional measurement model. However, when DETECT was used in combination with additional indexes of general factor strength and group factor structure, parameter bias was highly predictable. Recommendations for judging the severity of potential model misspecifications in practice are provided.
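As an illustration of the kind of data-generating model described above, the following sketch simulates dichotomous responses from a bifactor structure; the loading values, item counts, and difficulty range are placeholder assumptions, not the values used in the study.

import numpy as np

def simulate_bifactor(n_persons=1000, n_group_factors=3, items_per_group=5,
                      a_general=1.2, a_group=0.8, seed=0):
    # Every item loads on the general factor; each item additionally loads on
    # exactly one of several mutually orthogonal group factors.
    rng = np.random.default_rng(seed)
    n_items = n_group_factors * items_per_group
    theta_g = rng.standard_normal(n_persons)
    theta_s = rng.standard_normal((n_persons, n_group_factors))
    b = rng.uniform(-1.5, 1.5, n_items)
    X = np.empty((n_persons, n_items), dtype=int)
    for j in range(n_items):
        g = j // items_per_group
        logit = a_general * theta_g + a_group * theta_s[:, g] - b[j]
        X[:, j] = rng.random(n_persons) < 1 / (1 + np.exp(-logit))
    return X

Fitting a unidimensional measurement model to data generated this way and recording the discrepancy between estimated and generating parameters yields the parameter bias that the DETECT index is being asked to predict.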

3.
DETECT, the acronym for Dimensionality Evaluation To Enumerate Contributing Traits, is an innovative and relatively new nonparametric dimensionality assessment procedure used to identify mutually exclusive, dimensionally homogeneous clusters of items using a genetic algorithm (Zhang & Stout, 1999). Because the clusters of items are mutually exclusive, this procedure is most useful when the data display approximate simple structure. In many testing situations, however, data display a complex multidimensional structure. The purpose of the current study was to evaluate DETECT item classification accuracy and consistency when the data display different degrees of complex structure using both simulated and real data. Three variables were manipulated in the simulation study: the percentage of items displaying complex structure (10%, 30%, and 50%), the correlation between dimensions (.00, .30, .60, .75, and .90), and the sample size (500, 1,000, and 1,500). The results from the simulation study reveal that DETECT can accurately and consistently cluster items according to their true underlying dimension when as many as 30% of the items display complex structure, if the correlation between dimensions is less than or equal to .75 and the sample size is at least 1,000 examinees. If 50% of the items display complex structure, then the correlation between dimensions should be less than or equal to .60 and the sample size should be at least 1,000 examinees. When the correlation between dimensions is .90, DETECT does not work well with any complex dimensional structure or sample size. Implications for practice and directions for future research are discussed.

4.
This article considers potential problems that can arise in estimating a unidimensional item response theory (IRT) model when some test items are multidimensional (i.e., show a complex factorial structure). More specifically, this study examines (1) the consequences of model misfit on IRT item parameter estimates due to unintended minor item‐level multidimensionality, and (2) whether a Projection IRT model can provide a useful remedy. A real‐data example is used to illustrate the problem and also is used as a base model for a simulation study. The results suggest that ignoring item‐level multidimensionality might lead to inflated item discrimination parameter estimates when the proportion of multidimensional test items to unidimensional test items is as low as 1:5. The Projection IRT model appears to be a useful tool for updating unidimensional item parameter estimates of multidimensional test items for a purified unidimensional interpretation.

5.
This study examines the performance of a new method for assessing and characterizing dimensionality in test data using the NOHARM model and compares it with DETECT. Dimensionality assessment is carried out using two goodness-of-fit statistics that are compared to reference χ2 distributions. A Monte Carlo study is used with item parameters based on a statewide basic skills assessment and the SAT. Other factors that are varied include the correlation among the latent traits, the number of items, the number of subjects, skewness of the latent traits, and the presence or absence of guessing. The performance of the two procedures is judged by the accuracy in determining the number of underlying dimensions, and the degree to which items are correctly clustered together. Results indicate that the new, NOHARM-based method appears to perform comparably to DETECT in terms of simultaneously finding the correct number of dimensions and clustering items correctly. NOHARM is generally better able to determine the number of underlying dimensions, but less able to group items together, than DETECT. When errors in item cluster assignment are made, DETECT is more likely to incorrectly separate items while NOHARM more often incorrectly groups them together.

6.
DIMTEST is a widely used and studied method for testing the hypothesis of test unidimensionality as represented by local item independence. However, DIMTEST does not report the amount of multidimensionality that exists in data when rejecting its null. To provide more information regarding the degree to which data depart from unidimensionality, a DIMTEST-based Effect Size Measure (DESM) was formulated. In addition to detailing the development of the DESM estimate, the current study describes the theoretical formulation of a DESM parameter. To evaluate the efficacy of the DESM estimator according to test length, sample size, and correlations between dimensions, Monte Carlo simulations were conducted. The results of the simulation study indicated that the DESM estimator converged to its parameter as test length increased, and, as desired, its expected value did not increase with sample size (unlike the DIMTEST statistic in the case of multidimensionality). Also as desired, the standard error of DESM decreased as sample size increased.

7.
AN ITERATIVE ITEM BIAS DETECTION METHOD
Two strategies for assessing item bias are discussed: methods that compare (transformed) item difficulties unconditional on ability level and methods that compare the probabilities of correct response conditional on ability level. In the present study, the logit model was used to compare the probabilities of correct response to an item by members of two groups, these probabilities being conditional on the observed score. Here the observed score serves as an indicator of ability level. The logit model was iteratively applied: In the Tth iteration, the T items with the highest value of the bias statistic are excluded from the test, and the observed score indicator of ability for the (T + 1)th iteration is computed from the remaining items. This method was applied to simulated data. The results suggest that the iterative logit method is a substantial improvement on the noniterative one, and that the iterative method is very efficient in detecting biased and unbiased items.
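A hedged sketch of the iterative logit screen described above follows (a hypothetical implementation using statsmodels; the choice of the Wald statistic for the group term as the bias statistic and the rest-score matching are assumptions on our part, not the authors' code).

import numpy as np
import statsmodels.api as sm

def iterative_logit_dif(X, group, n_iter=3):
    # X: persons x items matrix of 0/1 scores; group: 0/1 group membership
    n_items = X.shape[1]
    excluded = set()
    for t in range(1, n_iter + 1):
        keep = [j for j in range(n_items) if j not in excluded]
        score = X[:, keep].sum(axis=1)           # observed-score ability indicator
        bias_stat = {}
        for j in range(n_items):
            design = sm.add_constant(np.column_stack([score, group]))
            fit = sm.Logit(X[:, j], design).fit(disp=0)
            bias_stat[j] = fit.tvalues[2] ** 2   # squared z statistic of the group effect
        # in iteration t, drop the t items with the largest bias statistics,
        # then recompute the matching score from the remaining items
        excluded = set(sorted(bias_stat, key=bias_stat.get, reverse=True)[:t])
    return excluded, bias_stat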

8.
Exact nonparametric procedures have been used to identify the level of differential item functioning (DIF) in binary items. This study explored the use of exact DIF procedures with items scored on a Likert scale. The results from an attitude survey suggest that the large-sample Cochran-Mantel-Haenszel (CMH) procedure identifies more items as statistically significant than two comparable exact nonparametric methods. This finding is consistent with previous findings; however, when items are classified in National Assessment of Educational Progress DIF categories, the results show that the CMH and its exact nonparametric counterparts produce almost identical classifications. Since DIF is often evaluated in terms of statistical and practical significance, this study provides evidence that the large-sample CMH procedure may be safely used even when the focal group has as few as 76 cases.
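For the dichotomous special case mentioned at the start of the abstract, a large-sample Mantel-Haenszel DIF test can be sketched as below; the table construction and the use of the total score as the matching variable are assumptions, and the polytomous generalization needed for Likert items is not shown.

import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

def mh_dif(item, group, match):
    # item: 0/1 responses; group: 0 = reference, 1 = focal; match: matching score
    tables = []
    for s in np.unique(match):
        idx = match == s
        y, g = item[idx], group[idx]
        tab = np.array([[np.sum((g == 0) & (y == 1)), np.sum((g == 0) & (y == 0))],
                        [np.sum((g == 1) & (y == 1)), np.sum((g == 1) & (y == 0))]])
        if tab.sum(axis=1).min() > 0:            # keep strata containing both groups
            tables.append(tab)
    st = StratifiedTable(tables)
    # Mantel-Haenszel test of a common odds ratio of 1, plus the pooled odds ratio
    return st.test_null_odds(correction=True), st.oddsratio_pooled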

9.
This study compares the Rasch item fit approach for detecting multidimensionality in response data with principal component analysis without rotation using simulated data. The data in this study were simulated to represent varying degrees of multidimensionality and varying proportions of items representing each dimension. Because the requirement of unidimensionality is necessary to preserve the desirable measurement properties of Rasch models, useful ways of testing this requirement must be developed. The results of the analyses indicate that both the principal component approach and the Rasch item fit approach work in a variety of multidimensional data structures. However, each technique is unable to detect multidimensionality in certain combinations of the level of correlation between the two variables and the proportion of items loading on the two factors. In cases where the intention is to create a unidimensional structure, one would expect few items to load on the second factor and the correlation between the factors to be high. The Rasch item fit approach detects dimensionality more accurately in these situations.

10.
Restricted factor analysis (RFA) can be used to detect item bias (also called differential item functioning). In the RFA method of item bias detection, the common factor model serves as an item response model, but group membership is also included in the model. Two simulation studies are reported, both showing that the RFA method detects bias in 7‐point scale items very well, especially when the sample size is large, the mean trait difference between groups is small, the group sizes are equal, and the amount of bias is large. The first study further shows that the RFA method detects bias in dichotomous items at least as well as an established method based on the one‐parameter logistic item response model. The second study concerns various procedures to evaluate the significance of two item-bias indices provided by the RFA method. The results indicate that the RFA method performs best when it is used in an iterative procedure.

11.
The conventional noncentrality parameter estimator of covariance structure models, which is currently implemented in widely circulated structural modeling programs (e.g., LISREL, EQS, AMOS, RAMONA), is shown to possess asymptotically potentially large bias, variance, and mean squared error (MSE). A formal expression for its large-sample bias is presented, and its large-sample variance and MSE are quantified. Based on these results, it is suggested that future research needs to develop means of possibly unbiased estimation of the noncentrality parameter, with smaller variance and MSE.
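For reference, the conventional estimator referred to above is commonly written in terms of the model chi-square statistic $T$ and its degrees of freedom $df$ (a standard textbook form, stated here as background rather than quoted from the article):

$$\hat{\lambda} = \max(T - df,\ 0),$$

so its sampling behavior is inherited directly from $T$; bias and variance in $\hat{\lambda}$ therefore propagate to fit measures built on it, such as RMSEA $= \sqrt{\hat{\lambda} / \{df\,(N - 1)\}}$.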

12.
The latent class reliability coefficient (LCRC) is improved by using the divisive latent class model instead of the unrestricted latent class model. This results in the divisive latent class reliability coefficient (DLCRC), which unlike LCRC avoids making subjective decisions about the best solution and thus avoids judgment error. A computational study using large numbers of items shows that DLCRC also is faster than LCRC and fast enough for practical purposes. Speed and objectivity render DLCRC superior to LCRC. A decisive feature of DLCRC is that it aims at closely approximating the multivariate distribution of item scores, which might render the method well suited when test data are multidimensional. A simulation study focusing on multidimensionality shows that DLCRC in general has little bias relative to the true reliability and is relatively accurate compared to LCRC, the classical lower-bound coefficients α and λ2, and the greatest lower bound.

13.
Many large‐scale assessments are designed to yield two or more scores for an individual by administering multiple sections measuring different but related skills. Multidimensional tests, or more specifically simple structured tests such as these, rely on multiple sections of multiple‐choice and/or constructed‐response items to generate multiple scores. In the current article, we propose an extension of the hierarchical rater model (HRM) to be applied with simple structured tests with constructed response items. In addition to modeling the appropriate trait structure, the multidimensional HRM (M‐HRM) presented here also accounts for rater severity bias and rater variability or inconsistency. We introduce the model formulation, test parameter recovery with a focus on latent traits, and compare the M‐HRM to other scoring approaches (unidimensional HRMs and a traditional multidimensional item response theory model) using simulated and empirical data. Results show more precise scores under the M‐HRM, with a major improvement in scores when incorporating rater effects versus ignoring them in the traditional multidimensional item response theory model.

14.
Some IRT models can be equivalently modeled in alternative frameworks such as logistic regression. Logistic regression can also model time-to-event data, which concerns the probability of an event occurring over time. Using the relation between time-to-event models and logistic regression and the relation between logistic regression and IRT, this article outlines how the nonparametric Kaplan-Meier estimator for time-to-event data can be applied to IRT data. Established Kaplan-Meier computational formulas are shown to aid in better approximating “parametric-type” item difficulty compared to existing nonparametric methods, particularly for the less-well-defined scenario wherein the response function is monotonic but invariant item ordering is unreasonable. Limitations and the potential for Kaplan-Meier within differential item functioning are also discussed.
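For reference, the product-limit (Kaplan-Meier) estimator that the article adapts is, in its standard survival-analysis form (the article's mapping of item response data onto the time axis is not reproduced here):

$$\hat{S}(t) = \prod_{i:\, t_{(i)} \le t} \left(1 - \frac{d_i}{n_i}\right),$$

where $t_{(i)}$ are the ordered event times, $d_i$ is the number of events observed at $t_{(i)}$, and $n_i$ is the number of cases still at risk just before $t_{(i)}$.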

15.
This study investigated the effect of complex structure on dimensionality assessment in compensatory multidimensional item response models using DETECT- and NOHARM-based methods. The performance was evaluated via the accuracy of identifying the correct number of dimensions and the ability to accurately recover item groupings using a simple matching similarity (SM) coefficient. The DETECT-based methods yielded a higher proportion correct than the NOHARM-based methods in two- and three-dimensional conditions, especially when correlations were ≤.60, data exhibited ≤30% complexity, and sample size was 1,000. As the complexity increased and the sample size decreased, the performance of the methods typically diminished. The NOHARM-based methods were either equally successful or better in recovering item groupings than the DETECT-based methods and were mostly affected by complexity levels. The DETECT-based methods were affected largely by the test length, such that with the increase of the number of items, SM coefficients would decrease substantially.
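One common way to compute an SM-type agreement between a recovered item clustering and the true item groupings is pairwise co-membership agreement, sketched below; this is a generic illustration, and the study's exact SM computation may differ.

from itertools import combinations

def simple_matching(partition_a, partition_b):
    # Proportion of item pairs treated the same way by both partitions:
    # grouped together in both, or kept apart in both.
    pairs = list(combinations(range(len(partition_a)), 2))
    agree = sum((partition_a[i] == partition_a[j]) == (partition_b[i] == partition_b[j])
                for i, j in pairs)
    return agree / len(pairs)

true_dims = [1, 1, 1, 2, 2, 2]   # generating dimensions for six items
recovered = [1, 1, 2, 2, 2, 2]   # clusters returned by a procedure
print(simple_matching(true_dims, recovered))  # 10 of 15 pairs agree, approx. 0.67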

16.
Unidimensionality and local independence are two common assumptions of item response theory. The former implies that all items measure a common latent trait, while the latter implies that responses are independent, conditional on respondents’ location on the latent trait. Yet, few tests are truly unidimensional. Unmodeled dimensions may result in test items displaying dependencies, which can lead to misestimated parameters and inflated reliability estimates. In this article, we investigate the dimensionality of interim mathematics tests and evaluate the extent to which modeling minor dimensions in the data changes model parameter estimates. We found evidence of minor dimensions, but parameter estimates across models were similar. Our results indicate that minor dimensions outside the primary trait have negligible consequences on parameter estimates. This finding was observed despite the ratio of multidimensional to unidimensional items being above previously recommended thresholds.

17.
To assess item dimensionality, the following two approaches are described and compared: the hierarchical generalized linear model (HGLM) and the multidimensional item response theory (MIRT) model. Two generating models are used to simulate dichotomous responses to a 17-item test: the unidimensional and compensatory two-dimensional (C2D) models. For C2D data, seven items are modeled to load on the first and second factors, θ1 and θ2, with the remaining 10 items modeled unidimensionally, emulating a mathematics test with seven items requiring an additional reading ability dimension. For both types of generated data, the multidimensionality of item responses is investigated using HGLM and MIRT. Comparison of HGLM and MIRT's results is possible through a transformation of items' difficulty estimates into probabilities of a correct response for a hypothetical examinee at the mean on θ1 and θ2. HGLM and MIRT performed similarly. The benefits of HGLM for item dimensionality analyses are discussed.
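The transformation used for that comparison can be written out explicitly; the signs below depend on the chosen parameterization and are assumptions on our part, not the article's notation. Under a Rasch-type HGLM with $\operatorname{logit} P_j = \theta - b_j$, a hypothetical examinee at $\theta = 0$ has $P_j = 1 / (1 + e^{b_j})$; under a compensatory two-dimensional model with $\operatorname{logit} P_j = a_{1j}\theta_1 + a_{2j}\theta_2 + d_j$, an examinee at the mean of both traits has $P_j = 1 / (1 + e^{-d_j})$. Placing both calibrations on this probability scale is what allows their difficulty estimates to be compared directly.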

18.
In the presence of test speededness, the parameters of item response theory models can be poorly estimated due to conditional dependencies among items, particularly for end‐of‐test items (i.e., speeded items). This article conducted a systematic comparison of five item calibration procedures (a two‐parameter logistic (2PL) model, a one‐dimensional mixture model, a two‐step strategy combining the one‐dimensional mixture and the 2PL, a two‐dimensional mixture model, and a hybrid model) by examining how sample size, percentage of speeded examinees, percentage of missing responses, and way of scoring missing responses (incorrect vs. omitted) affect the item parameter estimation in speeded tests. For nonspeeded items, all five procedures showed similar results in recovering item parameters. For speeded items, the one‐dimensional mixture model, the two‐step strategy, and the two‐dimensional mixture model provided largely similar results and performed better than the 2PL model and the hybrid model in calibrating slope parameters. However, those three procedures performed similarly to the hybrid model in estimating intercept parameters. As expected, the 2PL model did not appear to be as accurate as the other models in recovering item parameters, especially when there were large numbers of examinees showing speededness and a high percentage of missing responses with incorrect scoring. Real data analysis further described the similarities and differences between the five procedures.

19.
Many researchers have suggested that the main cause of item bias is the misspecification of the latent ability space, where items that measure multiple abilities are scored as though they are measuring a single ability. If two different groups of examinees have different underlying multidimensional ability distributions and the test items are capable of discriminating among levels of abilities on these multiple dimensions, then any unidimensional scoring scheme has the potential to produce item bias. It is the purpose of this article to provide the testing practitioner with insight about the difference between item bias and item impact and how they relate to item validity. These concepts will be explained from a multidimensional item response theory (MIRT) perspective. Two detection procedures, the Mantel-Haenszel (as modified by Holland and Thayer, 1988) and Shealy and Stout's Simultaneous Item Bias (SIB; 1991) strategies, will be used to illustrate how practitioners can detect item bias.

20.
In classical test theory, a test is regarded as a sample of items from a domain defined by generating rules or by content, process, and format specifications. If the items are a random sample of the domain, then the percent-correct score on the test estimates the domain score, that is, the expected percent correct for all items in the domain. When the domain is represented by a large set of calibrated items, as in item banking applications, item response theory (IRT) provides an alternative estimator of the domain score by transformation of the IRT scale score on the test. This estimator has the advantage of not requiring the test items to be a random sample of the domain, and of having a simple standard error. We present here resampling results in real data demonstrating for uni- and multidimensional models that the IRT estimator is also a more accurate predictor of the domain score than is the classical percent-correct score. These results have implications for reporting outcomes of educational qualification testing and assessment.
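A minimal sketch of the two domain-score estimators being contrasted, assuming a 2PL-calibrated item bank (function and parameter names are hypothetical):

import numpy as np

def percent_correct_score(responses):
    # Classical estimator: percent correct on the administered sample of items.
    return 100 * np.mean(responses)

def irt_domain_score(theta_hat, a_bank, b_bank):
    # IRT estimator: expected percent correct over ALL items in the calibrated
    # domain, obtained by averaging the 2PL item response functions at theta_hat.
    p = 1 / (1 + np.exp(-a_bank * (theta_hat - b_bank)))
    return 100 * np.mean(p)

Because the IRT estimator draws on the whole calibrated bank rather than only the administered items, it does not require the test to be a random sample of the domain, which is the advantage noted above.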

