21.
Do subscores provide additional information beyond what the total score provides? Is there a method that can estimate more trustworthy subscores than the observed subscores? To answer the first question, this study evaluated whether the true subscore was more accurately predicted by the observed subscore or by the total score. To answer the second question, three subscore estimation methods (i.e., the subscore estimated from the observed subscore, from the total score, or from a combination of the two) were compared. Analyses were conducted using data from six licensure tests. Results indicated that reporting subscores at the examinee level may not be necessary, as they did not provide much information beyond what the total score provides. At the institutional level (for institutions of size ≥ 30), however, reporting subscores may not be harmful, although it may be redundant, because the subscores were predicted equally well by the observed subscores and by the total scores. Finally, results indicated that estimating the subscore from a combination of the observed subscore and the total score resulted in the highest reliability.
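The three estimation strategies compared above can be illustrated with classical-test-theory regression estimates. The sketch below uses Kelley's formula and a simple linear prediction from the total score; the function names, the weighted-combination formulation, and all numeric inputs are illustrative assumptions, not the study's actual procedure (which the abstract does not specify).

```python
def true_subscore_from_subscore(s_obs, s_mean, rel_s):
    """Kelley's formula: shrink the observed subscore toward the group
    mean in proportion to the subscore's reliability rel_s."""
    return s_mean + rel_s * (s_obs - s_mean)

def true_subscore_from_total(x_obs, x_mean, s_mean, s_sd, x_sd, r_ts_x):
    """Linear-regression prediction of the true subscore from the observed
    total score; r_ts_x is the correlation of the true subscore with the
    total score."""
    slope = r_ts_x * s_sd / x_sd
    return s_mean + slope * (x_obs - x_mean)

def true_subscore_combined(s_obs, x_obs, w,
                           s_mean, x_mean, s_sd, x_sd, rel_s, r_ts_x):
    """A simple weighted blend of the two estimates above (weight w on the
    subscore-based estimate); shown only to illustrate 'a combination of
    both', not the study's actual combination method."""
    est_s = true_subscore_from_subscore(s_obs, s_mean, rel_s)
    est_x = true_subscore_from_total(x_obs, x_mean, s_mean, s_sd, x_sd, r_ts_x)
    return w * est_s + (1 - w) * est_x
```

For example, an observed subscore of 30 against a group mean of 25 with reliability 0.8 shrinks to 29; the blend then averages that with the total-score-based prediction.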
22.
When a constructed-response test form is reused, raw scores from the two administrations of the form may not be comparable. The solution to this problem requires rescoring, at the current administration, the examinee responses from the previous administration. The scores from this rescoring can be used as an anchor for equating. In this equating, the choice of weights for combining the samples to define the target population can be critical. In rescored data, the anchor usually correlates very strongly with the new form but only moderately with the reference form. This difference has a predictable impact: the equating results are most accurate when the target population is the reference-form sample, least accurate when it is the new-form sample, and intermediate when the new-form and reference-form samples are weighted equally in forming the target population.
23.
24.
This paper characterizes oscillations found in block pulse function (BPF) domain identification of open-loop first-order systems with a step input. A useful condition for the occurrence of such oscillations is presented mathematically. Oscillations are observed to occur for any positive value of ah, where h is the width of the BPF-domain sub-interval and 1/a is the time constant of the first-order system under consideration.
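A minimal numerical sketch of this setting: for dy/dt = −ay + a·u with a unit-step input, the standard BPF operational matrix of integration yields the coefficient recursion c(i+1) = r·c(i) + 2ah/(2 + ah) with update factor r = (2 − ah)/(2 + ah). This recursion, the unit-gain assumption, and the oscillation check below are illustrative of the general phenomenon only; the paper's exact oscillation condition is not reproduced here.

```python
def bpf_step_coeffs(a, h, n):
    """BPF-domain coefficients of the step response of dy/dt = -a*y + a*u,
    y(0) = 0, using the standard BPF operational matrix of integration
    (h/2 on the diagonal, h above it)."""
    r = (2 - a * h) / (2 + a * h)   # update factor; negative when a*h > 2
    g = 2 * a * h / (2 + a * h)     # forcing term from the unit-step input
    c = [a * h / (2 + a * h)]       # first-interval coefficient
    for _ in range(n - 1):
        c.append(r * c[-1] + g)
    return c

# With a*h > 2 the update factor is negative, so successive coefficients
# overshoot and undershoot the steady-state value 1 alternately.
coeffs = bpf_step_coeffs(2.0, 2.0, 6)
deviations = [ci - 1.0 for ci in coeffs]
```

With a = 2 and h = 2 (ah = 4) the deviations alternate in sign, the hallmark of the oscillations discussed above; with ah < 2 this particular recursion approaches the steady state monotonically.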
25.
The conclusive identification of specific etiological factors or pathogenic processes in schizophrenia has remained elusive despite great technological progress. The convergence of state-of-the-art scientific studies in molecular genetics, molecular neuropathophysiology, in vivo brain imaging, and psychopharmacology, however, indicates that we may be coming much closer to understanding the genesis of schizophrenia. In the near future, the diagnosis and assessment of schizophrenia using biochemical markers may become a "dream come true" for the medical community as well as for the general population. An understanding of the biochemistry vis-à-vis pathophysiology of schizophrenia is essential to the discovery of preventive measures and therapeutic interventions.
26.
Goals and plans organize much of complex problem-solving behavior and are often inferable from action sequences. This paper addresses the strengths and limitations of inferring goals and plans from information that can be derived from computer traces of software used to solve mathematics problems. We examined mathematics problem-solving activity involving distance, rate, and time relationships in a computer software environment designed to support understanding of the functional relationships among these variables (e.g., distance = rate × time; time = distance/rate) using graphical representations of simulation results. Ten adolescent students used the software to solve two distance-rate-time problems and provided think-aloud protocols. To determine the inferability of understanding from the action traces, coders analyzed students' understanding from the computer traces alone (Trace-only raters), and these analyses were compared to ones based on the traces plus the verbal protocols (Trace-plus raters). The inferability of understanding from the action traces was related to students' level of understanding and to how they used the graphing tool. When students had a good understanding of distance, rate, and time relationships, it could be accurately inferred from the computer traces if they used the simulation tool in conjunction with the graphing tool. When students had a weak understanding, the verbal protocols were necessary to make accurate inferences about what was and was not understood. The computer trace also failed to capture dynamic exploration of the visual environment, so students who relied on the graphing tool were inaccurately characterized by the Trace-only coders.
The discussion concerns the types of scaffolds that would make a learning environment helpful for complex problems, the kind of information needed to adequately track student understanding in this content domain, and instructional models for integrating learning environments like these into classrooms. Members of the Cognition and Technology Group at Vanderbilt who have contributed to this project are (in alphabetical order) Helen Bateman, John Bransford, Thaddeus Crews, Allison Moore, Mitchell Nathan, and Stephen Owens. The research was supported, in part, by grants from the National Science Foundation (NSF-MDR-9252990), but no official endorsement of the ideas expressed herein should be inferred.
27.
The study examined two approaches for equating subscores: (1) equating subscores using internal common items as the anchor, and (2) equating subscores using equated and scaled total scores as the anchor. Because equated total scores are comparable across the new and old forms, they can be used as an anchor to equate the subscores. Both chained linear and chained equipercentile methods were used. Data from two tests were used to conduct the study, and results showed that when more internal common items are available (i.e., 10–12 items), using common items to equate the subscores is preferable. However, when the number of common items is very small (i.e., five to six items), using total scaled scores to equate the subscores is preferable. For both tests, not equating (i.e., using raw subscores) was not reasonable, as it resulted in a considerable amount of bias.
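The chained linear method referenced above composes two linear links: new form X to anchor V, estimated on the new-form sample, then anchor V to old form Y, estimated on the old-form sample. A minimal sketch, with made-up summary statistics (the actual moments of these tests are not given in the abstract):

```python
def linear_link(mu_from, sd_from, mu_to, sd_to):
    """Linear equating function that matches means and standard deviations."""
    return lambda score: mu_to + (sd_to / sd_from) * (score - mu_from)

def chained_linear_equate(x, mu_x, sd_x, mu_v_new, sd_v_new,
                          mu_v_old, sd_v_old, mu_y, sd_y):
    """Chain: X -> V using new-form-sample moments, then V -> Y using
    old-form-sample moments."""
    x_to_v = linear_link(mu_x, sd_x, mu_v_new, sd_v_new)
    v_to_y = linear_link(mu_v_old, sd_v_old, mu_y, sd_y)
    return v_to_y(x_to_v(x))

# Hypothetical moments: new form (mean 50, SD 10), anchor in the new
# sample (20, 5), anchor in the old sample (20, 5), old form (48, 9).
equated_score = chained_linear_equate(60, 50, 10, 20, 5, 20, 5, 48, 9)
```

A raw score of 60 on the new form (one SD above its mean) maps one SD above the old form's mean, i.e., to 57, under these hypothetical moments.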
28.
With ever-increasing amounts of information available to end users, search engines have become the most powerful tools for obtaining useful information scattered across the Web. However, it is very common for even the most renowned search engines to return result sets containing pages of little use to the user. Research on semantic search aims to improve traditional information search and retrieval methods, in which the basic relevance criteria rely primarily on the presence of query keywords within the returned pages. This work explores different semantics-based relevance-ranking approaches considered appropriate for the retrieval of relevant information. In this paper, various pilot projects and their corresponding outcomes are investigated based on the methodologies adopted and their most distinctive ranking characteristics. An overview of selected approaches and a comparison by means of classification criteria are presented. With the help of this comparison, some common concepts and outstanding features are identified.
29.
An on-chip, lectin-microarray-based glycomic approach is employed to identify glycomarkers for different types of gastritis and for gastric cancer. Changes in protein glycosylation affect biological function and carcinogenesis. Such altered glycosylation patterns in serum proteins and in the membrane proteins of tumor cells can be unique markers of cancer progression and have therefore been exploited to diagnose various stages of cancer through lectin microarray technology. In the present work, we aimed to study thoroughly the alteration of glycan structures themselves in different stages of gastritis and in gastric cancer. To perform the study on both serum and tissue glycoproteins in an efficient, high-throughput manner, we developed in-house and employed a lectin microarray integrated on a microfluidic lab-on-a-chip platform. We analyzed serum and gastric biopsy samples from 8 normal individuals, 15 chronic Type-B gastritis patients, 10 chronic Type-C gastritis patients, and 6 gastric adenocarcinoma patients and found that the glycoprofiles obtained from tissue samples were more distinctive than those from the serum samples. We were able to establish signature glycoprofiles for the three disease groups that were absent in healthy individuals. In addition, our findings elucidated certain novel signature glycan expression in chronic gastritis and gastric cancer. In silico analysis showed that the glycoprofiles of chronic gastritis and gastric adenocarcinoma formed close clusters, consistent with the previously hypothesized linkage between them. This signature can be explored further as a gastric cancer marker to develop novel analytical tools and to obtain an in-depth understanding of disease prognosis.
30.
Estimation of low-density lipoprotein cholesterol (LDL-C) is crucial in the management of coronary artery disease patients. Although a number of homogeneous assays are available for estimating LDL-C, calculating LDL-C by Friedewald's formula (FF) is common in Indian laboratories for logistic reasons. Recently, Anandaraja and colleagues derived a new formula for calculating LDL-C; this formula needs to be evaluated before it is applied extensively in diagnosis. We measured LDL-C by a homogeneous method (D-LDL-C) in 515 fasting samples. Friedewald's and Anandaraja's formulas were used to calculate LDL-C (F-LDL-C and A-LDL-C, respectively). The mean LDL-C levels were 123.3 ± 53.2, 112.4 ± 50.2, and 109.2 ± 49.8 mg/dl for D-LDL-C, F-LDL-C, and A-LDL-C, respectively. There was a statistically significant difference (P < 0.001) between the results obtained by the calculation formulas and the measured LDL-C, with underestimation of LDL-C by 10.8 mg/dl for Friedewald's formula and 14 mg/dl for Anandaraja's. The Pearson correlation between F-LDL-C and D-LDL-C was 0.931, and that between A-LDL-C and D-LDL-C was 0.930. Bland–Altman graphs showed definite agreement between the means and differences of the calculation formulas and direct LDL-C, with 95% of values lying within ±2 SD limits. The mean percentage difference (calculated as {(calculated LDL-C) − (D-LDL-C)}/D-LDL-C × 100) for F-LDL-C was greatest at HDL-C ≥ 60 mg/dl (−11.6%) and at TG levels of 200–300 mg/dl (−10.4%) compared to D-LDL-C. A-LDL-C gave the highest mean percentage differences at total cholesterol concentrations <100 mg/dl (−37.3%) and at HDL-C < 40 mg/dl (−17.1%). The results of our study show that FF agrees better with D-LDL-C than Anandaraja's formula for estimating LDL-C by calculation, although both underestimate it.
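For reference, the two calculation formulas compared in the study, as commonly published (Friedewald et al., 1972; Anandaraja et al., 2005), with all concentrations in mg/dl. The function names are ours, and the constants should be verified against the original papers before any clinical use:

```python
def friedewald_ldl(total_chol, hdl, triglycerides):
    """Friedewald's formula (mg/dl): LDL-C = TC - HDL-C - TG/5.
    Commonly regarded as invalid when TG exceeds 400 mg/dl."""
    return total_chol - hdl - triglycerides / 5.0

def anandaraja_ldl(total_chol, triglycerides):
    """Anandaraja's formula (mg/dl): LDL-C = 0.9*TC - 0.9*(TG/5) - 28.
    Note that, unlike Friedewald's, it does not use HDL-C."""
    return 0.9 * total_chol - 0.9 * (triglycerides / 5.0) - 28.0
```

For a sample with TC = 200, HDL-C = 50, and TG = 150 mg/dl, Friedewald gives 120 mg/dl and Anandaraja gives 125 mg/dl, illustrating how the two can diverge on the same lipid profile.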