71.
Bibliometrics has become an indispensable tool in the evaluation of institutions (in the natural and life sciences). An evaluation report without bibliometric data has become a rarity. However, evaluations are often required to measure the citation impact of publications from very recent years in particular. As a citation analysis is only meaningful for publications with a guaranteed citation window of at least three years, very recent years cannot (and should not) be included in the analysis. This study presents various options for dealing with this problem in statistical analysis. The publications of two universities from 2000 to 2011 are used as a sample dataset (n = 2652, univ 1 = 1484 and univ 2 = 1168). One option is to show the citation impact data (percentiles) in a graphic and to use a line for percentiles regressed on ‘distant’ publication years (with confidence interval), showing the trend for the ‘very recent’ publication years. Another way of dealing with the problem is to work with the concept of samples and populations. The third option (closely related to the second) is the application of the counterfactual concept of causality.
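The first option described above, regressing percentiles on 'distant' publication years and extrapolating the fitted trend (with a confidence interval) into the very recent years, can be sketched as follows. The yearly percentile values here are hypothetical, not data from the study:

```python
import numpy as np
from scipy import stats

# Hypothetical mean citation percentiles per 'distant' publication year.
years = np.array([2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008])
percentiles = np.array([61.0, 62.5, 60.8, 63.1, 64.0, 62.2, 63.5, 64.8, 63.9])

# Fit a linear trend on the distant years only.
res = stats.linregress(years, percentiles)

def predict_with_ci(year, confidence=0.95):
    """Extrapolate the trend to a (very recent) year with a confidence
    interval based on the residual standard error of the fit."""
    n = len(years)
    y_hat = res.intercept + res.slope * year
    resid = percentiles - (res.intercept + res.slope * years)
    s = np.sqrt(np.sum(resid ** 2) / (n - 2))
    x_mean = years.mean()
    se = s * np.sqrt(1 / n + (year - x_mean) ** 2 / np.sum((years - x_mean) ** 2))
    t = stats.t.ppf((1 + confidence) / 2, df=n - 2)
    return y_hat, y_hat - t * se, y_hat + t * se

pred, lo, hi = predict_with_ci(2011)  # a 'very recent' year
print(f"predicted percentile for 2011: {pred:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```

In a plot, the fitted line and its widening confidence band over 2009–2011 would indicate the expected impact trend for the years where citation counts are not yet meaningful.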
72.
The data of F1000 and InCites provide us with the unique opportunity to investigate the relationship between peers’ ratings and bibliometric metrics on a broad and comprehensive data set with high-quality ratings. F1000 is a post-publication peer review system of the biomedical literature. The comparison of metrics with peer evaluation has been widely acknowledged as a way of validating metrics. Based on the seven indicators offered by InCites, we analyzed the validity of raw citation counts (Times Cited, 2nd Generation Citations, and 2nd Generation Citations per Citing Document), normalized indicators (Journal Actual/Expected Citations, Category Actual/Expected Citations, and Percentile in Subject Area), and a journal-based indicator (Journal Impact Factor). The data set consists of 125 papers published in 2008 and belonging to the subject category cell biology or immunology. As the results show, Percentile in Subject Area achieves the highest correlation with F1000 ratings; for three further indicators (Times Cited, 2nd Generation Citations, and Category Actual/Expected Citations), we can assert that the “true” correlation with the ratings reaches at least a medium effect size.
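At its core, validating an indicator against F1000 ratings reduces to a rank correlation between the two. A minimal sketch with hypothetical data (the study itself uses n = 125 papers and seven InCites indicators):

```python
from scipy import stats

# Hypothetical data: F1000 ratings (1 = good, 2 = very good, 3 = exceptional)
# and 'Percentile in Subject Area' values for the same ten papers.
f1000_ratings = [1, 1, 2, 2, 2, 3, 3, 1, 2, 3]
percentile_in_subject_area = [55, 40, 70, 70, 70, 90, 95, 60, 88, 85]

# Spearman's rank correlation is appropriate for ordinal peer ratings.
rho, p_value = stats.spearmanr(f1000_ratings, percentile_in_subject_area)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

By common conventions (e.g., Cohen's), a correlation around 0.3 would count as a medium effect size and around 0.5 as a large one.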
73.
This article reports on classroom research designed to answer questions about authority: how institutions and disciplines, broadly conceived, influence teachers' ability to abnegate authority, and how students' experiences influence their perceptions of authority in a business writing class and a first-year composition class. The theoretical framework is derived from research on institutional and disciplinary influences on these two areas of study. This framework and our results lead us to speculate about the ways in which our students' experience of the institution, their expectations of the classes, and their intentions for using the material taught in them may have thwarted our attempt to share authority in our classrooms.
74.
Information centers are being established for many disciplines. For the medical profession, users can benefit directly from these centers by having information searched by medical library professionals and made readily available. If the users of an information system are to share in the operating expenses, some equitable system of charges must be established. The numerous systems of establishing user charges are listed and discussed, with the advantages and disadvantages of each system explained. After the systems have been reviewed, alternative methods of establishing prices are presented along with a typical example of what these prices might be, ranging from $2.50 to $7.50 per request. The implementation of the cost system is outlined and certain philosophical questions are posed.
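The simplest pricing scheme of the kind discussed is full or partial cost recovery: divide the operating costs to be recovered from users by the expected request volume. A minimal sketch of that arithmetic, with hypothetical figures (not taken from the article):

```python
def price_per_request(annual_operating_cost, expected_requests, subsidy_fraction=0.0):
    """Charge needed to recover the non-subsidized share of operating costs."""
    recoverable = annual_operating_cost * (1.0 - subsidy_fraction)
    return recoverable / expected_requests

# Fully user-funded vs. two-thirds subsidized by the parent institution:
full = price_per_request(75_000, 10_000)
subsidized = price_per_request(75_000, 10_000, subsidy_fraction=2 / 3)
print(full, subsidized)  # 7.50 vs 2.50 per request
```

The spread between the two calls illustrates how a per-request price range like the $2.50–$7.50 cited above can arise purely from how much of the cost base users are expected to cover.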
75.
76.
77.
Research evaluation based on bibliometrics is prevalent in modern science. However, the usefulness of citation counts for measuring research impact has been questioned for many years. Empirical studies have demonstrated that the probability of being cited might depend on many factors that are not related to the accepted conventions of scholarly publishing. The current study investigates the relationship between the performance of universities in terms of field-normalized citation impact (NCS) and four factors (FICs) with possible influences on the citation impact of single papers: journal impact factor (JIF), number of pages, number of authors, and number of cited references. The study is based on articles and reviews published by 49 German universities in 2000, 2005 and 2010. Multilevel regression models were estimated, since the data are structured on two levels: the single paper and the university. The results point to weak relationships between NCSs and number of authors, number of cited references, number of pages, and JIF. The results also show that the effects of all FICs on NCSs are similar in universities with high and low NCSs. Although other studies revealed that FICs might be effective on the single-paper level, the results of this study demonstrate that they are not effective on the aggregated level (i.e., on the institutional NCS level).
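The outcome variable of the study, a field-normalized citation score, divides a paper's citation count by the expected (mean) citation count of papers from the same field and publication year. A minimal sketch of that normalization, with hypothetical papers:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical papers: (field, publication year, citation count).
papers = [
    ("cell biology", 2005, 30),
    ("cell biology", 2005, 10),
    ("cell biology", 2005, 20),
    ("immunology", 2005, 5),
    ("immunology", 2005, 15),
]

# Expected citations = mean citations of all papers in the same field and year.
groups = defaultdict(list)
for field, year, cites in papers:
    groups[(field, year)].append(cites)
expected = {key: mean(vals) for key, vals in groups.items()}

# Normalized citation score: observed / expected (1.0 = field average).
ncs = [cites / expected[(field, year)] for field, year, cites in papers]
print(ncs)
```

An institutional NCS is then an aggregate (e.g., the mean) of these paper-level scores, which is the level at which the study finds the FICs to have little effect.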
78.
Lee et al. (2015), building on Uzzi et al. (2013), and Wang et al. (2017) proposed scores based on cited-references (cited-journals) data which can be used to measure the novelty of papers (named novelty scores U and W in this study). Although previous research has used novelty scores in various empirical analyses, to the best of our knowledge no study published to date has quantitatively tested the convergent validity of novelty scores: do these scores measure what they propose to measure? Using novelty assessments by faculty members (FMs) at F1000Prime for comparison, we tested the convergent validity of the two novelty scores (U and W). FMs’ assessments do not only refer to the quality of biomedical papers, but also to their characteristics (by assigning certain tags to the papers): for example, are the presented findings or formulated hypotheses novel (tags “new finding” and “hypothesis”)? We used these and other tags to investigate the convergent validity of both novelty scores. Our study reveals different results for the two scores: the results for novelty score U are mostly in agreement with previously formulated expectations. We found, for instance, that for a standard deviation (one unit) increase in novelty score U, the expected number of assignments of the “new finding” tag increases by 7.47%. The results for novelty score W, however, do not reflect convergent validity with the FMs’ assessments: only the results for some tags are in agreement with the expectations. Based on our results, we therefore propose the use of novelty score U for measuring novelty quantitatively, but question the use of novelty score W.
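Novelty scores in the Uzzi/Lee tradition rest on one idea: a paper is novel to the extent that its reference list combines journals that are rarely cited together. A strongly simplified sketch of that idea follows; the published scores additionally normalize pair frequencies against randomized citation networks (z-scores), which is omitted here, and the corpus and `novelty` function are illustrative inventions:

```python
from itertools import combinations
from collections import Counter

# Hypothetical corpus: the cited journals of each paper.
corpus = [
    ["Nature", "Science", "Cell"],
    ["Nature", "Science", "PNAS"],
    ["Cell", "PNAS", "J Immunol"],
    ["Nature", "Cell", "Science"],
]

# Count how often each journal pair co-occurs in reference lists.
pair_counts = Counter()
for refs in corpus:
    pair_counts.update(combinations(sorted(set(refs)), 2))

def novelty(refs):
    """Higher when the paper combines journal pairs rarely cited together.
    Rarity of a pair = 1 / (1 + times seen in the corpus), averaged over pairs."""
    pairs = list(combinations(sorted(set(refs)), 2))
    return sum(1.0 / (1 + pair_counts[p]) for p in pairs) / len(pairs)

print(novelty(["Nature", "Science"]))    # common pair -> low novelty
print(novelty(["Nature", "J Immunol"]))  # unseen pair -> high novelty
```

The actual scores U and W differ in how they aggregate these pair-level rarities over a reference list, which is precisely where the study finds their validity to diverge.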
79.
Whereas numerous studies have confirmed the objectivity and reliability of the “Allgemeine Sportmotorische Test für Kinder” (AST, a general motor fitness and coordination test for children), examinations of its construct validity reveal that: (1) the few available findings fail to confirm the theoretically predicted distinction between the latent variables of condition and coordination, and (2) no studies so far have examined its item homogeneity, even though the test recommends computing a sum score to provide a general estimate of motor fitness. Confirmatory factor analyses show that the two latent variables condition and coordination cannot be extracted from the AST. Models with different latent variables cannot be confirmed either. Analyses based on IRT models, in contrast, reveal that one can distinguish between two qualitative classes of persons, associated with either better running performance combined with lower throwing performance or better throwing performance combined with lower running performance. Nonetheless, it should be noted that class membership does not remain constant over time. In sum, results based on classical and probabilistic test theory negate the ability assumption underlying the AST. For practical assessments of skill-specific performance, this means that it only seems justifiable to use a sum score for the locomotion items and a sum score for the object manipulation items.
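The practical recommendation at the end (separate sum scores per skill domain rather than one overall AST score) is simple to implement. A minimal sketch, where the item names and score values are hypothetical rather than the actual AST items:

```python
# Hypothetical AST item scores for one child (higher = better).
item_scores = {
    "sprint": 7, "obstacle_run": 6, "endurance_run": 5,  # locomotion items
    "target_throw": 4, "ball_push": 8,                   # object manipulation items
}

LOCOMOTION = {"sprint", "obstacle_run", "endurance_run"}
OBJECT_MANIPULATION = {"target_throw", "ball_push"}

# Report two skill-specific sum scores instead of one overall sum score.
locomotion_score = sum(item_scores[i] for i in LOCOMOTION)
object_manipulation_score = sum(item_scores[i] for i in OBJECT_MANIPULATION)
print(locomotion_score, object_manipulation_score)
```

Keeping the two domain scores separate preserves exactly the running-vs-throwing distinction that the IRT analyses identified, which a single total score would mask.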
80.