Similar Documents
20 similar documents found (search time: 15 ms)
1.
2.
The landmark citation method is a new collection assessment method based on the citation record of a single landmark article. This citation record is developed by identifying sources which cite the landmark article. A bibliography, extracted from the citation record, is then used to complete an assessment of the collection. This method was developed and used to assess the biotechnology collection of the National Library of Medicine. The information gained from this study, in addition to demonstrating the technique, also provided insight into the evolution of the biotechnology literature.

3.
4.
Between 1965 and 1980, the Library Research and Demonstration Branch within the Department of Education awarded over $25 million to 312 projects. By tracing the citations in Social Sciences Citation Index from a random sampling of 52% of these projects, this study has attempted to assess the dissemination and impact of the projects in the professional literature. Approximately half of the projects were not cited in SSCI. The citations tended to be clustered among a small number of library-related serials. A small number of funded projects accounted for a large number of the citations. The most cited projects cost only one-fifth as much as the most expensive studies, yet were cited nearly five times as often.

5.
References form the foundation and endorsements of scientific research. Using articles published in 2005 covered by Microsoft Academic Graph, the current paper defines and calculates five indicators of references, i.e., the number of references, the number of citations of references, the age of references, the number of nodes in a reference cascade (a multi-generation reference network), and the density of bibliographic coupling networks by references, and investigates their relations with the citation impact of the focal publication. A non-linear relationship is shown for all five indicators; specifically, we observe two types of patterns, namely inverted-L and inverted-U relations, both indicating the existence of critical points. We further explore the discipline-level differences in this relationship and how it relates to the characteristics of the discipline itself. Among all five indicators, the effect of disciplinary academic “environments” is universally identified. We believe that the current paper contributes insightful views to the discussion of the significance of a reference list.

6.
This paper reviews the use of publication activity indicators for the assessment of the qualifications of scientific and teaching staff. A new section entitled The Publication Activities of Scientists in Belarus has been created on the website of the Central Science Library of the National Academy of Sciences of Belarus. This section includes a list of periodicals that are available for the publication of scientific results in key areas of research (based on the Web of Knowledge) and a list of Belarusian organizations that are ranked by the h-index and the number of references to their papers according to Scopus. This section also provides the editors of scientific journals with recommendations on the inclusion of citation data into global databases and other materials for researchers.

7.
Reliable methods for the assessment of research success are still in discussion. One method, which uses the likelihood of publishing very highly cited papers, has been validated in terms of Nobel prizes garnered. However, this method cannot be applied widely because it uses the fraction of publications in the upper tail of the citation distribution that follows a power law, which includes a low number of publications in most countries and institutions. To achieve the same purpose without restrictions, we have developed the double rank analysis, in which publications that have a low number of citations are also included. By ranking publications by their number of citations from highest to lowest, publications from institutions or countries have two ranking numbers: one for their internal position and another for their world position; the internal ranking number can be expressed as a function of the world ranking number. In log–log double rank plots, a large number of publications fit a straight line; extrapolation allows estimating the likelihood of publishing the highest cited publication. The straight line derives from a power law behavior of the double rank that occurs because citations follow lognormal distributions with values of μ and σ that vary within narrow limits.
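The double rank idea above can be sketched in a few lines. The data here are simulated (lognormal citation counts, as the abstract notes), and the institution sample is hypothetical; the point is only to show the two ranking numbers and the log–log straight-line fit.

```python
import math
import random

# Simulated world citation counts, drawn from a lognormal distribution.
random.seed(42)
world = sorted((random.lognormvariate(1.5, 1.2) for _ in range(10000)), reverse=True)

# A hypothetical institution: 400 publications sampled from the world set.
inst = sorted(random.sample(world, 400), reverse=True)

# Each institution paper gets two ranking numbers: its internal rank and
# its position in the world citation ranking.
pairs = [(i + 1, world.index(c) + 1) for i, c in enumerate(inst)]

# Least-squares fit of log(internal rank) against log(world rank); in the
# double rank analysis this line is extrapolated to internal rank 1 to
# estimate the likelihood of publishing the highest cited paper.
xs = [math.log(w) for _, w in pairs]
ys = [math.log(r) for r, _ in pairs]
n = len(pairs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
```

Because the institution is a uniform sample of the world list, the fitted slope comes out close to 1; real institutions deviate from this, which is what the method exploits.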

8.
Across the various scientific domains, significant differences occur with respect to research publishing formats, frequencies and citing practices, the nature and organisation of research and the number and impact of a given domain's academic journals. Consequently, differences occur in the citations and h-indices of the researchers. This paper attempts to identify cross-domain differences using quantitative and qualitative measures. The study focuses on the relationships among citations, most-cited papers and h-indices across domains and for research group sizes. The analysis is based on the research output of approximately 10,000 researchers in Slovenia, of which we focus on 6536 researchers working in 284 research group programmes in 2008–2012. As comparative measures of cross-domain research output, we propose the research impact cube (RIC) representation and the analysis of most-cited papers, highest impact factors and citation distribution graphs (Lorenz curves). The analysis of Lotka's model resulted in the proposal of a binary citation frequencies (BCF) distribution model that describes publishing frequencies well. The results may be used as a model to measure, compare and evaluate fields of science on the global, national and research community level to streamline research policies and evaluate progress over a definite time period.

9.
A standard procedure in citation analysis is that all papers published in one year are assessed at the same later point in time, implicitly treating all publications as if they were published at the exact same date. This leads to systematic bias in favor of early-months publications and against late-months publications. This contribution analyses the size of this distortion on a large body of publications from all disciplines over citation windows of up to 15 years. It is found that early-month publications enjoy a substantial citation advantage, which arises from citations received in the first three years after publication. While the advantage is stronger for author self-citations as opposed to citations from others, it cannot be eliminated by excluding self-citations. The bias decreases only slowly over longer citation windows due to the continuing influence of the earlier years’ citations. Because of the substantial extent and long persistence of the distortions, it would be useful to remove or control for this bias in research and evaluation studies which use citation data. It is demonstrated that this can be achieved by using the newly introduced concept of month-based citation windows.
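The month-based citation window proposed above can be sketched as follows. The field names and dates are hypothetical; the point is that a January paper and a December paper are each given a window of the same number of months, rather than being cut off at the same calendar date.

```python
from datetime import date

def months_between(start: date, end: date) -> int:
    """Whole-month difference between two dates (month granularity)."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def citations_in_window(pub_date: date, citing_dates: list, window_months: int) -> int:
    """Count citations received within `window_months` of publication, so
    early- and late-months publications get equally long windows."""
    return sum(1 for d in citing_dates
               if 0 <= months_between(pub_date, d) < window_months)

# Hypothetical example: a January 2010 paper and three citing papers.
jan_paper = date(2010, 1, 15)
cites = [date(2010, 6, 1), date(2011, 2, 1), date(2013, 5, 1)]
three_year = citations_in_window(jan_paper, cites, 36)
four_year = citations_in_window(jan_paper, cites, 48)
```

With a fixed-year window, a December 2010 paper would have had eleven fewer months to accumulate citations than this January paper; the month-based window removes that asymmetry.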

10.
This paper explores a new indicator of journal citation impact, denoted as source normalized impact per paper (SNIP). It measures a journal's contextual citation impact, taking into account characteristics of its properly defined subject field, especially the frequency at which authors cite other papers in their reference lists, the rapidity of maturing of citation impact, and the extent to which a database used for the assessment covers the field's literature. It further develops Eugene Garfield's notions of a field's ‘citation potential’ defined as the average length of references lists in a field and determining the probability of being cited, and the need in fair performance assessments to correct for differences between subject fields. A journal's subject field is defined as the set of papers citing that journal. SNIP is defined as the ratio of the journal's citation count per paper and the citation potential in its subject field. It aims to allow direct comparison of sources in different subject fields. Citation potential is shown to vary not only between journal subject categories – groupings of journals sharing a research field – or disciplines (e.g., journals in mathematics, engineering and social sciences tend to have lower values than titles in life sciences), but also between journals within the same subject category. For instance, basic journals tend to show higher citation potentials than applied or clinical journals, and journals covering emerging topics higher than periodicals in classical subjects or more general journals. SNIP corrects for such differences. Its strengths and limitations are critically discussed, and suggestions are made for further research. All empirical results are derived from Elsevier's Scopus.
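The core ratio in the abstract above can be sketched with toy numbers. This is a simplification: the published SNIP definition restricts citation potential to references that are covered by the database and within a limited age range, which the sketch below omits.

```python
def snip(citations_per_paper: float, citing_ref_list_lengths: list) -> float:
    """Ratio of a journal's citations per paper to the citation potential of
    its subject field (the set of papers citing the journal), where citation
    potential is approximated here as the mean reference-list length."""
    potential = sum(citing_ref_list_lengths) / len(citing_ref_list_lengths)
    return citations_per_paper / potential

# Hypothetical journals: a maths title cited by papers with short reference
# lists, and a life-sciences title cited by papers with long reference lists.
maths_journal = snip(2.0, [18, 22, 20])   # citation potential 20
bio_journal = snip(4.0, [38, 42, 40])     # citation potential 40
```

Although the life-sciences journal receives twice as many citations per paper, the two SNIP values are equal, illustrating how dividing by the field's citation potential enables comparison across subject fields.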

11.
12.
This paper presents a statistical analysis of the relationship between three science indicators applied in earlier bibliometric studies, namely research leadership based on corresponding authorship, international collaboration using international co-authorship data, and field-normalized citation impact. Indicators at the level of countries are extracted from the SIR database created by SCImago Research Group from publication records indexed for Elsevier’s Scopus. The relationship between authorship and citation-based indicators is found to be complex, as it reflects a country’s phase of scientific development and the coverage policy of the database. Moreover, one should distinguish a genuine leadership effect from a purely statistical effect due to fractional counting. Further analyses at the level of institutions and qualitative validation studies are recommended.

13.
Biomedical research encompasses diverse types of activities, from basic science (“bench”) to clinical medicine (“bedside”) to bench-to-bedside translational research. It remains unclear, however, whether different types of research receive citations at varying rates. Here we aim to answer this question by using a newly proposed paper-level indicator that quantifies the extent to which a paper is basic science or clinical medicine. Applying this measure to 5 million biomedical papers, we find a systematic citation disadvantage for clinically oriented papers; they tend to garner far fewer citations and are less likely to be hit works than papers oriented towards basic science. At the same time, clinical research shows a higher variance in its citations. We also find that the citation difference between basic and clinical research decreases, yet still persists, when a longer citation window is used. Given the increasing adoption of short-term, citation-based bibliometric indicators in funding decisions, the under-citation of clinical research may discourage bio-researchers from venturing into the translation of basic scientific discoveries into clinical applications, helping to explain the gap between basic and clinical research known as the “valley of death” and the commented risk of “extinction” of translational researchers. Our work may provide insights to policy-makers on how to evaluate different types of biomedical research.

14.
Information specialists should provide a value-added service by supplementing their online searches with primary research. Primary research results in more up-to-date information from a broader spectrum of sources. The requester benefits from primary research by receiving a cost-effective and more comprehensive information package. The information professional benefits by acquiring a more in-depth knowledge of the subject, increasing awareness of important projects within the organization, and achieving recognition as a key information analyst.

15.
16.
The normalized citation indicator may not be sufficiently reliable when a short citation time window is used, because recently published papers have had little time to accumulate citations, so their citation counts are less reliable than those of papers published many years ago. Normalization methods by themselves cannot solve this problem. We therefore introduce a weighting factor into the commonly used normalized indicator Category Normalized Citation Impact (CNCI) at the paper level. The weighting factor, calculated as the correlation coefficient between citation counts of papers in the given short citation window and those in a fixed long citation window, reflects the degree of reliability of a paper's CNCI value. To verify the effect of the proposed weighted CNCI indicator, we compared the CNCI scores and CNCI rankings of 500 universities before and after introducing the weighting factor. The results showed that although there was a strong positive correlation before and after the introduction of the weighting factor, some universities’ performance and rankings changed dramatically.
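The weighting factor described above can be sketched as follows. The citation counts are hypothetical, and the window lengths (2-year short, 10-year long) are illustrative choices, not taken from the paper.

```python
def pearson(xs: list, ys: list) -> float:
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical calibration set of older papers for which both the short
# (e.g. 2-year) and the long (e.g. 10-year) window counts are known.
short_window = [1, 3, 0, 5, 2, 8, 4, 6]
long_window = [4, 9, 1, 14, 7, 20, 11, 15]
w = pearson(short_window, long_window)  # the weighting factor

def weighted_cnci(cnci: float, weight: float) -> float:
    # Down-weight CNCI values computed from a less reliable short window.
    return weight * cnci
```

When the short window predicts the long window well (correlation near 1), the CNCI value is left almost unchanged; a noisier short window pulls the weighted score down.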

17.
This study examines how the social sciences' debate between qualitative and quantitative methods is reflected in the citation patterns of sociology journal articles. Citation analysis revealed that quantitative articles were more likely to cite journal articles than monographs, while qualitative articles were more likely to cite monographs than journals. Quantitative articles cited other articles from their own quantitative-dominated journals but virtually excluded citations to articles from qualitative journals, while qualitative articles cited articles from the quantitative-dominated journals as well as their own qualitative-specialized journals. Discussion and conclusions include this study's implications for library collection development.

18.
Traditionally, citation count has served as the main evaluation measure for a paper's importance and influence. In turn, many evaluations of authors, institutions and journals are based on aggregations over papers (e.g. the h-index). In this work, we explore measures defined on the citation graph that offer a more intuitive insight into the impact of a paper than the superficial count of citations. Our main argument is focused on the identification of influence as an expression of the citation density in the subgraph of citations built for each paper. We propose two measures that capitalize on the notion of density, providing researchers alternative evaluations of their work. While the general idea of impact for a paper can be viewed as how many researchers have shown interest in a piece of work, the proposed measures are based on the hypothesis that a piece of work may have influenced some papers even if they do not contain references to that piece of work. The proposed measures are also extended to researchers and journals.
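The density notion invoked above can be illustrated on a toy citation subgraph. The edges are hypothetical, and the formula used is the standard directed-graph density; the paper's own two measures are defined differently, so this is only a sketch of the underlying idea.

```python
# Toy citation subgraph built around a focal paper "A": each edge (X, Y)
# means paper X cites paper Y.
edges = {("B", "A"), ("C", "A"), ("C", "B"), ("D", "A")}
nodes = {n for edge in edges for n in edge}

# Standard directed-graph density: realized edges over possible edges.
# A denser neighbourhood suggests the focal paper sits in a tightly
# interlinked body of work, beyond its raw citation count.
density = len(edges) / (len(nodes) * (len(nodes) - 1))
```

Here "A" has only three direct citations, but the extra edge from "C" to "B" raises the density of its neighbourhood, which is the kind of signal a plain citation count misses.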

19.
We address the question of how citation-based bibliometric indicators can best be normalized to ensure fair comparisons between publications from different scientific fields and different years. In a systematic large-scale empirical analysis, we compare a traditional normalization approach based on a field classification system with three source normalization approaches. We pay special attention to the selection of the publications included in the analysis. Publications in national scientific journals, popular scientific magazines, and trade magazines are not included. Unlike earlier studies, we use algorithmically constructed classification systems to evaluate the different normalization approaches. Our analysis shows that a source normalization approach based on the recently introduced idea of fractional citation counting does not perform well. Two other source normalization approaches generally outperform the classification-system-based normalization approach that we study. Our analysis therefore offers considerable support for the use of source-normalized bibliometric indicators.

20.
In citation network analysis, complex behavior is reduced to a simple edge, namely, node A cites node B. The implicit assumption is that A is giving credit to, or acknowledging, B. It is also the case that the contributions of all citations are treated equally, even though some citations appear multiply in a text and others appear only once. In this study, we apply text-mining algorithms to a relatively large dataset (866 information science articles containing 32,496 bibliographic references) to demonstrate the differential contributions made by references. We (1) look at the placement of citations across the different sections of a journal article, and (2) identify highly cited works using two different counting methods (CountOne and CountX). We find that (1) the most highly cited works appear in the Introduction and Literature Review sections of citing papers, and (2) the citation rankings produced by CountOne and CountX differ. That is to say, counting the number of times a bibliographic reference is cited in a paper rather than treating all references the same no matter how many times they are invoked in the citing article reveals the differential contributions made by the cited works to the citing paper.
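The two counting methods contrasted above can be sketched on a toy list of in-text citation occurrences (the work labels are hypothetical; one list entry per time a reference is invoked in the citing article).

```python
from collections import Counter

# In-text citation occurrences extracted from one hypothetical citing paper.
in_text = ["Smith2001", "Jones2005", "Smith2001", "Smith2001", "Lee2010"]

# CountOne: every cited work counts once, no matter how often it is invoked.
count_one = Counter(set(in_text))

# CountX: each in-text occurrence counts, crediting repeatedly invoked works.
count_x = Counter(in_text)
```

Under CountOne the three works tie, while under CountX "Smith2001" leads with three occurrences; aggregated over a corpus, this is why the two methods produce different citation rankings.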


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号