Similar documents (20 results)
1.
This paper analyzes several well-known bibliometric indices using an axiomatic approach. We concentrate on indices aiming at capturing the global impact of a scientific output and do not investigate indices aiming at capturing an average impact. Hence, the indices that we study are designed to evaluate authors or groups of authors but not journals. The bibliometric indices that are studied include classic ones, such as the number of highly cited papers, as well as more recent ones, such as the h-index and the g-index. We give conditions that characterize these indices, up to multiplication by a positive constant. We also study the bibliometric rankings that are induced by these indices. Hence, we provide a general framework for the comparison of bibliometric rankings and indices.
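The two recent indices mentioned above admit short formal definitions; a minimal sketch, assuming each author is represented by a list of per-paper citation counts:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cs = sorted(citations, reverse=True)
    # cs is non-increasing and the rank i is increasing, so the condition
    # c >= i holds for an initial prefix only; counting it gives h.
    return sum(1 for i, c in enumerate(cs, start=1) if c >= i)

def g_index(citations):
    """Largest g such that the top g papers together have >= g^2 citations."""
    cs = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cs, start=1):
        total += c
        if total >= i * i:
            g = i
    return g
```

For example, the citation record [10, 8, 5, 4, 3] gives h = 4 and g = 5, illustrating that the g-index rewards a few highly cited papers more than the h-index does.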

2.
Do academic journals favor authors who share their institutional affiliation? To answer this question we examine citation counts, as a proxy for paper quality, for articles published in four leading international relations journals during the years 2000–2015. We compare citation counts for articles written by “in-group members” (authors affiliated with the journal’s publishing institution) versus “out-group members” (authors not affiliated with that institution). Relative to out-group authors, in-group authors received 18% to 49% fewer Web of Science citations when their articles were published in their home journal (International Security or World Politics) rather than in an unaffiliated journal. These results are mainly driven by authors who received their PhDs from Harvard or MIT. The findings show evidence of a bias within some journals towards publishing papers by faculty from their home institution, at the expense of paper quality.

3.
4.
Today’s scientific research is an expensive enterprise funded primarily by taxpayers’ and corporate groups’ monies. All nations want to discover fields of study that promise to create future industries, and to dominate these by building up and securing scientific and technological expertise early. However, the conversion of scientific leadership into market dominance remains very much an alchemy. To gain insights into how science becomes technology, we focused on graphene (which shows promise in batteries, sensors, flexible displays and other technologies) as a case study. In particular, we asked whether research on the material is on track to deliver all its technological promises. To answer this question, we analyzed bibliometric records of scientific journal publications and patents related to graphene. While performing straightforward analyses at the aggregate and temporal level, we stumbled upon evidence suggesting ‘Golden Eras’ of graphene science and technology in the recent past. To confirm this unexpected finding, we developed a novel simulation-based method to determine how interest levels in graphene science and technology change with time. We then found compelling evidence that these interest levels peaked in 2010 and 2012, respectively, despite the continued growth of journal and patent publications in this area. This suggests that publication numbers in a research topic can sometimes give rise to false positives concerning its importance.

5.
Journal metrics are employed for the assessment of scholarly journals from a general bibliometric perspective. In this context, the Thomson Reuters journal impact factors (JIFs) are the most widely used citation-based indicators. The 2-year journal impact factor (2-JIF) counts citations to one- and two-year-old articles, while the 5-year journal impact factor (5-JIF) counts citations to one- to five-year-old articles. Nevertheless, these indicators are not comparable across fields of science for two reasons: (i) each field has a different impact maturity time, and (ii) there are systematic differences in publication and citation behavior across disciplines. In fact, the 5-JIF first appeared in the Journal Citation Reports (JCR) in 2007 with the purpose of making impacts more comparable in fields in which impact matures slowly. However, there is no single optimal impact maturity time valid for all fields: in some, two years gives a good performance, whereas in others three or more years are necessary. Therefore, there is a problem when comparing a journal from a field in which impact matures slowly with a journal from a field in which impact matures rapidly. In this work, we propose the 2-year maximum journal impact factor (2M-JIF), a new impact indicator that considers the 2-year rolling citation time window of maximum impact instead of the fixed window of the two previous years. Finally, an empirical application comparing 2-JIF, 5-JIF, and 2M-JIF shows that the maximum rolling target window reduces the between-group variance with respect to the within-group variance in a random sample of about six hundred journals from eight different fields.
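The contrast between the fixed previous-two-years window of 2-JIF and the rolling maximum window of 2M-JIF can be sketched as follows; the data layout (citations keyed by census year and cited year, publication counts keyed by year) is an assumption for illustration, not the paper's exact formulation:

```python
def jif2(cites, pubs, year):
    """Classic 2-year impact factor for `year`: citations received in `year`
    to items published in the two preceding years."""
    c = cites[year].get(year - 1, 0) + cites[year].get(year - 2, 0)
    p = pubs.get(year - 1, 0) + pubs.get(year - 2, 0)
    return c / p if p else 0.0

def jif2_max(cites, pubs, year, max_age=5):
    """2M-JIF sketch: the maximum 2-year-window impact factor over
    rolling target windows (year-k, year-k-1) for k = 1 .. max_age-1."""
    best = 0.0
    for k in range(1, max_age):
        c = cites[year].get(year - k, 0) + cites[year].get(year - k - 1, 0)
        p = pubs.get(year - k, 0) + pubs.get(year - k - 1, 0)
        if p:
            best = max(best, c / p)
    return best
```

For a journal whose impact matures slowly, the rolling maximum picks up a later, higher-impact window that the fixed 2-JIF window misses.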

6.
In an age of intensifying scientific collaboration, the counting of papers by multiple authors has become an important methodological issue in scientometrics-based research evaluation. In particular, how counting methods influence institution-level research evaluation has not been studied in the existing literature. In this study, we selected the top 300 universities in physics in the 2011 HEEACT Ranking as our study subjects. We compared the university rankings generated from four different counting methods (i.e., whole counting, straight counting using the first author, straight counting using the corresponding author, and fractional counting) to show how paper counts, citation counts, and the resulting university ranks were affected by the choice of counting method. The counting was based on the 1988–2008 physics paper records indexed in ISI WoS. We also observed how paper and citation counts were inflated by whole counting. The results show that counting methods affected universities in the middle range more than those in the upper or lower ranges. Citation counts were also more affected than paper counts. The correlations between the rankings generated from whole counting and those from the other methods were low or negative in the middle ranges. Based on these findings, this study concludes that straight counting and fractional counting are better choices for paper and citation counts in institution-level research evaluation.
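The four counting methods can be sketched for a single paper; the input format (an ordered list of the authors' institutions plus the corresponding author's institution) is an illustrative assumption:

```python
from collections import defaultdict

def credit(affiliations, corresponding, method):
    """Return {institution: credit} for one paper under a counting method."""
    share = defaultdict(float)
    if method == "whole":            # every institution gets full credit
        for inst in set(affiliations):
            share[inst] = 1.0
    elif method == "first":          # straight counting, first author only
        share[affiliations[0]] = 1.0
    elif method == "corresponding":  # straight counting, corresponding author
        share[corresponding] = 1.0
    elif method == "fractional":     # each author byline gets an equal share
        for inst in affiliations:
            share[inst] += 1.0 / len(affiliations)
    return dict(share)
```

Summing these per-paper credits over a university's papers shows directly how whole counting inflates totals: a four-author, three-institution paper contributes 3.0 credits under whole counting but exactly 1.0 under the other three methods.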

7.
This study compares the two-year impact factor (JIF2), JIF2 without journal self-citations (JIF2_noJSC), the five-year impact factor (JIF5), the eigenfactor score, and the article influence score (AIS), and investigates their relative changes over time. JIF2 increased faster than JIF5 overall. The relative change between JIF2 and JIF2_noJSC shows that the JCR's control over journal self-citation is effective to some extent. JIF5 is more discriminative than JIF2. The correlation between JIF5 and AIS is stronger than that between JIF5 and the eigenfactor score. The relative change in journal rank according to different indicators varies with the ratio of the indicators and can be up to 60% of the number of journals in a subject category. There is subject-category discrepancy in the average AIS and its change over time. By screening journals according to variations in the ratio of JIF2 to JIF5 within individual subject categories, we found that journals in the same subject category can have considerably different citation patterns. To provide a fair comparison of journals in individual subject categories, we argue that it is better to replace JIF2 with the ready-made JIF5 when ranking journals.
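As an illustration of how a rank change between two indicators can be quantified, a minimal sketch with hypothetical journal names and values (not data from the study):

```python
def ranks(scores):
    """Map name -> rank (1 = highest score)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: i for i, name in enumerate(ordered, start=1)}

def rank_shift(jif2_scores, jif5_scores):
    """Per-journal rank difference when switching from a JIF2 ranking
    to a JIF5 ranking (positive = the journal drops under JIF5)."""
    r2, r5 = ranks(jif2_scores), ranks(jif5_scores)
    return {name: r5[name] - r2[name] for name in jif2_scores}
```

Counting the nonzero entries of such a shift map, category by category, is one simple way to express "up to 60% of the journals change rank" as a concrete statistic.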

8.
One of the flaws of the journal impact factor (IF) is that it cannot be used to compare journals from different fields, or multidisciplinary journals, because the IF differs significantly across research fields. This study proposes a new measure of journal performance that captures field-dependent citation characteristics. We view journal performance from the perspective of the efficiency of a journal's citation generation process. Together with the conventional variables used in calculating the IF (the number of articles as an input and the number of total citations as an output), we additionally consider two field-dependent factors, citation density and citation dynamics, as inputs. We also separately capture the contributions of external citations and self-citations and incorporate their relative importance in measuring journal performance. To accommodate multiple inputs and outputs whose relationships are unknown, this study employs data envelopment analysis (DEA), a multi-factor productivity model for measuring the relative efficiency of decision-making units without any assumption of a production function. The resulting efficiency score, called DEA-IF, can then be used for the comparative evaluation of multidisciplinary journals' performance. A case study of industrial engineering journals illustrates how to measure DEA-IF and its usefulness.

9.
Publication patterns of 79 forest scientists awarded major international forestry prizes during 1990–2010 were compared with the journal classification and ranking promoted as part of ‘Excellence in Research for Australia’ (ERA) by the Australian Research Council. The data revealed that these scientists exhibited an elite publication performance during the decade before and the two decades following their first major award. An analysis of their 1703 articles in 431 journals revealed substantial differences between the journal choices of these elite scientists and the ERA classification and ranking of journals. The implications of these findings are that additional cross-classifications should be added for many journals, and that the ranking of several journals relevant to the ERA Field of Research 0705 (Forestry Sciences) should be adjusted.

10.
The journal impact factor is not comparable among fields of science and social science because of systematic differences in publication and citation behavior across disciplines. In this work, a source normalization of the journal impact factor is proposed. We use the aggregate impact factor of the citing journals as a measure of the citation potential of the journal's topic, and we employ this citation potential in the normalization of the journal impact factor to make it comparable across scientific fields. An empirical application comparing several impact indicators with our topic-normalized impact factor in a set of 224 journals from four different fields shows that our normalization, using the citation potential of the journal's topic, reduces the between-group variance with respect to the within-group variance by a higher proportion than the rest of the indicators analyzed. The effect of journal self-citations on the normalization process is also studied.
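A minimal sketch of the normalization idea: divide a journal's impact factor by the citation potential of its topic, taken here (as in the text) as an aggregate impact factor of the citing journals. The weighted-average form and the numbers are illustrative assumptions:

```python
def citation_potential(citing_ifs, weights):
    """Weighted aggregate impact factor of the journals citing a journal;
    weights could be, e.g., each citing journal's share of incoming citations."""
    total = sum(weights)
    return sum(f * w for f, w in zip(citing_ifs, weights)) / total

def normalized_if(journal_if, citing_ifs, weights):
    """Topic-normalized impact factor: raw IF over the topic's citation potential."""
    return journal_if / citation_potential(citing_ifs, weights)
```

A journal cited mostly by high-IF journals sits in a high-citation-potential topic, so the same raw IF is scaled down relative to a journal in a low-citation-potential topic.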

11.
The purpose of this study was to examine the relationship between publication rate, top-journal publications, and excellence during the first eight years of the career, and how well publication rate, top-journal publications, and highly cited publications during the first four years of the career can predict whether an author attains excellence in the fifth to eighth years. The dataset consisted of the publication track records of 406 early-career mathematicians in the sub-field of number theory, collected from the MathSciNet database. Logistic regression and dominance analysis were applied to the data. The major conclusions were: (1) publication rate had a positive effect on excellence during the first eight years of the career; however, those who published many articles in top journals, which implicitly requires a high publication count, had an even higher probability of attaining excellence. These results suggest that, in addition to publishing many papers, publishing in top journals is very important in the process of attaining excellence early in the career. (2) A dominance analysis indicated that the numbers of top-journal publications and highly cited publications during the first four years of the career were the most important predictors of who will attain excellence later in the career. The results are discussed in relation to indicator development and science policy.

12.
Many studies demonstrate differences in the coverage of citing publications in Google Scholar (GS) and Web of Science (WoS). Here, we examine to what extent citation data from the two databases reflect the scholarly impact of women and men differently. Our conjecture is that WoS carries an indirect gender bias in its selection criteria for citation sources that GS avoids due to more inclusive criteria. Using a sample of 1250 U.S. researchers in Sociology, Political Science, Economics, Cardiology and Chemistry, we examine gender differences in the average citation coverage of the two databases. We also calculate database-specific h-indices for all authors in the sample. In repeated simulations of hiring scenarios, we use these indices to examine whether women's appointment rates increase if hiring decisions rely on data from GS in lieu of WoS. We find no systematic gender differences in the citation coverage of the two databases. Further, our results indicate marginal to non-existent effects of database selection on women's success rates in the simulations. In line with the existing literature, we find the citation coverage of WoS to be largest in Cardiology and Chemistry and smallest in Political Science and Sociology. The concordance between author-level h-indices measured by GS and WoS is largest for Chemistry, followed by Cardiology, Political Science, Sociology and Economics.
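The repeated hiring simulation described above can be sketched as drawing applicant pools, shortlisting the top candidates by a database-specific h-index, and recording how often women are hired; all candidate data here are synthetic assumptions, not the study's sample:

```python
import random

def hire_rate(candidates, index_key, pool_size=10, hires=1,
              trials=2000, seed=0):
    """Fraction of hires going to women when ranking applicant pools by
    the h-index stored under `index_key` in each candidate record."""
    rng = random.Random(seed)
    women_hired = total = 0
    for _ in range(trials):
        pool = rng.sample(candidates, pool_size)        # draw one applicant pool
        pool.sort(key=lambda c: c[index_key], reverse=True)
        for c in pool[:hires]:                          # hire the top-ranked
            total += 1
            women_hired += c["woman"]
    return women_hired / total
```

Running this once with GS-based h-indices and once with WoS-based h-indices for the same candidates, and comparing the two rates, mirrors the study's question of whether database choice changes women's appointment rates.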

13.
An analysis of articles, references, and authors in Jiangxi Library Journal over the past six years  (Cited by: 4; self-citations: 0; citations by others: 4)
Taking the articles, references, citedness, and authors of 《江西图书馆学刊》 (Jiangxi Library Journal) from 1998 to 2003 as the object of study, this paper analyzes the number of articles, their columns, and the pages they occupy; the number, sources, languages, and publication years of their references; how the articles themselves were cited; and the institutional distribution and number of their authors, thereby reflecting the journal's publication profile and academic influence over the past six years.

14.
Citation averages, and Impact Factors (IFs) in particular, are sensitive to sample size. Here, we apply the Central Limit Theorem to IFs to understand their scale-dependent behavior. For a journal of n randomly selected papers from a population of all papers, the Theorem leads us to expect its IF to fluctuate around the population average μ and to span a range of values proportional to σ/√n, where σ² is the variance of the population's citation distribution. The 1/√n dependence has profound implications for IF rankings: the larger a journal, the narrower the range around μ in which its IF lies. IF rankings therefore allocate an unfair advantage to smaller journals in the high IF ranks, and to larger journals in the low IF ranks. As a result, we expect a scale-dependent stratification of journals in IF rankings, whereby small journals occupy the top, middle, and bottom ranks; mid-sized journals occupy the middle ranks; and very large journals have IFs that asymptotically approach μ. We obtain qualitative and quantitative confirmation of these predictions by analyzing (i) the complete set of 166,498 IF and journal-size data pairs in the 1997–2016 Journal Citation Reports of Clarivate Analytics, (ii) the top-cited portion of 276,000 physics papers published in 2014–2015, and (iii) the citation distributions of an arbitrarily sampled list of physics journals. We conclude that the Central Limit Theorem is a good predictor of the IF range of actual journals, while sustained deviations from its predictions are a mark of true, non-random citation impact. IF rankings are thus misleading unless one compares like-sized journals or adjusts for these size effects. We propose the Φ index, a rescaled IF that accounts for size effects and can be readily generalized to account also for different citation practices across research fields. Our methodology applies to other citation averages that are used to compare research fields, university departments, or countries in various types of rankings.
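The Central Limit Theorem argument can be checked numerically: journals built from n random papers drawn from a single citation population should show an IF spread that shrinks like 1/√n. The skewed, lognormal-like population below is a synthetic assumption standing in for a real citation distribution:

```python
import random
import statistics

def simulate_if_spread(n_papers, n_journals=500, seed=42):
    """Standard deviation of simulated IFs across journals that each consist
    of n_papers papers sampled at random from one citation population."""
    rng = random.Random(seed)
    # Synthetic skewed citation population (integer citation counts).
    population = [int(rng.lognormvariate(1.0, 1.0)) for _ in range(50_000)]
    ifs = [statistics.mean(rng.sample(population, n_papers))
           for _ in range(n_journals)]
    return statistics.stdev(ifs)
```

Per the 1/√n scaling, quadrupling journal size should roughly halve the simulated IF spread, which is exactly the size-dependent stratification the abstract describes.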

15.
Microsoft Academic is a free academic search engine and citation index that is similar to Google Scholar but can be queried automatically. Its data are potentially useful for bibliometric analysis if it is possible to search effectively for individual journal articles. This article compares different methods of finding journal articles in its index by searching for combinations of title, authors, publication year and journal name, and uses the results for the widest correlation analysis of Microsoft Academic citation counts for journal articles published so far. Based on 126,312 articles from 323 Scopus subfields in 2012, the optimal strategy for finding articles with DOIs is to search for them by title and filter out those with incorrect DOIs; this finds 90% of journal articles. For articles without DOIs, the optimal strategy is to search for them by title and then filter out matches with dissimilar metadata; this finds 89% of journal articles, with an additional 1% incorrect matches. The remaining articles seem mainly to be either not indexed by Microsoft Academic or indexed under a different-language version of their title. From the matches, Scopus citation counts and Microsoft Academic citation counts have an average Spearman correlation of 0.95, with the lowest for any single field being 0.63. Thus, Microsoft Academic citation counts are almost universally equivalent to Scopus citation counts for articles that are not recent, although there are national biases in the results.
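The "filter out matches with dissimilar metadata" step can be sketched with a simple string-similarity threshold; the record layout and the thresholds here are illustrative assumptions, not the article's exact procedure:

```python
from difflib import SequenceMatcher

def similar(a, b, threshold=0.9):
    """Case-insensitive string similarity test via difflib's ratio."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def accept_match(query, candidate):
    """Keep a search hit only if title and journal metadata agree closely
    and the publication year matches exactly."""
    return (similar(query["title"], candidate["title"])
            and similar(query["journal"], candidate["journal"], threshold=0.8)
            and query["year"] == candidate["year"])
```

In practice such a filter trades a small loss of true matches for a large reduction in the incorrect matches that would otherwise distort the citation-count correlations.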

16.
Questions of definition and measurement continue to constrain a consensus on the measurement of interdisciplinarity. Using Rao-Stirling (RS) diversity sometimes produces anomalous results. We argue that these unexpected outcomes can be related to the use of “dual-concept diversity,” which combines “variety” and “balance” in the definitions (ex ante). We propose to modify RS diversity into a new indicator (DIV) which operationalizes “variety,” “balance,” and “disparity” independently and then combines them ex post. “Balance” can be measured using the Gini coefficient. We apply DIV to the aggregated citation patterns of 11,487 journals covered by the Journal Citation Reports 2016 of the Science Citation Index and the Social Sciences Citation Index as an empirical domain and, in more detail, to the citation patterns of 85 journals assigned to the Web of Science category “information science & library science” in both the cited and citing directions. We compare the results of the indicators and show that DIV provides improved results in terms of distinguishing between interdisciplinary knowledge integration (citing references) and knowledge diffusion (cited impact). The new diversity indicator and RS diversity measure different features. A routine for the measurement of the various operationalizations of diversity (in any data matrix) is made available online.
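The contrast between RS diversity (variety and balance fused ex ante inside one sum) and an ex-post combination of variety, balance (via the Gini coefficient), and disparity can be sketched as follows; the multiplicative combination shown is one illustrative reading of DIV, not necessarily the paper's exact formula:

```python
def rao_stirling(p, d):
    """RS diversity: sum of p_i * p_j * d[i][j] over all pairs i != j,
    where p are category proportions and d is a disparity matrix."""
    n = len(p)
    return sum(p[i] * p[j] * d[i][j]
               for i in range(n) for j in range(n) if i != j)

def gini(p):
    """Gini coefficient of the proportion vector p (0 = perfectly balanced)."""
    xs = sorted(p)
    n = len(xs)
    cum = sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1))
    return cum / (n * sum(xs))

def div_indicator(p, d, n_classes_total):
    """Ex-post combination: variety x balance x average disparity."""
    n = len(p)
    variety = n / n_classes_total
    balance = 1.0 - gini(p)
    disparity = (sum(d[i][j] for i in range(n) for j in range(n) if i != j)
                 / (n * (n - 1)))
    return variety * balance * disparity
```

Keeping the three components separate makes it visible which of them drives a change in the indicator, which is exactly where the fused RS formula can behave anomalously.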

17.
Taking 《中国血吸虫病防治杂志》 (Chinese Journal of Schistosomiasis Control) as an example, this paper discusses how to raise a journal's influence in four respects: preserving the journal's distinctive character, improving its service capacity, expanding its influence at home and abroad, and strengthening the editorial office and the reviewer team.

18.
Objectives: Systematic reviews and meta-analyses (SRs/MAs) are designed to be rigorous research methodologies that synthesize information and inform practice. The increase in their publication runs parallel to quality concerns and a movement toward standards to improve reporting and methodology. With the goal of informing the guidance librarians provide to SR/MA teams, this study assesses online journal author guidelines from an institutional sample to determine whether these guidelines address SR/MA methodological quality.
Methods: A Web of Science Core Collection (Clarivate) search identified SRs/MAs published in 2014–2019 by authors affiliated with a single institution. The AMSTAR 2 checklist was used to develop an assessment tool of closed questions specific to measures of SR/MA methodological quality in author guidelines, with questions added about author guidelines in general. Multiple reviewers completed the assessment.
Results: The author guidelines of 141 journals were evaluated. Fewer than 20% addressed at least one of the assessed measures specific to SR/MA methodological quality. There was wide variation in author guidelines between journals from the same publisher, apart from the American Medical Association, which consistently offered in-depth author guidelines. Normalized Eigenfactor and Article Influence Scores were not indicative of author guideline breadth.
Conclusions: Most author guidelines in the institutional sample did not address SR/MA methodological quality. When consulting with teams embarking on SRs/MAs, librarians should not expect author guidelines to provide details about the requirements of the target journals. Librarians should advise teams to follow established SR/MA standards, contact journal staff, and review SRs/MAs previously published in the journal.

19.
There are many indicators of journal quality and prestige. Although acceptance rates are discussed anecdotally, there has been little systematic exploration of the relationship between acceptance rates and other journal measures. This study examines the variability of acceptance rates for a set of 5094 journals in five disciplines and the relationship between acceptance rates and JCR measures for 1301 journals. The results show statistically significant differences in acceptance rates by discipline, country affiliation of the editor, and number of reviewers per article. Negative correlations are found between acceptance rates and citation-based indicators, and positive correlations with journal age. These relationships are most pronounced in the most selective journals and vary by discipline. Open access journals were found to have statistically significantly higher acceptance rates than non-open-access journals. Implications in light of changes in the scholarly communication system are discussed.
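A rank (Spearman) correlation like those reported between acceptance rates and citation indicators can be computed from the standard definition; a stdlib-only sketch, with tied values given averaged ranks:

```python
def rank(xs):
    """Ranks of xs (1 = smallest); tied values receive their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend the tie group
        avg = (i + j) / 2 + 1           # average rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman correlation: Pearson correlation of the two rank vectors."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)
```

Applied to paired lists of acceptance rates and, say, impact factors, a value near -1 expresses the negative association the study reports: more selective journals tend to score higher on citation-based indicators.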

20.
This paper introduces the philosophy and practice behind running 《长江流域资源与环境》 (Resources and Environment in the Yangtze Basin), covering adherence to the journal's mission and distinctive character, full use of the sponsoring scientific library's literature and information resources to raise the journal's substantive quality, provision of diversified literature and information services, and the deepening of journal work, in order to provide a reference for the development of sci-tech journals, especially those sponsored by scientific libraries.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号