Similar Articles
20 similar articles found (search time: 0 ms)
1.
Journal metrics are employed to assess scholarly scientific journals from a general bibliometric perspective. In this context, the Thomson Reuters journal impact factors (JIFs) are the most widely used citation-based indicators. The 2-year journal impact factor (2-JIF) counts citations to one- and two-year-old articles, while the 5-year journal impact factor (5-JIF) counts citations to one- to five-year-old articles. Nevertheless, these indicators are not comparable across fields of science for two reasons: (i) each field has a different impact maturity time, and (ii) there are systematic differences in publication and citation behavior across disciplines. In fact, the 5-JIF first appeared in the Journal Citation Reports (JCR) in 2007 with the purpose of making impact more comparable in fields in which impact matures slowly. However, there is no fixed impact maturity time that is optimal for all fields: in some, two years performs well, whereas in others three or more years are necessary. There is therefore a problem when comparing a journal from a field in which impact matures slowly with one from a field in which impact matures rapidly. In this work, we propose the 2-year maximum journal impact factor (2M-JIF), a new impact indicator that considers the 2-year rolling citation time window of maximum impact instead of the fixed 2-year window. Finally, an empirical application comparing 2-JIF, 5-JIF, and 2M-JIF shows that the maximum rolling target window reduces the between-group variance with respect to the within-group variance in a random sample of about six hundred journals from eight different fields.
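The rolling-window idea behind the 2M-JIF can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the function names, the dictionary representation, and the toy numbers are all hypothetical:

```python
def two_year_jif(cites_by_age, pubs_by_age, start):
    """Ordinary 2-year JIF over the window of articles aged [start, start+1] years.

    cites_by_age[a]: citations received in the census year by articles a years old.
    pubs_by_age[a]:  number of citable articles published a years before the census year.
    """
    cites = cites_by_age[start] + cites_by_age[start + 1]
    pubs = pubs_by_age[start] + pubs_by_age[start + 1]
    return cites / pubs if pubs else 0.0

def two_year_max_jif(cites_by_age, pubs_by_age, max_age=5):
    """2M-JIF: the maximum 2-year JIF over all rolling 2-year windows up to max_age."""
    return max(two_year_jif(cites_by_age, pubs_by_age, s) for s in range(1, max_age))

# Toy journal whose impact matures slowly (citations peak for articles aged 3-4)
cites = {1: 10, 2: 30, 3: 80, 4: 70, 5: 20}
pubs = {1: 50, 2: 50, 3: 50, 4: 50, 5: 50}
print(two_year_jif(cites, pubs, 1))   # 0.4 -- the ordinary 2-JIF undervalues this journal
print(two_year_max_jif(cites, pubs))  # 1.5 -- the rolling window over ages 3-4
```

For this slow-maturing toy journal the fixed 2-year window misses the citation peak, which is exactly the comparability problem the 2M-JIF is meant to address.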

2.
In a recent paper in the Journal of Informetrics, Habibzadeh and Yadollahie [Habibzadeh, F., & Yadollahie, M. (2008). Journal weighted impact factor: A proposal. Journal of Informetrics, 2(2), 164–172] propose a journal weighted impact factor (WIF). Unlike the ordinary impact factor, the WIF of a journal takes into account the prestige or the influence of citing journals. In this communication, we show that the way in which Habibzadeh and Yadollahie calculate the WIF of a journal has some serious problems. Due to these problems, a ranking of journals based on WIF can be misleading. We also indicate how the problems can be solved by changing the way in which the WIF of a journal is calculated.

3.
The journal impact factor (JIF) reported in the Journal Citation Reports has been used to represent the influence and prestige of a journal. Although accounting for the stochastic nature of a statistic is a prerequisite for statistical inference, estimates of JIF uncertainty, which are necessary for comparing impact among journals, have been unavailable. Using journals in the Database of Research in Science Education (DoRISE), the current study proposes bootstrap methods to estimate JIF variability. The paper also provides a comprehensive exposition of the sources of JIF variability: the collections of articles in the year of interest and in the preceding years both contribute, and the variability estimate differs depending on how a database selects its journals for inclusion. In the bootstrap process, the nested structure of articles within a journal was accounted for to ensure that each bootstrap replication reflects the actual citation characteristics of the journal's articles. In conclusion, the proposed point and interval estimates of the JIF statistic allow more informative inferences to be drawn on the impact of journals.
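A percentile-bootstrap interval for a JIF-style statistic can be sketched as follows. This flat resampling of articles is a simplification of the study's method (which additionally accounts for the nested structure of articles within a journal); all names and numbers are illustrative:

```python
import random

def jif(cites_per_article):
    """JIF point estimate: total citations divided by the number of citable articles."""
    return sum(cites_per_article) / len(cites_per_article)

def bootstrap_jif_ci(cites_per_article, n_boot=2000, alpha=0.05, seed=1):
    """Percentile-bootstrap confidence interval for the JIF: resample articles
    with replacement, recompute the JIF each time, and take the alpha/2 and
    1 - alpha/2 quantiles of the bootstrap distribution."""
    rng = random.Random(seed)
    n = len(cites_per_article)
    reps = sorted(
        jif([cites_per_article[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

cites = [0, 0, 1, 1, 2, 2, 3, 5, 8, 13]  # citations to each of 10 citable articles
lo, hi = bootstrap_jif_ci(cites)
print(round(jif(cites), 2), round(lo, 2), round(hi, 2))
```

The width of the interval makes visible how much of an apparent difference between two journals' JIFs could be sampling noise.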

4.
There are many indicators of journal quality and prestige. Although acceptance rates are discussed anecdotally, there has been little systematic exploration of the relationship between acceptance rates and other journal measures. This study examines the variability of acceptance rates for a set of 5094 journals in five disciplines and the relationship between acceptance rates and JCR measures for 1301 journals. The results show statistically significant differences in acceptance rates by discipline, country affiliation of the editor, and number of reviewers per article. Negative correlations are found between acceptance rates and citation-based indicators. Positive correlations are found with journal age. These relationships are most pronounced in the most selective journals and vary by discipline. Open access journals were found to have statistically significantly higher acceptance rates than non-open access journals. Implications in light of changes in the scholarly communication system are discussed.

5.
Microsoft Academic is a free academic search engine and citation index that is similar to Google Scholar but can be automatically queried. Its data is potentially useful for bibliometric analysis if it is possible to search effectively for individual journal articles. This article compares different methods to find journal articles in its index by searching for a combination of title, authors, publication year and journal name and uses the results for the widest published correlation analysis of Microsoft Academic citation counts for journal articles so far. Based on 126,312 articles from 323 Scopus subfields in 2012, the optimal strategy to find articles with DOIs is to search for them by title and filter out those with incorrect DOIs. This finds 90% of journal articles. For articles without DOIs, the optimal strategy is to search for them by title and then filter out matches with dissimilar metadata. This finds 89% of journal articles, with an additional 1% incorrect matches. The remaining articles seem to be mainly not indexed by Microsoft Academic or indexed with a different language version of their title. From the matches, Scopus citation counts and Microsoft Academic counts have an average Spearman correlation of 0.95, with the lowest for any single field being 0.63. Thus, Microsoft Academic citation counts are almost universally equivalent to Scopus citation counts for articles that are not recent, but there are national biases in the results.

6.
We analyse the difference between the averaged (average of ratios) and globalised (ratio of averages) author-level aggregation approaches based on various paper-level metrics. We evaluate the aggregation variants in terms of (1) their field bias on the author level and (2) their ranking performance based on test data comprising researchers who have received fellowship status or won prestigious awards for their long-lasting and high-impact research contributions to their fields. We consider various direct and indirect paper-level metrics with different normalisation approaches (mean-based, percentile-based, co-citation-based) and focus on the bias and performance differences between the two aggregation variants of each metric. We execute all experiments on two publication databases which use different field categorisation schemes. The first uses author-chosen concept categories and covers the computer science literature. The second covers all disciplines and categorises papers by keywords based on their contents. In terms of bias, we find relatively little difference between the averaged and globalised variants. For mean-normalised citation counts we find no significant difference between the two approaches. However, the percentile-based metric shows less bias with the globalised approach, except for citation windows smaller than four years. On the multi-disciplinary database, PageRank has the overall least bias but shows no significant difference between the two aggregation variants. The averaged variants of most metrics have less bias for small citation windows. For larger citation windows the differences are smaller and are mostly insignificant. In terms of ranking the well-established researchers who have received accolades for their high-impact contributions, we find that the globalised variant of the percentile-based metric performs better. Again we find no significant differences between the globalised and averaged variants based on citation counts and PageRank scores.
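The two aggregation variants can be shown with a toy computation over mean-normalised citation scores (the numbers and variable names are hypothetical; each paper's score is its citation count divided by the field/year expected value):

```python
def averaged(citations, expected):
    """Average of ratios: normalise each paper first, then average the scores."""
    return sum(c / e for c, e in zip(citations, expected)) / len(citations)

def globalised(citations, expected):
    """Ratio of averages: sum citations and expected values first, then divide."""
    return sum(citations) / sum(expected)

# Two papers from fields with different expected citation rates
cites, expect = [10, 2], [5.0, 1.0]
print(averaged(cites, expect), globalised(cites, expect))    # 2.0 2.0 -- identical here

# The variants diverge once performance differs across fields:
cites2 = [10, 1]
print(averaged(cites2, expect), globalised(cites2, expect))  # 1.5 vs ~1.83
```

The globalised variant implicitly weights papers by their expected citation rate, so a paper from a highly cited field dominates the score; the averaged variant treats every paper equally. This is the mechanical source of the bias differences the study measures.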

7.
Recently, two new indicators (Equalized Mean-based Normalized Proportion Cited, EMNPC; Mean-based Normalized Proportion Cited, MNPC) were proposed which are intended for sparse scientometrics data, e.g., alternative metrics (altmetrics). The indicators compare the proportion of mentioned papers (e.g. on Facebook) of a unit (e.g., a researcher or institution) with the proportion of mentioned papers in the corresponding fields and publication years (the expected values). In this study, we propose a third indicator (Mantel-Haenszel quotient, MHq) belonging to the same indicator family. The MHq is based on the MH analysis – an established method in statistics for the comparison of proportions. We test (using citations and assessments by peers, i.e. F1000Prime recommendations) if the three indicators can distinguish between different quality levels as defined on the basis of the assessments by peers. Thus, we test their convergent validity. We find that the indicator MHq is able to distinguish between the quality levels in most cases while MNPC and EMNPC are not. Since the MHq is shown in this study to be a valid indicator, we apply it to six types of zero-inflated altmetrics data and test whether different altmetrics sources are related to quality. The results for the various altmetrics demonstrate that the relationship between altmetrics (Wikipedia, Facebook, blogs, and news data) and assessments by peers is not as strong as the relationship between citations and assessments by peers. Actually, the relationship between citations and peer assessments is about two to three times stronger than the association between altmetrics and assessments by peers.
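The MH analysis underlying the MHq pools 2x2 tables (mentioned vs. not mentioned, unit vs. reference set) across field/year strata. The sketch below implements the classical Mantel-Haenszel pooled odds ratio; the study's exact MHq definition may differ in detail, and all counts are invented:

```python
def mh_quotient(strata):
    """Mantel-Haenszel pooled odds ratio across strata.

    Each stratum is (a, b, c, d) for one field/publication-year combination:
      a = unit's mentioned papers,      b = unit's unmentioned papers,
      c = reference set's mentioned,    d = reference set's unmentioned.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

strata = [
    (8, 2, 400, 600),  # stratum 1: field A, year Y
    (3, 7, 100, 900),  # stratum 2: field B, year Y
]
print(mh_quotient(strata))  # 5.0: mentioned far more often than the reference sets
```

Because each stratum contributes weighted counts rather than a per-stratum proportion, sparse strata with zero mentions do not produce undefined ratios, which is why this family suits zero-inflated altmetrics data.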

8.
China and India are widely recognized as the two Asian developing countries closest to each other in overall strength, so further analysis of their differences in science and technology, politics, military affairs, and other areas is worthwhile, so that each may learn from the other's strengths. Academic journals reflect a country's research competitiveness and are one of the important expressions of scientific productivity; comparing the academic journals of China and India therefore has practical significance. This paper selects the journals of both China and India that are indexed in the Journal Citation Reports (JCR) and, by comparing each journal's indicators and the disciplinary coverage of these journals, identifies China's strengths and weaknesses relative to India in such scientific journals and draws the corresponding conclusions.

9.
Studies of the relationship between the numbers of citations and downloads of scientific publications are beneficial for understanding the mechanisms of citation patterns and for research evaluation. However, few studies have considered directionality between downloads and citations, or adopted a case-by-case time-lag length between the download and citation time series of each individual publication. In this paper, we introduce a Granger-causal inference strategy to study the directionality between downloads and citations and set the time-lag length between the time series for each case. Examining publications in The Lancet, we find that publications show various directionality patterns, but highly cited publications are more likely to exhibit Granger causality. We introduce the Granger-causal inference method to information science step by step, in four stages: conducting stationarity tests, determining the time lag between time series, conducting cointegration tests, and implementing Granger-causality inference. We hope that future information scientists can apply this method in their own research contexts.
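The final inference stage can be sketched as comparing a restricted and an unrestricted lag regression. This is only a minimal illustration of the Granger F statistic on synthetic data (the full four-step procedure also requires the stationarity and cointegration tests, and real analyses would use a library such as statsmodels):

```python
import numpy as np

def granger_f(x, y, lag=1):
    """F statistic for "x Granger-causes y" at a given lag.

    Restricted model:   y_t ~ const + y_{t-1..t-lag}
    Unrestricted model: y_t ~ const + y_{t-1..t-lag} + x_{t-1..t-lag}
    A large F means the lagged x values add predictive power for y.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    T = len(y)
    target = y[lag:]

    def lagmat(s):
        # column k holds s_{t-k} for t = lag..T-1
        return np.column_stack([s[lag - k:T - k] for k in range(1, lag + 1)])

    def rss(Z):
        beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
        resid = target - Z @ beta
        return float(resid @ resid)

    ones = np.ones((T - lag, 1))
    rss_r = rss(np.hstack([ones, lagmat(y)]))
    rss_u = rss(np.hstack([ones, lagmat(y), lagmat(x)]))
    dof = (T - lag) - (2 * lag + 1)
    return ((rss_r - rss_u) / lag) / (rss_u / dof)

# Synthetic series where x drives y with a one-step lag (e.g. downloads -> citations)
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

print(granger_f(x, y) > granger_f(y, x))  # True: the asymmetry reveals directionality
```

Running the test in both directions, as here, is what lets each publication be classified into a directionality pattern.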

10.
This paper proposes an empirical analysis of several scientists based on their time regularity, defined as the ability to generate an active and stable research output over time, in terms of both quantity (publications) and impact (citations). In particular, we empirically analyse three recent bibliometric tools for qualitative/quantitative evaluation under the new perspective of regularity: (1) the PY/CY diagram, (2) the publication/citation Ferrers diagram and triad indicators, and (3) a year-by-year comparison of the scientists' output (Borda's ranking). Results of the regularity analysis are then compared with those obtained under the classical perspective of overall production. The proposed evaluation tools can be applied to competitive examinations for research positions and promotions, as complementary instruments to commonly adopted bibliometric techniques.

11.
Using bibliometric methods, this article statistically analyses the submissions, published articles, authors, and citations of the relaunched Journal of Academic Library and Information Science (《大学图书情报学刊》, 2002–2004), in order to evaluate the journal's achievements and shortcomings objectively and to promote its sustained and steady development.

12.
Assessing the scholarly impact of academic institutions has become increasingly important. The achievements of editorial board members can create benchmarks for research excellence and can be used to evaluate both individual and institutional performance. This paper proposes a new method based on journal editor data for assessing an institution’s scholarly impact. In this paper, a journal editorship index (JEI) that simultaneously accounts for the journal rating (JR), editor title (ET), and board size (BS) is constructed. We assess the scholarly impact of economics institutions based on the editorial boards of 211 economics journals (which include 8640 editorial board members) in the ABS Academic Journal Guide. Three indices (JEI/ET, JEI/JR, and JEI/BS) are also used to rank the institutions. It was found that there was only a slight change in the relative institutional rankings using the JEI/ET and JEI/BS compared to the JEI. The BS and ET weight factors did not have a substantial influence on the ranking of institutions. It was also found that the journal rating weight factor had a large effect on the ranking of institutions. This paper presents an alternative approach to using editorial board memberships as the basis for assessing the scholarly impact of economics institutions.

13.
This article reports a comparative study of five measures that quantify the degree of research collaboration: the collaborative index, the degree of collaboration, the collaborative coefficient, the revised collaborative coefficient, and degree centrality. The empirical results showed that these measures all capture the notion of research collaboration, which is consistent with prior studies. Moreover, the results showed that degree centrality, the revised collaborative coefficient, and the degree of collaboration had the highest coefficient estimates on research productivity, the average JIF, and the average number of citations, respectively. Overall, this article suggests that the degree of collaboration and the revised collaborative coefficient are superior measures that future researchers can apply in bibliometric studies.
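Three of the compared measures have simple closed forms over per-paper author counts. The sketch below uses the standard definitions from the bibliometrics literature (degree centrality and the revised collaborative coefficient, which need co-authorship network and field data, are omitted); the example counts are invented:

```python
def collaboration_measures(author_counts):
    """Three classical collaboration measures from per-paper author counts.

    CI (collaborative index):       mean number of authors per paper
    DC (degree of collaboration):   share of multi-authored papers
    CC (collaborative coefficient): 1 - mean(1/j) over papers with j authors each
    """
    n = len(author_counts)
    ci = sum(author_counts) / n
    dc = sum(1 for j in author_counts if j > 1) / n
    cc = 1 - sum(1 / j for j in author_counts) / n
    return ci, dc, cc

papers = [1, 2, 2, 3, 4]  # author counts for five papers
ci, dc, cc = collaboration_measures(papers)
print(ci, dc, round(cc, 3))  # 2.4 0.8 0.483
```

CC lies in [0, 1) and, unlike CI, does not grow without bound as author lists lengthen, which is one reason the coefficient family is preferred for cross-field comparison.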

14.
From the perspective of the graded catalogs of academic outputs actually used by universities, this paper explores the stratification of Chinese library and information science (LIS) journals, providing a reference for LIS researchers choosing publication venues, for librarians adjusting journal subscription plans, and for university administrators formulating policies for grading academic outputs. Through a web survey, the graded catalogs published by 63 Chinese universities with master's programs in library and information science (including professional master's degrees) were collected and consolidated; 24 LIS journals were extracted, uniformly coded, and clustered to produce a journal stratification. The results show that the stratification of LIS journals is quite distinct, with the top-tier journals remaining relatively stable for many years. University catalogs are clearly influenced by the CSSCI source-journal selection and the Peking University core-journal evaluation, indicating that the "bucket" classification of core versus non-core journals has a clear guiding effect on universities' policies for grading academic outputs.

15.
A Quantitative Analysis of the Articles, Authors, and Citations of Eight Major Chinese Library Science Journals
Based on the original literature and using bibliometric methods, this study surveys, analyses, and evaluates the articles published in 2001, the author composition, and the cited references of eight major Chinese library science journals, including the Journal of Library Science in China (《中国图书馆学报》).

16.
A Trial Application of the h-Index to Evaluating Scientific Journals
In 2005, Hirsch proposed the h-index for evaluating the achievements of individual scientists. This article ranks the chemistry and civil engineering/architecture journals indexed in 2003 by the Chinese Science Citation Database according to their h-index and relative h-index, in order to show how these rankings differ from impact-factor rankings and to demonstrate the practical significance of the h-index and relative h-index for journal evaluation. Finally, it points out that when the h-index and relative h-index are used to evaluate journals, they should be considered comprehensively alongside other measures.
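The h-index applies to a journal exactly as to an individual: sort the journal's articles by citations and find the break-even rank. A minimal sketch follows; note that the "relative h-index" here is normalised by the number of articles, which is one plausible reading and an assumption on our part, since the article's exact definition is not given in the abstract:

```python
def h_index(citations):
    """h-index: the largest h such that h papers each have at least h citations."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def relative_h_index(citations):
    """h-index divided by the number of papers (an assumed normalisation, so that
    journals publishing very different volumes become comparable)."""
    return h_index(citations) / len(citations)

journal_cites = [25, 8, 5, 3, 3, 2, 1, 0]  # citations to one journal's articles
print(h_index(journal_cites))  # 3 (h=4 would need four articles with >= 4 citations)
print(relative_h_index(journal_cites))
```

Unlike the impact factor, the h-index is insensitive to a single runaway article (the 25 above could be 250 without changing h), which is the main source of the ranking differences the article examines.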

17.
18.
Graduate students at the University of Manitoba were surveyed to find out whether they used reference management software (RMS), which features they used, and the challenges and barriers to using RMS. Interest in different types of PDF management features and training options was also investigated. Both users and non-users of reference management software were invited to participate. Non-users managed their citations and references with a variety of other tools; the principal reasons for non-use were that students were not aware of the options available and the amount of time needed to learn the program. RMS users also mentioned the steep learning curve, problems with extracting metadata from PDFs, technical issues, and inaccurate citation styles. Most of the students saved PDF documents to their computers. Students were most interested in full-text searching of PDFs, automatic renaming of PDFs, and automatic extraction of citation metadata from a PDF. PDF annotation and reading tools were also of some interest; mobile features were of the least interest. There were no statistically significant differences between the user and non-user groups in interest in PDF management features, but there were statistically significant differences between the groups in interest in some of the training options.

19.
The citation relationships among scientific documents are the principal basis of citation analysis, a bibliometric method that uses citation data to reveal quantitative characteristics and regularities of the literature. The authors quantitatively analyse the citations in the 1998 and 2004 volumes of the Journal of Academic Library and Information Science (《大学图书情报学刊》), describing developments and changes in the number of citations, cited document types, citation languages, citation subjects, and the source journals of the citations, and raise several issues that deserve attention.

20.
Using bibliometric methods, this article statistically analyses the articles published in, and the citations of, the Journal of Academic Library and Information Science (《大学图书情报学刊》) from 2001 to February 2005, providing reference data for the journal's future development.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号