Similar Literature
20 similar articles found (search time: 31 ms)
1.
Altmetrics from Altmetric.com are widely used by publishers and researchers to give earlier evidence of attention than citation counts. This article assesses whether Altmetric.com scores are reliable early indicators of likely future impact and whether they may also reflect non-scholarly impacts. A preliminary factor analysis suggests that the main altmetric indicator of scholarly impact is Mendeley reader counts, with weaker news, informational and social network discussion/promotion dimensions in some fields. Based on a regression analysis of Altmetric.com data from November 2015 and Scopus citation counts from October 2017 for articles in 30 narrow fields, only Mendeley reader counts are consistent predictors of future citation impact. Most other Altmetric.com scores can help predict future impact in some fields. Overall, the results confirm that early Altmetric.com scores can predict later citation counts, although less well than journal impact factors, and the optimal strategy is to consider both Altmetric.com scores and journal impact factors. Altmetric.com scores can also reflect dimensions of non-scholarly impact in some fields.
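A minimal sketch of the kind of regression step this abstract describes (not the authors' exact model or data; all values below are synthetic): regress log-transformed citation counts on log-transformed altmetric indicators and inspect which indicator carries predictive weight.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
mendeley = rng.poisson(20, n)   # Mendeley reader counts
tweets = rng.poisson(5, n)      # tweet counts
news = rng.poisson(1, n)        # news mentions
# Synthetic citations driven mainly by Mendeley readers, mimicking the finding.
citations = rng.poisson(1 + 0.5 * mendeley)

X = np.log1p(np.column_stack([mendeley, tweets, news]))
y = np.log1p(citations)
model = LinearRegression().fit(X, y)
print(dict(zip(["mendeley", "tweets", "news"], model.coef_.round(3))))
print("R^2:", round(model.score(X, y), 3))
```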

2.
We reproduce the article-level, field-independent citation metric Relative Citation Ratio (RCR) using the Scopus database, and extend it beyond the biomedical field to all subject areas. We compare the RCR to the Field-Weighted Citation Impact (FWCI), also an article-level, field-normalised metric, and present the first results of correlations, distributions and application to research university benchmarking for both metrics. Our analyses demonstrate that FWCI and RCR of articles correlate with varying strengths across different areas of research. Additionally, we observe that both metrics are comparably stable across different subject areas of research. Moreover, at the level of universities, both metrics correlate strongly.
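Conceptually, the FWCI divides a paper's citations by the average citations of comparable papers (same field, publication year, and document type). A hedged toy sketch of that normalisation, ignoring document type and using made-up data:

```python
import pandas as pd

# Expected citations = mean citations of papers from the same field and year.
# (The real FWCI also conditions on document type and uses the Scopus corpus.)
papers = pd.DataFrame({
    "field":     ["bio", "bio", "math", "math", "bio"],
    "year":      [2015, 2015, 2015, 2015, 2016],
    "citations": [10, 2, 3, 1, 8],
})
expected = papers.groupby(["field", "year"])["citations"].transform("mean")
papers["fwci"] = papers["citations"] / expected
print(papers)
```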

3.
Xiao Hong, Wu Junhong, Sun Jun. 编辑学报 (Acta Editologica), 2017, 29(4): 340-344
In the quantitative evaluation system for academic journals, the impact factor and total citation count are the two most important indicators and carry the highest weights. However, a journal's total citations are affected by the length of its publishing history, the number of articles it publishes, its publication frequency, and the size of its disciplinary community. In particular, journals that publish large numbers of low-quality papers can still accumulate high total citation counts through sheer volume, even though their impact factors are low and their papers are poor. How can such high-volume, low-quality journals be identified objectively? This paper proposes a new indicator for measuring the quantity-impact relationship of journals: the journal mass index (JMI). "Mass" refers to the journal's article output, while "impact" is captured by its impact factor. JMI is defined as the ratio of a journal's impact factor to the number of articles underlying that impact factor, i.e., the average contribution of each article to the journal's impact factor. JMI objectively reflects the "bloat" of journals within a discipline that publish many articles of low quality. In the Annual Report for Chinese Academic Journal Impact Factors (2016 edition), JMI was applied to adjust the journal influence index (CI) ranking, so that the CI ranking more accurately reflects journals' disciplinary influence. Practice has shown that JMI is a useful metric for objectively judging the quantity-impact relationship of academic journals.
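Reading the definition above literally, the JMI is a one-line computation: impact factor divided by the number of articles underlying it. A minimal illustration with hypothetical numbers:

```python
def journal_mass_index(impact_factor: float, citable_items: int) -> float:
    """JMI: impact factor divided by the number of articles underlying it,
    i.e. the average contribution of one article to the journal's IF."""
    return impact_factor / citable_items

# Two journals with the same IF: the one publishing twenty times as many
# papers contributes far less impact per article and gets a much lower JMI.
print(journal_mass_index(2.0, 100))    # 0.02
print(journal_mass_index(2.0, 2000))   # 0.001
```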

4.
The main objective of this study is to describe the life cycle of altmetric and bibliometric indicators in a sample of publications. Altmetrics (Downloads, Views, Readers, Tweets, and Blog mentions) and bibliometric counts (Citations) of 5185 publications (19,186 observations) were extracted from PlumX to observe their distribution according to publication age (in this study, indicator names are capitalized to distinguish them from general language). Correlations between these metrics were calculated from month to month to observe the evolution of these relationships. The results showed that the mention metrics (Tweets and Blog mentions) become available most quickly and have the shortest life cycle. Readers are the metrics with the highest prevalence and the second-fastest growth. Views and Downloads show continuous growth, making them the indicators with the longest life cycles. Finally, Citations are the slowest indicators and have low prevalence. Correlations show a strong relationship between the mention metrics and Readers and Downloads, and between Readers and Citations. These results enable us to create a schematic diagram of the relationships between these metrics from a longitudinal view.

5.
[Purpose/Significance] This study examines the validity of altmetric indicators for evaluating the impact of academic books and offers recommendations for book evaluation. [Method/Process] Data on Twitter mentions, Mendeley readers, online book reviews, and library holdings were collected. After analysing coverage rates, quantiles, and other statistics of the dataset, correlations between citation counts and the altmetric indicators were tested, followed by empirical analyses of books with high altmetric values by publication year, discipline, and topic, to explore the role of each indicator in evaluating the impact of academic books. [Result/Conclusion] The correlation between the traditional citation count and altmetric indicators is low, suggesting that altmetrics offer a new perspective on book evaluation, with different altmetric indicators reflecting different dimensions of a book's impact. Future evaluation of academic books should take account of characteristics such as publication year and discipline, and combine traditional citations with altmetric indicators to build a more comprehensive and effective evaluation mechanism.

6.
[Purpose/Significance] Scholarly evaluation matters to the development of the entire academic ecosystem. Taking the impact factor and Google Scholar Metrics as its lens, this paper tracks new developments in scholarly evaluation indicators at home and abroad and explores possible directions for their improvement. [Method/Process] The top 50 Chinese-language and top 50 English-language publications ranked by h5 index were selected and their corresponding impact factors retrieved; the relationship between the h5 index and the impact factor was analysed and tested. The two groups of publications were compared in terms of disciplinary distribution, time window, and citation-counting standards, and the many factors that evaluation indicators should consider were summarised. New web-era evaluation approaches were explored along dimensions such as who evaluates and what is evaluated, with case studies of the innovative evaluation practices of Altmetrics, RCR, and PubPeer. [Result/Conclusion] The scholarly evaluation system is complex and closely tied to publishing, communication and dissemination, and preservation and use. A sound evaluation system should balance quantity and quality, remain objective and neutral, weigh content as well as form, and evaluate scholarship at multiple levels, in multiple dimensions, and from all angles.
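For reference, the h5 index used by Google Scholar Metrics is the largest number h such that h of a venue's articles published in the last five years have at least h citations each. A minimal sketch:

```python
def h5_index(citation_counts):
    """h5 index: the largest h such that h of a venue's articles published in
    the last five years have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citation_counts, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h5_index([25, 17, 9, 8, 8, 3, 2]))  # 5
```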

7.
[Purpose/Significance] This paper studies the main characteristics of altmetric indicators, their correlations with traditional bibliometric indicators, and how both evolve over time; comprehensively evaluating the societal and scholarly impact of papers on the basis of altmetric indicators is essential for developing and refining altmetric measurement systems. [Method/Process] Taking the Altmetric Top 100 papers of 2014-2016 as the sample, each year's high-altmetric papers were analysed by source journal, discipline, access model, and the geographic and institutional distribution of authors, and their societal impact was discussed. Correlations between papers' Altmetric scores and their Web of Science citation counts were also analysed, along with how the correlation evolves over time. [Result/Conclusion] High-altmetric papers come mainly from a small set of high-impact-factor journals and are concentrated in medical/health and biological sciences; their authors are mostly affiliated with leading research institutions in developed countries in Europe and North America, and the share of open-access papers among them has grown year by year. The Altmetric score quantifies the public attention a paper receives on social and news media and thus reflects, to some degree, its societal impact. High-altmetric papers' Altmetric scores are positively correlated with their citation counts, indicating that such papers also have relatively high scholarly impact.

8.
Applying Google's PageRank principle to journal citation analysis, this paper proposes an indicator of a journal's influence within a citation network: the Journal Impact Rank in Citation Net (Impact Rank, IR). IR was computed for 118 biology journals, and the results were statistically compared with the impact factor (IF) values reported in the JCR to examine their correlation and differences. The results show that IR correlates only weakly with IF, and the differences are statistically significant. The reason is that IR accounts for the weight of citing journals and the mutual influence between journals, making it better suited to reflecting a journal's relative influence within the citation network of its discipline or field; IF, being in essence the average citations per article, ignores inter-journal relationships and the authority of citing journals, making it better suited to a journal's longitudinal self-evaluation. IR and IF evaluate journal influence from two different angles and can complement each other.
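A sketch of the PageRank principle applied to a toy journal citation network, using networkx's generic PageRank (the paper's IR computation may differ in details such as damping and weighting):

```python
import networkx as nx

# Edge A -> B means journal A cites journal B; weights are citation counts.
# PageRank weights a citation by the prestige of the citing journal, unlike
# the impact factor's flat per-article count.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("J1", "J2", 30), ("J2", "J1", 5),
    ("J3", "J2", 20), ("J3", "J1", 10),
    ("J2", "J3", 8),
])
impact_rank = nx.pagerank(G, alpha=0.85, weight="weight")
print(sorted(impact_rank.items(), key=lambda kv: -kv[1]))
```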

9.
A study of selective article-level impact metrics based on the PLOS API (total citations: 1; self-citations: 0; citations by others: 1)
Using the article-level metrics dataset provided by the PLOS API as the sample, 20 selective metrics were standardised and tested for normality, and two properties of selective metrics were proposed: breadth of applicability and density of distribution. A non-parametric correlation coefficient matrix was obtained through Spearman correlation analysis and visualised as a colour map with the R package corrplot, giving an intuitive view of the degree of correlation among the selective metrics. Multidimensional scaling and social network analysis of the metrics with SPSS, Ucinet, and NetDraw produced metric dimensions that, despite slight differences, were essentially consistent. Finally, the main functions, scope of application, and difficulty of the selective metrics are summarised.
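A Python analogue of the correlation step (the study itself used R's corrplot for visualisation): a Spearman rank-correlation matrix across article-level metrics, here on made-up values:

```python
import pandas as pd

# Spearman correlation is rank-based, so it tolerates the skewed, zero-heavy
# distributions typical of article-level metrics.
metrics = pd.DataFrame({
    "views":     [120, 45, 300, 80, 15],
    "downloads": [60, 20, 150, 35, 5],
    "tweets":    [3, 0, 12, 1, 0],
    "citations": [10, 2, 25, 4, 0],
})
print(metrics.corr(method="spearman").round(2))
```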

10.
Can altmetric data be validly used for the measurement of societal impact? The current study seeks to answer this question with a comprehensive dataset (about 100,000 records) from very disparate sources (F1000, Altmetric, and an in-house database based on Web of Science). In the F1000 peer review system, experts attach particular tags to scientific papers which indicate whether a paper could be of interest for science or rather for other segments of society. The results show that papers with the tag “good for teaching” do achieve higher altmetric counts than papers without this tag – if the quality of the papers is controlled. At the same time, a higher citation count is shown especially by papers with a tag that is specifically scientifically oriented (“new finding”). The findings indicate that papers tailored for a readership outside the area of research should lead to societal impact.

If altmetric data is to be used for the measurement of societal impact, the question arises of its normalization. In bibliometrics, citations are normalized for the papers’ subject area and publication year. This study has taken a second analytic step involving a possible normalization of altmetric data. As the results show, there are particular scientific topics which are of especial interest to a wide audience. Since these more or less interesting topics are not completely reflected in Thomson Reuters’ journal sets, a normalization of altmetric data should not be based on the level of subject categories, but on the level of topics.

11.
We analyse the difference between the averaged (average of ratios) and globalised (ratio of averages) author-level aggregation approaches based on various paper-level metrics. We evaluate the aggregation variants in terms of (1) their field bias on the author-level and (2) their ranking performance based on test data that comprises researchers that have received fellowship status or won prestigious awards for their long-lasting and high-impact research contributions to their fields. We consider various direct and indirect paper-level metrics with different normalisation approaches (mean-based, percentile-based, co-citation-based) and focus on the bias and performance differences between the two aggregation variants of each metric. We execute all experiments on two publication databases which use different field categorisation schemes. The first uses author-chosen concept categories and covers the computer science literature. The second covers all disciplines and categorises papers by keywords based on their contents. In terms of bias, we find relatively little difference between the averaged and globalised variants. For mean-normalised citation counts we find no significant difference between the two approaches. However, the percentile-based metric shows less bias with the globalised approach, except for citation windows smaller than four years. On the multi-disciplinary database, PageRank has the overall least bias but shows no significant difference between the two aggregation variants. The averaged variants of most metrics have less bias for small citation windows. For larger citation windows the differences are smaller and are mostly insignificant.

In terms of ranking the well-established researchers who have received accolades for their high-impact contributions, we find that the globalised variant of the percentile-based metric performs better. Again we find no significant differences between the globalised and averaged variants based on citation counts and PageRank scores.
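The two aggregation variants named above reduce to a one-line difference: the averaged variant takes the mean of per-paper normalised ratios, while the globalised variant divides the author's total citations by their total expected citations. A minimal illustration:

```python
import numpy as np

citations = np.array([10, 0, 4])         # one author's papers
expected  = np.array([5.0, 2.0, 8.0])    # field/year citation baselines

averaged   = np.mean(citations / expected)      # average of ratios
globalised = citations.sum() / expected.sum()   # ratio of averages
print(round(averaged, 3), round(globalised, 3)) # 0.833 0.933
```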

12.
The new web-based academic communication platforms not only enable researchers to better advertise their academic outputs, making them more visible than ever before, but also provide a wide supply of metrics to help authors better understand the impact their work is making. This study has three objectives: a) to analyse the uptake of some of the most popular platforms (Google Scholar Citations, ResearcherID, ResearchGate, Mendeley and Twitter) by a specific scientific community (bibliometrics, scientometrics, informetrics, webometrics, and altmetrics); b) to compare the metrics available from each platform; and c) to determine the meaning of all these new metrics. To do this, the data available in these platforms about a sample of 811 authors (researchers in bibliometrics for whom a public Google Scholar Citations profile was found) were extracted. A total of 31 metrics were analysed. The results show that a high number of the analysed researchers had a profile only in Google Scholar Citations (159), or only in Google Scholar Citations and ResearchGate (142). Lastly, we find two kinds of metrics of online impact: first, metrics related to connectivity (followers), and second, all metrics associated with academic impact. This second group can further be divided into usage metrics (reads, views) and citation metrics. The results suggest that Google Scholar Citations is the source that provides the most comprehensive citation-related data, whereas Twitter stands out in connectivity-related metrics.

13.
Journal metrics are employed for the assessment of scholarly journals from a general bibliometric perspective. In this context, the Thomson Reuters journal impact factors (JIFs) are the most widely used citation-based indicators. The 2-year journal impact factor (2-JIF) counts citations to one- and two-year-old articles, while the 5-year journal impact factor (5-JIF) counts citations to articles one to five years old. Nevertheless, these indicators are not comparable across fields of science for two reasons: (i) each field has a different impact maturity time, and (ii) there are systematic differences in publication and citation behavior across disciplines. In fact, the 5-JIF first appeared in the Journal Citation Reports (JCR) in 2007 with the purpose of making impact more comparable in fields in which it matures slowly. However, there is no single optimal impact maturity time valid for all fields: in some, two years gives good performance, whereas in others three or more years are necessary. This creates a problem when comparing a journal from a field in which impact matures slowly with a journal from a field in which impact matures rapidly. In this work, we propose the 2-year maximum journal impact factor (2M-JIF), a new impact indicator that considers the 2-year rolling citation time window of maximum impact instead of the fixed previous 2-year window. Finally, an empirical application comparing 2-JIF, 5-JIF, and 2M-JIF shows that the maximum rolling target window reduces the between-group variance with respect to the within-group variance in a random sample of about six hundred journals from eight different fields.
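A sketch of the idea under stated assumptions (illustrative data; the function and variable names are ours): the classic 2-JIF fixes the publication window to the two years preceding the census year, whereas the 2M-JIF slides a 2-year window back in time and takes the window of maximum impact:

```python
# cites[y]: citations received in the census year (2017) by items published
# in year y; pubs[y]: citable items published in y. Illustrative data for a
# field whose impact peaks three to four years after publication.
cites = {2016: 40, 2015: 90, 2014: 130, 2013: 120, 2012: 70}
pubs  = {2016: 100, 2015: 100, 2014: 100, 2013: 100, 2012: 100}

def window_impact(first_year: int) -> float:
    """Impact of the 2-year publication window [first_year, first_year + 1]."""
    c = cites.get(first_year, 0) + cites.get(first_year + 1, 0)
    n = pubs.get(first_year, 0) + pubs.get(first_year + 1, 0)
    return c / n if n else 0.0

jif_2y = window_impact(2015)                               # fixed 2-JIF window
jif_2m = max(window_impact(y) for y in range(2012, 2016))  # rolling maximum
print(round(jif_2y, 2), round(jif_2m, 2))                  # 0.65 1.25
```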

14.
Altmetrics have been proposed as a way to assess the societal impact of research. Although altmetrics are already in use as impact or attention metrics in different contexts, it is still not clear whether they really capture or reflect societal impact. This study is based on altmetrics, citation counts, research output and case study data from the UK Research Excellence Framework (REF), and peers’ REF assessments of research output and societal impact. We investigated the convergent validity of altmetrics by using two REF datasets: publications submitted as research output (PRO) to the REF and publications referenced in case studies (PCS). Case studies, which are intended to demonstrate societal impact, should cite the most relevant research papers. We used the MHq’ indicator for assessing impact – an indicator which has been introduced for count data with many zeros. The results of the first part of the analysis show that news media as well as mentions on Facebook, in blogs, in Wikipedia, and in policy-related documents have higher MHq’ values for PCS than for PRO. Thus, the altmetric indicators seem to have convergent validity for these data. In the second part of the analysis, altmetrics have been correlated with REF reviewers’ average scores on PCS. The negative or close to zero correlations question the convergent validity of altmetrics in that context. We suggest that they may capture a different aspect of societal impact (which can be called unknown attention) to that seen by reviewers (who are interested in the causal link between research and action in society).

15.
Today, it is not clear how the impact of research on areas of society beyond science should be measured. While peer review and bibliometrics have become standard methods for measuring the impact of research within science, there is not yet an accepted framework for measuring societal impact. Alternative metrics (called altmetrics to distinguish them from bibliometrics) are considered an interesting option for assessing the societal impact of research, as they offer new ways to measure (public) engagement with research output. Altmetrics is a term describing web-based metrics for the impact of publications and other scholarly material, derived from data on social media platforms (e.g. Twitter or Mendeley). This overview of studies explores the potential of altmetrics for measuring societal impact. It deals with the definition and classification of altmetrics, and discusses their benefits and disadvantages for measuring impact.

16.
Most current machine learning methods for building search engines are based on the assumption that there is a target evaluation metric that evaluates the quality of the search engine with respect to an end user, and that the engine should be trained to optimize for that metric. Treating the target evaluation metric as given, many different approaches (e.g. LambdaRank, SoftRank, RankingSVM) have been proposed for optimizing retrieval metrics. Target metrics used in optimization act as bottlenecks that summarize the training data, and it is known that some evaluation metrics are more informative than others. In this paper, we consider the effect of the target evaluation metric on learning to rank. In particular, we question the current assumption that retrieval systems should be designed to directly optimize for a metric that is assumed to evaluate user satisfaction. We show that even if user satisfaction can be measured by a metric X, optimizing the engine on a training set for a more informative metric Y may result in better test performance according to X (as compared to optimizing the engine directly for X on the training set). We analyze when the two cases differ significantly, in terms of the amount of available training data and the dimensionality of the feature space.

17.
An analysis of key indicators in journal evaluation and their correlations (total citations: 2; self-citations: 0; citations by others: 2)
From the perspective of journal citedness, four indicators are selected for review: the impact factor, the journal h-index, the Eigenfactor, and the new journal diffusion factor. Taking a set of Chinese library and information science journals as the empirical sample, the values of these four journal evaluation indicators are compared and the correlations between them analysed. The four indicators are correlated yet distinct; used together, they can offset the defects and misapplications of the impact factor as the sole evaluation indicator, and combining the impact factor, journal h-index, Eigenfactor, and new journal diffusion factor is a practical approach to journal rating.

18.
We evaluate article-level metrics along two dimensions. Firstly, we analyse metrics’ ranking bias in terms of fields and time. Secondly, we evaluate their performance based on test data that consists of (1) papers that have won high-impact awards and (2) papers that have won prizes for outstanding quality. We consider different citation impact indicators and indirect ranking algorithms in combination with various normalisation approaches (mean-based, percentile-based, co-citation-based, and post hoc rescaling). We execute all experiments on two publication databases which use different field categorisation schemes (author-chosen concept categories and categories based on papers’ semantic information).

In terms of bias, we find that citation counts are always less time biased but always more field biased compared to PageRank. Furthermore, rescaling paper scores by a constant number of similarly aged papers reduces time bias more effectively compared to normalising by calendar years. We also find that percentile citation scores are less field and time biased than mean-normalised citation counts.

In terms of performance, we find that time-normalised metrics identify high-impact papers better shortly after their publication compared to their non-normalised variants. However, after 7 to 10 years, the non-normalised metrics perform better. A similar trend exists for the set of high-quality papers where these performance cross-over points occur after 5 to 10 years.

Lastly, we also find that personalising PageRank with papers’ citation counts reduces time bias but increases field bias. Similarly, using papers’ associated journal impact factors to personalise PageRank increases its field bias. In terms of performance, PageRank should always be personalised with papers’ citation counts and time-rescaled for citation windows smaller than 7 to 10 years.
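As an illustration of the percentile approach this study favours, a paper's normalised score is its citation percentile within the reference set of same-field, same-year papers (toy data; scipy's percentileofscore stands in for the study's exact procedure):

```python
from scipy import stats

# 'weak' counts the share of reference papers the target paper matches or
# outperforms in citations.
reference_set = [0, 1, 1, 3, 5, 8, 20, 55]   # citations of field/year peers
paper_citations = 8
print(stats.percentileofscore(reference_set, paper_citations, kind="weak"))
# 75.0: the paper matches or beats 75% of its field/year peers
```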

19.
As a method for measuring new forms of impact, the emergence of altmetrics has prompted wide discussion in the field of research evaluation. Studying the characteristics of papers with high altmetric scores, and how those characteristics evolve, provides guidance for the sound use of altmetric indicators and a reference for developing and refining altmetric methods. This paper therefore takes the Altmetric Top 100 papers as its sample and compares, over the six years 2013-2018, the evolution of high-altmetric papers' distribution across time, journals, and research fields, as well as the contributions of the various source indicators. The results show that online attention to high-altmetric papers has generally risen year by year; the papers are published mainly in ten high-impact journals such as Nature, Science, and PNAS, and fall into eight research fields including medical and health sciences and biological sciences. Among the altmetric indicators, News, Blogs, and Twitter contribute most prominently.

20.
Purpose: This paper aims to examine whether Altmetric data can be used as an indicator for identifying predatory journals. Design/methodology/approach: This is an applied study which uses citation and altmetrics methods. The study selected 21 predatory journals from Beall's list and Kscien's list, as well as 18 non-predatory open access journals from the DOAJ list, in the field of Library and Information Science. The Altmetric score for articles published in these journals was obtained from the Altmetric Explorer, a service provided by Altmetric.com. Web of Science was used to search for citation data of articles published in these journals. Findings: The predatory journals have almost no presence on social media and poor Altmetric scores. In contrast, non-predatory open access journals have high presence rates and Altmetric scores. There is a significant positive correlation between the number of articles cited and the number of articles having an Altmetric score among non-predatory open-access journals, but not among predatory journals. A poor Altmetric score may be viewed as a potential characteristic of predatory journals, but other indicators would also need to be considered to determine whether a journal is predatory. Originality/value: Distinct from traditional research methods, this study combined citation analysis and altmetrics analysis. By comparing the characteristics of predatory journals and non-predatory open access journals, the findings contribute to the identification of predatory journals.
