Similar Articles
20 similar articles found.
1.
We analyse the difference between the averaged (average of ratios) and globalised (ratio of averages) author-level aggregation approaches based on various paper-level metrics. We evaluate the aggregation variants in terms of (1) their field bias at the author level and (2) their ranking performance on test data comprising researchers who have received fellowship status or won prestigious awards for their long-lasting, high-impact research contributions to their fields. We consider various direct and indirect paper-level metrics with different normalisation approaches (mean-based, percentile-based, co-citation-based) and focus on the bias and performance differences between the two aggregation variants of each metric. We execute all experiments on two publication databases which use different field categorisation schemes. The first uses author-chosen concept categories and covers the computer science literature. The second covers all disciplines and categorises papers by keywords based on their contents. In terms of bias, we find relatively little difference between the averaged and globalised variants. For mean-normalised citation counts we find no significant difference between the two approaches. However, the percentile-based metric shows less bias with the globalised approach, except for citation windows shorter than four years. On the multi-disciplinary database, PageRank has the least overall bias but shows no significant difference between the two aggregation variants. The averaged variants of most metrics have less bias for small citation windows; for larger citation windows the differences are smaller and mostly insignificant. In terms of ranking the well-established researchers who have received accolades for their high-impact contributions, we find that the globalised variant of the percentile-based metric performs better. Again, we find no significant differences between the globalised and averaged variants based on citation counts and PageRank scores.
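The two aggregation variants compared above can be illustrated with a minimal Python sketch (ours, not the study's code; the citation counts and field baselines are made-up numbers):

```python
def averaged(cites, expected):
    """Average of ratios: normalise each paper by its field baseline,
    then average the normalised scores over the author's papers."""
    return sum(c / e for c, e in zip(cites, expected)) / len(cites)

def globalised(cites, expected):
    """Ratio of averages: total citations divided by total expected
    citations over the author's papers."""
    return sum(cites) / sum(expected)

# The variants disagree once field baselines vary across papers:
cites = [10, 1]           # citations per paper
baselines = [5.0, 1.0]    # mean citations of each paper's field/year set
print(averaged(cites, baselines))    # → 1.5
print(globalised(cites, baselines))  # ≈ 1.833
```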

2.
Given the growing use of impact metrics in the evaluation of scholars, journals, academic institutions, and even countries, there is a critical need for means to compare scientific impact across disciplinary boundaries. Unfortunately, citation-based metrics are strongly biased by diverse field sizes and publication and citation practices. As a result, we have witnessed an explosion in the number of newly proposed metrics that claim to be “universal.” However, there is currently no way to objectively assess whether a normalized metric can actually compensate for disciplinary bias. We introduce a new method to assess the universality of any scholarly impact metric, and apply it to evaluate a number of established metrics. We also define a very simple new metric, hs, which proves to be universal, allowing us to compare the impact of scholars across scientific disciplines. These results move us closer to a formal methodology in the measurement of scholarly impact.

3.
In order to take multiple co-authorship appropriately into account, a straightforward modification of the Hirsch index was recently proposed. Fractionalised counting of the papers yields an appropriate measure, called the hm-index. In the present work, this procedure is compared with other variants of the h-index and found to be superior to the fractionalised counting of citations and to normalising the h-index by the average number of authors in the h-core. Three fictitious model cases and one empirical case are analysed.
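A minimal sketch of fractionalised paper counting as used in the hm-index (our illustrative Python, not the paper's code; the input format is an assumption):

```python
def hm_index(papers):
    """Schreiber's hm-index. `papers` is a list of (citations, n_authors)
    tuples. Papers are ranked by citations; each paper advances the
    effective rank by only 1/n_authors, and hm is the largest effective
    rank r such that the paper reaching rank r has at least r citations."""
    ranked = sorted(papers, key=lambda p: p[0], reverse=True)
    hm = r_eff = 0.0
    for citations, n_authors in ranked:
        r_eff += 1.0 / n_authors
        if citations >= r_eff:
            hm = r_eff
        else:
            break
    return hm

# With single-author papers, hm reduces to the ordinary h-index:
print(hm_index([(10, 1), (8, 1), (5, 1), (4, 1), (3, 1)]))  # → 4.0
# With two authors per paper, each paper counts only half a rank:
print(hm_index([(10, 2), (8, 2), (5, 2), (4, 2), (3, 2)]))  # → 2.5
```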

4.
A normalized citation indicator may not be sufficiently reliable when a short citation window is used, because recently published papers have had little time to accumulate citations, and their citation counts are therefore unreliable inputs to citation impact indicators. Normalization methods alone cannot solve this problem. We therefore introduce a weighting factor into the commonly used normalization indicator, the Category Normalized Citation Impact (CNCI), at the paper level. The weighting factor, calculated as the correlation coefficient between citation counts of papers in the given short citation window and those in a fixed long citation window, reflects the reliability of a paper's CNCI value. To verify the effect of the proposed weighted CNCI indicator, we compared the CNCI scores and rankings of 500 universities before and after introducing the weighting factor. Although the two sets of results are strongly positively correlated, some universities' performance and rankings changed dramatically.

5.
Do academic journals favor authors who share their institutional affiliation? To answer this question we examine citation counts, as a proxy for paper quality, for articles published in four leading international relations journals during the years 2000–2015. We compare citation counts for articles written by “in-group members” (authors affiliated with the journal's publishing institution) versus “out-group members” (authors not affiliated with that institution). Relative to out-group authors, articles written by in-group authors received 18% to 49% fewer Web of Science citations when published in their home journal (International Security or World Politics) than in an unaffiliated journal. These results are mainly driven by authors who received their PhDs from Harvard or MIT. The findings show evidence of a bias within some journals towards publishing papers by faculty from their home institution, at the expense of paper quality.

6.
Evaluation Indicators for Digital Reference Services
Discusses the meaning of digital reference services and the significance of evaluating them; reviews the current state of digital reference evaluation, covering the international “Quality Assessment of Digital Reference Services”, SERVQUAL, LibQUAL+, and ISO 11620 approaches as well as domestic Chinese practice; on this basis, proposes an indicator system for evaluating digital reference services built around reference librarians, the reference system, and user satisfaction, and suggests measures for managing digital reference services.

7.
Modern retrieval test collections are built through a process called pooling in which only a sample of the entire document set is judged for each topic. The idea behind pooling is to find enough relevant documents such that when unjudged documents are assumed to be nonrelevant the resulting judgment set is sufficiently complete and unbiased. Yet a constant-size pool represents an increasingly small percentage of the document set as document sets grow larger, and at some point the assumption of approximately complete judgments must become invalid. This paper shows that the judgment sets produced by traditional pooling when the pools are too small relative to the total document set size can be biased in that they favor relevant documents that contain topic title words. This phenomenon is wholly dependent on the collection size and does not depend on the number of relevant documents for a given topic. We show that the AQUAINT test collection constructed in the recent TREC 2005 workshop exhibits this biased relevance set; it is likely that the test collections based on the much larger GOV2 document set also exhibit the bias. The paper concludes with suggested modifications to traditional pooling and evaluation methodology that may allow very large reusable test collections to be built.
Ellen Voorhees

8.
Empirical analysis of the relationship between the impact factor – as measured by the average number of citations – and the proportion of uncited material in a collection dates back at least to van Leeuwen and Moed (2005), where graphical presentations revealed striking patterns. Recently, Hsu and Huang (2012) proposed a simple functional relationship. Here it is shown that the general features of these observed regularities are predicted by a well-established informetric model, which enables us to derive a theoretical van Leeuwen–Moed lower bound. We also question some of the arguments of Hsu and Huang (2012) and Egghe (2013), while addressing various issues raised by Egghe (2008, 2013).

9.
Despite the increasing use of citation-based metrics for research evaluation purposes, we do not know yet which metrics best deliver on their promise to gauge the significance of a scientific paper or a patent. We assess 17 network-based metrics by their ability to identify milestone papers and patents in three large citation datasets. We find that traditional information-retrieval evaluation metrics are strongly affected by the interplay between the age distribution of the milestone items and age biases of the evaluated metrics. Outcomes of these metrics are therefore not representative of the metrics’ ranking ability. We argue in favor of a modified evaluation procedure that explicitly penalizes biased metrics and allows us to reveal metrics’ performance patterns that are consistent across the datasets. PageRank and LeaderRank turn out to be the best-performing ranking metrics when their age bias is suppressed by a simple transformation of the scores that they produce, whereas other popular metrics, including citation count, HITS and Collective Influence, produce significantly worse ranking results.
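One simple way to suppress age bias in a ranking metric, in the spirit of the score transformation mentioned above (a hedged sketch of ours; the study's actual transformation may differ), is to rescale each paper's score against papers of the same age:

```python
from collections import defaultdict

def age_rescaled(scores, years):
    """Divide each paper's metric score (e.g. PageRank) by the mean score
    of papers published in the same year, so papers are ranked against
    their age peers rather than against the whole collection."""
    by_year = defaultdict(list)
    for score, year in zip(scores, years):
        by_year[year].append(score)
    mean = {y: sum(v) / len(v) for y, v in by_year.items()}
    return [score / mean[year] for score, year in zip(scores, years)]

# Two old papers (2000) and two recent ones (2010):
print(age_rescaled([4.0, 2.0, 3.0, 1.0], [2000, 2000, 2010, 2010]))
```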

10.
Indicator System and Data Acquisition for Performance Monitoring of Digital Collection Services
Proposes the concept of performance monitoring for digital collection services, constructs a monitoring indicator system composed of driver indicators and outcome indicators, and describes how the data for each indicator are obtained, with the aim of enabling effective management and control of digital collection service performance.

11.
A Review of Research on Evaluation Indicators for Online Information Resources and Reflections on Related Issues
袁静 《图书馆论坛》2006,26(5):280-282
Reviews the major domestic and international research on evaluation indicators for online information resources, points out problems with the existing work, and raises several issues that future research should address.

12.
Equalizing bias (EqB) is a systematic inaccuracy that arises when authorship credit is divided equally among coauthors who have not contributed equally. As the number of coauthors increases, the diminishing amount of credit allocated to each additional coauthor is increasingly composed of equalizing bias, such that when the total number of coauthors exceeds 12, the credit score of most coauthors is composed mostly of EqB. In general, EqB reverses the byline hierarchy and skews bibliometric assessments by underestimating the contribution of primary authors (those adversely affected by negative EqB) and overestimating the contribution of secondary authors (those benefitting from positive EqB). The positive and negative effects of EqB are balanced and sum to zero, but they are not symmetrical. The lack of symmetry exacerbates the relative effects of EqB and explains why primary authors are increasingly outnumbered by secondary authors as the number of coauthors grows. For example, on a paper with 50 coauthors, the benefit of positive EqB goes to 39 secondary authors while the burden of negative EqB befalls 11 primary authors; relative to harmonic estimates of their actual contribution, the EqB of the 50 coauthors ranged from <−90% to >350%. Senior authorship, when it occurs, is conventionally indicated by a corresponding last author and recognized as being on a par with first authorship. If senior authorship is not recognized, the credit lost by an unrecognized senior author is distributed among the other coauthors as part of their EqB. The distortional effect of EqB is compounded in bibliometric indices and performance rankings derived from biased equal credit. Equalizing bias must therefore be corrected at the source by ensuring accurate accreditation of all coauthors prior to the calculation of aggregate publication metrics.
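The arithmetic behind EqB can be reproduced in a few lines (our sketch, following the abstract's use of harmonic credit as the estimate of actual contribution):

```python
def harmonic_credit(n):
    """Harmonic credit of byline rank i among n coauthors: (1/i) / H_n,
    where H_n is the n-th harmonic number, so credits sum to 1."""
    h_n = sum(1.0 / i for i in range(1, n + 1))
    return [(1.0 / i) / h_n for i in range(1, n + 1)]

def equalizing_bias(n):
    """EqB per author: equal share (1/n) minus harmonic credit. Negative
    for primary authors (credit understated), positive for secondary."""
    equal_share = 1.0 / n
    return [equal_share - c for c in harmonic_credit(n)]

eqb = equalizing_bias(50)
# As in the abstract: 11 primary authors bear negative EqB, the other
# 39 secondary authors receive positive EqB, and the biases sum to zero.
print(sum(1 for b in eqb if b < 0))       # → 11
print(round(abs(sum(eqb)), 12))           # → 0.0
```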

13.
The relationship between archives and big data is the first fundamental question for archival science research on big data. This paper examines that relationship from several angles: the nature of big data, whether archives are big data, whether big data are archives, and the meaning of “archival big data”.

14.
朱大明 《编辑学报》2015,27(2):154-155
Proposes the concept of mutual impact between scientific and technical journals: the mutual impact of two journals is measured by the ratio of the frequencies with which each cites the other over a given time period. Outlines how the indicator is calculated, explains the purpose and significance of analysing and evaluating mutual journal impact, and offers related suggestions.
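Translated into code, the indicator reduces to a simple ratio (an illustrative sketch; the names and numbers are ours, not the article's):

```python
def mutual_impact_ratio(a_cites_b, b_cites_a):
    """Mutual impact of journals A and B: the ratio of the number of
    citations from A to B to the number from B to A in the same window."""
    if b_cites_a == 0:
        raise ValueError("no citations from B to A in this time window")
    return a_cites_b / b_cites_a

# A cited B 120 times and B cited A 40 times over the window:
print(mutual_impact_ratio(120, 40))  # → 3.0
```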

15.
16.
Integrating the Subject Librarian Mechanism with the Sci-Tech Novelty Search System
Analyses the benefits of combining the sci-tech novelty search services offered by university libraries with the subject librarian system, proposes unifying the roles of dedicated novelty-search staff and subject librarians, and suggests measures for achieving this integration.

17.
温芳芳 《图书情报工作》2019,63(21):117-127
[Purpose/significance] Self-citation is a common phenomenon in scholarly communication, yet its role in research evaluation has long been controversial. A review of self-citation research helps scholars better understand self-citation, clears up biases and misconceptions about it, and encourages sustained attention and reflection. [Method/process] Through a systematic survey of the relevant Chinese and international literature, this paper describes the development and current state of self-citation research, traces its academic lineage and evolution, summarises its main findings and ideas, identifies problems with current research, and predicts future research priorities and directions. [Result/conclusion] Self-citation research has gone through long-running scepticism and repeated verification without reaching a consensus; the disagreements stem from differences in research perspective. The field needs new breakthroughs: its focus will shift from simple measurement and statistical analysis to deeper mining and interpretation of the regularities and mechanisms behind the data, and the role of self-citation in tracing academic inheritance and knowledge diffusion will be further explored.

18.
王一华 《图书情报工作》2011,55(16):144-148
Taking expert evaluation as the “reference frame” for the bibliometric indicators under examination, this paper investigates how well IF(JCR), IF(Scopus), the h-index, SJR, and SNIP evaluate academic journals. Spearman non-parametric correlation analysis between each indicator and expert evaluation yields the following ranking of correlation strength (from high to low): SJR, IF(JCR), IF(Scopus), SNIP, h-index. This suggests that SJR can serve as a substitute for IF(JCR); because it accounts for both the quantity and the quality of a journal's citations, its evaluation performance is better than that of the other indicators. The paper recommends that Chinese journal databases develop evaluation indicators suited to Chinese-language journals, and notes that journal evaluation in China will become more diversified, comprehensive, automated, and internationalised.

19.
This study examined the effect of narrative messages about a massive fire crisis on optimistic bias by experimentally comparing a narrative describing a personal story about the crisis incident with a non-narrative message (Study 1). The researchers further examined the interaction between controllability and the narrative message, and tested a mediated moderation model of risk perception. In Study 2, the effect on optimistic bias of a narrative message describing a group story about the crisis incident was further tested in the context of South Korea's collectivistic culture. Collectivism, along with controllability, was used as a moderator, and mediated moderation models of risk perception were tested. The research offers several major findings: (1) a narrative message describing a personal story decreased optimistic bias; (2) among people who read a narrative describing a personal story, those with high controllability had a lower level of optimistic bias than those with low controllability; (3) among people who read the narrative of a group story, those with high collectivism had a lower level of optimistic bias than those with low collectivism; and (4) the interaction between message type and collectivism affected risk perception, and this risk perception increased optimistic bias. Theoretical implications of these findings are discussed.

20.
Diversity education is increasingly recognized as important to the health of a university; however, little empirical work has examined the intergroup processes at play or the effectiveness of online diversity education for college students. This research used a repeated-measures mixed factorial design to examine the implicit and explicit effects of online diversity education delivered at a large public university over the course of a semester. The study design was informed by intergroup contact, social identity, and computer-mediated communication research. Findings contribute to theorizing about intergroup processes in the reception of and learning from diversity education and point to practical avenues for employing online diversity education in higher education. Recommendations are made for practitioners interested in designing and delivering diversity training online in an interactive learning environment.
