Similar Documents
20 similar documents found.
1.
We analyse the difference between the averaged (average of ratios) and globalised (ratio of averages) author-level aggregation approaches based on various paper-level metrics. We evaluate the aggregation variants in terms of (1) their field bias on the author level and (2) their ranking performance based on test data comprising researchers who have received fellowship status or won prestigious awards for their long-lasting, high-impact research contributions to their fields. We consider various direct and indirect paper-level metrics with different normalisation approaches (mean-based, percentile-based, co-citation-based) and focus on the bias and performance differences between the two aggregation variants of each metric. We execute all experiments on two publication databases which use different field categorisation schemes. The first uses author-chosen concept categories and covers the computer science literature. The second covers all disciplines and categorises papers by keywords based on their contents. In terms of bias, we find relatively little difference between the averaged and globalised variants. For mean-normalised citation counts we find no significant difference between the two approaches. However, the percentile-based metric shows less bias with the globalised approach, except for citation windows smaller than four years. On the multi-disciplinary database, PageRank has the least overall bias but shows no significant difference between the two aggregation variants. The averaged variants of most metrics have less bias for small citation windows. For larger citation windows the differences are smaller and mostly insignificant. In terms of ranking the well-established researchers who have received accolades for their high-impact contributions, we find that the globalised variant of the percentile-based metric performs better. Again, we find no significant differences between the globalised and averaged variants based on citation counts and PageRank scores.
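As a hedged illustration of the two aggregation variants compared above (the input format, pairs of a paper's citation count and its field's expected citation count, is an assumption):

```python
# Sketch of the two author-level aggregation variants: "averaged"
# (average of ratios) vs. "globalised" (ratio of averages).
def averaged(papers):
    """Mean of each paper's field-normalised citation score."""
    return sum(c / e for c, e in papers) / len(papers)

def globalised(papers):
    """Total citations divided by total field expectation."""
    return sum(c for c, _ in papers) / sum(e for _, e in papers)

# papers: (citations, expected citations in the paper's field)
papers = [(10, 5.0), (2, 8.0), (30, 5.0)]
print(averaged(papers))    # 2.75  -- every paper weighs equally
print(globalised(papers))  # ~2.33 -- high-expectation fields weigh more
```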

2.
In order to take multiple co-authorship appropriately into account, a straightforward modification of the Hirsch index was recently proposed. Fractionalised counting of the papers yields an appropriate measure which is called the hm-index. The effect of this procedure is compared in the present work with other variants of the h-index and found to be superior to the fractionalised counting of citations and to the normalization of the h-index with the average number of authors in the h-core. Three fictitious examples for model cases and one empirical case are analysed.
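A minimal sketch of fractionalised paper counting in the spirit of the hm-index, assuming the standard definition from the literature (each paper contributes 1/(number of authors) to its effective rank); the input format is illustrative:

```python
def hm_index(papers):
    """papers: iterable of (citations, n_authors) pairs."""
    ranked = sorted(papers, key=lambda p: p[0], reverse=True)
    hm = r_eff = 0.0
    for citations, n_authors in ranked:
        r_eff += 1.0 / n_authors          # fractional (effective) rank
        if citations >= r_eff:
            hm = r_eff                    # paper still supports its rank
        else:
            break                         # later papers fail too (sorted)
    return hm

# Three single-authored papers with >= 3 citations each give hm = h = 3;
# the same citation counts with three authors per paper give hm = 1.0.
print(hm_index([(10, 1), (5, 1), (3, 1)]))  # 3.0
print(hm_index([(10, 3), (5, 3), (3, 3)]))  # 1.0
```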

3.
A Grey Statistical Study of Important Influencing Factors in Electronic Resource Evaluation
Building on a compiled set of the main factors that influence electronic resource evaluation indicators, a survey of the importance and obtainability of the candidate indicator factors was conducted among the libraries of "211 Project" universities, drawing on each library's expert experience and working practice. Grey theory and grey statistical analysis of the data were then used to screen out the important influencing factors, from which a more reasonable and operable evaluation indicator system was reconstructed.

4.
It is well-known that the distribution of citations to articles in a journal is skewed. We ask whether journal rankings based on the impact factor are robust with respect to this fact. We exclude the most cited paper, the top 5 and top 10 cited papers for 100 economics journals and recalculate the impact factor. Afterwards we compare the resulting rankings with the original ones from 2012. Our results show that the rankings are relatively robust. This holds both for the 2-year and the 5-year impact factor.
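A minimal sketch of the robustness check described, assuming the impact factor is computed as the plain mean of citation counts over the journal's papers in the citation window:

```python
def impact_factor(citations, drop_top=0):
    """Mean citations per paper after excluding the drop_top most cited."""
    remaining = sorted(citations, reverse=True)[drop_top:]
    return sum(remaining) / len(remaining)

cites = [120, 8, 5, 3, 2, 1, 1, 0, 0, 0]   # skewed, as is typical
print(impact_factor(cites))                # 14.0, dominated by one paper
print(impact_factor(cites, drop_top=1))    # ~2.2 without the outlier
```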

5.
Field-normalized citation rates are well-established indicators of research performance, from the broadest aggregation levels such as countries down to institutes and research teams. When applied to still more specialized publication sets at the level of individual scientists, a more accurate delimitation is also required of the reference domain that provides the expectations against which a performance is compared. This need for sharper accuracy challenges standard methodology based on pre-defined subject categories. This paper proposes a way to define a reference domain that is more strongly delimited than in standard methodology, by building it up out of cells of the partition created by the pre-defined subject categories and their intersections. This partition approach can be applied to different existing field normalization variants. The resulting reference domain lies between those generated by standard field normalization and journal normalization. Examples based on fictive and real publication records illustrate how the potential impact on results can exceed or fall short of the effect of other currently debated normalization variants, depending on the case studied. The proposed Partition-based Field Normalization is expected to offer advantages particularly at the level of individual scientists and other very specific publication records, such as the publication output of interdisciplinary research.
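A sketch of the partition idea under one plausible reading: papers are grouped by their exact combination of subject categories (a cell of the partition), and each paper's citations are normalised by its cell's mean. The data layout is an assumption:

```python
from collections import defaultdict

def partition_normalise(papers):
    """papers: list of (citations, set_of_categories). Returns one
    normalised score per paper, using the category intersection cell
    as the reference domain."""
    cells = defaultdict(list)
    for citations, categories in papers:
        cells[frozenset(categories)].append(citations)
    means = {cell: sum(v) / len(v) for cell, v in cells.items()}
    return [c / means[frozenset(cats)] for c, cats in papers]

papers = [(12, {"IR"}), (4, {"IR"}), (9, {"IR", "ML"}), (3, {"IR", "ML"})]
print(partition_normalise(papers))  # [1.5, 0.5, 1.5, 0.5]
```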

6.
Modern retrieval test collections are built through a process called pooling in which only a sample of the entire document set is judged for each topic. The idea behind pooling is to find enough relevant documents such that when unjudged documents are assumed to be nonrelevant the resulting judgment set is sufficiently complete and unbiased. Yet a constant-size pool represents an increasingly small percentage of the document set as document sets grow larger, and at some point the assumption of approximately complete judgments must become invalid. This paper shows that the judgment sets produced by traditional pooling when the pools are too small relative to the total document set size can be biased in that they favor relevant documents that contain topic title words. This phenomenon is wholly dependent on the collection size and does not depend on the number of relevant documents for a given topic. We show that the AQUAINT test collection constructed in the recent TREC 2005 workshop exhibits this biased relevance set; it is likely that the test collections based on the much larger GOV2 document set also exhibit the bias. The paper concludes with suggested modifications to traditional pooling and evaluation methodology that may allow very large reusable test collections to be built.
Ellen Voorhees
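A minimal sketch of the traditional pooling procedure described above; the run format and pool depth are illustrative:

```python
def build_pool(runs, depth=100):
    """runs: list of ranked document-id lists, one per system."""
    pool = set()
    for ranking in runs:
        pool.update(ranking[:depth])   # top-depth docs from each run
    return pool                        # only these get human judgments

run_a = ["d3", "d7", "d1", "d9"]
run_b = ["d7", "d2", "d3", "d8"]
print(sorted(build_pool([run_a, run_b], depth=2)))  # ['d2', 'd3', 'd7']
```

Documents outside the pool are assumed nonrelevant at evaluation time, which is exactly the assumption the abstract argues breaks down when the pool is too small relative to the collection.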

7.
Given the growing use of impact metrics in the evaluation of scholars, journals, academic institutions, and even countries, there is a critical need for means to compare scientific impact across disciplinary boundaries. Unfortunately, citation-based metrics are strongly biased by diverse field sizes and publication and citation practices. As a result, we have witnessed an explosion in the number of newly proposed metrics that claim to be “universal.” However, there is currently no way to objectively assess whether a normalized metric can actually compensate for disciplinary bias. We introduce a new method to assess the universality of any scholarly impact metric, and apply it to evaluate a number of established metrics. We also define a very simple new metric hs, which proves to be universal, thus allowing comparison of the impact of scholars across scientific disciplines. These results move us closer to a formal methodology in the measurement of scholarly impact.

8.
In this paper, we propose the application of a novel methodology for building composite indicators in order to evaluate university performance. We analyse the three basic dimensions of our university system (research, teaching and technology transfer) separately, because we are interested in a more accurate picture of each of them. To build the composite indicators, we use a multi-criteria analysis technique based on the double reference point method. One advantage of this technique is the possibility of using reference levels, such that the results obtained are easily interpreted in terms of the university's performance with respect to these levels. In addition, aggregations for different degrees of compensation are provided. To illustrate the advantages of this method, it has been applied to evaluate the performance of the public universities of the Spanish region of Andalucía for the year 2008. The results show that the Andalusian public universities perform better in the teaching block than in the research and technology transfer blocks. The application lets us conclude that the methodology offers a warning system to assist strategic decision making, and that the values of the indicators allow us to identify fields of improvement in all areas.
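A hedged sketch of a double reference point scheme: each raw indicator is rescaled against a reservation level and an aspiration level, then aggregated either compensatorily (mean) or non-compensatorily (min). The piecewise rescaling below is illustrative; the paper's exact achievement function may differ:

```python
def rescale(value, reservation, aspiration):
    """Map value to <1 below reservation, 1..2 between the two levels,
    >2 above aspiration (piecewise linear, illustrative only)."""
    if value < reservation:
        return value / reservation
    if value <= aspiration:
        return 1 + (value - reservation) / (aspiration - reservation)
    return 2 + (value - aspiration) / aspiration

def composite(values, levels, compensatory=True):
    scores = [rescale(v, r, a) for v, (r, a) in zip(values, levels)]
    return sum(scores) / len(scores) if compensatory else min(scores)

levels = [(50, 80), (10, 30), (0.2, 0.6)]   # (reservation, aspiration)
print(composite([70, 12, 0.5], levels))                     # mean of scores
print(composite([70, 12, 0.5], levels, compensatory=False)) # weakest dimension
```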

9.
A Review of Research on Evaluation Indicators for Web Information Resources and Reflections on Related Issues
袁静 《图书馆论坛》2006,26(5):280-282
This paper reviews the main research results on evaluation indicators for web information resources at home and abroad, points out existing problems, and finally raises several issues that future research should attend to.

10.
Drawing on LibQUAL+TM, currently the most influential service quality assessment model in the American library community, and using the free online survey platform 知己知彼网, this study developed a reader satisfaction measurement indicator system for academic libraries in Hainan Province and carried out a survey in the province's academic libraries.

11.
Addressing current needs in website evaluation, this paper argues for the necessity of automated website evaluation. Combining existing automated evaluation applications with typical tools, it proposes a new method for automated website evaluation built around two key steps: formalising the evaluation indicators and automatically computing website metrics. Based on in-depth study of the indicator mapping model, the formal representation of indicators, the construction of the metrics library, and automatic measurement, it proposes a framework for an automated website evaluation system that separates the evaluation logic from the evaluation engine, helping evaluators carry out rapid website evaluations, freeing them from tedious work, and minimising subjective factors.

12.
This study fits the distributions of self-reported and source-derived research performance indicators for 627 Chinese universities. The results confirm the conclusion of earlier domestic studies of the same kind, namely that universities' self-reported research performance indicators have poor objectivity. The study also points out flaws in the "rank-value" distribution fitting method used in previous domestic research, proposes a more scientific fitting method for research indicator distributions, "grade-frequency" distribution fitting, and demonstrates its soundness through empirical analysis.

13.
The journal impact factor (JIF) is the average number of citations of the papers published in a journal, calculated according to a specific formula; it is extensively used for the evaluation of research and researchers. The method assumes that all papers in a journal have the same scientific merit, which is measured by the JIF of the publishing journal. This implies that the number of citations measures scientific merit, yet the JIF does not evaluate each individual paper by its own number of citations. Therefore, in the comparative evaluation of two papers, the use of the JIF implies a risk of failure, which occurs when a paper in the journal with the lower JIF is compared to another with fewer citations in the journal with the higher JIF. To quantify this risk, this study calculates failure probabilities, taking advantage of the lognormal distribution of citations. For two journals whose JIFs differ ten-fold, the failure probability is low. However, in most cases where two papers are compared, the JIFs of the journals are not so different. Then the failure probability can be close to 0.5, which is equivalent to evaluating by coin flipping.
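Under the lognormal assumption, the failure probability has a closed form: if log-citations in journals A and B are Normal(mu_A, sigma_A) and Normal(mu_B, sigma_B) with JIF_A > JIF_B, then P(X_A < X_B) = Phi((mu_B - mu_A) / sqrt(sigma_A^2 + sigma_B^2)). A sketch with illustrative parameter values:

```python
from math import erf, sqrt

def failure_probability(mu_a, sigma_a, mu_b, sigma_b):
    """P(X_A < X_B) for independent lognormal citation counts,
    where journal A has the higher JIF."""
    z = (mu_b - mu_a) / sqrt(sigma_a**2 + sigma_b**2)
    return 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF

# Similar journals: means of log-citations close together.
print(failure_probability(mu_a=2.1, sigma_a=1.1, mu_b=1.9, sigma_b=1.1))
# ~0.45 -- close to coin flipping, as the abstract argues.
```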

14.
Reflections on the Development of China's Medical Journals in the New Century
王青 《编辑学报》2001,13(5):272-274
The impact factors, article counts, and total citation frequencies of the 555 medical journals in the Chinese S&T Journal Citation Reports 1999 (《1999年度中国科技期刊引证报告》) were ranked; the shortcomings in the development of China's medical journals were analysed, and suggestions were offered on how to develop them better in the new century.

15.
On Evaluation Indicators for Digital Reference Services
This paper discusses the meaning of digital reference services and the significance of evaluating them; analyses the current state of evaluation abroad (Quality Evaluation in Digital Reference Services, SERVQUAL, LibQUAL+, ISO 11620) and of digital reference service evaluation in China; on this basis explores an evaluation indicator system for digital reference services built around digital reference librarians, the digital reference system, and user satisfaction; and proposes management measures for digital reference services.

16.
An Analysis of the Evaluative Role of Selected Indicators for Scientific and Technical Journals
汤先忻  张人镜 《编辑学报》2002,14(2):147-148
Using a plane Cartesian coordinate system, the relationships among several important indicators such as productivity and influence were identified. The impact factor is a relative number, so the simple claim that a larger impact factor means greater academic influence is one-sided; scientific and technical journals should strive to raise their citation frequency and the rate of citations from other journals.

17.
The International Organization for Standardization (ISO) and the International Federation of Library Associations and Institutions (IFLA) have each developed library performance indicator systems. This article introduces and analyses the formation and development of the two systems, compares their similarities and differences, and concludes with several lessons for building a library performance evaluation system in China.

18.
Evaluating scholars' achievements is an important problem in the science of science, with applications in the evaluation of grant proposals and promotion applications. Since the number of scholars and the number of scholarly outputs grow exponentially with time, well-designed ranking metrics that can assist in these tasks are of prime importance. To rank scholars, it is important to put their achievements in perspective by comparing them with the achievements of other scholars active in the same period. We propose a particular way of doing so: computing the evaluated scholar's share of each year's citations, which quantifies how the scholar fares in competition with the others. We assess the resulting ranking method using the American Physical Society citation data and four prestigious physics awards. Our results show that the new method significantly outperforms other ranking methods in identifying the prize laureates.
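A minimal sketch of the citation-share score as described, assuming per-year citation counts are available for the scholar and for the whole corpus (the dict layout is illustrative):

```python
def citation_share(scholar_cites, total_cites):
    """scholar_cites / total_cites: dicts mapping year -> citations
    received that year (by the scholar / by everyone)."""
    return sum(
        scholar_cites.get(year, 0) / total
        for year, total in total_cites.items()
        if total > 0
    )

totals = {2018: 50_000, 2019: 60_000, 2020: 75_000}
mine = {2018: 100, 2019: 180, 2020: 150}
print(citation_share(mine, totals))  # each year weighted by its own volume
```

Dividing by each year's total citations is what puts scholars from different periods on a comparable footing, since years with inflated citation volumes contribute proportionally less per citation.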

19.
Constructing an Evaluation Model for Open Access Journals
陈铭 《图书情报工作》2010,54(14):11-15
This paper first discusses the method and steps for constructing an evaluation model for open access journals. Then, building on evaluation indicators for academic journals and electronic journals and considering the characteristics of open access journals, questionnaires were distributed to experts; through analysis of the indicator data, 16 quantitative evaluation indicators were determined. The analytic hierarchy process (AHP) was used to establish the evaluation indicator system for open access journals and to determine the weights, and finally the evaluation model was constructed.
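As a hedged sketch of the AHP weighting step, using the principal-eigenvector method, which is the common choice (the comparison matrix and this particular variant are assumptions, as the abstract does not specify them):

```python
import numpy as np

def ahp_weights(pairwise):
    """Principal eigenvector of a reciprocal comparison matrix,
    normalised to sum to 1."""
    values, vectors = np.linalg.eig(np.asarray(pairwise, dtype=float))
    principal = np.real(vectors[:, np.argmax(np.real(values))])
    weights = np.abs(principal)
    return weights / weights.sum()

# 1-9 scale: indicator 1 is 3x as important as 2, 5x as important as 3.
matrix = [[1, 3, 5],
          [1/3, 1, 2],
          [1/5, 1/2, 1]]
print(ahp_weights(matrix))  # roughly [0.65, 0.23, 0.12]
```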

20.
On the Validity of Applying Fuzzy Theory to Obtain Weights for Electronic Resource Performance Indicators
徐革 《情报学报》2007,(2):191-197
This paper applies fuzzy relations and the fuzzy transformation method from fuzzy theory to process the questionnaire survey and data analysis of an expert group's importance ratings of electronic resource performance evaluation indicators, determining the fuzzy weights by centroid-based defuzzification and thus making full use of the experts' judgment information. Finally, the fuzzy weights of the performance indicators are compared with the results of the traditional AHP and eigenvalue methods, confirming the validity of the approach.
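A hedged sketch of centroid-based defuzzification under one common representation, triangular fuzzy numbers, which the abstract does not specify: the centroid of (l, m, u) is (l + m + u)/3, and the crisp centroids are normalised into weights:

```python
def fuzzy_weights(ratings):
    """ratings: one triangular fuzzy number (low, mode, high) per
    indicator, e.g. aggregated from the expert questionnaires."""
    centroids = [(l + m + u) / 3 for l, m, u in ratings]   # defuzzify
    total = sum(centroids)
    return [c / total for c in centroids]

ratings = [(0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0.1, 0.2, 0.4)]
print(fuzzy_weights(ratings))  # normalised crisp weights
```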
